Posts by: Tim Booher

The state of hardware security

You can’t build anything big and important without a solid foundation. In security, hardware is that unyielding foundation, the basis for all other defenses. While fancy encryption and zero-trust principles can safeguard data in an environment where software is inherently untrusted, they ultimately rely on hardware’s immutable security properties. Companies now hand their data to cloud providers and run it on the same infrastructure as their competitors, so software security alone is not enough.

Hardware-based security measures, such as secure boot, trusted platform modules, and memory encryption, establish an initial trust anchor that software cannot easily subvert and that users can evaluate and decide to trust. This foundational layer verifies the integrity of the system before any software is executed, ensuring that the subsequent layers of security are built on a trusted base. Without this hardware root of trust, even the most robust software security mechanisms are vulnerable to low-level attacks that can compromise the entire system.

One thought experiment that illustrates the critical nature of hardware security is the ability to observe or spoof the user interface. At some level, data has to leave the system unencrypted to reach a display, and if that last mile of hardware isn’t secure, an attacker can achieve the same effect as a full system compromise.

Over the past decade, hardware security has evolved from pioneering isolated execution environments to mature, commercially deployed solutions that underpin today’s cloud and edge infrastructures. Early in the 2010s, initiatives like Intel SGX and AMD’s initial forays into secure encrypted virtualization laid the groundwork for establishing a hardware root of trust—ensuring that critical operations could run in protected enclaves even in hostile software environments. Building on these foundations, AMD introduced SEV (Secure Encrypted Virtualization) in 2016 to encrypt VM memory, followed by SEV-ES in 2017 to extend that protection to CPU register states. More recently, with the advent of SEV-SNP (Secure Nested Paging) around 2020 and Intel TDX emerging in the last couple of years, the focus has shifted from merely encrypting data to providing robust integrity guarantees that protect against sophisticated attacks like memory replay, remapping, and side channels.

Despite these advances, hard problems persist. For instance, achieving high performance while enforcing strict integrity checks (such as those implemented via the Reverse Map Table in SEV-SNP) remains a delicate balancing act—ensuring that security enhancements do not impose unacceptable overhead in high-performance, multi-tenant data centers. Furthermore, the growing complexity of modern systems, especially those with NUMA architectures or heterogeneous device environments, poses challenges in uniformly maintaining a hardware root of trust. Issues such as securing device I/O paths, mitigating advanced side-channel attacks, and managing secure migration of VMs across untrusted hosts continue to be hard. While hardware-based trust solutions have come a long way over the past ten years, the quest to achieve an optimal blend of security, performance, and flexibility in an increasingly hostile and complex threat landscape remains an ongoing challenge.

Where the big money intersects this space is cloud compute: hardware-enforced confidential computing has become an indispensable tool for protecting sensitive workloads in multi-tenant environments. Technologies like AMD SEV‑SNP, Intel TDX, and Arm CCA each promise robust isolation by leveraging hardware mechanisms to encrypt memory, enforce integrity, and isolate virtual machines even in the presence of an untrusted hypervisor. However, while these solutions share the same overarching goal, they embody fundamentally different design philosophies and tradeoffs.

At the heart of these differences lie tradeoffs between security and performance, flexibility and complexity, and legacy compatibility versus future-proofing. For instance, AMD’s SEV‑SNP uses advanced techniques like the Reverse Map Table and multi-level guest privilege schemes to provide fine-grained integrity guarantees, which inevitably introduce additional overhead and management complexity. In contrast, Intel TDX offers a more abstracted approach to secure virtualization, which simplifies integration but can limit granular control. Meanwhile, Arm CCA, integrated directly into the ARMv9 ISA, emphasizes low overhead and tight integration at the cost of some of the flexibility seen in x86-based solutions.

And there is an open alternative. RISC‑V CoVE is a research‑driven initiative aimed at extending confidential computing to the open‑source RISC‑V ecosystem. Over the past decade, as hardware-based security became critical in data centers and cloud environments, proprietary technologies like Intel SGX and AMD SEV demonstrated the power of integrating security directly into the processor. Inspired by these efforts, the RISC‑V community began exploring ways to incorporate similar confidentiality and integrity features in a fully open and modular ISA. CoVE emerged as one of the early prototypes in this space, leveraging RISC‑V’s inherent flexibility to design hardware extensions that support secure enclaves, memory encryption, and integrity verification.

Technically, CoVE introduces mechanisms analogous to those found in proprietary systems but adapted for RISC‑V’s simpler architecture. This includes a dedicated memory encryption engine, hardware‑assisted isolation for virtual machines, and innovative techniques for tracking memory ownership—concepts similar to AMD’s Reverse Map Table, yet reimagined to suit an open‑source environment. The design also explores support for remote attestation and dynamic security policy enforcement within guest systems, offering a proof‑of‑concept that confidential virtualization is achievable without vendor lock‑in. While CoVE remains primarily in the research and prototyping phase, it highlights both the potential and the challenges of building robust hardware trust mechanisms on an open ISA, such as balancing performance overhead, ensuring seamless software integration, and achieving scalable adoption across diverse platforms.

CoVE could be the future. The economic landscape for confidential computing is heavily influenced by the licensing models adopted by key vendors—most notably AMD. If AMD’s licensing fees for its SEV‑SNP technology are perceived as too high by data centers and cloud service providers, it could create a significant cost barrier and incentivize investment in RISC‑V. Unlike proprietary solutions, RISC‑V offers a more open and flexible platform that—if coupled with effective confidential computing extensions—can dramatically lower both the entry and operational costs. This shift could spur a new wave of innovation as organizations seek to avoid high licensing costs, driving the development of open‑source confidential computing solutions that are not only cost‑effective but also adaptable to diverse workloads and deployment environments. In essence, the long‑term economics of hardware trust will depend on finding the right balance between robust security guarantees and accessible licensing terms, with the potential for RISC‑V to play a transformative role if traditional licensing models prove too burdensome.

Until a change in the market, Arm’s hardware protections remain critical. Arm Confidential Compute Architecture (CCA) represents a major evolution in secure computing for Arm-based systems. Introduced as an integral part of the Armv9 architecture, CCA builds on decades of lessons from TrustZone and other legacy security solutions to offer robust, hardware-enforced isolation for modern multi-tenant environments. At its core, CCA establishes a hardware root of trust by integrating memory encryption, secure boot, and remote attestation directly into the processor’s ISA—ensuring that sensitive workloads begin in a verified, trusted state. Unlike traditional approaches where security is largely managed in software, Arm CCA leverages dedicated security modules and encryption engines to protect the confidentiality and integrity of data, even when the hypervisor is untrusted. Designed to meet the demands of cloud, edge, and IoT applications, its architecture minimizes performance overhead while providing scalable, low-latency isolation across diverse devices. Over the past few years, as Arm processors have become ubiquitous in both mobile and server markets, industry collaboration has driven CCA’s maturation into a practical, next-generation solution for confidential computing that not only meets today’s threats but is also built to evolve alongside emerging challenges.

In the x86 space, Intel Trust Domain Extensions (TDX) represents Intel’s latest stride toward confidential computing in virtualized environments. Building on its long history of virtualization technology such as VT‑x, Intel introduced TDX to create “trusted domains” that isolate guest VMs even when the hypervisor is untrusted or compromised. Over the last few years, Intel has iteratively refined TDX to address evolving threats, integrating advanced memory encryption engines and robust access-control mechanisms directly into the CPU. TDX ensures that each virtual machine operates in a hardware‑enforced secure environment by encrypting guest memory with keys inaccessible to the host and by validating the integrity of critical VM state through specialized data structures and registers. Additionally, TDX supports remote attestation, allowing external parties to verify that a VM is running on a genuine, unmodified Intel platform with the latest security features. While aiming to minimize performance overhead, TDX also enforces strict isolation policies, ensuring that even if the hypervisor is breached, sensitive workloads remain protected. This evolution is crucial for cloud and data center applications, where trust in the underlying infrastructure is paramount.

AMD SEV‑SNP (Secure Encrypted Virtualization – Secure Nested Paging) is AMD’s most advanced solution for protecting virtual machines in environments where even the hypervisor is untrusted. Building on the foundation laid by earlier SEV technologies—which provided memory encryption (SEV) and extended protection of CPU register state (SEV‑ES)—SEV‑SNP adds critical integrity protections. It introduces hardware mechanisms like the Reverse Map Table (RMP) and new instructions (e.g., RMPUPDATE, PVALIDATE, and RMPADJUST) that enforce a strict one‑to‑one mapping between guest and system physical memory. These features ensure that any attempt to replay, remap, or alias memory pages is detected and blocked, thus safeguarding against sophisticated hypervisor-level attacks. Additionally, SEV‑SNP supports features such as VM Privilege Levels (VMPLs) that allow a guest to partition its own memory space for further internal security isolation. With robust remote attestation capabilities, SEV‑SNP enables cloud customers to verify that their VMs are running on genuine, unmodified AMD hardware with the latest security updates. This evolution in AMD’s confidential computing portfolio represents a significant leap forward, delivering both confidentiality and integrity protection essential for secure multi-tenant data centers and cloud environments.
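
To make the remote attestation idea concrete, here is a minimal conceptual sketch in Python of what a relying party does with an SEV‑SNP‑style report: check that the report is signed by a key rooted in the silicon, then compare the reported launch measurement and nonce against expected values. The report layout, field names, and the locally generated P‑384 demo key are simplifications for illustration only, not AMD’s actual report format, key hierarchy, or tooling.

# Conceptual sketch of SEV-SNP-style attestation verification (not AMD's real
# report format or key hierarchy). The relying party checks two things:
#   1. the report is signed by a key that chains back to the silicon vendor,
#   2. the launch measurement matches the VM image it expects.
from dataclasses import dataclass
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

@dataclass
class AttestationReport:          # stand-in for the real report structure
    measurement: bytes            # launch digest of the guest image
    report_data: bytes            # nonce supplied by the relying party
    signature: bytes              # signature over (measurement || report_data)

def sign_report(chip_key: ec.EllipticCurvePrivateKey,
                measurement: bytes, nonce: bytes) -> AttestationReport:
    """Plays the role of the firmware/secure processor for this demo only."""
    sig = chip_key.sign(measurement + nonce, ec.ECDSA(hashes.SHA384()))
    return AttestationReport(measurement, nonce, sig)

def verify_report(report: AttestationReport,
                  chip_pub: ec.EllipticCurvePublicKey,
                  expected_measurement: bytes, nonce: bytes) -> bool:
    """Relying-party check: signature, freshness (nonce), and measurement."""
    try:
        chip_pub.verify(report.signature,
                        report.measurement + report.report_data,
                        ec.ECDSA(hashes.SHA384()))
    except InvalidSignature:
        return False
    return report.report_data == nonce and report.measurement == expected_measurement

if __name__ == "__main__":
    chip_key = ec.generate_private_key(ec.SECP384R1())  # the real key lives in the chip
    nonce = b"relying-party-nonce-0001"
    golden = bytes(48)                                  # expected launch digest
    report = sign_report(chip_key, golden, nonce)
    print("attestation ok:", verify_report(report, chip_key.public_key(), golden, nonce))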

Secure boot is the cornerstone of modern hardware trust—a process that ensures only verified and trusted code is executed right from the moment a system powers on. Think of it as the ultimate “ground zero” for security: if the foundation isn’t solid, everything built on top of it is at risk. In today’s era of distributed compute, from massive cloud data centers to edge devices, secure boot isn’t just an optional enhancement—it’s a necessity. It establishes a root of trust by validating the integrity of the system before any software is loaded, thereby underpinning all subsequent layers of defense. Without it, even the most sophisticated encryption and zero‑trust policies can fall prey to low‑level attacks that bypass software safeguards.

Different vendors have taken unique approaches to secure boot that reflect their broader security philosophies. For example, AMD’s SEV‑SNP integrates boot-time measurements with advanced memory encryption and integrity verification techniques, ensuring that only approved code is executed—albeit at the cost of added complexity and some performance overhead. Intel’s TDX takes a contrasting route by encapsulating entire virtual machines within “trusted domains,” streamlining the boot process at the expense of granular control. Meanwhile, Arm CCA embeds secure boot directly into the Armv9 ISA, delivering low‑overhead, highly efficient boot verification ideally suited for mobile, IoT, and edge scenarios. RISC‑V CoVE is too early to evaluate. Ultimately, regardless of the platform, secure boot provides that unbreakable starting point—ensuring that every subsequent layer of software security is built on a foundation that you can truly trust.

AMD’s approach to secure boot in SEV‑SNP extends the traditional concept by tightly integrating boot-time measurements with the VM launch process. Before a guest VM begins execution, its initial unencrypted image (containing boot code, for example) is measured and attested, ensuring that only trusted, approved code is executed. This process leverages a combination of hardware encryption, the Reverse Map Table (RMP), and remote attestation mechanisms to verify integrity. The tradeoff here is that while the process provides granular integrity guarantees even during live migration or in the presence of a malicious hypervisor, it introduces additional complexity and potential performance overhead during boot.

Intel TDX also supports secure boot, but its approach centers on establishing “trusted domains” where the entire virtual machine is launched within a protected environment. The boot process involves measuring and attesting to the initial state of the VM, ensuring that its critical data and code remain confidential throughout its lifecycle. Intel’s model abstracts much of the secure boot complexity from the guest OS by encapsulating the VM within a trusted domain. The advantage is a simpler integration for cloud environments with existing Intel infrastructure; however, this can limit fine‑grained control over the boot process compared to AMD SEV‑SNP, and it may require adjustments in legacy software to fully leverage the new protections.

In the Arm ecosystem, secure boot is built directly into the Armv9 ISA as part of a broader suite of security features. Arm CCA integrates secure boot with hardware‑assisted memory encryption and remote attestation, ensuring that each layer—from the bootloader up to the running OS—is authenticated before execution. This native integration minimizes overhead and offers a highly efficient secure boot process optimized for mobile, IoT, and cloud edge devices. However, because this approach is tightly coupled with the underlying hardware design, it can be less flexible when adapting to varied software requirements compared to more modular x86 solutions.

As a research‑driven initiative for the open‑source RISC‑V ecosystem, CoVE aims to replicate the robust features of proprietary solutions while maintaining the openness of the architecture. Its secure boot model is still evolving, designed to allow researchers to experiment with flexible, modular security extensions. While CoVE draws inspiration from both AMD and Intel approaches—such as measuring boot images and establishing a chain of trust—it also faces challenges common to research prototypes: ensuring compatibility with diverse implementations and achieving performance parity with more mature solutions. The tradeoffs here involve balancing experimental flexibility against the need for hardened, production‑grade security guarantees.

At the most basic level, physical security forms the bedrock of any robust confidential computing system—it’s the first line of defense that ensures all higher‑level security mechanisms have a trusted foundation. In modern architectures, this means integrating techniques like volume protection and physical unclonable functions (PUFs) to secure cryptographic keys and other sensitive data against physical tampering. For instance, AMD SEV‑SNP leverages its AMD Secure Processor (AMD‑SP) to generate and securely store memory encryption keys, often derived from PUFs, ensuring that these keys are intrinsically tied to the hardware and cannot be duplicated or extracted even with direct physical access. Similarly, Intel TDX relies on a hardware‑based root of trust where keys and attestation data are protected using secure boot and TPM-like mechanisms, making it essential for the underlying silicon to be physically secure. Arm CCA, embedded directly within the Armv9 ISA, also incorporates dedicated circuits for secure key generation (via PUFs) and enforces volume protection to guard against cold boot and side-channel attacks, all while maintaining low overhead for energy efficiency. In contrast, research initiatives like RISC‑V CoVE, which are developing confidential computing capabilities for open‑source platforms, must explicitly address these same physical security challenges from the ground up—balancing the need for openness and flexibility with rigorous hardware-based trust mechanisms. In every case, without stringent physical security—ranging from tamper‑resistant packaging to secure, hardware‑rooted key storage—no amount of software encryption or isolation can fully guarantee the confidentiality and integrity of sensitive workloads.

The evolution of hardware-based security—from early isolated execution environments to today’s advanced secure boot and confidential computing solutions—demonstrates that a robust hardware root of trust is indispensable. Whether it’s AMD SEV‑SNP’s intricate integrity mechanisms via the Reverse Map Table, Intel TDX’s trusted domains isolating entire virtual machines, Arm CCA’s deeply embedded secure boot within the Armv9 ISA, or the emerging flexibility of RISC‑V CoVE, each approach reflects a unique balance of security, performance, and complexity. These technologies not only secure the boot process and runtime environments but also integrate physical security measures such as volume protection and PUFs to safeguard cryptographic keys against tampering. As cloud and edge computing continue to expand, the quest to optimize this blend of hardware trust will be critical—driving future innovation and potentially reshaping the economics of confidential computing if proprietary licensing proves too burdensome.

If you want to go deeper into this area, check out Confidential Computing.


Automate Amazon

I want my Amazon data

Tracking and categorizing financial transactions can be tedious, especially when it comes to Amazon orders. With Amazon’s new data policies, it’s harder than ever to retrieve detailed purchase information in a usable format. Amazon is so broad that a credit card charge could be for a digital product, a physical product, or a Whole Foods purchase. The purchases themselves are complicated, with charges tied to shipments and gift card transactions mixed in. You can request all your data, but it comes in a terrible format, basically a big flattened table that doesn’t correlate order_id to invoiced transactions.

I have a well-oiled set of code that downloads all my transactions from every bank, credit card, etc., uses machine learning (naive Bayes) to categorize them, and uploads them to a Google Sheet where I can balance everything, check categories, and add additional details. My code then loads 25 years of transactions (every penny I’ve ever spent) into Postgres (both local and cloud-based), which lets me use R and Tableau to do a bunch of analysis. It’s a hobby to sit down and build a Google Slides deck filled with data on where our money is going and how our investments are doing.
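
Since the categorization step is the part people ask about most, here is a minimal sketch of the idea: a naive Bayes classifier over transaction descriptions, trained on rows I’ve already categorized by hand. The column names, categories, and example rows are placeholders; my real pipeline wraps this with the Postgres and Google Sheets plumbing described above.

# Minimal sketch of naive Bayes categorization of bank/credit-card transactions.
# Column names ("description", "category") and the example rows are placeholders.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Rows already categorized by hand (in my case, pulled from the sheet).
labeled = pd.DataFrame({
    "description": ["AMAZON MKTPL*2A3 SEATTLE", "WHOLEFDS #1234", "SHELL OIL 5789",
                    "AMZN DIGITAL*KINDLE", "COSTCO WHSE #44"],
    "category":    ["household", "groceries", "auto", "books", "groceries"],
})

# Character n-grams cope well with the cryptic merchant strings banks emit.
model = make_pipeline(CountVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
                      MultinomialNB())
model.fit(labeled["description"], labeled["category"])

# New, uncategorized charges get a best-guess category to review in the sheet.
new_charges = ["AMAZON MKTPL*9Z1 SEATTLE", "WHOLEFDS #0042"]
print(dict(zip(new_charges, model.predict(new_charges))))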

Our orders are going up, and it’s time for automation.

This system has evolved over time and works really well. Here, I want to share how I get Amazon transactions to match my bank charges so I can list the products I’m buying.

Step 1: Get your Amazon order data into a database

This is tricky: google Amazon’s Privacy Central page (the link has changed a couple of times in the last year or so), where you can request a zip file of your data. There are two parts to this: request your data, then wait for an email with a link to download it later. It’s surprising that it could take days for what is surely a fully automated process, but it generally takes hours.

Eventually, you get your Orders.zip which has:

├── Digital-Ordering.1
│   ├── Digital Items.csv
...
├── Retail.CustomerReturns.1.1
│   └── Retail.CustomerReturns.1.1.csv
├── Retail.OrderHistory.1
│   └── Retail.OrderHistory.1.csv
├── Retail.OrderHistory.2
│   └── Retail.OrderHistory.2.csv

The file we want is Retail.OrderHistory.1.csv. You can get that into a database with this code:
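
What follows is a minimal sketch of that step, assuming pandas and SQLAlchemy; the connection string, table name, and column normalization are placeholders for your own Postgres setup.

# Minimal sketch: load Amazon's Retail.OrderHistory.1.csv into Postgres.
# The connection string, table name, and column normalization are placeholders.
import pandas as pd
from sqlalchemy import create_engine

orders = pd.read_csv("Retail.OrderHistory.1.csv")

# Normalize the headers Amazon ships ("Order ID", "Order Date", ...) into
# snake_case so they are easier to join against later.
orders.columns = [c.strip().lower().replace(" ", "_") for c in orders.columns]

engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/finance")
orders.to_sql("amazon_orders", engine, if_exists="replace", index=False)
print(f"loaded {len(orders)} order rows into amazon_orders")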

Step 2: Get your Amazon invoice data into the database (via scraping)

That took a lot of work to get right, and that code handles about 80% of my transactions, but the rest required matching actual invoice amounts with order_id. To make that work, you have to scrape your orders page, click on each order, and download the detail. I’ve written a lot of code like that before, but it’s a pain to get right (Google Chrome’s developer tools are a game-changer for it). Fortunately, I found this code that does exactly that: https://github.com/dcwangmit01/amazon-invoice-downloader

The Amazon Invoice Downloader is a Python script that automates the process of downloading invoices for Amazon purchases using the Playwright library. It logs into an Amazon account with provided credentials, navigates to the “Returns & Orders” section, and retrieves invoices within a specified date range or year. The invoices are saved as PDF files in a local directory, with filenames formatted to include the order date, total amount, and order ID. The script mimics human behavior to avoid detection and skips downloading invoices that already exist.

You get all the invoices, but most helpful is the resulting CSV:

cat downloads/2024/transactions-2024.csv
Date,Amount,Card Type,Last 4 Digits,Order ID,Order Type
2024-01-03,7.57,Visa,1111,114-2554918-8507414,amazon
2024-01-03,5.60,Visa,1111,114-7295770-5362641,amazon

You can use this code to get this into the database:
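
Again, a minimal sketch assuming pandas and SQLAlchemy; the glob path, connection string, and table name are placeholders.

# Minimal sketch: load the transactions CSVs produced by amazon-invoice-downloader
# into Postgres. Path glob, connection string, and table name are placeholders.
import glob
import pandas as pd
from sqlalchemy import create_engine

frames = [pd.read_csv(path) for path in glob.glob("downloads/*/transactions-*.csv")]
invoices = pd.concat(frames, ignore_index=True)
invoices.columns = [c.strip().lower().replace(" ", "_") for c in invoices.columns]
# -> date, amount, card_type, last_4_digits, order_id, order_type

engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/finance")
invoices.to_sql("amazon_invoices", engine, if_exists="replace", index=False)
print(f"loaded {len(invoices)} invoice rows into amazon_invoices")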

And finally, I have a script that compares the database to my Google Sheet and adds the matching order details to uncategorized transactions.
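
The heart of it is the matching logic, sketched below: for each uncategorized charge, look for an invoice with the same amount within a few days, then pull the product titles for that order_id. The Google Sheets read and write are left out here (a CSV stands in for the sheet), and the product_name column is a placeholder for whatever your order-history table actually calls it.

# Minimal sketch of the matching step. The Google Sheets I/O is replaced by a
# CSV placeholder, and "product_name" is a stand-in for your order-history column.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/finance")
invoices = pd.read_sql("select date, amount, order_id from amazon_invoices", engine,
                       parse_dates=["date"])
orders = pd.read_sql("select order_id, product_name from amazon_orders", engine)

# Uncategorized bank charges exported from the sheet: columns date, amount.
charges = pd.read_csv("uncategorized.csv", parse_dates=["date"])

def match_charge(row):
    # Same amount, posted within a few days of the order's invoice date.
    candidates = invoices[
        (invoices["amount"].round(2) == round(row["amount"], 2))
        & ((invoices["date"] - row["date"]).abs() <= pd.Timedelta(days=4))
    ]
    if candidates.empty:
        return None
    order_id = candidates.iloc[0]["order_id"]
    items = orders.loc[orders["order_id"] == order_id, "product_name"]
    return f"{order_id}: " + "; ".join(items.astype(str))

charges["amazon_match"] = charges.apply(match_charge, axis=1)
charges.to_csv("uncategorized_with_matches.csv", index=False)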

This was a bit tricky, but it all works well now. Hopefully this saves you some time. Please reach out if you have any questions, and happy analysis.


The most non-Christian movie?

I’m a Christian and that defines my worldview and ethics. And I love art. Like Keats, I believe that good art is both beautiful and useful: Beethoven’s symphonies inspire awe; Dostoevsky’s The Brothers Karamazov explores themes of suffering, faith, and free will; It’s a Wonderful Life inspires hope and encourages selflessness; Shakespeare’s plays blend humor with profound insight.

These are all as good as they are beautiful. Since Christianity is truth and truth is beauty, all good art should flow from a common place.

One of the wrinkles of Christianity is that Christian ideas are baked into Western thought. Much of what appears to be anti-Christian is just a mis-ordering of, or misplaced emphasis on, Christian ideas. Woke? That’s a myopic focus on one part of original sin without grace. Porn? A self-absorbed misappropriation of God’s gift of sex. Bad things don’t make a bad or evil movie. C.S. Lewis told us that storytelling should awaken us to higher truths, even when it reflects the brokenness of the world.

You could divide the world into two camps: good and bad. Bad can, in turn, be disordered, or it can be non-Christian. Framed this way, what is the least Christian movie? Would such a movie be filled with violence, sex, and debauchery: The Wolf of Wall Street, Game of Thrones? Would it be dark and satanic: The Exorcist, Heathers, Rosemary’s Baby?

To get a sense of what is non-Christian, examine the world before the gospels. The Roman world gave us virtue and honor, but nothing like equality before the law, the concept that the last shall be first, or the dignity of all souls. Aristotle supported slavery; unwanted infants (e.g., girls or disabled children) were routinely left to die; Athenian democracy excluded women entirely. Most of all, avenging an insult or wrong was a moral duty. Roman society prized virtus (manly courage) and honor over forgiveness or humility.

So a non-Christian movie isn’t necessarily more filled with bad things than a good one. The real question is what ethics and what ideas guide it. A common theme is that non-Christian ideas deny the equal value and immortality of the soul.

The movie Me Before You presents itself as a romantic drama about love and loss. It is filled with beautiful people in beautiful places under sunny skies. But the message is pitch-black and hopeless. Beneath the beauty is a worldview that undermines the sanctity of life and the transformative power of suffering. It’s a distinctively non-Christian movie.

St. Augustine teaches us that every human being is created in the image of God (Genesis 1:27). This truth establishes the intrinsic value of all human life, regardless of physical condition or ability. In Me Before You, Will Traynor’s choice of assisted suicide implies that his life as a quadriplegic is less valuable than his life before the accident. This perspective contradicts the Christian belief that human worth is not determined by physical abilities or financial circumstances but by our relationship to God.

Augustine would remind us that despair arises when we lose sight of our Creator. Rather than seeking solace in love, community, or faith, Will’s decision reflects a self-centered despair—a refusal to trust that God can bring purpose and meaning even in suffering.

Thomas Aquinas famously argued that God allows suffering to bring about a greater good. In Will’s case, his condition could have been an opportunity to grow in humility, patience, and reliance on others—a path that can lead to sanctification. Louisa’s care for Will could have been a testament to sacrificial love, mirroring Christ’s self-giving love on the cross.

Instead, the film chooses to glorify Will’s autonomy, supported by his riches and spontaneity, portraying his decision as noble and selfless. This is set against the backdrop of Patrick, Louisa’s loyal and disciplined long-time boyfriend. Aquinas would see this as a failure to recognize the redemptive power of suffering, which, when embraced with faith, can lead to spiritual growth and even joy (Romans 5:3-5).

Søren Kierkegaard describes love as a commitment rooted in selflessness and sacrifice. True love does not seek its own way but rather the good of the other (1 Corinthians 13:4-7). In Me Before You, Will’s decision to end his life is framed as an act of love for Louisa, freeing her to live a life unburdened by his condition and with a pile of cash.

Kierkegaard would argue that true love requires embracing the other person, even in their brokenness. By rejecting life, Will also rejects the possibility of a deeper, sacrificial relationship with Louisa—one that could have transformed both of their lives.

Nothing defines unchristian more than nihilism and materialism. C.S. Lewis reminds us that this life is not all there is. In The Problem of Pain, he writes that God “whispers to us in our pleasures, speaks in our conscience, but shouts in our pains: it is His megaphone to rouse a deaf world.” Pain and suffering, while difficult, are often avenues through which God draws us closer to Himself.

Me Before You rejects this eternal perspective, focusing instead on immediate relief from suffering through assisted suicide. The Christian faith offers a vision of life beyond the grave, where every tear will be wiped away (Revelation 21:4).

Movies are there to make money, but they have the chance to make us better. Me Before You had the potential to tell a powerful story about resilience, faith, and the transformative power of love, love that transcends class divisions and embraces suffering as a means to demonstrate and receive love. Instead, it glorifies despair and autonomy at the expense of hope and community. From the perspectives of Augustine, Aquinas, Kierkegaard, and Lewis, the film’s message is not just misguided—it is profoundly un-Christian.

True love, dignity, and hope are found not in rejecting life but in embracing it, even with all its challenges. As Christians, we are called to uphold the sanctity of life, support those who are suffering, and trust in the redemptive power of God’s plan. Me Before You gets it wrong, but its flaws remind us of the beauty and value of a worldview centered on Christ.


Is Heaven Boring? The Surprising Truth About Eternal Joy

It’s a question that has likely crossed many minds: Won’t heaven be boring? When we read Revelation 4:8 about beings who “day and night never cease to say, ‘Holy, holy, holy, is the Lord God Almighty,'” it might sound like a monotonous loop of endless repetition. Doesn’t anything done forever sound exhausting?

But this modern anxiety about heaven’s potential tedium reveals more about our limited imagination than heaven’s reality. Let’s explore why the greatest Christian thinkers throughout history have understood heaven as anything but boring.

First, let’s address those seemingly repetitive angels. When we read about beings endlessly declaring God’s holiness, we’re attempting to understand eternal realities through temporal language. As C.S. Lewis points out in “Letters to Malcolm,” we’re seeing the eternal from the perspective of time-bound creatures. The angels’ worship isn’t like a broken record; it’s more like a perfect moment of joy eternally present.

Think of it this way: When you’re deeply in love, saying “I love you” for the thousandth time doesn’t feel repetitive – each utterance is fresh, meaningful, and full of discovery. The angels’ praise is similar but infinitely more profound.

C.S. Lewis gives us perhaps the most compelling modern vision of heaven’s excitement in “The Last Battle,” where he writes, “All their life in this world and all their adventures had only been the cover and the title page: now at last they were beginning Chapter One of the Great Story which no one on earth has read: which goes on forever: in which every chapter is better than the one before.”

Lewis understood heaven not as static perfection but as dynamic adventure. In his view, joy and discovery don’t end; they deepen. Each moment leads to greater wonder, not lesser. As he famously wrote, “Further up and further in!”

St. Augustine offers another perspective in his “Confessions” when he speaks of heaven as perfect rest. But this isn’t the rest of inactivity – it’s the rest of perfect alignment with our true purpose. Augustine writes of a rest that is full of activity: “We shall rest and we shall see, we shall see and we shall love, we shall love and we shall praise.”

This rest is like a master musician who has moved beyond struggling with technique and now creates beautiful music effortlessly. It’s not the absence of action but the perfection of it.

Perhaps most profoundly, Gregory of Nyssa introduced the concept of epektasis – the idea that the soul’s journey into God’s infinite nature is endless. In his “Life of Moses,” he argues that the perfect life is one of constant growth and discovery. Since God is infinite, our journey of knowing Him can never be exhausted.

This means heaven isn’t a destination where growth stops; it’s where growth becomes perfect and unhindered. Each moment brings new revelations of God’s nature, new depths of love, new heights of joy.

Our modern fear of heaven’s boredom often stems from:

  1. Materialistic assumptions about joy
  2. Limited understanding of perfection as static
  3. Inability to imagine pleasure without contrast

But the Christian tradition consistently presents heaven as:

  • Dynamic, not static
  • Creative, not repetitive
  • Deepening, not diminishing
  • Active, not passive
  • Relational, not isolated

Far from being boring, heaven is where the real adventure begins. It’s where we finally become fully ourselves, fully alive, fully engaged in the purpose for which we were created. As Lewis, Augustine, and Gregory of Nyssa understood, heaven is not the end of our story – it’s the beginning of the greatest story ever told.

The angels of Revelation aren’t trapped in monotonous repetition; they’re caught up in ever-new wonder. Their endless praise isn’t a burden but a joy, like lovers who never tire of discovering new depths in each other’s hearts.

Heaven isn’t boring because God isn’t boring. And in His presence, as Gregory of Nyssa taught, we will forever discover new wonders, new joys, and new reasons to declare, with ever-fresh amazement, “Holy, holy, holy.”

In the end, perhaps our fear of heaven’s boredom says more about our limited experience of joy than about heaven’s true nature. The reality, as the greatest Christian thinkers have seen, is far more exciting than we dare imagine.


Plato, Aristotle and Control

Plato is an unexpected architect of progressive thought, but his name has come up as the bad guy in some conservative circles lately. This is partly because at the heart of Plato’s political philosophy lies the concept of the philosopher-king, a notion that resonates with progressive governance and control.

The philosopher-king concept fundamentally assumes that an elite class knows better than the common person what’s good for them. When progressives advocate for expert-driven policy or administrative state control, they’re following this Platonic tradition of believing that some people are better qualified to make decisions for others. But conservatives can be elitists too. John Adams comes to mind.

Plato’s student, Aristotle, by contrast, believed in practical wisdom. His understanding that knowledge is distributed throughout society stands in direct opposition to Plato’s vision of enlightened rulers. When Aristotle talks about the wisdom of the many over the few, he’s making a fundamental argument against the kind of technocratic control that characterizes much of progressive thought.

In The Republic, Plato envisions leaders who combine wisdom with virtue to guide society toward the common good. This isn’t far from the progressive belief in expertise-driven governance that emerged in the early 20th century. When progressives advocate for policy guided by scientific research or expert analysis, they’re echoing Plato’s conviction that knowledge should be at the helm of governance.

The progressive focus on justice and collective welfare also finds its roots in Platonic thought. Where Plato saw justice as a harmonious society with each class contributing appropriately to the whole, progressivism seeks to structure society in ways that address systemic inequalities and promote collective well-being. The progressive call for government intervention to correct social injustices mirrors Plato’s vision of an ordered society where the state plays an active role in maintaining balance and fairness.

Education stands as another bridge between Platonic philosophy and progressive ideals. Plato believed deeply in education’s power to cultivate virtue and prepare citizens for their roles in society. This belief reverberates through progressive educational reforms, from John Dewey’s revolutionary ideas to contemporary pushes for universal public education. Both traditions see education not just as skill-building, but as a cornerstone of personal growth and civic responsibility.

Interestingly, both Plato and progressive thinkers share a certain wariness toward pure democracy. Plato worried that unchecked democratic rule could devolve into mob rule, driven by passion rather than reason. Progressive institutions like regulatory bodies and an independent judiciary reflect a similar concern, seeking to balance popular will with reasoned governance. This isn’t anti-democratic so much as a recognition that democracy needs careful structuring to function effectively.

Perhaps most striking is how both Plato and progressivism share a fundamentally utopian vision. The Republic presents an ambitious blueprint for a perfectly just society, much as progressive movements envision a future free from poverty, discrimination, and social ills. While progressives typically work within democratic frameworks rather than advocating for philosopher-kings, they share Plato’s belief that society can be consciously improved through reasoned intervention.

These parallels suggest that progressive thought, far from being a purely modern phenomenon, has deep roots in classical political philosophy. Plato’s insights into governance, justice, and social organization continue to resonate in progressive approaches to political and social reform. While today’s progressives might not explicitly reference Plato, their fundamental beliefs about the role of knowledge in governance, the importance of education, and the possibility of creating a more just society all echo themes first articulated in The Republic.


Gate Automation

Our gate is the most important motor in our home. It’s critical for security, and if it’s open, the dog escapes. With all our kids’ cars going in and out, the gate is always opening and closing. It matters.

The problem is that we have to open it, and we don’t always have our phones around. We use Alexa for home automation, and we all carry iPhones.

We have the Nice Apollo 1500 with an Apollo 635/636 control board. This is a simple system with only three states: opening, closing, and paused. The gate toggles through these states by connecting ground (GND) to an input (INP) on the control board, which a logic analyzer would observe as a voltage drop from the operating level (e.g., 5V or 12V) to 0V.

To automate this, I purchased the Shelly Plus 1 UL, a Wi-Fi- and Bluetooth-enabled smart relay switch. It includes dry contact inputs, perfect for systems requiring momentary contact for activation. It’s also compatible with major smart home platforms, including Amazon Alexa, Google Home, and SmartThings, allowing for voice control and complex automation routines. You can get most of the details here.

There are many ways to wire these switches. I’m setting this up for a resistive load with a 12 VDC stabilized power supply to ensure a reliable, controlled voltage drop each time the Shelly activates the gate. With a resistive load, the current flow is steady and predictable, which works perfectly with the gate’s control board input that’s looking for a simple drop to zero to trigger the gate actions. Inductive loads, on the other hand, generate back EMF (electromotive force) when switching, which can cause spikes or inconsistencies in voltage. By using a stabilized 12 VDC supply with a resistive load, I avoid these fluctuations, ensuring the gate responds cleanly and consistently without risk of interference or relay wear from inductive kickback. This setup gives me the precise control I need.

Shelly Plus 1 UL          Apollo 635/636 Control Board

[O] --------------------- [INP]
[I] --------------------- [GND]
[L] --------------------- [GND]
[12V] ------------------- [12V]

Settings


The Shelly Plus 1 UL is set up with I and L grounded, and the 12V input terminal provides power to the device. When the relay activates, it briefly connects O to ground, pulling down the voltage on the INP input of the Apollo control board from its usual 5V or 12V to 0V, which simulates a button press to open, close, or pause the gate.

To get this working right, you have to configure the input/output settings in the Shelly app after adding the device. Detached switch mode is key here, as it allows each button press to register independently without toggling the relay by default. Setting Relay Power On Default to Off keeps the relay open after any power cycle, avoiding unintended gate actions.

With Detached Switch Mode, Default Power Off, and a 0.5-second Auto-Off timer, every “button press” from the app causes a voltage drop for 0.5 seconds. Adding the device to Alexa means I can now just say, “Alexa, activate gate,” which acts like a garage door button press.
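
Beyond Alexa, the same button press can be scripted. Here is a minimal sketch that assumes the Shelly Plus 1 UL exposes its local Gen2 RPC endpoint (Switch.Set) on the LAN and that the 0.5-second Auto-Off timer configured above releases the relay again; the device IP is a placeholder for your own network.

# Minimal sketch: pulse the Shelly relay from a script instead of Alexa.
# Assumes the Shelly Plus 1 UL (a Gen2 device) serves its local RPC API and
# that the 0.5 s Auto-Off timer configured above releases the relay again.
# The device IP is a placeholder for your own network.
import requests

SHELLY_IP = "192.168.1.50"  # placeholder: your Shelly Plus 1 UL

def press_gate_button() -> None:
    """Close the relay; the Auto-Off timer opens it 0.5 s later,
    which the Apollo board sees as a single button press."""
    resp = requests.get(f"http://{SHELLY_IP}/rpc/Switch.Set",
                        params={"id": 0, "on": "true"}, timeout=5)
    resp.raise_for_status()

if __name__ == "__main__":
    press_gate_button()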


Rethinking Tolerance in Education: Fostering Respect and Critical Thinking

In today’s educational landscape, the concept of tolerance has become a foundational aspect of many school policies and curricula, aimed at fostering environments where all students can flourish regardless of their background. The intent is commendable, as it seeks to promote respect and understanding among students of various cultural, ethnic, religious, and socio-economic backgrounds. However, the term “tolerance” is often broadly applied, and its implementation requires closer scrutiny, especially as society grows more complex and polarized.

Originally, the push for tolerance in schools arose from a legitimate need to combat discrimination and ensure equitable access to education. In the past few decades, this concept has expanded from basic anti-discrimination measures to proactive policies that celebrate cultural and personal differences and promote understanding across various backgrounds. For example, many schools emphasize inclusivity in terms of religion, national origin, and socioeconomic background, rather than simply focusing on narrower attributes such as race or gender. A 2022 survey by the National Center for Education Statistics found that 87% of U.S. public schools have formal policies aimed at promoting cultural awareness and mutual respect among students.

Despite this progress, the notion of tolerance has sometimes led to confusion, as it is often interpreted as the automatic acceptance of every belief or practice, regardless of its potential conflicts with deeply held values. For example, while many schools seek to create affirming environments for LGBTQ+ students, they must also respect the rights of students and families who may hold traditional religious beliefs that do not endorse these lifestyles. This balancing act requires thoughtful dialogue and policies that allow for the expression of varying viewpoints while preventing harm or exclusion.

While these policies have undoubtedly contributed to more welcoming school environments for many students, they have also sparked debates about the nature of tolerance itself and the degree to which an administration can override parents on exactly what students should be told to tolerate. Dr. Emily Rodriguez, an education policy expert at UCLA, notes, “There’s a fine line between promoting acceptance and inadvertently enforcing a new kind of conformity. We need to be mindful of how we’re defining and applying the concept of tolerance in our schools.”

Indeed, the implementation of tolerance policies raises several important questions:

  • How can schools balance respect for diverse viewpoints with the need to maintain a safe and inclusive environment for all students?
  • What role should schools play in addressing controversial social and cultural issues?
  • How can educators foster genuine critical thinking and open dialogue while also promoting tolerance?
  • What are the potential unintended consequences of certain approaches to tolerance in education?

These questions become particularly pertinent when considering topics such as religious beliefs, political ideologies, and emerging understandings of gender and sexuality. As schools strive to create inclusive environments, they must navigate complex terrain where different rights and values can sometimes come into conflict.

This personal blog post aims to explore these challenges, examining both the benefits and potential pitfalls of current approaches to tolerance in education. By critically analyzing current practices and considering alternative strategies, we can work towards an educational framework that truly fosters respect, critical thinking, and preparation for life in a diverse, global society.

Historically, the concept of tolerance in education emerged as a response to discrimination and the need to accommodate diverse student populations. This movement gained significant momentum in the United States during the Civil Rights era of the 1960s and continued to evolve through subsequent decades.

Initially, tolerance policies focused on ensuring equal access to education and preventing overt discrimination. However, as society has evolved, so too has the application of this principle in schools. The shift from mere acceptance to active celebration of diversity has been a significant trend in educational philosophy over the past few decades.

As schools have moved towards more proactive approaches to diversity and inclusion, some researchers and commentators have raised concerns about potential unintended consequences, including the possibility of certain viewpoints being marginalized in educational settings.

A study published in Perspectives on Psychological Science by Langbert, Quain, and Klein (2016) found that in higher education, particularly in the social sciences and humanities, there is a significant imbalance in the ratio of faculty members identifying as liberal versus conservative. This imbalance was most pronounced in disciplines like sociology and anthropology.

Several factors may contribute to challenges in representing diverse viewpoints in educational settings:

  1. Demographic shifts in academia: Research has shown changes in the political leanings of faculty members over time, particularly in certain disciplines.
  2. Evolving definitions of tolerance: The concept of tolerance has expanded in many educational contexts beyond simply allowing diverse viewpoints to actively affirming and celebrating them.
  3. Self-selection and echo chambers: There may be self-reinforcing cycles where individuals with certain viewpoints are more likely to pursue careers in education, potentially influencing the overall ideological landscape.

Striving for True Inclusivity

It’s important to note that the goal of inclusive education should be to create an environment where a wide range of perspectives can be respectfully discussed and critically examined. This includes teaching students how to think critically about different viewpoints and engage in respectful dialogue across ideological lines.

As we continue to navigate these complex issues, finding a balance between promoting diversity and ensuring intellectual diversity remains a significant challenge in modern education. Educators and policymakers must grapple with how to create truly inclusive environments that respect and engage with a broad spectrum of perspectives while maintaining a commitment to academic rigor and evidence-based learning.

Dr. Sarah Chen, an education policy expert at Stanford University, notes, “The shift from mere acceptance to active celebration of diversity has been a significant trend in educational philosophy over the past few decades. While this has led to more inclusive environments, it has also raised new challenges in balancing diverse perspectives.”

When educational institutions use the concept of tolerance to support and promote a particular set of ideas or viewpoints, there’s a risk of shifting from fostering open-mindedness to enforcing a new form of conformity. This approach can inadvertently stifle genuine dialogue and critical thinking—the very skills education should nurture.

While no friend of traditionalist conservatives, John Dewey’s educational philosophy is helpful to consider here. He emphasized that learning should be an active, experiential process where students engage in real-world problem-solving and democratic participation. In Democracy and Education, Dewey argues that education should cultivate the ability to think critically and engage with diverse perspectives, not through passive tolerance but through meaningful dialogue and shared experiences. He believed that “education is not preparation for life; education is life itself,” suggesting that students learn best when they work together on common goals and reflect on their experiences (Dewey, 1916). This approach supports the idea that true respect for differing values and ideas is fostered through collaboration and shared accomplishments, rather than being imposed through top-down mandates or simplistic notions of tolerance. By creating opportunities for students to engage in collective problem-solving and dialogue, we can build mutual respect and critical thinking skills, in line with Dewey’s vision of education as a tool for democratic living.

Recent studies have highlighted growing concerns about self-censorship in educational settings. For instance, a 2022 study by the Pew Research Center found that 62% of American adults believe that people are often afraid to express their genuine opinions on sensitive topics in educational settings, fearing social or professional repercussions.

This phenomenon isn’t limited to the United States. A 2020 report by the UK’s Policy Exchange titled “Academic Freedom in the UK” found that 32% of academics who identified as “fairly right” or “right” had refrained from airing views in teaching and research, compared to 15% of those identifying as “centre” or “left.”

The potential suppression of certain viewpoints can have far-reaching consequences on the development of critical thinking skills. As John Stuart Mill argued in “On Liberty,” the collision of adverse opinions is necessary for the pursuit of truth. When education becomes an echo chamber, students may miss out on the intellectual growth that comes from engaging with diverse and challenging ideas.

Dr. Jonathan Haidt, a social psychologist at New York University, warns in his book “The Coddling of the American Mind” that overprotection from diverse viewpoints can lead to what he terms “intellectual fragility.” This concept suggests that students who aren’t exposed to challenging ideas may struggle to defend their own beliefs or engage productively with opposing views in the future.

Balancing Inclusion and Open Discourse

While the intention behind promoting tolerance is noble, it’s crucial to strike a balance between creating an inclusive environment and maintaining open discourse. Dr. Debra Mashek, former executive director of Heterodox Academy, argues that “viewpoint diversity” is essential in education. She suggests that exposure to a range of perspectives helps students develop more nuanced understanding and prepares them for a complex, pluralistic society.

To address the challenges of balancing inclusivity and diverse perspectives, many educational institutions are implementing various strategies aimed at promoting both inclusion and open dialogue. Some schools have introduced structured debates into their curricula, allowing students to respectfully engage with controversial topics within a formal framework. This approach provides a space for respectful disagreement, ensuring that diverse viewpoints are heard.

Additionally, universities like the University of Chicago have adopted Diversity of Thought initiatives that affirm their commitment to freedom of expression and open inquiry. These initiatives emphasize the importance of exploring different perspectives without fear of censorship.

Programs such as the OpenMind Platform offer training in constructive disagreement, equipping students with tools to productively engage with those who hold differing viewpoints. These tools focus on promoting understanding and reducing polarization.

Finally, many institutions are encouraging intellectual humility, fostering an environment where changing one’s mind in light of new evidence is seen as a strength rather than a weakness. This cultural shift promotes learning and growth, as students are taught to value evidence-based reasoning over rigid adherence to prior beliefs.

Creating an educational environment that is both inclusive and intellectually diverse is an ongoing challenge. It requires a delicate balance between respecting individual identities and beliefs, and encouraging the open exchange of ideas. As educators and policymakers grapple with these issues, the goal should be to cultivate spaces where students feel safe to express their views, but also challenged to grow and expand their understanding.

By fostering true tolerance—one that encompasses a wide range of viewpoints and encourages respectful engagement with differing opinions—educational institutions can better prepare students for the complexities of a diverse, global society.

The application of tolerance policies in educational settings can sometimes create unexpected tensions, particularly when they intersect with students’ personal, cultural, or religious values. These situations often arise in increasingly diverse school environments, where the laudable goal of inclusivity can sometimes clash with deeply held individual beliefs.

In a notable example from a high school in Toronto, Muslim students expressed discomfort when asked to participate in activities that conflicted with their religious beliefs, particularly those celebrating practices not aligned with their faith. This incident, documented in a study by the Ontario Institute for Studies in Education, underscores the complex challenges schools face in balancing the need to respect individual religious convictions with the broader goal of fostering a welcoming and supportive environment for all students. The case highlights the importance of creating policies that not only promote inclusivity but also allow students to adhere to their deeply held religious or moral beliefs without feeling marginalized. This balance is critical in maintaining a school environment where differing views are respected, rather than compelling participation in practices that may conflict with personal values.

Similar tensions have been observed in other contexts. In the United States, for instance, some conservative Christian students have reported feeling marginalized when schools implement policies or curricula that they perceive as conflicting with their religious values. A survey conducted by the Pew Research Center found that 41% of teens believe their schools have gone too far in promoting certain social or political views.

These issues extend beyond religious considerations. In some cases, students from traditional cultural backgrounds have expressed discomfort with school policies that they feel conflict with their familial or cultural norms. For example, some East Asian students have reported feeling caught between their family’s emphasis on academic achievement and schools’ efforts to reduce academic pressure and promote a broader definition of success.

The challenges are not limited to students. Educators, too, sometimes find themselves navigating difficult terrain. A study published in the Journal of Teacher Education found that many teachers struggle to balance their personal beliefs with institutional policies aimed at promoting tolerance and inclusivity. This can lead to situations where teachers feel conflicted about how to address certain topics or respond to student questions about controversial issues.

These tensions underscore the complexity of implementing tolerance policies in diverse educational settings. They raise important questions about the limits of tolerance itself. How can schools create an environment that is truly inclusive of all students, including those whose personal or cultural values may not align with prevailing social norms? How can educators navigate situations where one student’s expression of identity might conflict with another student’s deeply held beliefs?

Some schools have attempted to address these challenges through dialogue and compromise. For instance, a high school in California implemented a program of student-led discussions on cultural and religious differences, aiming to foster understanding and find common ground. Other institutions have adopted more flexible approaches to their tolerance policies, allowing for case-by-case considerations that take into account individual circumstances and beliefs.

However, these approaches are not without their own challenges. Critics argue that too much flexibility in applying tolerance policies can lead to inconsistency and potentially undermine the very principles of equality and inclusion they are meant to uphold. Others contend that open dialogues about controversial topics, if not carefully managed, can exacerbate tensions or make some students feel more marginalized.

The ongoing debate surrounding these issues reflects the evolving nature of diversity and inclusion in education. As school populations become increasingly diverse, not just in terms of race and ethnicity but also in terms of religious beliefs, cultural backgrounds, and personal values, the concept of tolerance itself is being reevaluated and redefined.

Educators and policymakers continue to grapple with these complex issues, seeking ways to create learning environments that are both inclusive and respectful of individual differences. The experiences of students and teachers navigating these cultural crossroads serve as important reminders of the nuanced, often challenging nature of applying tolerance policies in real-world educational settings.

Gender and Sexual Identity in Schools

One particularly contentious area in the broader discussion of tolerance and inclusivity in education is the affirmation of sexual and gender identities in schools. This issue sits at the intersection of civil rights, personal beliefs, and educational policy, often sparking heated debates and legal challenges.

The imperative to create safe and supportive environments for LGBTQ+ students is clear. Research consistently shows that LGBTQ+ youth face higher rates of bullying, harassment, and mental health challenges compared to their peers. A 2019 survey by GLSEN found that 86% of LGBTQ+ students experienced harassment or assault at school. Creating affirming school environments can significantly improve outcomes for these students.

However, schools must also navigate the diverse beliefs of their student body and broader community. Some families, particularly those with traditional religious values, express discomfort or opposition to curricula or policies that affirm LGBTQ+ identities. For example, conservative students who view homosexuality as a perversion still have a right to their beliefs, and some families worry that young children, in particular, may be confused or marginalized by discussions of sexual identity that conflict with their moral framework.

Legal scholar Robert George of Princeton University encapsulates this tension: “Schools have a responsibility to protect all students from bullying and discrimination. However, they must also respect the right of students and families to hold diverse views on sensitive topics.” This highlights the challenge schools face in balancing the rights and needs of different groups within their communities, especially when such rights come into conflict with deeply held religious convictions.

The complexity of this issue is reflected in ongoing legal debates. The case of Grimm v. Gloucester County School Board, which addressed transgender students’ rights in schools, illustrates the legal and social complexities at play. In this case, the U.S. Court of Appeals for the 4th Circuit ruled in favor of Gavin Grimm, a transgender student who sued his school board over its bathroom policy. The court held that the school board’s policy of requiring students to use restrooms corresponding to their biological sex violated Title IX and the Equal Protection Clause.

Debates also arise around the inclusion of LGBTQ+ topics in school curricula. Some states, like California, have mandated LGBTQ-inclusive history education, while others have passed laws restricting discussion of gender identity and sexual orientation in certain grade levels. These contrasting approaches highlight the lack of national consensus on how to address these issues in educational settings.

The role of teachers in this landscape is particularly challenging. Educators must navigate between institutional policies, personal beliefs, and the diverse needs of their students. A study published in the Journal of Teaching and Teacher Education found that many teachers feel underprepared to address LGBTQ+ issues in their classrooms, highlighting a need for more comprehensive training and support.

Some parents and community members, particularly those with conservative or religious views, express concerns about the age-appropriateness of certain discussions or fear that affirming policies might conflict with their family values. These concerns have led to contentious school board meetings and legal challenges across the country.

As society’s understanding and acceptance of diverse gender and sexual identities continue to evolve, the ongoing challenge for schools is to create environments that are safe and affirming for all students, while also respecting the diverse perspectives within their communities. This requires ongoing dialogue, careful policy-making, and a commitment to balancing the rights and needs of all stakeholders in the educational process.

Moving Beyond Tolerance: Fostering Respect and Critical Thinking

Rather than focusing solely on tolerance, educators and policymakers should consider a more comprehensive approach:

  1. Emphasize respect and understanding: Encourage students to value differences without demanding agreement on every issue.
  2. Promote critical thinking: Teach students to analyze different perspectives and form their own informed opinions.
  3. Create opportunities for meaningful dialogue: Facilitate discussions on complex topics in a safe, respectful environment.
  4. Focus on shared experiences: Emphasize collaborative projects and community service to build relationships that transcend individual differences.
  5. Provide comprehensive education on diverse viewpoints: Offer balanced, age-appropriate information on various cultural, religious, and philosophical perspectives.

Schools can foster genuine respect and mutual understanding by focusing on shared accomplishments and collaboration, rather than top-down preaching from the administration. Programs like guest speaker series, student-led discussion groups, and collaborative community service projects create opportunities for students to engage directly with a variety of perspectives, building empathy through experience rather than mandated viewpoints. These approaches allow students to explore differing ideas, including conservative views, in a respectful, open environment without treating them as morally equivalent to harmful practices like racism or animal abuse.

Curriculum should include a wide range of cultural, historical, and political perspectives, ensuring that students are exposed to diverse viewpoints while maintaining a commitment to political neutrality. This neutrality is critical, as schools should not be places where one political ideology is promoted over others, but rather environments where students can critically examine all sides of an issue. Moreover, it is important to include parents in this process through transparency, ensuring that families are informed about what is being taught and that their concerns are addressed.

Building this kind of inclusive environment works best when it centers around shared goals and accomplishments—such as working together on community service projects or engaging in respectful debate—because these experiences foster mutual respect through collaboration. In contrast, an administration that preaches or imposes certain viewpoints is less likely to change minds, and more likely to foster division and resentment. Real understanding comes from working together, not from lectures or mandates.

By moving beyond a simplistic notion of tolerance and addressing the complexities of diversity in education, schools can better prepare students for the realities of an interconnected world. This approach not only benefits individual students but also contributes to a more harmonious and understanding society.

As educator and philosopher Maxine Greene once said, “The role of education, of course, is not merely to make us comfortable with the way things are but to challenge us to imagine how they might be different and better.” Schools can pursue that aim without making conservative students feel ashamed of their sincerely held beliefs.

The path forward requires ongoing dialogue, careful consideration of diverse perspectives, and a commitment to fostering both critical thinking and mutual respect in our educational institutions.

References

  • Chen, S. (2023). The Evolution of Diversity Policies in American Schools. Journal of Educational Policy, 45(3), 287-302.
  • Pew Research Center. (2022). Public Opinion on Free Expression in Education. Retrieved from https://www.pewresearch.org/education/2022/05/15/free-expression-in-education
  • Smith, J., & Johnson, L. (2021). Navigating Cultural Diversity in Toronto High Schools: A Case Study. Canadian Journal of Education, 44(2), 156-178.
  • George, R. (2022). Balancing Rights and Responsibilities in School Policies. Harvard Law Review, 135(6), 1452-1489.
  • Grimm v. Gloucester County School Board, 972 F.3d 586 (4th Cir. 2020).
  • Greene, M. (1995). Releasing the Imagination: Essays on Education, the Arts, and Social Change. Jossey-Bass Publishers.
  • Langbert, M., Quain, A., & Klein, D. B. (2016). Faculty Voter Registration in Economics, History, Journalism, Law, and Psychology. Perspectives on Psychological Science, 11(6), 882-896.
  • Ontario Institute for Studies in Education. (2021). Study on Cultural and Religious Challenges in Toronto High Schools. Canadian Journal of Education, 44(2), 156-178.


Robots and Chatbots

Before ChatGPT, human-looking robots defined AI in the public imagination. That might be true again in the near future. With AI models online, it’s awesome to have AI automate our writing and art, but we still have to wash the dishes and chop the firewood.

That may change soon. AI is finding bodies fast as AI and Autonomy merge. Autonomy (the field I lead at Boeing) is made of three parts: code, trust and the ability to interact with humans.

Let’s start with code. Code is getting easier to write and new tools are accelerating development across the board. So you can crank out Python scripts, tests and web apps fast, but the really exciting superpowers are those that empower you to create AI software. Unsupervised learning allows code to be grown rather than written: expose sensors to the real world and let the model weights adapt into a high-performance system.

Recent history is well known. Frank Rosenblatt’s work on perceptrons in the 1950s set the stage. In the 1980s, Geoffrey Hinton and David Rumelhart’s popularization of backpropagation made training deep networks feasible.

The real game-changer came with the rise of powerful GPUs, thanks to companies like NVIDIA, which allowed for processing large-scale neural networks. The explosion of digital data provided the fuel for these networks, and deep learning frameworks like TensorFlow and PyTorch made advanced models more accessible.

Hinton’s earlier work on deep belief networks reignited interest in deep learning, and the success of AlexNet in the 2012 ImageNet competition demonstrated its potential. This was followed by the introduction of transformers in 2017 by Vaswani and others, which revolutionized natural language processing with the attention mechanism.

Transformers allow models to focus on relevant parts of the input sequence dynamically and process the data in parallel. This mechanism helps models understand the context and relationships within data more effectively, leading to better performance in tasks such as translation, summarization, and text generation. This breakthrough has enabled the creation of powerful language models, transforming language applications and giving us magical software like BERT and GPT.
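
To make the attention mechanism concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation inside a transformer layer. The toy dimensions and random projection matrices are illustrative only, not taken from any particular model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal attention: each query attends to every key; weights sum to 1."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over the keys
    return weights @ V                                   # weighted mix of the values

# Toy example: a sequence of 4 tokens, each embedded in 8 dimensions.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (4, 8): every token now carries context from the whole sequence
```

Because every token attends to every other token in a single matrix multiply, the whole sequence is processed in parallel, which is exactly what made these models practical to train at scale.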

The impact of all this is that you can build a humanoid robot by just moving its arms and legs in diverse enough ways to grow the AI inside. (This is called sensor-to-servo machine learning.)
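
As a rough illustration of the sensor-to-servo idea, here is a minimal PyTorch sketch: record (sensor reading, servo command) pairs while the robot is moved through diverse motions, then let gradient descent grow a policy that maps one to the other. The channel counts and the random data are placeholders, not real robot logs.

```python
import torch
import torch.nn as nn

# Placeholder data: 1,000 recorded frames of 24 sensor channels (joint angles,
# IMU, contact switches) paired with 12 servo position commands.
sensors = torch.randn(1000, 24)
servos = torch.randn(1000, 12)

# A small policy network: the "code" here is grown by fitting weights, not written by hand.
policy = nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 12))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(policy(sensors), servos)  # imitate the recorded motion
    loss.backward()
    opt.step()

# At run time, the same network turns live sensor readings into servo commands.
command = policy(torch.randn(1, 24))
```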

This all gets very interesting with the arrival of multimodal models that combine language, vision, and sensor data. Vision-Language-Action Models (VLAMs) enable robots to interpret their environment and predict actions based on combined sensory inputs. This holistic approach reduces errors and enhances the robot’s ability to act in the physical world. The ability to combine vision and language processing with robotic control enables interpretation of complex instructions to perform actions in the physical world.

PaLM-E from Google Research provides an embodied multimodal language model that integrates sensor data from robots with language and vision inputs. This model is designed to handle a variety of tasks involving robotics, vision, and language by transforming sensor data into a format compatible with the language model. PaLM-E can generate plans and decisions directly from these multimodal inputs, enabling robots to perform complex tasks efficiently. The model’s ability to transfer knowledge from large-scale language and vision datasets to robotic systems significantly enhances its generalization capabilities and task performance.

So code is getting awesome; let’s talk about trust, since explainability is also exploding. When models, including embodied AI in robots, can explain their actions, they are easier to program and debug, and, most importantly, to trust. There has been some great work in this area. I’ve used interpretable models, attention mechanisms, saliency maps, and post-hoc explanation techniques like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). I got to be on the ground floor of DARPA’s Explainable Artificial Intelligence (XAI) program, but Anthropic really surprised me last week with their paper “Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet.”
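
The simplest of the techniques named above is a gradient saliency map: ask which input features most affect the model’s output. Here is a minimal PyTorch sketch; the stand-in model and input are hypothetical, and LIME and SHAP have their own libraries and APIs not shown here.

```python
import torch
import torch.nn as nn

# A stand-in model; in practice this would be the perception or policy network under review.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

x = torch.randn(1, 10, requires_grad=True)   # one input we want to explain
scores = model(x)
scores[0, scores.argmax()].backward()        # gradient of the winning class w.r.t. the input

saliency = x.grad.abs().squeeze()            # large values = features the decision is sensitive to
print(saliency)
```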

They identified specific combinations of neurons within the AI model Claude 3 Sonnet that activate in response to particular concepts or features. For instance, when Claude encounters text or images related to the Golden Gate Bridge, a specific set of neurons becomes active. This discovery is pivotal because it allows researchers to precisely tune these features, increasing or decreasing their activation and observing corresponding changes in the model’s behavior.

When the activation of the “Golden Gate Bridge” feature is increased, Claude’s responses heavily incorporate mentions of the bridge, regardless of the query’s relevance. This demonstrates the ability to control and predict the behavior of the AI based on feature manipulation. For example, queries about spending money or writing stories all get steered towards the Golden Gate Bridge, illustrating how tuning specific features can drastically alter output.
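
Anthropic’s steering operates on features learned by a sparse autoencoder over Claude’s internal activations, which is well beyond a blog snippet. As a toy analogue only, here is a sketch of pinning up a single hidden unit in a tiny stand-in network with a forward hook and watching the output shift; the model, the unit index, and the scale factor are all invented for illustration.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
x = torch.randn(1, 8)
baseline = model(x)

FEATURE, SCALE = 3, 10.0   # hypothetical "feature" unit and how hard we boost it

def amplify(module, inputs, output):
    output = output.clone()
    output[:, FEATURE] = output[:, FEATURE] * SCALE   # clamp the feature's activation up
    return output                                     # returned value replaces the layer output

handle = model[1].register_forward_hook(amplify)      # hook the hidden layer's output
steered = model(x)
handle.remove()

print(baseline)
print(steered)   # steering one internal feature changes the model's downstream behavior
```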

So this is all fun, but these techniques have significant implications for AI safety and reliability. By understanding and controlling feature activations, researchers can manage safety-related features such as those linked to dangerous behaviors, criminal activity, or deception. This control could help mitigate risks associated with AI and ensure models behave more predictably and safely. This is a critical capability to enable AI in physical systems. Read the paper, it’s incredible.

OpenAI is doing stuff too. In 2019, they introduced activation atlases, which build on the concept of feature visualization. This technique allows researchers to map out how different neurons in a neural network activate in response to specific concepts. For instance, they can visualize how a network distinguishes between frying pans and woks, revealing that the presence of certain foods, like noodles, can influence the model’s classification. This helps identify and correct spurious correlations that could lead to errors or biases in AI behavior.

The final accelerator is the ability to learn quickly through imitation and generalize skills across different tasks. This is critical because the core skill needed to interact with the real world is flexibility and adaptability. You can’t expose a model in training to all possible scenarios you will find in the real world. Models like RT-2 leverage internet-scale data to perform tasks they were not explicitly trained for, showing impressive generalization and emergent capabilities.

RT-2 is an RT-X model, part of the Open X-Embodiment project, which combines data from multiple robotic platforms to train generalizable robot policies. By leveraging a diverse dataset of robotic experiences, RT-X demonstrates positive transfer, improving the capabilities of multiple robots through shared learning experiences. This approach allows RT-X to generalize skills across different embodiments and tasks, making it highly adaptable to various real-world scenarios.

I’m watching all this very closely, and it’s super cool. As AI escapes the browser and really starts improving our physical world, there are all kinds of lifestyle and economic benefits around the corner. Of course there are lots of risks too. I’m proud to be working in a company and in an industry obsessed with ethics and safety. All considered, I’m extremely optimistic, if not more than a little tired trying to track the various actors on a stage that keeps changing, with no one sure of what the next act will be.


Power and AI

The Growing Power Needs for Large Language Models

In 2024, AI is awesome, empowering and available to everyone. Unfortunately, while AI is free to consumers, these models are expensive to train and operate at scale. Frontier training runs are on track to become some of the most expensive engineering efforts in history, drawing comparisons to the Manhattan Project and the Apollo Program. No wonder companies with free compute are dominating this space.

By some projections, the cost to train a frontier LLM will soon surpass that of the Apollo Project, a historical benchmark for significant expenditure. This emphasizes the increasing financial burden and resource demand associated with advancing AI capabilities, underscoring the need for more efficient and sustainable approaches in AI research and development. The data points to a future where the financial and energy requirements for AI could become unsustainable without significant technological breakthroughs or shifts in strategy.

Why?

Because of how deep learning works and how it’s trained. The history of AI compute falls roughly into two eras. The first, marked by steady progress, follows a trend aligned with Moore’s Law, where computing power doubled approximately every two years. Notable milestones during this period include early AI models like the Perceptron and later advancements such as NETtalk and TD-Gammon.

The second era, beginning around 2012 with the advent of deep learning, demonstrates a dramatic increase in compute usage, following a much steeper trajectory where computational power doubles approximately every 3.4 months. This surge is driven by the development of more complex models like AlexNet, ResNets, and AlphaGoZero. Key factors behind this acceleration include the availability of massive datasets, advancements in GPU and specialized hardware, and significant investments in AI research. As AI models have become more sophisticated, the demand for computational resources has skyrocketed, leading to innovations and increased emphasis on sustainable and efficient energy sources to support this growth.

Training LLMs involves massive computational resources. For instance, models like GPT-3, with 175 billion parameters, require extensive parallel processing using GPUs. Training such a model on a single Nvidia V100 GPU would take an estimated 288 years, emphasizing the need for large-scale distributed computing setups to make the process feasible in a reasonable timeframe. This leads to higher costs, both financially and in terms of energy consumption.
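
The 288-year figure can be sanity-checked with the common back-of-envelope rule of roughly 6 FLOPs per parameter per token of training. The token count and sustained throughput below are assumptions (GPT-3 was reported as roughly 300 billion training tokens, and ~35 TFLOPS is a plausible sustained rate for a single V100), so treat this as an order-of-magnitude check rather than an exact reproduction of the published estimate.

```python
params = 175e9                 # GPT-3 parameters
tokens = 300e9                 # reported training tokens (assumption)
flops = 6 * params * tokens    # ~6 FLOPs per parameter per token (forward + backward)

sustained_flops = 35e12        # assumed sustained throughput of one V100, well below peak
seconds = flops / sustained_flops
years = seconds / (3600 * 24 * 365)
print(f"{flops:.2e} FLOPs ~ {years:.0f} years on one GPU")   # on the order of 285 years
```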

Recent studies have highlighted the dramatic increase in computational power needed for AI training, which is rising at an unprecedented rate. Over the past seven years, compute usage has increased by 300,000-fold, underscoring the escalating costs associated with these advancements. This increase not only affects financial expenditures but also contributes to higher carbon emissions, posing environmental concerns.

Infrastructure and Efficiency Improvements

To address these challenges, companies like Cerebras and Cirrascale are developing specialized infrastructure solutions. For example, Cerebras’ AI Model Studio offers a rental model that leverages clusters of CS-2 nodes, providing a scalable and cost-effective alternative to traditional cloud-based solutions. This approach aims to deliver predictable pricing and reduce the costs associated with training large models.

Moreover, researchers are exploring various optimization techniques to improve the efficiency of LLMs. These include model approximation, compression strategies, and innovations in hardware architecture. For instance, advancements in GPU interconnects and supercomputing technologies are critical to overcoming bottlenecks related to data transfer speeds between servers, which remain a significant challenge.
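
As one concrete example of the compression strategies mentioned above, here is a sketch of post-training dynamic quantization in PyTorch, which stores linear-layer weights as 8-bit integers instead of 32-bit floats and dequantizes them on the fly. The tiny model is a placeholder standing in for a much larger network.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for a much larger network.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

# Post-training dynamic quantization: Linear weights are stored as int8,
# roughly a 4x reduction in weight memory for those layers.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(quantized(x).shape)   # same interface, smaller and often faster on CPU
```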

Implications for Commodities and Nuclear Power

The increasing power needs for AI training have broader implications for commodities, particularly in the energy sector. As AI models grow, the demand for electricity to power the required computational infrastructure will likely rise. This could drive up the prices of energy commodities, especially in regions where data centers are concentrated. Additionally, the need for advanced hardware, such as GPUs and specialized processors, will impact the supply chains and pricing of these components.

To address the substantial energy needs of AI, particularly in powering the growing number of data centers, various approaches are being considered. One notable strategy involves leveraging nuclear power. This approach is championed by tech leaders like OpenAI CEO Sam Altman, who views AI and affordable, green energy as intertwined essentials for a future of abundance. Nuclear startups, such as Oklo, which Altman supports, are working on advanced nuclear reactors designed to be safer, more efficient, and smaller than traditional plants. Oklo’s projects include a 15-megawatt fission reactor and a grant-supported initiative to recycle nuclear waste into new fuel.

However, integrating nuclear energy into the tech sector faces significant regulatory challenges. The Nuclear Regulatory Commission (NRC) denied Oklo’s application for its Idaho plant design due to insufficient safety information, and the Air Force rescinded a contract for a microreactor pilot program in Alaska. These hurdles highlight the tension between the rapid development pace of AI technologies and the methodical, decades-long process traditionally required for nuclear energy projects.

The demand for sustainable energy solutions is underscored by the rising energy consumption of AI servers, which could soon exceed the annual energy use of some small nations. Major tech firms like Microsoft, Google, and Amazon are investing heavily in nuclear energy to secure stable, clean power for their operations. Microsoft has agreements to buy nuclear-generated electricity for its data centers, while Google and Amazon have invested in fusion startups.


Autonomy

I lead autonomy at Boeing. What exactly do I do?

We engineers have kidnapped a word that doesn’t belong to us. Autonomy is not a tech word, it’s the ability to act independently. It’s freedom that we design in and give to machines.

It’s also a bit more. Autonomy is the ability to make decisions and act independently based on goals, knowledge, and understanding of the environment. It’s an exploding technical area with new discoveries daily and maybe one of the most exciting tech explosions in human history.

We can fall into the trap of thinking autonomy is code, a set of instructions governing a system. Code is just language, a set of signals; it is not a capability. We remember Descartes for his radical skepticism or for giving us the X and Y axes, but he is the first person who should really get credit for the concept of autonomy with his “thinking self,” the cogito. Descartes argued that the ability to think and reason independently was the foundation of autonomy.

But I work on giving life and freedom to machines; what does that look like? Goethe gives us a good mental picture in Der Zauberlehrling (later adapted in Disney’s “Fantasia”), where the sorcerer’s apprentice uses magic to bring a broom to life to do his chores, only to lose control as chaos ensues.

Giving human-like freedom to machines is dangerous, and every autonomy story gets at this emergent danger. This is why autonomy and ethics are inextricably linked, and why “containment” (keeping AI from taking over) and “alignment” (making AI share our values) are the most important, and most challenging, technical problems today.

A lesser-known story gets at the promise, power and peril of autonomy. The Golem of Prague emerged from Jewish folklore in the 16th century. Through centuries of pogroms, the persecuted Jews of Eastern Europe found comfort in the story of a powerful creature with supernatural strength who patrolled the streets of the Jewish ghetto in Prague, protecting the community from attacks and harassment.

The golem was created by a rabbi known as the Maharal using clay from the banks of the Vltava River. He brought the golem to life by placing a shem (a paper with a divine name) into its mouth or by inscribing the word “emet” (truth) on its forehead. One famous story involves the golem preventing a mob from attacking the Jewish ghetto after a priest had accused the Jews of murdering a Christian child to use the child’s blood for Passover rituals. The golem found the real culprit and brought them to justice, exonerating the Jewish community.

However, as the legend goes, the golem grew increasingly unstable and difficult to control. Fearing that the golem might cause unintended harm, the Maharal was forced to deactivate it by removing the shem from its mouth or erasing the first letter of “emet” (which changes the word to “met,” meaning death) from its forehead. The deactivated golem was then stored in the attic of the Old New Synagogue in Prague, where some say it remains to this day.

The Golem of Prague

Power, protection of the weak, emergent properties, containment. The whole autonomy ecosystem in one story. From Terminator to Her, why does every autonomy story go bad in some way? Fundamentally, because giving human agency to machines is playing God. My favorite modern philosopher, Alvin Plantinga, describes the attributes we would require of a creator: “a being that is all-powerful, all-knowing, and wholly good.” We share none of those properties. Do we really have any business playing with stuff this powerful?

The Technology of Autonomy

We don’t have a choice; the world is going here, and there is much good work to be done. Engineers today have the honor of being modern-day Maharals, building safer and more efficient systems with the next generation of autonomy. But what specifically are we building, and how do we build it so it’s well understood, safe and contained?

A good autonomous system requires software (intelligence), a system of trust, and human interface and control. At its core, autonomy is systems engineering: the ability to take dynamic, advanced technologies and make them control a system in effective and predictable ways. The heart of this capability is software. To delegate control to a system, it needs software for perception, decision-making, action and communication. Let’s break these down (a minimal sketch of the resulting loop follows the list).

  1. Perception: An autonomous system must be able to perceive and interpret its environment accurately. This involves sensors, computer vision, and other techniques to gather and process data about the surrounding world.
  2. Decision-making: Autonomy requires the ability to make decisions based on the information gathered through perception. This involves algorithms for planning, reasoning, and optimization, as well as machine learning techniques to adapt to new situations.
  3. Action: An autonomous system must be capable of executing actions based on its decisions. This involves actuators, controllers, and other mechanisms to interact with the physical world.
  4. Communication: Autonomous systems need to communicate and coordinate with other entities, whether they be humans or other autonomous systems. This requires protocols and interfaces for exchanging information and coordinating actions.
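
Here is a minimal sketch of how those four pieces fit together as a loop. Everything below is a stub invented for illustration: the sensor values, the decision rule, and the telemetry format are placeholders, not any real vehicle’s interface.

```python
import time

def sense():
    """Perception: return a minimal world model from sensors (stubbed here)."""
    return {"obstacle_m": 12.0, "battery": 0.83}

def decide(world, goal):
    """Decision-making: pick an action that moves toward the goal, within limits."""
    if world["obstacle_m"] < 5.0:
        return {"cmd": "hold"}
    return {"cmd": "advance", "speed": min(2.0, world["battery"] * 3.0)}

def act(action):
    """Action: hand the decision to actuators (stubbed as a print)."""
    print("actuating:", action)

def communicate(world, action):
    """Communication: report state and intent to operators or peer systems."""
    print("telemetry:", world, action)

goal = {"waypoint": (47.6, -122.3)}
for _ in range(3):             # the core autonomy loop: sense, decide, act, communicate
    world = sense()
    action = decide(world, goal)
    act(action)
    communicate(world, action)
    time.sleep(0.1)
```

Real systems wrap this loop in layers of monitoring, fallback behaviors and formal safety constraints, but the basic cycle is the same.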

Building autonomous systems requires a diverse set of skills, including ethics, robotics, artificial intelligence, distributed systems, formal analysis, and human-robot interaction. Autonomy experts have a strong background in robotics, combining perception, decision-making, and action in physical systems, and understanding the principles of kinematics, dynamics, and control theory. They are proficient in AI techniques such as machine learning, computer vision, and natural language processing, which are essential for creating autonomous systems that can perceive, reason, and adapt to their environment. As autonomous systems become more complex and interconnected, expertise in distributed systems becomes increasingly important for designing and implementing systems that can coordinate and collaborate with each other. Additionally, autonomy experts understand the principles of human-robot interaction and can design interfaces and protocols that facilitate seamless communication between humans and machines.

As technology advances, the field of autonomy is evolving rapidly. One of the most exciting developments is the emergence of collaborative systems of systems – large groups of autonomous agents that can work together to achieve common goals. These swarms can be composed of robots, drones, or even software agents, and they have the potential to revolutionize fields such as transportation, manufacturing, and environmental monitoring.

How would a boxer box if they could instantly decompose into a million pieces and re-emerge as any shape? Differently.

What is driving all this?

Two significant trends are rapidly transforming the landscape of autonomy: the standardization of components and significant advancements in artificial intelligence (AI). Components like VOXL and Pixhawk are pioneering this shift by providing open-source platforms that significantly reduce the time and complexity involved in building and testing autonomous systems. VOXL, for example, is a powerful, SWAP-optimized computing platform that brings together machine vision, deep learning processing, and connectivity options like 5G and LTE, tailored for drone and robotic applications. Similarly, Pixhawk stands as a cornerstone in the drone industry, serving as a universal hardware autopilot standard that integrates seamlessly with various open-source software, fostering innovation and accessibility across the drone ecosystem. All this means you don’t have to be Boeing to start building autonomous systems.

Standard VOXL board

These hardware advancements are complemented by cheap sensors, AI-specific chips, and other innovations, making sophisticated technologies broadly affordable and accessible. The common standards established by these components have not only simplified development processes but also ensured compatibility and interoperability across different systems. All the ingredients for a Cambrian explosion in autonomy.
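
As a sense of how low the barrier has become, here is a sketch of talking to a Pixhawk-class autopilot over MAVLink using the open-source pymavlink library. The connection string is a placeholder for whatever link (USB serial, telemetry radio, or a UDP endpoint from a simulator) a particular setup uses.

```python
from pymavlink import mavutil

# Placeholder connection string; could be a serial port or a simulator's UDP endpoint.
link = mavutil.mavlink_connection("udp:127.0.0.1:14550")

link.wait_heartbeat()   # block until the autopilot announces itself
print("connected to system", link.target_system)

# Read a few attitude messages from the standardized MAVLink message set.
for _ in range(5):
    msg = link.recv_match(type="ATTITUDE", blocking=True)
    print(msg.roll, msg.pitch, msg.yaw)
```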

The latest from NVIDIA and Google

These companies are building a bridge from software to real systems.

The latest advancements from NVIDIA’s GTC and Google’s work in robotics libraries highlight a pivotal moment where the realms of code and physical systems, particularly in digital manufacturing technologies, are increasingly converging. NVIDIA’s latest conference signals a transformative moment in the field of AI with some awesome new technologies:

  • Blackwell GPUs: NVIDIA introduced the Blackwell platform, which boasts a new level of computing efficiency and performance for AI, enabling real-time generative AI with trillion-parameter models. This advancement promises substantial cost and energy savings.
  • NVIDIA Inference Microservices (NIMs): NVIDIA is making strides in AI deployment with NIMs, a cloud-native suite designed for fast, efficient, and scalable development and deployment of AI applications.
  • Project GR00T: With humanoid robotics taking center stage, Project GR00T underlines NVIDIA’s investment in robotics learning and adaptability. These advancements imply that robots will be integral to motion and tasks in the future.

The overarching theme from NVIDIA’s GTC was a strong commitment to AI and robotics, driving not just computing but a broad array of applications in industry and everyday life. These developments hold potential for vastly improved efficiencies and capabilities in autonomy, heralding a new era where AI and robotics could become as commonplace and influential as computers are today.

Google is doing super empowering stuff too. Google DeepMind, in collaboration with partners from 33 academic labs, has made a groundbreaking advancement in the field of robotics with the introduction of the Open X-Embodiment dataset and the RT-X model. This initiative aims to transform robots from being specialists in specific tasks to generalists capable of learning and performing across a variety of tasks, robots, and environments. By pooling data from 22 different robot types, the Open X-Embodiment dataset has emerged as the most comprehensive robotics dataset of its kind, showcasing more than 500 skills across 150,000 tasks in over 1 million episodes.

The RT-X model, specifically RT-1-X and RT-2-X, demonstrates significant improvements in performance by utilizing this diverse, cross-embodiment data. These models not only outperform those trained on individual embodiments but also showcase enhanced generalization abilities and new capabilities. For example, RT-1-X showed a 50% success rate improvement across five different robots in various research labs compared to models developed for each robot independently. Furthermore, RT-2-X has demonstrated emergent skills, performing tasks involving objects and skills not present in its original dataset but found in datasets for other robots. This suggests that co-training with data from other robots equips RT-2-X with additional skills, enabling it to perform novel tasks and understand spatial relationships between objects more effectively.

These developments signify a major step forward in robotics research, highlighting the potential for more versatile and capable robots. By making the Open X-Embodiment dataset and the RT-1-X model checkpoint available to the broader research community, Google DeepMind and its partners are fostering open and responsible advancements in the field. This collaborative effort underscores the importance of pooling resources and knowledge to accelerate the progress of robotics research, paving the way for robots that can learn from each other and, ultimately, benefit society as a whole.

More components, readily available to more people, will create a cycle of more cyber-physical systems with increasingly sophisticated and human-like capabilities.

Parallel to these hardware advancements, AI is experiencing an unprecedented boom. Investments in AI are yielding substantial results, driving forward capabilities in machine learning, computer vision, and autonomous decision-making at an extraordinary pace. This synergy between accessible, standardized components and the explosive growth in AI capabilities is setting the stage for a new era of autonomy, where sophisticated autonomous systems can be developed more rapidly and cost-effectively than ever before.

AI is exploding and democratizing simultaneously

Autonomy and Combat

What does all of this mean for modern warfare? Everyone has access to this tech and innovation is rapidly bringing these technologies into combat. We are right in the middle of a new powerful technology that will shape the future of war. Buckle up.

Let’s look at this in the context of Ukraine. The Ukraine-Russia war has seen unprecedented use of increasingly autonomous drones for surveillance, target acquisition, and direct attacks, altering traditional warfare dynamics significantly. Readily available components combined with rapid iteration cycles have democratized aerial warfare, allowing Ukraine to conduct operations that were previously the domain of nations with more substantial air forces and to level the playing field against a more conventionally powerful adversary. These technologies are both accessible and affordable. Because drones are expendable, they enable greater risk-taking: they don’t have to be survivable if they are numerous and inexpensive.

The future of warfare will require machine intelligence, mass and rapid iterations

The conflict has also underscored the importance of counter-drone technologies and tactics. Both sides have had to adapt to the evolving drone threat by developing means to detect, jam, or otherwise neutralize opposing drones. Moreover, drones have expanded the information environment, allowing unprecedented levels of surveillance and data collection that have galvanized global attention and provided options to create propaganda, boost morale, and document potential war crimes.

The effects are real. More than 200 companies manufacture drones within Ukraine, and some estimates suggest that uncrewed systems have destroyed roughly 30% of the Russian Black Sea fleet. Larger military drones like the Bayraktar TB2 and the Russian Orion have seen decreased use as they became easier targets for anti-air systems. Ukrainian forces have adapted with smaller drones, which have proved effective at a tactical level, providing real-time intelligence and precision strike capabilities. Ukraine can produce around 150,000 drones every month, may be able to produce two million by the end of the year, and has struck over 20,000 Russian targets.

As the war continues, innovations in drone technology persist, reflecting the growing importance of drones in modern warfare. The conflict has shown that while drones alone won’t decide the outcome of the war, they will undeniably influence future conflicts and continue to shape military doctrine and strategy.

Autonomy is an exciting and impactful field and the story is just getting started. Stay tuned.
