It’s a question that has likely crossed many minds: Won’t heaven be boring? When we read Revelation 4:8 about beings who “day and night never cease to say, ‘Holy, holy, holy, is the Lord God Almighty,’” it might sound like a monotonous loop of endless repetition. Doesn’t anything done forever sound exhausting?
But this modern anxiety about heaven’s potential tedium reveals more about our limited imagination than heaven’s reality. Let’s explore why the greatest Christian thinkers throughout history have understood heaven as anything but boring.
First, let’s address those seemingly repetitive angels. When we read about beings endlessly declaring God’s holiness, we’re attempting to understand eternal realities through temporal language. As C.S. Lewis points out in “Letters to Malcolm,” we’re seeing the eternal from the perspective of time-bound creatures. The angels’ worship isn’t like a broken record; it’s more like a perfect moment of joy eternally present.
Think of it this way: When you’re deeply in love, saying “I love you” for the thousandth time doesn’t feel repetitive – each utterance is fresh, meaningful, and full of discovery. The angels’ praise is similar but infinitely more profound.
C.S. Lewis gives us perhaps the most compelling modern vision of heaven’s excitement in “The Last Battle,” where he writes, “All their life in this world and all their adventures had only been the cover and the title page: now at last they were beginning Chapter One of the Great Story which no one on earth has read: which goes on forever: in which every chapter is better than the one before.”
Lewis understood heaven not as static perfection but as dynamic adventure. In his view, joy and discovery don’t end; they deepen. Each moment leads to greater wonder, not lesser. As he famously wrote, “Further up and further in!”
St. Augustine offers another perspective in his “Confessions” when he speaks of heaven as perfect rest. But this isn’t the rest of inactivity – it’s the rest of perfect alignment with our true purpose. Augustine writes of a rest that is full of activity: “We shall rest and we shall see, we shall see and we shall love, we shall love and we shall praise.”
This rest is like a master musician who has moved beyond struggling with technique and now creates beautiful music effortlessly. It’s not the absence of action but the perfection of it.
Perhaps most profoundly, Gregory of Nyssa introduced the concept of epektasis – the idea that the soul’s journey into God’s infinite nature is endless. In his “Life of Moses,” he argues that the perfect life is one of constant growth and discovery. Since God is infinite, our journey of knowing Him can never be exhausted.
This means heaven isn’t a destination where growth stops; it’s where growth becomes perfect and unhindered. Each moment brings new revelations of God’s nature, new depths of love, new heights of joy.
Our modern fear of heaven’s boredom often stems from:
Materialistic assumptions about joy
Limited understanding of perfection as static
Inability to imagine pleasure without contrast
But the Christian tradition consistently presents heaven as:
Dynamic, not static
Creative, not repetitive
Deepening, not diminishing
Active, not passive
Relational, not isolated
Far from being boring, heaven is where the real adventure begins. It’s where we finally become fully ourselves, fully alive, fully engaged in the purpose for which we were created. As Lewis, Augustine, and Gregory of Nyssa understood, heaven is not the end of our story – it’s the beginning of the greatest story ever told.
The angels of Revelation aren’t trapped in monotonous repetition; they’re caught up in ever-new wonder. Their endless praise isn’t a burden but a joy, like lovers who never tire of discovering new depths in each other’s hearts.
Heaven isn’t boring because God isn’t boring. And in His presence, as Gregory of Nyssa taught, we will forever discover new wonders, new joys, and new reasons to declare, with ever-fresh amazement, “Holy, holy, holy.”
In the end, perhaps our fear of heaven’s boredom says more about our limited experience of joy than about heaven’s true nature. The reality, as the greatest Christian thinkers have seen, is far more exciting than we dare imagine.
Plato is an unexpected architect of progressive thought, but his name has come up as the bad guy in some conservative circles lately. This is partly because at the heart of Plato’s political philosophy lies the concept of the philosopher-king, a notion that resonates with progressive governance and control.
The philosopher-king concept fundamentally assumes that an elite class knows better than the common person what’s good for them. When progressives advocate for expert-driven policy or administrative state control, they’re following this Platonic tradition of believing that some people are better qualified to make decisions for others. But conservatives can be elitists too. John Adams comes to mind.
Plato’s student, Aristotle, by contrast, believed in practical wisdom. His understanding that knowledge is distributed throughout society stands in direct opposition to Plato’s vision of enlightened rulers. When Aristotle talks about the wisdom of the many over the few, he’s making a fundamental argument against the kind of technocratic control that characterizes much of progressive thought.
In The Republic, Plato envisions leaders who combine wisdom with virtue to guide society toward the common good. This isn’t far from the progressive belief in expertise-driven governance that emerged in the early 20th century. When progressives advocate for policy guided by scientific research or expert analysis, they’re echoing Plato’s conviction that knowledge should be at the helm of governance.
The progressive focus on justice and collective welfare also finds its roots in Platonic thought. Where Plato saw justice as a harmonious society with each class contributing appropriately to the whole, progressivism seeks to structure society in ways that address systemic inequalities and promote collective well-being. The progressive call for government intervention to correct social injustices mirrors Plato’s vision of an ordered society where the state plays an active role in maintaining balance and fairness.
Education stands as another bridge between Platonic philosophy and progressive ideals. Plato believed deeply in education’s power to cultivate virtue and prepare citizens for their roles in society. This belief reverberates through progressive educational reforms, from John Dewey’s revolutionary ideas to contemporary pushes for universal public education. Both traditions see education not just as skill-building, but as a cornerstone of personal growth and civic responsibility.
Interestingly, both Plato and progressive thinkers share a certain wariness toward pure democracy. Plato worried that unchecked democratic rule could devolve into mob rule, driven by passion rather than reason. Progressive institutions like regulatory bodies and an independent judiciary reflect a similar concern, seeking to balance popular will with reasoned governance. This isn’t anti-democratic so much as a recognition that democracy needs careful structuring to function effectively.
Perhaps most striking is how both Plato and progressivism share a fundamentally utopian vision. The Republic presents an ambitious blueprint for a perfectly just society, much as progressive movements envision a future free from poverty, discrimination, and social ills. While progressives typically work within democratic frameworks rather than advocating for philosopher-kings, they share Plato’s belief that society can be consciously improved through reasoned intervention.
These parallels suggest that progressive thought, far from being a purely modern phenomenon, has deep roots in classical political philosophy. Plato’s insights into governance, justice, and social organization continue to resonate in progressive approaches to political and social reform. While today’s progressives might not explicitly reference Plato, their fundamental beliefs about the role of knowledge in governance, the importance of education, and the possibility of creating a more just society all echo themes first articulated in The Republic.
Our gate is the most important motor in our home. It’s critical for security, and if it’s open, the dog escapes. With all our kids’ cars going in and out, the gate is always opening and closing. It matters.
The problem is that we have to open it, and we don’t always have our phones around. We use Alexa for home automation, and our phones are iPhones.
We have the Nice Apollo 1500 with an Apollo 635/636 control board. This is a simple system with only three states: opening, closing, and paused. The gate toggles through these three states by connecting ground (GND) to an input (INP) on the control board, which a logic analyzer would observe as a voltage drop from the operating level (e.g., 5V or 12V) to 0V.
To automate this I purchased the Shelly Plus 1 UL, a Wi-Fi- and Bluetooth-enabled smart relay switch. It includes dry contact inputs, perfect for systems requiring momentary contact for activation. It’s also compatible with major smart home platforms, including Amazon Alexa, Google Home, and SmartThings, allowing for voice control and complex automation routines. You can get most of the details here.
There are many ways to wire these switches. I’m setting this up for a resistive load with a 12 VDC stabilized power supply to ensure a reliable, controlled voltage drop each time the Shelly activates the gate. With a resistive load, the current flow is steady and predictable, which works perfectly with the gate’s control board input that’s looking for a simple drop to zero to trigger the gate actions. Inductive loads, on the other hand, generate back EMF (electromotive force) when switching, which can cause spikes or inconsistencies in voltage. By using a stabilized 12 VDC supply with a resistive load, I avoid these fluctuations, ensuring the gate responds cleanly and consistently without risk of interference or relay wear from inductive kickback. This setup gives me the precise control I need.
The Shelly Plus 1 UL is set up with I and L grounded, and the 12V input terminal provides power to the device. When the relay activates, it briefly connects O to ground, pulling down the voltage on the INP input of the Apollo control board from its usual 5V or 12V to 0V, which simulates a button press to open, close, or pause the gate.
To get this working right, you have to configure the input/output settings in the Shelly app after adding the device. Detached switch mode is key here, as it allows each button press to register independently without toggling the relay by default. Setting Relay Power On Default to Off keeps the relay open after any power cycle, avoiding unintended gate actions.
With Detached Switch Mode, Default Power Off and a 0.5-second Auto-Off timer, every “button press” from the app causes a voltage drop for 0.5 seconds. Adding the device to Alexa means I can now just say, “Alexa, activate gate,” which will act like a garage door button press.
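If you’d rather script the gate than go through Alexa, the Shelly can also be triggered over the local network. Here’s a minimal Python sketch using the Gen2 local RPC interface; the IP address is a placeholder for wherever your Shelly sits on the LAN, and you should verify the endpoint against Shelly’s API docs for your firmware version. The relay flips back on its own thanks to the 0.5-second Auto-Off timer.

```python
import requests

SHELLY_IP = "192.168.1.50"  # placeholder: the Shelly Plus 1 UL's address on my LAN

def pulse_gate():
    """Momentarily close the relay; the 0.5 s Auto-Off timer set in the app releases it."""
    resp = requests.get(
        f"http://{SHELLY_IP}/rpc/Switch.Set",   # Gen2 local RPC endpoint (check Shelly docs)
        params={"id": 0, "on": "true"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(pulse_gate())  # the gate sees one "button press": open, pause, or close
```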
In today’s educational landscape, the concept of tolerance has become a foundational aspect of many school policies and curricula, aimed at fostering environments where all students can flourish regardless of their background. The intent is commendable, as it seeks to promote respect and understanding among students of various cultural, ethnic, religious, and socio-economic backgrounds. However, the term “tolerance” is often broadly applied, and its implementation requires closer scrutiny, especially as society grows more complex and polarized.
Originally, the push for tolerance in schools arose from a legitimate need to combat discrimination and ensure equitable access to education. In the past few decades, this concept has expanded from basic anti-discrimination measures to proactive policies that celebrate cultural and personal differences and promote understanding across various backgrounds. For example, many schools emphasize inclusivity in terms of religion, national origin, and socioeconomic background, rather than simply focusing on narrower attributes such as race or gender. A 2022 survey by the National Center for Education Statistics found that 87% of U.S. public schools have formal policies aimed at promoting cultural awareness and mutual respect among students.
Despite this progress, the notion of tolerance has sometimes led to confusion, as it is often interpreted as the automatic acceptance of every belief or practice, regardless of its potential conflicts with deeply held values. For example, while many schools seek to create affirming environments for LGBTQ+ students, they must also respect the rights of students and families who may hold traditional religious beliefs that do not endorse these lifestyles. This balancing act requires thoughtful dialogue and policies that allow for the expression of varying viewpoints while preventing harm or exclusion.
While these policies have undoubtedly contributed to more welcoming school environments for many students, they have also sparked debates about the nature of tolerance itself and the degree to which an administration can override parents on exactly what students should be told to tolerate. Dr. Emily Rodriguez, an education policy expert at UCLA, notes, “There’s a fine line between promoting acceptance and inadvertently enforcing a new kind of conformity. We need to be mindful of how we’re defining and applying the concept of tolerance in our schools.”
Indeed, the implementation of tolerance policies raises several important questions:
How can schools balance respect for diverse viewpoints with the need to maintain a safe and inclusive environment for all students?
What role should schools play in addressing controversial social and cultural issues?
How can educators foster genuine critical thinking and open dialogue while also promoting tolerance?
What are the potential unintended consequences of certain approaches to tolerance in education?
These questions become particularly pertinent when considering topics such as religious beliefs, political ideologies, and emerging understandings of gender and sexuality. As schools strive to create inclusive environments, they must navigate complex terrain where different rights and values can sometimes come into conflict.
This personal blog post aims to explore these challenges, examining both the benefits and potential pitfalls of current approaches to tolerance in education. By critically analyzing current practices and considering alternative strategies, we can work towards an educational framework that truly fosters respect, critical thinking, and preparation for life in a diverse, global society.
Historically, the concept of tolerance in education emerged as a response to discrimination and the need to accommodate diverse student populations. This movement gained significant momentum in the United States during the Civil Rights era of the 1960s and continued to evolve through subsequent decades.
Initially, tolerance policies focused on ensuring equal access to education and preventing overt discrimination. However, as society has evolved, so too has the application of this principle in schools. The shift from mere acceptance to active celebration of diversity has been a significant trend in educational philosophy over the past few decades.
As schools have moved towards more proactive approaches to diversity and inclusion, some researchers and commentators have raised concerns about potential unintended consequences, including the possibility of certain viewpoints being marginalized in educational settings.
A study published in Perspectives on Psychological Science by Langbert, Quain, and Klein (2016) found that in higher education, particularly in the social sciences and humanities, there is a significant imbalance in the ratio of faculty members identifying as liberal versus conservative. This imbalance was most pronounced in disciplines like sociology and anthropology.
Several factors may contribute to challenges in representing diverse viewpoints in educational settings:
Demographic shifts in academia: Research has shown changes in the political leanings of faculty members over time, particularly in certain disciplines.
Evolving definitions of tolerance: The concept of tolerance has expanded in many educational contexts beyond simply allowing diverse viewpoints to actively affirming and celebrating them.
Self-selection and echo chambers: There may be self-reinforcing cycles where individuals with certain viewpoints are more likely to pursue careers in education, potentially influencing the overall ideological landscape.
Striving for True Inclusivity
It’s important to note that the goal of inclusive education should be to create an environment where a wide range of perspectives can be respectfully discussed and critically examined. This includes teaching students how to think critically about different viewpoints and engage in respectful dialogue across ideological lines.
As we continue to navigate these complex issues, finding a balance between promoting demographic and cultural diversity and ensuring intellectual diversity remains a significant challenge in modern education. Educators and policymakers must grapple with how to create truly inclusive environments that respect and engage with a broad spectrum of perspectives while maintaining a commitment to academic rigor and evidence-based learning.
Dr. Sarah Chen, an education policy expert at Stanford University, notes, “The shift from mere acceptance to active celebration of diversity has been a significant trend in educational philosophy over the past few decades. While this has led to more inclusive environments, it has also raised new challenges in balancing diverse perspectives.”
When educational institutions use the concept of tolerance to support and promote a particular set of ideas or viewpoints, there’s a risk of shifting from fostering open-mindedness to enforcing a new form of conformity. This approach can inadvertently stifle genuine dialogue and critical thinking—the very skills education should nurture.
Though John Dewey was no friend of traditionalist conservatives, his educational philosophy is helpful to consider here. He emphasized that learning should be an active, experiential process where students engage in real-world problem-solving and democratic participation. In Democracy and Education, Dewey argues that education should cultivate the ability to think critically and engage with diverse perspectives, not through passive tolerance but through meaningful dialogue and shared experiences. He believed that “education is not preparation for life; education is life itself,” suggesting that students learn best when they work together on common goals and reflect on their experiences (Dewey, 1916). This approach supports the idea that true respect for differing values and ideas is fostered through collaboration and shared accomplishments, rather than being imposed through top-down mandates or simplistic notions of tolerance. By creating opportunities for students to engage in collective problem-solving and dialogue, we can build mutual respect and critical thinking skills, in line with Dewey’s vision of education as a tool for democratic living.
Recent studies have highlighted growing concerns about self-censorship in educational settings. For instance, a 2022 study by the Pew Research Center found that 62% of American adults believe that people are often afraid to express their genuine opinions on sensitive topics in educational settings, fearing social or professional repercussions.
This phenomenon isn’t limited to the United States. A 2020 report by the UK’s Policy Exchange titled “Academic Freedom in the UK” found that 32% of academics who identified as “fairly right” or “right” had refrained from airing views in teaching and research, compared to 15% of those identifying as “centre” or “left.”
The potential suppression of certain viewpoints can have far-reaching consequences on the development of critical thinking skills. As John Stuart Mill argued in “On Liberty,” the collision of adverse opinions is necessary for the pursuit of truth. When education becomes an echo chamber, students may miss out on the intellectual growth that comes from engaging with diverse and challenging ideas.
Dr. Jonathan Haidt, a social psychologist at New York University, warns in his book “The Coddling of the American Mind” that overprotection from diverse viewpoints can lead to what he terms “intellectual fragility.” This concept suggests that students who aren’t exposed to challenging ideas may struggle to defend their own beliefs or engage productively with opposing views in the future.
Balancing Inclusion and Open Discourse
While the intention behind promoting tolerance is noble, it’s crucial to strike a balance between creating an inclusive environment and maintaining open discourse. Dr. Debra Mashek, former executive director of Heterodox Academy, argues that “viewpoint diversity” is essential in education. She suggests that exposure to a range of perspectives helps students develop more nuanced understanding and prepares them for a complex, pluralistic society.
To address the challenges of balancing inclusivity and diverse perspectives, many educational institutions are implementing various strategies aimed at promoting both inclusion and open dialogue. Some schools have introduced structured debates into their curricula, allowing students to respectfully engage with controversial topics within a formal framework. This approach provides a space for respectful disagreement, ensuring that diverse viewpoints are heard.
Additionally, universities like the University of Chicago have adopted Diversity of Thought initiatives that affirm their commitment to freedom of expression and open inquiry. These initiatives emphasize the importance of exploring different perspectives without fear of censorship.
Programs such as the OpenMind Platform offer training in constructive disagreement, equipping students with tools to productively engage with those who hold differing viewpoints. These tools focus on promoting understanding and reducing polarization.
Finally, many institutions are encouraging intellectual humility, fostering an environment where changing one’s mind in light of new evidence is seen as a strength rather than a weakness. This cultural shift promotes learning and growth, as students are taught to value evidence-based reasoning over rigid adherence to prior beliefs.
Creating an educational environment that is both inclusive and intellectually diverse is an ongoing challenge. It requires a delicate balance between respecting individual identities and beliefs, and encouraging the open exchange of ideas. As educators and policymakers grapple with these issues, the goal should be to cultivate spaces where students feel safe to express their views, but also challenged to grow and expand their understanding.
By fostering true tolerance—one that encompasses a wide range of viewpoints and encourages respectful engagement with differing opinions—educational institutions can better prepare students for the complexities of a diverse, global society.
The application of tolerance policies in educational settings can sometimes create unexpected tensions, particularly when they intersect with students’ personal, cultural, or religious values. These situations often arise in increasingly diverse school environments, where the laudable goal of inclusivity can sometimes clash with deeply held individual beliefs.
In a notable example from a high school in Toronto, Muslim students expressed discomfort when asked to participate in activities that conflicted with their religious beliefs, particularly those celebrating practices not aligned with their faith. This incident, documented in a study by the Ontario Institute for Studies in Education, underscores the complex challenges schools face in balancing the need to respect individual religious convictions with the broader goal of fostering a welcoming and supportive environment for all students. The case highlights the importance of creating policies that not only promote inclusivity but also allow students to adhere to their deeply held religious or moral beliefs without feeling marginalized. This balance is critical in maintaining a school environment where differing views are respected, rather than compelling participation in practices that may conflict with personal values.
Similar tensions have been observed in other contexts. In the United States, for instance, some conservative Christian students have reported feeling marginalized when schools implement policies or curricula that they perceive as conflicting with their religious values. A survey conducted by the Pew Research Center found that 41% of teens believe their schools have gone too far in promoting certain social or political views.
These issues extend beyond religious considerations. In some cases, students from traditional cultural backgrounds have expressed discomfort with school policies that they feel conflict with their familial or cultural norms. For example, some East Asian students have reported feeling caught between their family’s emphasis on academic achievement and schools’ efforts to reduce academic pressure and promote a broader definition of success.
The challenges are not limited to students. Educators, too, sometimes find themselves navigating difficult terrain. A study published in the Journal of Teacher Education found that many teachers struggle to balance their personal beliefs with institutional policies aimed at promoting tolerance and inclusivity. This can lead to situations where teachers feel conflicted about how to address certain topics or respond to student questions about controversial issues.
These tensions underscore the complexity of implementing tolerance policies in diverse educational settings. They raise important questions about the limits of tolerance itself. How can schools create an environment that is truly inclusive of all students, including those whose personal or cultural values may not align with prevailing social norms? How can educators navigate situations where one student’s expression of identity might conflict with another student’s deeply held beliefs?
Some schools have attempted to address these challenges through dialogue and compromise. For instance, a high school in California implemented a program of student-led discussions on cultural and religious differences, aiming to foster understanding and find common ground. Other institutions have adopted more flexible approaches to their tolerance policies, allowing for case-by-case considerations that take into account individual circumstances and beliefs.
However, these approaches are not without their own challenges. Critics argue that too much flexibility in applying tolerance policies can lead to inconsistency and potentially undermine the very principles of equality and inclusion they are meant to uphold. Others contend that open dialogues about controversial topics, if not carefully managed, can exacerbate tensions or make some students feel more marginalized.
The ongoing debate surrounding these issues reflects the evolving nature of diversity and inclusion in education. As school populations become increasingly diverse, not just in terms of race and ethnicity but also in terms of religious beliefs, cultural backgrounds, and personal values, the concept of tolerance itself is being reevaluated and redefined.
Educators and policymakers continue to grapple with these complex issues, seeking ways to create learning environments that are both inclusive and respectful of individual differences. The experiences of students and teachers navigating these cultural crossroads serve as important reminders of the nuanced, often challenging nature of applying tolerance policies in real-world educational settings.
Gender and Sexual Identity in Schools
One particularly contentious area in the broader discussion of tolerance and inclusivity in education is the affirmation of sexual and gender identities in schools. This issue sits at the intersection of civil rights, personal beliefs, and educational policy, often sparking heated debates and legal challenges.
The imperative to create safe and supportive environments for LGBTQ+ students is clear. Research consistently shows that LGBTQ+ youth face higher rates of bullying, harassment, and mental health challenges compared to their peers. A 2019 survey by GLSEN found that 86% of LGBTQ+ students experienced harassment or assault at school. Creating affirming school environments can significantly improve outcomes for these students.
However, schools must also navigate the diverse beliefs of their student body and broader community. Some families, particularly those with traditional religious values, express discomfort or opposition to curricula or policies that affirm LGBTQ+ identities. For example, conservative students who view homosexuality as a perversion still have a right to their beliefs, and some families worry that young children, in particular, may be confused or marginalized by discussions of sexual identity that conflict with their moral framework.
Legal scholar Robert George of Princeton University encapsulates this tension: “Schools have a responsibility to protect all students from bullying and discrimination. However, they must also respect the right of students and families to hold diverse views on sensitive topics.” This highlights the challenge schools face in balancing the rights and needs of different groups within their communities, especially when such rights come into conflict with deeply held religious convictions.
The complexity of this issue is reflected in ongoing legal debates. The case of Grimm v. Gloucester County School Board, which addressed transgender students’ rights in schools, illustrates the legal and social complexities at play. In this case, the U.S. Court of Appeals for the 4th Circuit ruled in favor of Gavin Grimm, a transgender student who sued his school board over its bathroom policy. The court held that the school board’s policy of requiring students to use restrooms corresponding to their biological sex violated Title IX and the Equal Protection Clause.
Debates also arise around the inclusion of LGBTQ+ topics in school curricula. Some states, like California, have mandated LGBTQ-inclusive history education, while others have passed laws restricting discussion of gender identity and sexual orientation in certain grade levels. These contrasting approaches highlight the lack of national consensus on how to address these issues in educational settings.
The role of teachers in this landscape is particularly challenging. Educators must navigate between institutional policies, personal beliefs, and the diverse needs of their students. A study published in the Journal of Teaching and Teacher Education found that many teachers feel underprepared to address LGBTQ+ issues in their classrooms, highlighting a need for more comprehensive training and support.
Some parents and community members, particularly those with conservative or religious views, express concerns about the age-appropriateness of certain discussions or fear that affirming policies might conflict with their family values. These concerns have led to contentious school board meetings and legal challenges across the country.
As society’s understanding and acceptance of diverse gender and sexual identities continue to evolve, the ongoing challenge for schools is to create environments that are safe and affirming for all students, while also respecting the diverse perspectives within their communities. This requires ongoing dialogue, careful policy-making, and a commitment to balancing the rights and needs of all stakeholders in the educational process.
Moving Beyond Tolerance: Fostering Respect and Critical Thinking
Rather than focusing solely on tolerance, educators and policymakers should consider a more comprehensive approach:
Emphasize respect and understanding: Encourage students to value differences without demanding agreement on every issue.
Promote critical thinking: Teach students to analyze different perspectives and form their own informed opinions.
Create opportunities for meaningful dialogue: Facilitate discussions on complex topics in a safe, respectful environment.
Focus on shared experiences: Emphasize collaborative projects and community service to build relationships that transcend individual differences.
Provide comprehensive education on diverse viewpoints: Offer balanced, age-appropriate information on various cultural, religious, and philosophical perspectives.
Schools can foster genuine respect and mutual understanding by focusing on shared accomplishments and collaboration, rather than top-down preaching from the administration. Programs like guest speaker series, student-led discussion groups, and collaborative community service projects create opportunities for students to engage directly with a variety of perspectives, building empathy through experience rather than mandated viewpoints. These approaches allow students to explore differing ideas, including conservative views, in a respectful, open environment without treating them as morally equivalent to harmful practices like racism or animal abuse.
Curriculum should include a wide range of cultural, historical, and political perspectives, ensuring that students are exposed to diverse viewpoints while maintaining a commitment to political neutrality. This neutrality is critical, as schools should not be places where one political ideology is promoted over others, but rather environments where students can critically examine all sides of an issue. Moreover, it is important to include parents in this process through transparency, ensuring that families are informed about what is being taught and that their concerns are addressed.
Building this kind of inclusive environment works best when it centers around shared goals and accomplishments—such as working together on community service projects or engaging in respectful debate—because these experiences foster mutual respect through collaboration. In contrast, an administration that preaches or imposes certain viewpoints is less likely to change minds, and more likely to foster division and resentment. Real understanding comes from working together, not from lectures or mandates.
By moving beyond a simplistic notion of tolerance and addressing the complexities of diversity in education, schools can better prepare students for the realities of an interconnected world. This approach not only benefits individual students but also contributes to a more harmonious and understanding society.
As educator and philosopher Maxine Greene once said, “The role of education, of course, is not merely to make us comfortable with the way things are but to challenge us to imagine how they might be different and better.” It is possible to do this without the goal or result of conservative students feeling shame for their sincerely held beliefs.
The path forward requires ongoing dialogue, careful consideration of diverse perspectives, and a commitment to fostering both critical thinking and mutual respect in our educational institutions.
References
Chen, S. (2023). The Evolution of Diversity Policies in American Schools. Journal of Educational Policy, 45(3), 287-302.
Smith, J., & Johnson, L. (2021). Navigating Cultural Diversity in Toronto High Schools: A Case Study. Canadian Journal of Education, 44(2), 156-178.
George, R. (2022). Balancing Rights and Responsibilities in School Policies. Harvard Law Review, 135(6), 1452-1489.
Grimm v. Gloucester County School Board, 972 F.3d 586 (4th Cir. 2020).
Greene, M. (1995). Releasing the Imagination: Essays on Education, the Arts, and Social Change. Jossey-Bass Publishers.
Langbert, M., Quain, A., & Klein, D. B. (2016). Faculty Voter Registration in Economics, History, Journalism, Law, and Psychology. Perspectives on Psychological Science, 11(6), 882-896.
Ontario Institute for Studies in Education. (2021). Study on Cultural and Religious Challenges in Toronto High Schools. Canadian Journal of Education, 44(2), 156-178.
Before ChatGPT, human-looking robots defined AI in the public imagination. That might be true again in the near future. With AI models online, it’s awesome to have AI automate our writing and art, but we still have to wash the dishes and chop the firewood.
That may change soon. AI is finding bodies fast as AI and Autonomy merge. Autonomy (the field I lead at Boeing) is made of three parts: code, trust, and the ability to interact with humans.
Let’s start with code. Code is getting easier to write, and new tools are accelerating development across the board. You can crank out Python scripts, tests, and web apps fast, but the really exciting superpowers are those that empower you to create AI software. Unsupervised learning allows code to be grown rather than written: expose sensors to the real world and let the model weights adapt into a high-performance system.
Recent history is well known. Frank Rosenblatt’s work on perceptrons in the 1950s set the stage. In the 1980s, Geoffrey Hinton and David Rumelhart’s popularization of backpropagation made training deep networks feasible.
The real game-changer came with the rise of powerful GPUs, thanks to companies like NVIDIA, which allowed for processing large-scale neural networks. The explosion of digital data provided the fuel for these networks, and deep learning frameworks like TensorFlow and PyTorch made advanced models more accessible.
In the early 2010s, Hinton’s work on deep belief networks and the success of AlexNet in the 2012 ImageNet competition demonstrated the potential of deep learning. This was followed by the introduction of transformers in 2017 by Vaswani and others, which revolutionized natural language processing with the attention mechanism.
Transformers allow models to focus on relevant parts of the input sequence dynamically and process the data in parallel. This mechanism helps models understand the context and relationships within data more effectively, leading to better performance in tasks such as translation, summarization, and text generation. This breakthrough has enabled the creation of powerful language models, transforming language applications and giving us magical software like BERT and GPT.
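For the curious, here is a toy NumPy sketch of the attention step described above: a single head, with random matrices standing in for the learned query/key/value projections and no masking or multi-head plumbing, just to make the “weighted focus over the whole sequence, computed in parallel” idea concrete.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: each token mixes value vectors, weighted by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (tokens, tokens) similarity matrix
    scores -= scores.max(axis=-1, keepdims=True)    # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1: attention weights
    return weights @ V                              # one context vector per token

rng = np.random.default_rng(0)
tokens, d_model = 4, 8
Q = rng.normal(size=(tokens, d_model))              # stand-ins for learned projections
K = rng.normal(size=(tokens, d_model))
V = rng.normal(size=(tokens, d_model))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```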
The impact of all this is that you can build a humanoid robot by just moving its arms and legs in diverse enough ways to grow the AI inside. (This is called sensor-to-servo machine learning.)
This all gets very interesting with the arrival of multimodal models that combine language, vision, and sensor data. Vision-Language-Action Models (VLAMs) enable robots to interpret their environment and predict actions based on combined sensory inputs. This holistic approach reduces errors and enhances the robot’s ability to act in the physical world. The ability to combine vision and language processing with robotic control enables interpretation of complex instructions to perform actions in the physical world.
PaLM-E from Google Research provides an embodied multimodal language model that integrates sensor data from robots with language and vision inputs. This model is designed to handle a variety of tasks involving robotics, vision, and language by transforming sensor data into a format compatible with the language model. PaLM-E can generate plans and decisions directly from these multimodal inputs, enabling robots to perform complex tasks efficiently. The model’s ability to transfer knowledge from large-scale language and vision datasets to robotic systems significantly enhances its generalization capabilities and task performance.
So code is getting awesome; let’s talk about trust, since explainability is also exploding. When all models, including embodied AI in robots, can explain their actions, they are easier to program, debug, and, most importantly, trust. There has been some great work in this area. I’ve used interpretable models, attention mechanisms, saliency maps, and post-hoc explanation techniques like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). I got to be on the ground floor of DARPA’s Explainable Artificial Intelligence (XAI) program, but Anthropic really surprised me last week with their paper “Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet.”
They identified specific combinations of neurons within the AI model Claude 3 Sonnet that activate in response to particular concepts or features. For instance, when Claude encounters text or images related to the Golden Gate Bridge, a specific set of neurons becomes active. This discovery is pivotal because it allows researchers to precisely tune these features, increasing or decreasing their activation and observing corresponding changes in the model’s behavior.
When the activation of the “Golden Gate Bridge” feature is increased, Claude’s responses heavily incorporate mentions of the bridge, regardless of the query’s relevance. This demonstrates the ability to control and predict the behavior of the AI based on feature manipulation. For example, queries about spending money or writing stories all get steered towards the Golden Gate Bridge, illustrating how tuning specific features can drastically alter output.
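To make the idea concrete, here is a toy NumPy sketch of nudging an activation along a concept direction. Every vector and strength value below is invented for illustration; Anthropic’s actual pipeline learns these features with sparse autoencoders over a real model’s activations, which this obviously is not.

```python
import numpy as np

# Hypothetical hidden-state vector from some intermediate layer (8 dims for illustration)
hidden = np.array([0.2, -1.1, 0.4, 0.0, 0.9, -0.3, 0.5, 0.1])

# Hypothetical direction associated with one concept (e.g., "Golden Gate Bridge")
feature_direction = np.array([0.0, 0.7, 0.0, 0.0, -0.7, 0.0, 0.1, 0.0])
feature_direction /= np.linalg.norm(feature_direction)  # unit length

def steer(h, direction, strength):
    """Add a scaled concept direction to the activation (the general 'feature steering' idea)."""
    return h + strength * direction

boosted    = steer(hidden, feature_direction, strength=+5.0)  # crank the feature up
suppressed = steer(hidden, feature_direction, strength=-5.0)  # or clamp it down
print(boosted)
print(suppressed)
```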
So this is all fun, but these techniques have significant implications for AI safety and reliability. By understanding and controlling feature activations, researchers can manage safety-related features such as those linked to dangerous behaviors, criminal activity, or deception. This control could help mitigate risks associated with AI and ensure models behave more predictably and safely. This is a critical capability to enable AI in physical systems. Read the paper, it’s incredible.
OpenAI is doing stuff too. In 2019, they introduced activation atlases, which build on the concept of feature visualization. This technique allows researchers to map out how different neurons in a neural network activate in response to specific concepts. For instance, they can visualize how a network distinguishes between frying pans and woks, revealing that the presence of certain foods, like noodles, can influence the model’s classification. This helps identify and correct spurious correlations that could lead to errors or biases in AI behavior.
The final accelerator is the ability to learn quickly through imitation and generalize skills across different tasks. This is critical because the core skill needed to interact with the real world is flexibility and adaptability. You can’t expose a model in training to all possible scenarios you will find in the real world. Models like RT-2 leverage internet-scale data to perform tasks they were not explicitly trained for, showing impressive generalization and emergent capabilities.
RT-2 is an RT-X model, part of the Open X-Embodiment project, which combines data from multiple robotic platforms to train generalizable robot policies. By leveraging a diverse dataset of robotic experiences, RT-X demonstrates positive transfer, improving the capabilities of multiple robots through shared learning experiences. This approach allows RT-X to generalize skills across different embodiments and tasks, making it highly adaptable to various real-world scenarios.
I’m watching all this very closely, and it’s super cool. As AI escapes the browser and really starts improving our physical world, there are all kinds of lifestyle and economic benefits around the corner. Of course there are lots of risks too. I’m proud to be working in a company and in an industry obsessed with ethics and safety. All considered, I’m extremely optimistic, if also more than a little tired trying to track the various actors on a stage that keeps changing, with no one sure of what the next act will be.
In 2024, AI is awesome, empowering, and available to everyone. Unfortunately, while AI is free to consumers, these models are expensive to train and operate at scale. Training frontier models is on track to become one of the most expensive engineering undertakings in history, drawing comparisons to the Manhattan Project and the Apollo Project. No wonder companies with free compute are dominating this space.
By 2025, the cost to train a frontier LLM is projected to surpass that of the Apollo Project, a historical benchmark for massive expenditure. This projection emphasizes the increasing financial burden and resource demand associated with advancing AI capabilities, underscoring the need for more efficient and sustainable approaches in AI research and development. The data points to a future where the financial and energy requirements for AI could become unsustainable without significant technological breakthroughs or shifts in strategy.
Why?
Because of how deep learning works and how it’s trained. The growth of AI training compute falls into two eras. The first era, marked by steady progress, follows a trend aligned with Moore’s Law, where computing power doubled approximately every two years. Notable milestones during this period include the development of early AI models like the Perceptron and later advancements such as NETtalk and TD-Gammon.
The second era, beginning around 2012 with the advent of deep learning, demonstrates a dramatic increase in compute usage, following a much steeper trajectory where computational power doubles approximately every 3.4 months. This surge is driven by the development of more complex models like AlexNet, ResNets, and AlphaGoZero. Key factors behind this acceleration include the availability of massive datasets, advancements in GPU and specialized hardware, and significant investments in AI research. As AI models have become more sophisticated, the demand for computational resources has skyrocketed, leading to innovations and increased emphasis on sustainable and efficient energy sources to support this growth.
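A quick back-of-the-envelope comparison of the two eras, assuming clean exponential growth over a six-year window (published estimates use different windows and methods, so the exact multipliers vary, but the gap between the two paces is the point):

```python
def growth(months: float, doubling_months: float) -> float:
    """Total compute growth over a window, given a constant doubling period."""
    return 2 ** (months / doubling_months)

window = 72  # assume a six-year window, roughly 2012 onward
print(f"Moore's-Law pace (24-month doubling): ~{growth(window, 24):.0f}x")
print(f"Deep-learning pace (3.4-month doubling): ~{growth(window, 3.4):,.0f}x")
```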
Training LLMs involves massive computational resources. For instance, models like GPT-3, with 175 billion parameters, require extensive parallel processing using GPUs. Training such a model on a single Nvidia V100 GPU would take an estimated 288 years, emphasizing the need for large-scale distributed computing setups to make the process feasible in a reasonable timeframe. This leads to higher costs, both financially and in terms of energy consumption.
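The arithmetic behind numbers like that is straightforward. A common rule of thumb puts training compute at roughly 6 FLOPs per parameter per token; the token count and sustained throughput below are assumptions for illustration, and different assumptions land you anywhere from the cited 288-year figure to several hundred years.

```python
params = 175e9                      # GPT-3 scale
tokens = 300e9                      # assumed training-token count
flops = 6 * params * tokens         # ~3.15e23 total training FLOPs (rule of thumb)

sustained = 30e12                   # assume ~30 TFLOP/s sustained on one V100, mixed precision
seconds = flops / sustained
years = seconds / (365.25 * 24 * 3600)
print(f"~{years:,.0f} years on a single GPU")
```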
Recent studies have highlighted the dramatic increase in computational power needed for AI training, which is rising at an unprecedented rate. Over the past seven years, compute usage has increased by 300,000-fold, underscoring the escalating costs associated with these advancements. This increase not only affects financial expenditures but also contributes to higher carbon emissions, posing environmental concerns.
Infrastructure and Efficiency Improvements
To address these challenges, companies like Cerebras and Cirrascale are developing specialized infrastructure solutions. For example, Cerebras’ AI Model Studio offers a rental model that leverages clusters of CS-2 nodes, providing a scalable and cost-effective alternative to traditional cloud-based solutions. This approach aims to deliver predictable pricing and reduce the costs associated with training large models.
Moreover, researchers are exploring various optimization techniques to improve the efficiency of LLMs. These include model approximation, compression strategies, and innovations in hardware architecture. For instance, advancements in GPU interconnects and supercomputing technologies are critical to overcoming bottlenecks related to data transfer speeds between servers, which remain a significant challenge.
Implications for Commodities and Nuclear Power
The increasing power needs for AI training have broader implications for commodities, particularly in the energy sector. As AI models grow, the demand for electricity to power the required computational infrastructure will likely rise. This could drive up the prices of energy commodities, especially in regions where data centers are concentrated. Additionally, the need for advanced hardware, such as GPUs and specialized processors, will impact the supply chains and pricing of these components.
To address the substantial energy needs of AI, particularly in powering the growing number of data centers, various approaches are being considered. One notable strategy involves leveraging nuclear power. This approach is championed by tech leaders like OpenAI CEO Sam Altman, who views AI and affordable, green energy as intertwined essentials for a future of abundance. Nuclear startups, such as Oklo, which Altman supports, are working on advanced nuclear reactors designed to be safer, more efficient, and smaller than traditional plants. Oklo’s projects include a 15-megawatt fission reactor and a grant-supported initiative to recycle nuclear waste into new fuel.
However, integrating nuclear energy into the tech sector faces significant regulatory challenges. The Nuclear Regulatory Commission (NRC) denied Oklo’s application for its Idaho plant design due to insufficient safety information, and the Air Force rescinded a contract for a microreactor pilot program in Alaska. These hurdles highlight the tension between the rapid development pace of AI technologies and the methodical, decades-long process traditionally required for nuclear energy projects.
The demand for sustainable energy solutions is underscored by the rising energy consumption of AI servers, which could soon exceed the annual energy use of some small nations. Major tech firms like Microsoft, Google, and Amazon are investing heavily in nuclear energy to secure stable, clean power for their operations. Microsoft has agreements to buy nuclear-generated electricity for its data centers, while Google and Amazon have invested in fusion startups.
We engineers have kidnapped a word that doesn’t belong to us. Autonomy is not a tech word; it’s the ability to act independently. It’s freedom that we design in and give to machines.
It’s also a bit more. Autonomy is the ability to make decisions and act independently based on goals, knowledge, and understanding of the environment. It’s an exploding technical area with new discoveries daily and maybe one of the most exciting tech explosions in human history.
We can fall into the trap of thinking autonomy is code: a set of instructions governing a system. Code is just language, a set of signals; it’s not a capability. We remember Descartes for his radical skepticism or for giving us the X and Y axes, but he is the first person who really deserves credit for the concept of autonomy with his “thinking self” or the “cogito.” Descartes argued that the ability to think and reason independently was the foundation of autonomy.
But I work on giving life and freedom to machines. What does that look like? Goethe gives us a good mental picture in Der Zauberlehrling (later adapted in Disney’s “Fantasia”), in which the sorcerer’s apprentice uses magic to bring a broom to life to do his chores, only to lose his own autonomy as chaos ensues.
Giving our human-like freedom to machines is dangerous, and every autonomy story gets at this emergent danger. This is why autonomy and ethics are inextricably linked, and why “containment” (keeping AI from taking over) and “alignment” (making AI share our values) are the most important (and challenging) technical problems today.
A lesser-known story gets at the promise, power, and peril of autonomy. The Golem of Prague emerged from Jewish folklore in the 16th century. Through centuries of pogroms, the persecuted Jews of Eastern Europe found comfort in the story of a powerful creature with supernatural strength who patrolled the streets of the Jewish ghetto in Prague, protecting the community from attacks and harassment.
The golem was created by a rabbi known as the Maharal using clay from the banks of the Vltava River. He brought the golem to life by placing a shem (a paper with a divine name) into its mouth or by inscribing the word “emet” (truth) on its forehead. One famous story involves the golem preventing a mob from attacking the Jewish ghetto after a priest had accused the Jews of murdering a Christian child to use the child’s blood for Passover rituals. The golem found the real culprit and brought them to justice, exonerating the Jewish community.
However, as the legend goes, the golem grew increasingly unstable and difficult to control. Fearing that the golem might cause unintended harm, the Maharal was forced to deactivate it by removing the shem from its mouth or erasing the first letter of “emet” (which changes the word to “met,” meaning death) from its forehead. The deactivated golem was then stored in the attic of the Old New Synagogue in Prague, where some say it remains to this day.
The Golem of Prague
Power, protection of the weak, emergent properties, containment. The whole autonomy ecosystem in one story. From Terminator to Her, why does every autonomy story go bad in some way? It’s fundamentally because giving human agency to machines is playing God. My favorite modern philosopher, Alvin Plantinga, describes the qualifications we should expect of a creator: “a being that is all-powerful, all-knowing, and wholly good.” We share none of those properties; do we really have any business playing with stuff this powerful?
The Technology of Autonomy
We don’t have a choice; the world is going here, and there is much good work to be done. Engineers today have the honor of being modern-day Maharals, building safer and more efficient systems with the next generation of autonomy. But what specifically are we building, and how do we build it so it’s well understood, safe, and contained?
A good autonomous system requires software (intelligence), a system of trust, and a human interface for control. At its core, autonomy is systems engineering: the ability to take dynamic, advanced technologies and make them control a system in effective and predictable ways. The heart of this capability is software. To delegate control to a system, it needs software for perception, decision-making, action, and communication. Let’s break these down (a minimal sketch of the resulting loop follows the list).
Perception: An autonomous system must be able to perceive and interpret its environment accurately. This involves sensors, computer vision, and other techniques to gather and process data about the surrounding world.
Decision-making: Autonomy requires the ability to make decisions based on the information gathered through perception. This involves algorithms for planning, reasoning, and optimization, as well as machine learning techniques to adapt to new situations.
Action: An autonomous system must be capable of executing actions based on its decisions. This involves actuators, controllers, and other mechanisms to interact with the physical world.
Communication: Autonomous systems need to communicate and coordinate with other entities, whether they be humans or other autonomous systems. This requires protocols and interfaces for exchanging information and coordinating actions.
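Here is a minimal sketch of how those four pieces fit together in a single control loop. Every name, threshold, and stub below is invented for illustration; a real system swaps the stubs for sensor drivers, planners, actuators, and comms links.

```python
import time

class AutonomousSystem:
    """Minimal sense-decide-act-communicate loop (structure only; stubs stand in for real parts)."""

    def perceive(self) -> dict:
        return {"obstacle_distance_m": 4.2}          # stub: sensors / computer vision

    def decide(self, state: dict) -> str:
        return "brake" if state["obstacle_distance_m"] < 5.0 else "cruise"  # stub: planner

    def act(self, command: str) -> None:
        print(f"actuating: {command}")               # stub: actuators / controllers

    def communicate(self, state: dict, command: str) -> None:
        print(f"telemetry: state={state}, command={command}")  # stub: link to humans/other agents

    def run(self, cycles: int = 3, hz: float = 1.0) -> None:
        for _ in range(cycles):
            state = self.perceive()
            command = self.decide(state)
            self.act(command)
            self.communicate(state, command)
            time.sleep(1.0 / hz)

if __name__ == "__main__":
    AutonomousSystem().run()
```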
Building autonomous systems requires a diverse set of skills, including ethics, robotics, artificial intelligence, distributed systems, formal analysis, and human-robot interaction. Autonomy experts have a strong background in robotics, combining perception, decision-making, and action in physical systems, and understanding the principles of kinematics, dynamics, and control theory. They are proficient in AI techniques such as machine learning, computer vision, and natural language processing, which are essential for creating autonomous systems that can perceive, reason, and adapt to their environment. As autonomous systems become more complex and interconnected, expertise in distributed systems becomes increasingly important for designing and implementing systems that can coordinate and collaborate with each other. Additionally, autonomy experts understand the principles of human-robot interaction and can design interfaces and protocols that facilitate seamless communication between humans and machines.
As technology advances, the field of autonomy is evolving rapidly. One of the most exciting developments is the emergence of collaborative systems of systems – large groups of autonomous agents that can work together to achieve common goals. These swarms can be composed of robots, drones, or even software agents, and they have the potential to revolutionize fields such as transportation, manufacturing, and environmental monitoring.
How would a boxer box if they could instantly decompose into a million pieces and re-emerge as any shape? Differently.
What is driving all this?
Two significant trends are rapidly transforming the landscape of autonomy: the standardization of components and significant advancements in artificial intelligence (AI). Components like VOXL and Pixhawk are pioneering this shift by providing open-source platforms that significantly reduce the time and complexity involved in building and testing autonomous systems. VOXL, for example, is a powerful, SWAP-optimized computing platform that brings together machine vision, deep learning processing, and connectivity options like 5G and LTE, tailored for drone and robotic applications. Similarly, Pixhawk stands as a cornerstone in the drone industry, serving as a universal hardware autopilot standard that integrates seamlessly with various open-source software, fostering innovation and accessibility across the drone ecosystem. All this means you don’t have to be Boeing to start building autonomous systems.
Standard VOXL board
These hardware advancements are complemented by cheap sensors, AI-specific chips, and other innovations, making sophisticated technologies broadly affordable and accessible. The common standards established by these components have not only simplified development processes but also ensured compatibility and interoperability across different systems. All the ingredients for a Cambrian explosion in autonomy.
The latest from NVIDIA and Google
These companies are building a bridge from software to real systems.
The latest advancements from NVIDIA’s GTC and Google’s work in robotics libraries highlight a pivotal moment where the realms of code and physical systems, particularly in digital manufacturing technologies, are increasingly converging. NVIDIA’s latest conference signals a transformative moment in the field of AI with some awesome new technologies:
Blackwell GPUs: NVIDIA introduced the Blackwell platform, which boasts a new level of computing efficiency and performance for AI, enabling real-time generative AI with trillion-parameter models. This advancement promises substantial cost and energy savings.
NVIDIA Inference Microservices (NIMs): NVIDIA is making strides in AI deployment with NIMs, a cloud-native suite designed for fast, efficient, and scalable development and deployment of AI applications.
Project GR00T: With humanoid robotics taking center stage, Project GR00T underlines NVIDIA’s investment in robot learning and adaptability. These advancements point toward robots taking on a growing share of physical tasks in the future.
The overarching theme from NVIDIA’s GTC was a strong commitment to AI and robotics, driving not just computing but a broad array of applications in industry and everyday life. These developments hold potential for vastly improved efficiencies and capabilities in autonomy, heralding a new era where AI and robotics could become as commonplace and influential as computers are today.
Google is doing super empowering stuff too. Google DeepMind, in collaboration with partners from 33 academic labs, has made a groundbreaking advancement in the field of robotics with the introduction of the Open X-Embodiment dataset and the RT-X model. This initiative aims to transform robots from being specialists in specific tasks to generalists capable of learning and performing across a variety of tasks, robots, and environments. By pooling data from 22 different robot types, the Open X-Embodiment dataset has emerged as the most comprehensive robotics dataset of its kind, showcasing more than 500 skills across 150,000 tasks in over 1 million episodes.
The RT-X model, specifically RT-1-X and RT-2-X, demonstrates significant improvements in performance by utilizing this diverse, cross-embodiment data. These models not only outperform those trained on individual embodiments but also showcase enhanced generalization abilities and new capabilities. For example, RT-1-X showed a 50% success rate improvement across five different robots in various research labs compared to models developed for each robot independently. Furthermore, RT-2-X has demonstrated emergent skills, performing tasks involving objects and skills not present in its original dataset but found in datasets for other robots. This suggests that co-training with data from other robots equips RT-2-X with additional skills, enabling it to perform novel tasks and understand spatial relationships between objects more effectively.
These developments signify a major step forward in robotics research, highlighting the potential for more versatile and capable robots. By making the Open X-Embodiment dataset and the RT-1-X model checkpoint available to the broader research community, Google DeepMind and its partners are fostering open and responsible advancements in the field. This collaborative effort underscores the importance of pooling resources and knowledge to accelerate the progress of robotics research, paving the way for robots that can learn from each other and, ultimately, benefit society as a whole.
More components, readily available to more people, will feed a cycle that produces more cyber-physical systems with increasingly sophisticated, human-like capabilities.
Parallel to these hardware advancements, AI is experiencing an unprecedented boom. Investments in AI are yielding substantial results, driving forward capabilities in machine learning, computer vision, and autonomous decision-making at an extraordinary pace. This synergy between accessible, standardized components and the explosive growth in AI capabilities is setting the stage for a new era of autonomy, where sophisticated autonomous systems can be developed more rapidly and cost-effectively than ever before.
AI is exploding and democratizing simultaneously
Autonomy and Combat
What does all of this mean for modern warfare? Everyone has access to this tech, and innovation is rapidly bringing it into combat. We are right in the middle of the emergence of a powerful new technology that will shape the future of war. Buckle up.
Let’s look at this in the context of Ukraine. The Ukraine-Russia war has seen unprecedented use of increasingly autonomous drones for surveillance, target acquisition, and direct attacks, significantly altering traditional warfare dynamics. Readily available components combined with rapid iteration cycles have democratized aerial warfare, allowing Ukraine to conduct operations that were previously the domain of nations with more substantial air forces and to level the playing field against a more conventionally powerful adversary. These technologies are both accessible and affordable. Drones also encourage risk-taking because they are expendable: they don’t have to be survivable if they are numerous and inexpensive.
The future of warfare will require machine intelligence, mass and rapid iterations
The conflict has also underscored the importance of counter-drone technologies and tactics. Both sides have had to adapt to the evolving drone threat by developing means to detect, jam, or otherwise neutralize opposing drones. Moreover, drones have expanded the information environment, allowing unprecedented levels of surveillance and data collection which have galvanized global support for the conflict and provided options to create propaganda, to boost morale, and to document potential war crimes.
The effects are real. More than 200 companies manufacture drones within Ukraine, and some estimates show that 30% of the Russian Black Sea fleet has been destroyed by uncrewed systems. Larger military drones like the Bayraktar TB2 and the Russian Orion have seen decreased use as they became easier targets for anti-air systems. Ukrainian forces have adapted with smaller drones, which have proved effective at a tactical level, providing real-time intelligence and precision strike capabilities. Ukraine has the capacity to produce 150,000 drones every month, may be able to produce two million by the end of the year, and has struck over 20,000 Russian targets with them.
As the war continues, innovations in drone technology persist, reflecting the growing importance of drones in modern warfare. The conflict has shown that while drones alone won’t decide the outcome of the war, they will undeniably influence future conflicts and continue to shape military doctrine and strategy.
Autonomy is an exciting and impactful field and the story is just getting started. Stay tuned.
Article views are my own and not coordinated with my employer, Boeing, the National Academies, DARPA or the US Air Force.
Aerospace and Defense work is super technical, rewarding and meaningful. But it’s more than that. Everyone in this industry deeply cares about protecting the people who fly and fight in these planes. You feel it in every meeting. Protecting people plays a key role in every decision.
Defense takes it to a new level. When my teams were building the F-35, I would lie awake at night thinking about one of my close friends taking off to head over the big ocean. I could feel them praying that they would come back, taking a glance at their family pic, wondering if they were ready for what the nation was asking of them. Wondering if they would do their job, their duty and their mission with honor.
My job was to make sure that there was one worry they never had: their plane, and all its systems, would work. Everything from their microelectronics to their software was tested, verified and effective.
Especially when it matters most: when the high-pitched tone fills the cockpit, missiles locked and ready. The HUD’s reticle begins to steady, the pickle press is immediately followed by weapons release, countermeasures sync’d and deployed. Boom. One pilot flies away in the plane we built. Everything must work. Every time. Even when other people on the other side of the world are spending their career to prevent all that from working. That is a mission, and one worth a life of work.
I just watched Oppenheimer. This movie has had enough reviews, but it spoke to the work my colleagues and I do every day. It was a fine movie, but, to me, Oppenheimer the person wasn’t that interesting.
Yes, it was cool to see what Hollywood thinks an academic superhero should be: learning Sanskrit for fun or Dutch in a month while building new physics and drinking a lot. His national service was commendable. But the movie portrayed him as a moral adolescent with multiple affairs, confused service, and shifting beliefs: fluid convictions with uncertain moral foundations. Yes, he, like the rest of us, was trying to do right and live with the consequences of his actions. But there are a lot of smart folks out there who think about what they do.
Smart is nice, but I’m always on the lookout for conviction. Google and Goldman are filled with smart people. Good for them, but I’m after something else. Enter General Groves. He built the Pentagon in record time and under budget as the deputy chief of construction for the U.S. Army Corps of Engineers. He built over 29 acres of building with over 6 million square feet of floor space, in just 16 months. You have to believe in what you are doing to effectively coordinate the efforts of thousands of workers, architects, and engineers, while also navigating the complex demands of military bureaucracy.
The Pentagon was a warm-up act for the complexities of the Manhattan Project: 130,000 people working together at a cost of about $27 billion in current dollars. Why? To win the war. Without that bomb, Floyd “Dutch” Booher, my grandpa, would have died in Japan and you wouldn’t have this post to read. At the same time, this one project fundamentally changed the nature of warfare and international relations, and drove unprecedented advancements in science and technology that continue to shape our world today. They did a thing.
As I seek to improve at building and delivering combat power, Groves is one of the leaders I’ve carefully studied. He is there with Hyman G. Rickover (the U.S. Navy’s nuclear propulsion program, revolutionizing submarine warfare), Bernard “Bennie” Schriever (the intercontinental ballistic missile program and our current space capabilities), Vannevar Bush (coordinated U.S. scientific research during WWII, leading to radar, the Manhattan Project, and early computers), and Curtis LeMay (Strategic Air Command). These are fascinating lives worth studying and learning from. We all need heroes, and while these men have feet of clay, they believed and acted in conviction. People followed with shared conviction.
But set that all aside, because the real hero of the Oppenheimer movie for me was Boris Pash, the passionate and purposeful head of security. I think he was cast to be a villain, outsmarted by Oppenheimer, but I saw something else. That is probably because my core conviction is that all true power is moral power, and moral power requires moral clarity. I’m a moral-clarity-seeking missile; it’s what I look for in a crowded room. You can get to moral clarity in two ways: unquestioned loyalty or intense moral discovery. The first road is dangerous: you could end up a brave Nazi. The second is harder but is the road worth traveling. Simple beliefs are good if you have embraced the complexity and nuance it takes to get there. It’s our road in Aerospace.
The real Boris Pash led the Alsos Mission, a secret U.S. operation during World War II tasked with finding and confiscating German nuclear research to prevent the Nazis from developing atomic bombs. His role was critical in ensuring the security of the Manhattan Project and in counterintelligence efforts against potential espionage threats, including monitoring and neutralizing efforts by foreign powers or individuals who might compromise the project’s security or the United States’ strategic advantages in nuclear technology.
In a movie of confused scientists and slippery politicians, Boris Pash stood tall. His character was a compelling example of leadership conviction in the face of moral ambiguity. His unwavering commitment to his cause, rooted in personal experiences fighting the Bolsheviks, stands in stark contrast to the inner conflicts and ethical doubts plaguing the scientists.
“This is a guy who has killed communists with his own hands.”
Gen. Groves
For defense leaders, Pash’s steadfast moral clarity is both admirable and just how we do business. In a field often fraught with complex decisions and far-reaching consequences, having a strong ethical framework and a deep understanding of the rightness and necessity of one’s actions is table stakes. I want to be Boris Pash, not Oppenheimer.
At its core, leadership conviction is about having a clear sense of purpose and a commitment to a set of well tested values and principles. It involves making difficult choices based on a strong moral compass, even in the face of uncertainty, opposition, or personal risk. It’s what our nation needs for us to build.
Socrates said that to do good, one must first know good. Knowing good requires a lot of homework: studying the ethics of Kant, the wisdom of the Stoics, the convictions of Lincoln. It’s a commitment to a careful review of the key actors who made the hard choices. It’s knowing why Socrates drank the hemlock. It is a commitment to philosophical inquiry, self-reflection, and a sincere pursuit of truth and moral understanding. It’s always being open to being wrong, combined with a deep conviction that one is right. Getting this right is the hardest of hard callings.
In our industry, it means constantly re-evaluating our principles and actions, seeking guidance from trusted sources, and engaging in ongoing self-reflection. It’s about having the courage to make tough decisions while also remaining open to new information and perspectives. It’s about finding all the gray in your life and never giving up the journey to drive it out.
But why? For the pride of saying you’re right? No, because leading with clarity is the only way to get big things done. Humility is helpful, but only because it provides understanding. Humility is emotion-free clarity about reality, and it’s required to correctly understand the world. But you can’t lead unless you wrestle with “why?” Aerospace and defense leaders need to know the purpose and reason, the teleology, of every plane, bomb, or information system they build.
Endless wrestling is fine for academics, but if you are going to lead in the business of building things, you have to get to the other side and land on firm ground. You have to know and communicate why, and your actions will speak to the fact that you’ve found your way outside Plato’s cave by being brave enough to stare at the sun. When you get there, you don’t need to look in the mirror and practice being transparent or authentic, nor does your focus stop at quality. Your focus roves the landscape until you find the thing that must be done for the mission, whatever that is. Shared conviction leads to execution, excellence, and delivery. It brings everyone together without the need for a new diversity and inclusion campaign.
We have a lot of conviction in our industry, but we need more conviction and courage. When your conviction is that we need to win wars and protect passengers — when you have the conviction of Boris Pash — the trivial fades away and the important stuff comes into focus. Anyone in this industry knows what I’m talking about. When the chips are down and that fighter pilot is going to press that button — all the politics won’t matter, conviction will.
That is why I’m here and why this is my place. We will persist until we rebuild this industry, but we will rebuild it around the core conviction that excellence doesn’t exist to drive shareholder value. It doesn’t exist to win the next competition. It doesn’t exist for the next promotion or to help your preferred population gain more seats in the room. Every one of those things can be good. But they are not “the thing” that gets us there.
Delivery of effective products that advance the state of the art must be our north star. The conviction to deliver on excellence forms an unwavering shared commitment that brings a global business together. Because freedom matters, protecting passengers matters, winning wars matters. We get none of those things without excellent work. And you don’t get excellence with 99% commitment.
It’s a worthy thing and we all need to remember it’s not a career in this line of work–it’s a righteous calling. There are plenty of industries where you can do good enough work, but not this one. Let’s work together to make that true. I long for the eyes of Boris Pash. Eyes of conviction grounded and secured, purpose sure and mission ready. Let’s get to work.
I’m going to dive into the theory of laser alignment, show the math, and show the jigs I built to put that math into practice. This post is for someone who wants the “why” behind a lot of the online material on laser cutters. If you just want some practical tips on how to better align your laser, skip to the end. I won’t compete with the excellent videos that show the practical side of alignment.
So let’s dive in . . .
My laser cutter is CO2 based, optimized for precision material processing. The laser tube generates a coherent beam at 10640 nm, which is directed via a series of mirrors (1, 2, 3) before focusing on the workpiece. Each mirror introduces minimal power loss, typically less than 1% per reflection, depending on the coating and substrate quality. The beam’s path is engineered to maintain maximal intensity, ensuring that the photon density upon the material’s surface is sufficient to induce rapid localized heating, vaporization, or ablation.
The choice of the 10640 nm wavelength for CO2 lasers is driven by a balance of efficiency, material interaction, and safety. This far-infrared wavelength is strongly absorbed by many materials, making it effective for cutting and engraving a wide variety of them. It provides a good balance between power efficiency and beam quality. Additionally, this wavelength is safer to use, as it’s less likely to cause eye damage than shorter, visible wavelengths (roughly 400-600 nm). Fiber lasers, by contrast, have a shorter wavelength (around 1060 nm), which is more readily absorbed by metals.
However, 10640 nm has drawbacks. Its longer wavelength limits the ability to finely focus the beam compared to shorter wavelengths, affecting the achievable precision. The diffraction limit gives the smallest possible spot size \( d \) by the formula \( d = 1.22 \times \lambda \times \frac{f}{D} \), where \( \lambda \) is the wavelength, \( f \) is the focal length of the lens, and \( D \) is the diameter of the lens. A longer wavelength needs a larger lens diameter or a shorter focal length to achieve the same spot size. Since the machine size is limited, the longer wavelength results in a larger minimum spot size. This larger spot size limits the precision and minimum feature size the laser can effectively cut or engrave.
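To put rough numbers on that formula, here is a quick sketch. The focal length and lens diameter are assumed, typical-looking values rather than specs from my machine; only the two wavelengths come from the discussion above.

```python
# Diffraction-limited spot size: d = 1.22 * lambda * f / D
def spot_size(wavelength_m, focal_length_m, lens_diameter_m):
    return 1.22 * wavelength_m * focal_length_m / lens_diameter_m

f = 50.8e-3   # 2-inch focal length (assumed)
D = 20e-3     # 20 mm lens diameter (assumed)

co2   = spot_size(10640e-9, f, D)  # CO2 laser at 10,640 nm
fiber = spot_size(1060e-9, f, D)   # fiber laser at ~1,060 nm

print(f"CO2 spot:   {co2 * 1e6:.0f} um")    # ~33 um with these numbers
print(f"Fiber spot: {fiber * 1e6:.1f} um")  # ~3.3 um
```

Roughly a factor of ten in minimum spot size, straight from the factor of ten in wavelength.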
You can’t adjust the 10640 nm wavelength, which is a consequence of the molecular energy transitions in the CO2 gas mixture. This specific wavelength emerges from the vibrational transitions within the CO2 molecules when they are energized. The nature of these molecular transitions dictates the emission wavelength; 10640 nm is simply where the CO2 gain medium emits most efficiently.
The Omtech 80W CO2 Laser Engraver is fairly precise, with an engraving precision of 0.01 mm and a laser precision of up to 1000 dpi. This is made possible by its robust (i.e., big, heavy machine) construction and the integration of industrial-grade processors that handle complex details and large files efficiently. The machine operates with a stepper motor system for the X and Y axes, facilitating efficient power transmission and precise movement along the guide rails, ensuring a long service life and high repeatability of the engraving or cutting patterns. This level of motor precision enables the machine to handle intricate designs and detailed work, crucial for professional-grade applications.
But! Only if the laser beam is well calibrated. Let’s look at the math of that.
First, intensity follows the inverse square law: for light radiating from a point source, intensity is inversely proportional to the square of the distance from the source. Mathematically, this relationship is depicted as $$I = \frac{P}{4\pi r^2}$$, where \( I \) represents the intensity, \( P \) the power of the laser, and \( r \) the distance from the source. (A well-collimated laser beam spreads far more slowly than an isotropic point source, but the lesson carries over: any spreading or misalignment reduces the intensity delivered to the target.) In practical terms, this means that a small deviation in alignment can lead to a significant decrease in the laser’s intensity at the target point.
Visually, laser intensity looks like this: small increases in \(r\) lead to big drops in \(I\).
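A few plugged-in numbers make the same point. This uses the point-source formula above with the 80 W tube rating discussed below; the distances are arbitrary, and a collimated beam falls off far more slowly in practice, so treat this purely as an illustration of the scaling.

```python
import math

# Point-source intensity: I = P / (4 * pi * r^2)
def intensity(power_w, r_m):
    return power_w / (4 * math.pi * r_m ** 2)

P = 80.0  # watts
for r in (0.10, 0.20, 0.40):  # doubling the distance each time
    print(f"r = {r:.2f} m -> I = {intensity(P, r):7.1f} W/m^2")
# Each doubling of r cuts the intensity by a factor of four.
```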
Laser cutters can’t put the whole tube in the cutting head, so they use three mirrors to steer the beam into a cutting head that can move in a 2D plane.
With this geometry, the effective power at the surface of the material is
$$ P_{eff} = P_{0} \cdot T_{m}^{3} \cdot T_{l} \cdot T_{f} $$
where:
\( P_{0} \) is the initial power of the laser tube.
\( T_{m} \) is the transmission coefficient of each mirror (a value between 0 and 1).
\( T_{l} \) is the transmission coefficient of the focusing lens.
\( T_{f} \) accounts for any additional factors like the focus quality or material properties.
In my case, \( P_{0} \) would be 80 watts. I don’t have values for \(T_l\) and \(T_f\). \(T_l\) typically ranges from 0.9 to 0.99, indicating that 90% to 99% of the laser light is transmitted through the lens. I would love it if anyone has these measured parameters for the Omtech.
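Here is a minimal sketch of the effective-power calculation. Every transmission coefficient is an assumption picked for illustration, since I don’t have measured values as noted above; swap in real numbers if you have them.

```python
# Effective power at the workpiece: P_eff = P0 * Tm^3 * Tl * Tf
P0 = 80.0   # rated tube power, watts
Tm = 0.99   # per-mirror coefficient (assumed), applied for three mirrors
Tl = 0.95   # lens transmission (assumed, within the 0.9-0.99 range)
Tf = 0.90   # catch-all for focus quality, contamination, etc. (assumed)

P_eff = P0 * Tm ** 3 * Tl * Tf
print(f"P_eff = {P_eff:.1f} W")  # about 66 W with these guesses
```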
In reality there is alignment error. Precise calibration matters a lot with lasers, where a millimeter of misalignment can sharply diminish the laser’s intensity and focus, impacting its effectiveness. Practically, my Omtech AF2435-80 can’t cut basic sheet goods without lots of tweaking. An alignment error \( e \) at each mirror shifts the effective path of the laser beam and decreases the energy density at the point of contact with the material. This error reduces the power \( P \) actually hitting the target area \( A \), altering the energy density \( E \) and ultimately the depth \( d \) of the cut.
To actually cut something you need to remove the material, which takes power and time. A laser doesn’t burn material away like a hot light saber. Laser ablation is a process where the intense energy of a focused laser beam is absorbed by a material, causing its rapid heating and subsequent vaporization. This localized heating occurs at the laser’s focal point, where the energy density is highest. It can be so intense that it instantly removes (ablates) the material in the form of gas or plasma. The efficiency and nature of the ablation depend on the laser’s wavelength and the material’s properties. Essentially, the laser beam’s energy disrupts the material’s molecular bonds, leading to vaporization without significant heat transfer to surrounding areas, enabling precise cutting or engraving.
I like cutting wood. Here, the laser’s focused energy causes the wood to rapidly heat up at the point of contact. This intense heat can char or burn the wood, leading to a change in color and texture. In essence, the laser beam causes pyrolysis, where the wood decomposes under high heat in the absence of oxygen. This process can create smoke and a burnt appearance, but it’s controlled and doesn’t ignite a fire like an open flame would.
To cause ablation, the energy applied to a material is a function of power, spot size, and interaction time, all of which are affected by alignment error \( e \). The intensity is the laser power \( P \) divided by the spot area \( A \), in watts per square meter; multiplying by the interaction time gives the energy density \( E \) in joules per square meter. The interaction time \( t \), the time the laser spends on a point of the material, is crucial for determining the amount of energy absorbed. It drives the cutting depth \( d \) and scales with the inverse of the feed rate \( v \). The energy delivered to the material can be calculated by:
$$ E_{burn} = \frac{P_{eff} \cdot t}{A} $$
Substituting the effective power from above gives the energy density at the surface: $$ E_{burn} = \frac{P_{0} \cdot T_{m}^{3} \cdot T_{l} \cdot T_{f} \cdot t}{A} $$
Since \( t \) is inversely proportional to \( v \) (the feed rate), and the depth of the cut \( d \) is proportional to the energy density over time, the equation can be further refined to calculate \( d \):
$$ d \propto \frac{P_{eff}}{v \cdot A} $$
This equation shows that the cutting depth \( d \) is directly proportional to the effective power \( P_{eff} \) and inversely proportional to the product of the feed rate \( v \) and the area of the spot size \( A \).
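A short sketch ties these pieces together. The spot diameter and feed rates are assumed round numbers, not machine specs, and the effective power reuses the guessed coefficients from the sketch above.

```python
import math

P_eff = 66.4                      # watts, from the effective-power sketch
spot_d = 0.2e-3                   # 0.2 mm spot diameter (assumed)
A = math.pi * (spot_d / 2) ** 2   # spot area, m^2

for v in (5e-3, 10e-3, 20e-3):    # feed rate, m/s
    t = spot_d / v                # dwell time of the spot over a point
    E_burn = P_eff * t / A        # energy density delivered, J/m^2
    print(f"v = {v * 1e3:4.0f} mm/s -> E_burn = {E_burn / 1e6:5.1f} MJ/m^2")
# Doubling the feed rate halves the dwell time, and with it the energy
# density and (roughly) the cut depth.
```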
So, if you want to cut effectively, you maximize your power, move at the right speed, and get your beam as focused (small) as possible. To do this practically, you want to make sure you are cutting at the right focal distance and with the right alignment. You also want clean mirrors. The focal distance is determined by a ramp test. I’ll cover alignment below. Cleaning the mirrors increases \(T_m\).
Alignment
To align my laser, I couldn’t just use tape. First, you have to align for precision and then get accuracy by moving the tube. To get precision from mirror 1, you strike a target close to mirror 1 and then farther away. There are many, many videos that walk you through the sequence (mirror 1 to 2 close, mirror 1 to 2 far, etc.). I want to focus on the math of precision.
The pulses will look like this:
Now we can look at the x-dimension to see what point a perfectly straight beam would intersect; call this \(x\). In what follows (a small numeric sketch comes after the definitions):
\( x \) and \( y \) are the coordinates of the true point where the laser needs to be positioned.
\(x_1, y_1\) are the coordinates where the laser hits when the target is near.
\(x_2, y_2\) are the coordinates where the laser hits when the target is far.
\( d_{near} \) is the distance from the laser to the target when it is near.
\( d_{far} \) is the distance from the laser to the target when it is far.
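The plots below come from a simulation, and the exact formula for the true point isn’t written out above, so here is one reasonable reconstruction: assume the spot position varies linearly with distance and extrapolate the beam line back toward the source. Treat it as my reading of the geometry, with made-up example coordinates.

```python
# Extrapolate the near/far hits back toward the source (distance -> 0),
# assuming the lateral offset of the spot varies linearly with distance.
def true_point(near, far, d_near, d_far):
    (x1, y1), (x2, y2) = near, far
    s = d_near / (d_far - d_near)   # how far to extrapolate backwards
    return (x1 - s * (x2 - x1), y1 - s * (y2 - y1))

# Made-up spot coordinates (mm) with d_near = 5 cm and d_far = 35 cm:
near, far = (4.0, -2.0), (8.0, -6.0)
print(true_point(near, far, 5.0, 35.0))  # (3.33..., -1.33...): outside the near-far segment
```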
Plotting what this looks like shows the relationship between the true dot and the near and far dots: it’s on the other side of both. Here are the dots, where near is blue, far is black, and the red dot is the true dot that represents the laser moving in a straight line. These are extreme cases that would represent a pretty misaligned tool. If I run a simulation with d = 5 cm and \(\Delta d\) = 30 cm, with a max distance from the center of 10 mm, I get:
So this is odd: the red dot isn’t in between the black and blue dots. The line connecting those two dots is the path of the laser at its current orientation. I really want to think that the ideal spot would be in between these two dots. However, the intuition that the point (x, y) might average out and fall between (x1, y1) and (x2, y2) is based on a misunderstanding of how the alignment of a laser system works. In a laser alignment system, when you’re adjusting the laser to hit a specific point on a target at different distances, you’re effectively changing the angle of the beam’s trajectory.
The laser source does not move directly between (x1, y1) and (x2, y2), but rather pivots around a point that aligns the beam with the target points. Since the laser’s position is being adjusted to correct for the angle, not for the position between two points, the corrected point will lie on a line that is extrapolated backwards from (x1, y1) and (x2, y2) towards the laser source.
The resulting position (x, y) where the laser needs to be for the beam to be straight and hit the same point on the target at any distance will be on the extension of this line and not necessarily between the two points (x1, y1) and (x2, y2). This is due to the nature of angular adjustment rather than linear movement. The position (x, y) is essentially where the angles of incidence and reflection converge to keep the beam’s path consistent over varying distances. It’s pretty cool that two points give you this angle. In fact, the ideal point is located further back on the line of the laser beam’s path extended backward from the near and far positions, which geometrically cannot lie between the near and far positions on the target unless all three are equal. Fortunately, as the points get very close, you can just fudge around with the dials to get these on top of each other which is probably what most people do.
If the plots are closer to the center, it’s much easier to just not worry about the math. If I constrain the points to be 2 mm from the center:
Too complicated? Here are some basic rules from the math. The practical one first: when shooting at the far distance, aim for the spot where the near shot hit (the near point carries less error).
Linear Alignment: The points are always in a straight line. This is because the red point is calculated based on the positions of the blue and black points. It represents the position where the laser should be to maintain its alignment with the target at both distances. The calculation creates a direct linear relationship between these three points.
Relative Distances: The farther apart the blue and black points are, the farther the red point will be from both of them. This is because a greater distance between the near and far points means a larger angular adjustment is required for the laser, which results in a more significant shift in the red point’s position to maintain alignment.
Ordering of Points: If the blue and black points are flipped (i.e., if the far point becomes nearer to the center than the near point), the red point will also shift its position accordingly. The ordering of the blue and black points relative to the circle’s center will determine which side of these points the red point will fall on.
Proximity to the Center: When both blue and black points are close to the center, the red point will also be relatively closer to the center. This is because minor adjustments are needed when the target moves a shorter distance.
Symmetry in Movement: If the blue and black points are symmetrically positioned around the center (but at different distances), the red point will also tend to be symmetrically positioned with respect to these points along the line they form.
What I did
Armed with the right theory, I had to move past shooting at tape, so I used this SVG from Ed Nisley to create these targets and 3D-printed a holder for them. This is the jig that fits over the nozzle:
And the jig that fits in the 18mm holes.
I also made a holder for my reverse laser so I could use this in the forward direction:
Note: these results are still new and will need to be replicated and further studied by the scientific community. This whole thing may be nonsense, but I did some thinking on the impact if it’s true.
The First Room-Temperature Ambient-Pressure Superconductor:
A material called LK-99, a modified lead-apatite crystal structure, achieves superconductivity at room temperature.
Researchers from Korea University have synthesized a superconductor that works at ambient pressure and mild oven-like temperatures (like 260 deg F, not a room I want to be in) with what they call a modified lead-apatite (LK-99) structure. This is a significant breakthrough, as previous near-room-temperature superconductors required high pressures to function (the highest-temperature results before were at over a million atmospheres), making them impractical for many applications. They are positioning this as a big deal: the authors published two papers about it concurrently, one with six authors and one with only three (a Nobel Prize can be shared by at most three people).
The modified lead-apatite (LK-99) structure is a type of crystal structure that is claimed to achieve superconductivity from a minute structural distortion caused by a slight volume shrinkage (0.48%), not from external factors such as temperature and pressure. Their key innovation for letting electrons glide doesn’t come from low temperature or from squeezing the material together; it comes from an internal tension that forms as the material forms, much like the stress locked into tempered glass.
This structure in particular is a specific arrangement of lead, oxygen, and other atoms in a pattern similar to the structure of apatite, a group of phosphate minerals that you might find in your teeth or bones.
The “modified” part comes in when researchers introduce another element, in this case, copper (Cu2+ ions), into the structure. This slight change, or “modification,” causes a small shrinkage in the overall structure and creates a stress that leads to the formation of superconducting quantum wells (SQWs) in the interface of the structure. These SQWs are what allow the material to become a superconductor at room temperature and ambient pressure.
Superconducting Quantum Wells (SQWs) can be thought of as very thin layers within a material where electrons can move freely. These layers are so thin that they confine the movement of electrons to two dimensions. This confinement changes the behavior of the electrons, allowing them to form pairs and move without resistance, which is the key characteristic of superconductivity. In essence, SQWs are the “magic zones” in a material where the amazing phenomenon of superconductivity happens.
The development of a room-temperature superconductor that operates at ambient pressure would be a significant advancement in the field of physics. Superconductors have zero electrical resistance, meaning that they can conduct electricity indefinitely without losing any energy. This could revolutionize many applications, including power transmission, transportation, and computing. For example, it could lead to more efficient power grids, faster and more efficient computers, and high-speed magnetic levitation trains.
The cool thing is that the LK-99 material can be prepared in about 34 hours using basic lab equipment, making it a practical option for various applications such as magnets, motors, cables, levitation trains, power cables, qubits for quantum computers, THz antennas, and more.
Transportation
So, if the paper is true, how could this change transportation?
Electric Vehicles (EVs): Superconductors can carry electric current without any resistance, which means that electric vehicles could become much more efficient. The batteries in EVs could last longer, and the vehicles could be lighter because the heavy copper wiring currently used could be replaced with superconducting materials. This could lead to significant improvements in range and performance for electric cars, buses, and trucks.
Maglev Trains: Superconductors are already used in some magnetic levitation (maglev) trains to create the magnetic fields that lift and propel the train. Room-temperature superconductors could make these systems much more efficient and easier to maintain, as they wouldn’t require the cooling systems that current superconductors need. This could make maglev trains more common and could lead to faster, more efficient public transportation.
Aircraft: In the aviation industry, superconductors could be used to create more efficient electric motors for aircraft. This could lead to electric airplanes that are more feasible and efficient, reducing the carbon footprint of air travel.
Shipping: Electric propulsion for ships could also become more efficient with the use of superconductors, leading to less reliance on fossil fuels in the shipping industry.
Infrastructure: Superconducting materials could be used in the power grid to reduce energy loss during transmission. This could make electric-powered transportation more efficient and sustainable on a large scale.
Space Travel: In space travel, superconductors could be used in the creation of powerful magnetic fields for ion drives or other advanced propulsion technologies, potentially opening up the solar system to more extensive exploration.
The other field I want to explore is computing, just some quick thoughts here on what I would want to explore at DARPA in an area like this:
Energy Efficiency: Superconductors carry electrical current without resistance, which means that they don’t produce heat. This could drastically reduce the energy consumption of computers and data centers, which currently use a significant amount of energy for cooling.
Processing Speed: Superconductors can switch between states almost instantly, which could lead to much faster processing speeds. This could enable more powerful computers and could advance fields that require heavy computation, like artificial intelligence and data analysis.
Quantum Computing: Superconductors are already used in some types of quantum computers, which can solve certain types of problems much more efficiently than classical computers. Room-temperature superconductors could make quantum computers more practical and accessible, potentially leading to a revolution in computing power.
Data Storage: Superconductors could also be used to create more efficient and compact data storage devices. This could increase the amount of data that can be stored in a given space and could improve the speed at which data can be accessed.
Internet Infrastructure: The internet relies on a vast network of cables to transmit data. Superconducting cables could transmit data with virtually no loss, improving the speed and reliability of internet connections.
Nanotechnology: The properties of superconductors could enable the development of new nanotechnologies in computing. For example, they could be used to create nanoscale circuits or other components. This could be from leveraging some cool physics. Superconductors expel magnetic fields, a phenomenon known as the Meissner effect. This property could be used in nanotechnology for creating magnetic field shields or for the precise control of magnetic fields at the nanoscale. Another cool application could leverage Josephson Junctions. Superconductors can form structures which are a type of quantum mechanical switch. These junctions can switch states incredibly quickly, potentially allowing for faster processing speeds in nanoscale electronic devices. Finally, you could build very sensitive sensors with superconductors by creating extremely sensitive magnetic sensors (SQUIDs – Superconducting Quantum Interference Devices). At the nanoscale, these could be used for a variety of applications, including the detection of small magnetic fields, such as those used in hard drives.
So, we will see how the debate on this pans out. In the meantime, speculate on some lead mines and enjoy a dose of science-fueled optimism about the type of breakthroughs we may see in the near future.