Posts by: Tim Booher

Autonomy

I lead autonomy at Boeing. What exactly do I do?

We engineers have kidnapped a word that doesn’t belong to us. Autonomy is not a tech word; it’s the ability to act independently. It’s freedom that we design in and give to machines.

It’s also a bit more. Autonomy is the ability to make decisions and act independently based on goals, knowledge, and understanding of the environment. It’s an exploding technical area with new discoveries daily and maybe one of the most exciting tech explosions in human history.

We can fall into a trap of thinking that autonomy is code: a set of instructions governing a system. Code is just language, a set of signals; it’s not a capability. We remember Descartes for his radical skepticism or for giving us the X and Y axes, but he is the first person who really gets credit for the concept of autonomy with his “thinking self,” the “cogito.” Descartes argued that the ability to think and reason independently was the foundation of autonomy.

But I work on giving life and freedom to machines. What does that look like? Goethe gives us a good mental picture in Der Zauberlehrling (later adapted in Disney’s “Fantasia”), when the sorcerer’s apprentice uses magic to bring a broom to life to do his chores, only to lose his own autonomy as chaos ensues.

Giving our human-like freedom to machines is dangerous, and every autonomy story gets at this emergent danger. This is why autonomy and ethics are inextricably linked, and why “containment” (keeping AI from taking over) and “alignment” (making AI share our values) are the most important (and challenging) technical problems today.

A lesser-known story gets at the promise, power, and peril of autonomy. The Golem of Prague emerged from Jewish folklore in the 16th century. Through centuries of pogroms, the persecuted Jews of Eastern Europe found comfort in the story of a powerful creature with supernatural strength who patrolled the streets of the Jewish ghetto in Prague, protecting the community from attacks and harassment.

The golem was created by Rabbi Judah Loew, known as the Maharal, using clay from the banks of the Vltava River. He brought the golem to life by placing a shem (a paper with a divine name) into its mouth or by inscribing the word “emet” (truth) on its forehead. One famous story involves the golem preventing a mob from attacking the Jewish ghetto after a priest had accused the Jews of murdering a Christian child to use their blood for Passover rituals. The golem found the real culprit and brought them to justice, exonerating the Jewish community.

However, as the legend goes, the golem grew increasingly unstable and difficult to control. Fearing that the golem might cause unintended harm, the Maharal was forced to deactivate it by removing the shem from its mouth or erasing the first letter of “emet” (which changes the word to “met,” meaning death) from its forehead. The deactivated golem was then stored in the attic of the Old New Synagogue in Prague, where some say it remains to this day.

The Golem of Prague

Power, protection of the weak, emergent properties, containment. The whole autonomy ecosystem in one story. From Terminator to Her, why does every autonomy story go bad in some way? It’s fundamentally because giving human agency to machines is playing God. My favorite modern philosopher, Alvin Plantinga, describes the qualifications we can accept in a creator: “a being that is all-powerful, all-knowing, and wholly good.” We share none of those properties. Do we really have any business playing with stuff this powerful?

The Technology of Autonomy

We don’t have a choice: the world is going here, and there is much good work to be done. Engineers today have the honor of being modern-day Maharals, building safer and more efficient systems with the next generation of autonomy. But what specifically are we building, and how do we build it so it’s well understood, safe, and contained?

A good autonomous system requires software (intelligence), a system of trust, and human interface/control. At its core, autonomy is systems engineering: the ability to take dynamic and advanced technologies and make them control a system in effective and predictable ways. The heart of this capability is software. To delegate control to a system, it needs software for perception, decision-making, action, and communication. Let’s break these down (a minimal sketch of the loop follows the list).

  1. Perception: An autonomous system must be able to perceive and interpret its environment accurately. This involves sensors, computer vision, and other techniques to gather and process data about the surrounding world.
  2. Decision-making: Autonomy requires the ability to make decisions based on the information gathered through perception. This involves algorithms for planning, reasoning, and optimization, as well as machine learning techniques to adapt to new situations.
  3. Action: An autonomous system must be capable of executing actions based on its decisions. This involves actuators, controllers, and other mechanisms to interact with the physical world.
  4. Communication: Autonomous systems need to communicate and coordinate with other entities, whether they be humans or other autonomous systems. This requires protocols and interfaces for exchanging information and coordinating actions.
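
To make these four functions concrete, here is a minimal sense-decide-act-communicate loop. This is an illustrative Python sketch only; the class, function, and command names are invented for the example and are not any real robotics API.

from dataclasses import dataclass

@dataclass
class WorldModel:
    obstacle_ahead: bool = False

def perceive(range_reading_m: float) -> WorldModel:
    # Perception: turn a raw sensor signal into a model of the environment.
    return WorldModel(obstacle_ahead=range_reading_m < 1.0)

def decide(model: WorldModel) -> str:
    # Decision-making: choose an action that serves the goal (move safely).
    return "turn_left" if model.obstacle_ahead else "go_forward"

def act(command: str) -> None:
    # Action: drive actuators; here we print instead of moving motors.
    print(f"actuating: {command}")

def communicate(command: str) -> None:
    # Communication: report intent to humans or other agents.
    print(f"broadcast: {command}")

# One tick of the loop with a simulated range reading of 0.8 m.
model = perceive(0.8)
command = decide(model)
act(command)
communicate(command)

Real systems run this loop continuously against far richer models, but the four responsibilities divide the same way.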

Building autonomous systems requires a diverse set of skills, including ethics, robotics, artificial intelligence, distributed systems, formal analysis, and human-robot interaction. Autonomy experts have a strong background in robotics, combining perception, decision-making, and action in physical systems, and understanding the principles of kinematics, dynamics, and control theory. They are proficient in AI techniques such as machine learning, computer vision, and natural language processing, which are essential for creating autonomous systems that can perceive, reason, and adapt to their environment. As autonomous systems become more complex and interconnected, expertise in distributed systems becomes increasingly important for designing and implementing systems that can coordinate and collaborate with each other. Additionally, autonomy experts understand the principles of human-robot interaction and can design interfaces and protocols that facilitate seamless communication between humans and machines.

As technology advances, the field of autonomy is evolving rapidly. One of the most exciting developments is the emergence of collaborative systems of systems – large groups of autonomous agents that can work together to achieve common goals. These swarms can be composed of robots, drones, or even software agents, and they have the potential to revolutionize fields such as transportation, manufacturing, and environmental monitoring.

How would a boxer box if they could instantly decompose into a million pieces and re-emerge as any shape? Differently.

What is driving all this?

Two significant trends are rapidly transforming the landscape of autonomy: the standardization of components and significant advancements in artificial intelligence (AI). Components like VOXL and Pixhawk are pioneering this shift by providing open-source platforms that significantly reduce the time and complexity involved in building and testing autonomous systems. VOXL, for example, is a powerful, SWAP-optimized computing platform that brings together machine vision, deep learning processing, and connectivity options like 5G and LTE, tailored for drone and robotic applications. Similarly, Pixhawk stands as a cornerstone in the drone industry, serving as a universal hardware autopilot standard that integrates seamlessly with various open-source software, fostering innovation and accessibility across the drone ecosystem. All this means you don’t have to be Boeing to start building autonomous systems.

Standard VOXL board

These hardware advancements are complemented by cheap sensors, AI-specific chips, and other innovations, making sophisticated technologies broadly affordable and accessible. The common standards established by these components have not only simplified development processes but also ensured compatibility and interoperability across different systems. All the ingredients for a Cambrian explosion in autonomy.

The latest from NVIDIA and Google

These companies are building a bridge from software to real systems.

The latest advancements from NVIDIA’s GTC and Google’s work in robotics libraries highlight a pivotal moment where the realms of code and physical systems, particularly in digital manufacturing technologies, are increasingly converging. NVIDIA’s latest conference signals a transformative moment in the field of AI with some awesome new technologies:

  • Blackwell GPUs: NVIDIA introduced the Blackwell platform, which boasts a new level of computing efficiency and performance for AI, enabling real-time generative AI with trillion-parameter models. This advancement promises substantial cost and energy savings.
  • NVIDIA Inference Microservices (NIMs): NVIDIA is making strides in AI deployment with NIMs, a cloud-native suite designed for fast, efficient, and scalable development and deployment of AI applications.
  • Project GR00T: With humanoid robotics taking center stage, Project GR00T underlines NVIDIA’s investment in robotics learning and adaptability. These advancements imply that robots will be integral to motion and tasks in the future.

The overarching theme from NVIDIA’s GTC was a strong commitment to AI and robotics, driving not just computing but a broad array of applications in industry and everyday life. These developments hold potential for vastly improved efficiencies and capabilities in autonomy, heralding a new era where AI and robotics could become as commonplace and influential as computers are today.

Google is doing super empowering stuff too. Google DeepMind, in collaboration with partners from 33 academic labs, has made a groundbreaking advancement in the field of robotics with the introduction of the Open X-Embodiment dataset and the RT-X model. This initiative aims to transform robots from being specialists in specific tasks to generalists capable of learning and performing across a variety of tasks, robots, and environments. By pooling data from 22 different robot types, the Open X-Embodiment dataset has emerged as the most comprehensive robotics dataset of its kind, showcasing more than 500 skills across 150,000 tasks in over 1 million episodes.

The RT-X model, specifically RT-1-X and RT-2-X, demonstrates significant improvements in performance by utilizing this diverse, cross-embodiment data. These models not only outperform those trained on individual embodiments but also showcase enhanced generalization abilities and new capabilities. For example, RT-1-X showed a 50% success rate improvement across five different robots in various research labs compared to models developed for each robot independently. Furthermore, RT-2-X has demonstrated emergent skills, performing tasks involving objects and skills not present in its original dataset but found in datasets for other robots. This suggests that co-training with data from other robots equips RT-2-X with additional skills, enabling it to perform novel tasks and understand spatial relationships between objects more effectively.

These developments signify a major step forward in robotics research, highlighting the potential for more versatile and capable robots. By making the Open X-Embodiment dataset and the RT-1-X model checkpoint available to the broader research community, Google DeepMind and its partners are fostering open and responsible advancements in the field. This collaborative effort underscores the importance of pooling resources and knowledge to accelerate the progress of robotics research, paving the way for robots that can learn from each other and, ultimately, benefit society as a whole.

More components, readily available to more people, will create a cycle with more cyber-physical systems with increasingly sophisticated and human-like capabilities.

Parallel to these hardware advancements, AI is experiencing an unprecedented boom. Investments in AI are yielding substantial results, driving forward capabilities in machine learning, computer vision, and autonomous decision-making at an extraordinary pace. This synergy between accessible, standardized components and the explosive growth in AI capabilities is setting the stage for a new era of autonomy, where sophisticated autonomous systems can be developed more rapidly and cost-effectively than ever before.

AI is exploding and democratizing simultaneously

Autonomy and Combat

What does all of this mean for modern warfare? Everyone has access to this tech and innovation is rapidly bringing these technologies into combat. We are right in the middle of a new powerful technology that will shape the future of war. Buckle up.

Let’s look at this in the context of Ukraine. The Ukraine-Russia war has seen unprecedented use of increasingly autonomous drones for surveillance, target acquisition, and direct attacks, altering traditional warfare dynamics significantly. The readily available components combined with rapid iteration cycles have democratized aerial warfare, allowing Ukraine to conduct operations that were previously the domain of nations with more substantial air forces and level the playing field against a more conventionally powerful adversary. These technologies are both accessible and affordable. While drones contribute to risk-taking by allowing for expendability, they don’t necessarily have to be survivable if they are numerous and inexpensive.

The future of warfare will require machine intelligence, mass and rapid iterations

The conflict has also underscored the importance of counter-drone technologies and tactics. Both sides have had to adapt to the evolving drone threat by developing means to detect, jam, or otherwise neutralize opposing drones. Moreover, drones have expanded the information environment, allowing unprecedented levels of surveillance and data collection which have galvanized global support for the conflict and provided options to create propaganda, to boost morale, and to document potential war crimes.

The effects are real. More than 200 companies manufacture drones within Ukraine, and some estimates show that 30% of the Russian Black Sea fleet has been destroyed by uncrewed systems. Larger military drones like the Bayraktar TB2 and Russian Orion have seen decreased use as they became easier targets for anti-air systems. Ukrainian forces have adapted with smaller drones, which have proved effective at a tactical level, providing real-time intelligence and precision strike capabilities. Ukraine has the capacity to produce 150,000 drones every month, may be able to produce two million drones by the end of the year, and has struck over 20,000 Russian targets.

As the war continues, innovations in drone technology persist, reflecting the growing importance of drones in modern warfare. The conflict has shown that while drones alone won’t decide the outcome of the war, they will undeniably influence future conflicts and continue to shape military doctrine and strategy.

Autonomy is an exciting and impactful field and the story is just getting started. Stay tuned.


Conviction and Leadership in Aerospace

Article views are my own and not coordinated with my employer, Boeing, the National Academies, DARPA or the US Air Force.

Aerospace and Defense work is super technical, rewarding and meaningful. But it’s more than that. Everyone in this industry deeply cares about protecting the people who fly and fight in these planes. You feel it in every meeting. Protecting people plays a key role in every decision.

Defense takes it to a new level. When my teams were building the F-35, I would lie awake at night thinking about one of my close friends taking off to head over the big ocean. I could feel them praying that they would come back, taking a glance at their family pic, wondering if they were ready for what the nation was asking of them. Wondering if they would do their job, their duty and their mission with honor.

My job was to make sure that there was one worry they never had: their plane, and all its systems, would work. Everything from their microelectronics to their software was tested, verified and effective.

Especially when it matters most: when the high-pitched tone fills the cockpit, missiles locked and ready. The HUD’s reticle begins to steady, the pickle press is immediately followed by weapons release, countermeasures sync’d and deployed. Boom. One pilot flies away in the plane we built. Everything must work. Every time. Even when other people on the other side of the world are spending their career to prevent all that from working. That is a mission, and one worth a life of work.

I just watched Oppenheimer. This movie has had enough reviews, but it spoke to the work I and my colleagues do every day. It was a fine movie, but, to me, Oppenheimer the person wasn’t that interesting.

Yes, it was cool to see what Hollywood thinks an academic superhero should be: learning Sanskrit for fun or Dutch in a month while building new physics and drinking a lot. His national service was commendable. But the movie portrayed him as a moral adolescent with multiple affairs, confused service and shifting beliefs — fluid convictions with uncertain moral foundations. Yes, he, like the rest of us, was trying to do right and live with the consequences of his actions. But there are a lot of smart folks out there who think about what they do.

Smart is nice, but I’m always on the lookout for conviction. Google and Goldman are filled with smart people. Good for them, but I’m after something else. Enter General Groves. As the deputy chief of construction for the U.S. Army Corps of Engineers, he built the Pentagon in record time and under budget: over 29 acres with more than 6 million square feet of floor space, in just 16 months. You have to believe in what you are doing to effectively coordinate the efforts of thousands of workers, architects, and engineers, while also navigating the complex demands of military bureaucracy.

The Pentagon was a warm-up act for the complexities of the Manhattan Project: 130,000 people working together at a cost of about $27 billion in current dollars. Why? To win the war. Without that bomb, Floyd “Dutch” Booher, my grandpa, would have died in Japan and you wouldn’t have this post to read. At the same time, this one project fundamentally changed the nature of warfare and international relations, and drove unprecedented advancements in science and technology that continue to shape our world today. They did a thing.

As I seek to improve at building and delivering combat power, Groves is one of the leaders I’ve carefully studied. He is there with Hyman G. Rickover (the U.S. Navy’s nuclear propulsion program, revolutionizing submarine warfare), Bernard “Bennie” Schriever (the intercontinental ballistic missile program and our current space capabilities), Vannevar Bush (coordinating U.S. scientific research during WWII, leading to radar, the Manhattan Project, and early computers), and Curtis LeMay (Strategic Air Command). These are fascinating lives worth studying and learning from. We all need heroes, and while these men have feet of clay, they believed and acted in conviction. People followed with shared conviction.

But set that all aside, because the real hero of the Oppenheimer movie for me was Boris Pash, the passionate and purposeful head of security. I think he was cast to be a villain, outsmarted by Oppenheimer, but I saw something else. That is probably because my core conviction is that all true power is moral power, and moral power requires moral clarity. I’m a moral-clarity-seeking missile; it’s what I look for in a crowded room. You can get to moral clarity in two ways: unquestioned loyalty or intense moral discovery. The first road is dangerous; you could end up a brave Nazi. The second is harder but is the road worth traveling. Simple beliefs are good if you embrace the complexity and nuance to get there. It’s our road in Aerospace.

The real Boris Pash led the Alsos Mission, a secret U.S. operation during World War II tasked with finding and confiscating German nuclear research to prevent the Nazis from developing atomic bombs. His role was critical in ensuring the security of the Manhattan Project and in counterintelligence efforts against potential espionage threats, including monitoring and neutralizing efforts by foreign powers or individuals who might compromise the project’s security or the United States’ strategic advantages in nuclear technology.

In a movie of confused scientists and slippery politicians, Boris Pash stood tall. His character was a compelling example of leadership conviction in the face of moral ambiguity. His unwavering commitment to his cause, rooted in personal experiences fighting the Bolsheviks, stands in stark contrast to the inner conflicts and ethical doubts plaguing the scientists.

“this is a guy who has killed communists with his own hands.”

Gen Groves

For defense leaders, Pash’s steadfast moral clarity is both admirable and just how we do business. In a field often fraught with complex decisions and far-reaching consequences, having a strong ethical framework and a deep understanding of the rightness and necessity of one’s actions is table stakes. I want to be Boris Pash, not Oppenheimer.

At its core, leadership conviction is about having a clear sense of purpose and a commitment to a set of well tested values and principles. It involves making difficult choices based on a strong moral compass, even in the face of uncertainty, opposition, or personal risk. It’s what our nation needs for us to build.

Socrates said, “To do good, one must first know good.” To know good requires a lot of homework: studying the ethics of Kant, the wisdom of the Stoics, the convictions of Lincoln. It’s a commitment to a careful review of the key actors who made the hard choices. It’s knowing why Socrates drank the hemlock. It is a commitment to philosophical inquiry, self-reflection, and a sincere pursuit of truth and moral understanding. It’s always being open to being wrong, combined with a deep conviction that one is right. Getting this right is the hardest of hard callings.

In our industry, it means constantly re-evaluating our principles and actions, seeking guidance from trusted sources, and engaging in ongoing self-reflection. It’s about having the courage to make tough decisions while also remaining open to new information and perspectives. It’s about finding all the gray in your life and never giving up the journey to drive it out.

But why? For the pride of saying you’re right? No, because leading with clarity is the only way to get big things done. Humility is helpful, but only because it provides understanding. Humility is emotion-free clarity about reality, and it’s required to correctly understand the world. But you can’t lead unless you wrestle with “why?”. Aerospace and defense leaders need to know the purpose and reason, the teleology, of every plane, bomb, or information system they build.

Endless wrestling is fine for academics, but if you are going to lead in the business of building things, you have to get to the other side and land on firm ground. You have to know and communicate why, and your actions will speak to the fact that you’ve found your way outside Plato’s cave by being brave enough to stare at the sun. When you get there, you don’t need to look in the mirror and practice being transparent or authentic, nor does your focus stop at quality. Your focus roves the landscape until you find the thing that must be done for the mission — whatever that is. Shared conviction leads to execution, excellence, and delivery. It brings everyone together without the need for a new diversity and inclusion campaign.

We have a lot of conviction in our industry, but we need more conviction and courage. When your conviction is that we need to win wars and protect passengers — when you have the conviction of Boris Pash — the trivial fades away and the important stuff comes into focus. Anyone in this industry knows what I’m talking about. When the chips are down and that fighter pilot is going to press that button — all the politics won’t matter, conviction will.

That is why I’m here and why this is my place. We will persist until we rebuild this industry, but we will rebuild it around the core conviction that excellence doesn’t exist to drive shareholder value. It doesn’t exist to win the next competition. It doesn’t exist for the next promotion or to help your preferred population gain more seats in the room. Every one of those things can be good. But they are not “the thing” that gets us there.

Delivery of effective products that advance the state of the art must be our north star. The conviction to deliver on excellence forms an unwavering shared commitment that brings a global business together. Because freedom matters, protecting passengers matters, winning wars matters. We get none of those things without excellent work. And you don’t get excellence with 99% commitment.

It’s a worthy thing, and we all need to remember it’s not a career in this line of work; it’s a righteous calling. There are plenty of industries where you can do good enough work, but not this one. Let’s work together to make that true. I long for the eyes of Boris Pash. Eyes of conviction grounded and secured, purpose sure and mission ready. Let’s get to work.


Laser Alignment

I’m going to dive into the theory of laser alignment, show the math, and share the jigs I built to put that math into practice. This post is for someone who wants the “why” behind a lot of the online material on laser cutters. If you just want some practical tips on how to better align your laser, skip to the end. I won’t compete with the excellent videos that show the practical side of alignment.

So let’s dive in . . .

My laser cutter is CO2 based, optimized for precision material processing. The laser tube generates a coherent beam at 10640 nm, which is directed via a series of mirrors (1, 2, 3) before focusing on the workpiece. Each mirror introduces minimal power loss, typically less than 1% per reflection, depending on the coating and substrate quality. The beam’s path is engineered to maintain maximal intensity, ensuring that the photon density upon the material’s surface is sufficient to induce rapid localized heating, vaporization, or ablation.

The choice of the 10640 nm wavelength for CO2 lasers is driven by a balance of efficiency, material interaction, and safety. This far-infrared wavelength is strongly absorbed by many materials, making it effective for cutting and engraving a wide variety of them. It provides a good balance between power efficiency and beam quality. Additionally, this wavelength is safer to use, as it’s less likely to cause eye damage than shorter, visible wavelengths (roughly 400-700 nm). Fiber lasers, by contrast, have a shorter wavelength (around 1060 nm), which is more readily absorbed by metals.

However, 10640 nm has drawbacks. Its longer wavelength limits the ability to finely focus the beam compared to shorter wavelengths, affecting the achievable precision. The diffraction limit gives the smallest possible spot size \( d \) via the formula \( d = 1.22 \times \lambda \times \frac{f}{D} \), where \( \lambda \) is the wavelength, \( f \) is the focal length of the lens, and \( D \) is the diameter of the lens. A longer wavelength needs a larger lens or a shorter focal length to make a small spot. Since the machine size is limited, the longer wavelength results in a larger minimum spot size, which limits the precision and minimum feature size the laser can effectively cut or engrave.
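
To see the penalty in numbers, here is a quick Python sketch of that spot-size formula. The 50 mm focal length and 20 mm lens diameter are assumptions for illustration, not Omtech specs.

def spot_size(wavelength_m, focal_length_m, lens_diameter_m):
    # Diffraction-limited spot size: d = 1.22 * lambda * f / D
    return 1.22 * wavelength_m * focal_length_m / lens_diameter_m

f, D = 50e-3, 20e-3                        # assumed lens: f = 50 mm, D = 20 mm
co2 = spot_size(10640e-9, f, D)            # CO2 laser at 10640 nm
fiber = spot_size(1060e-9, f, D)           # fiber laser at 1060 nm
print(f"CO2 spot:   {co2 * 1e6:.1f} um")   # ~32.5 um
print(f"fiber spot: {fiber * 1e6:.1f} um") # ~3.2 um

A tenfold shorter wavelength buys a tenfold smaller spot with the same optics.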

You can’t adjust the 10640 nm wavelength, which is a consequence of the molecular energy transitions in the CO2 gas mixture. This specific wavelength emerges from the vibrational transitions within the CO2 molecules when they are energized. The nature of these molecular transitions dictates the emission wavelength, making 10640 nm optimal for the efficiency and output of CO2 lasers.

The Omtech 80W CO2 Laser Engraver is fairly precise, with an engraving precision of 0.01 mm and a laser precision of up to 1000 dpi. This is made possible by its robust (i.e., big, heavy machine) construction and the integration of industrial-grade processors that handle complex details and large files efficiently. The machine operates with a stepper motor system for the X and Y axes, facilitating efficient power transmission and precise movement along the guide rails, ensuring a long service life and high repeatability of the engraving or cutting patterns. This level of motor precision enables the machine to handle intricate designs and detailed work, crucial for professional-grade applications.

But! Only if the laser beam is well calibrated. Let’s look at the math of that.

First, intensity follows the inverse square law: with light, the intensity of a laser beam is inversely proportional to the square of the distance from the source. Mathematically, this relationship is depicted as $$I = \frac{P}{4\pi r^2}$$, where \( I \) represents the intensity, \( P \) the power of the laser, and \( r \) the distance from the laser source. In practical terms, this means that a small deviation in alignment can lead to a significant decrease in the laser’s intensity at the target point.

Visually, laser intensity looks like this; a small increase in \(r\) leads to a big drop in \(I\).

Laser cutters can’t put the whole tube in the cutting head, so they need three mirrors to get to a cutting head that can move in a 2D space.

With this geometry, the effective power at the surface of the material is:

\[ P_{eff} = P_{0} \times T_{m}^{3} \times T_{l} \times T_{f} \]

Where:

  • \( P_{0} \) is the initial power of the laser tube.
  • \( T_{m} \) is the transmission coefficient of each mirror (a value between 0 and 1).
  • \( T_{l} \) is the transmission coefficient of the focusing lens.
  • \( T_{f} \) accounts for any additional factors like the focus quality or material properties.

In my case \( P_{0} \) would be 80 watts. I don’t have values for \(T_l\) and \(T_f\). \(T_l\) typically ranges from 0.9 to 0.99, indicating that 90% to 99% of the laser light is transmitted through the lens. I would love it if anyone has these measured parameters for Omtech.
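
Here is a back-of-the-envelope Python sketch of the effective-power formula. All four coefficients are assumed values for illustration, not measured Omtech parameters.

P0 = 80.0   # watts, nominal tube power
Tm = 0.99   # assumed transmission per mirror (three mirrors)
Tl = 0.95   # assumed lens transmission
Tf = 0.90   # assumed catch-all focus/material factor

P_eff = P0 * Tm**3 * Tl * Tf
print(f"P_eff = {P_eff:.1f} W")  # about 66 W, even with optimistic numbers

Small per-element losses compound: three mirrors at 99% each already cost about 3% of the beam before it reaches the lens.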

In reality there is alignment error. Precise calibration matters a lot with lasers, where a millimeter of misalignment can dramatically diminish the laser’s intensity and focus, impacting its effectiveness. Practically, my Omtech AF2435-80 can’t cut basic sheet goods without lots of tweaking. An alignment error \( e \) at each mirror changes the effective path of the laser beam and decreases the energy density at the point of contact with the material. This error reduces the power \( P \) actually hitting the target area \( A \), therefore altering the energy density \( E \) and potentially the depth \( d \) of the cut.

$$ P_{eff} = P_0 \times (T_m - e)^3 \times T_l \times T_f $$

To actually cut something you need to remove the material, which takes power and time. A laser doesn’t burn material away like a hot light saber. Laser ablation is a process where the intense energy of a focused laser beam is absorbed by a material, causing its rapid heating and subsequent vaporization. This localized heating occurs at the laser’s focal point, where the energy density is highest. It can be so intense that it instantly removes (ablates) the material in the form of gas or plasma. The efficiency and nature of the ablation depend on the laser’s wavelength and the material’s properties. Essentially, the laser beam’s energy disrupts the material’s molecular bonds, leading to vaporization without significant heat transfer to surrounding areas, enabling precise cutting or engraving.

I like cutting wood. Here, the laser’s focused energy causes the wood to rapidly heat up at the point of contact. This intense heat can char or burn the wood, leading to a change in color and texture. In essence, the laser beam causes pyrolysis, where the wood decomposes under high heat in the absence of oxygen. This process can create smoke and a burnt appearance, but it’s controlled and doesn’t ignite a fire like an open flame would.

To cause ablation, the energy applied to a material is a function of power, spot size, and interaction time, all affected by the alignment error \( e \). The energy density \( E \) is defined by the laser power \( P \) divided by the spot area \( A \), and is given in watts per square meter. The interaction time \( t \), which is the time the laser is in contact with a point on the material, is crucial for determining the amount of energy absorbed. This matters because it affects the cutting depth \( d \), and it is defined by the inverse of the feed rate \( v \). The burning power, or the energy delivered to the material, can be calculated by:

$$ E_{burn} = \frac{P_{eff} \cdot t}{A} $$

Substituting the effective power above gives us the energy at the surface.

$$ E_{burn} = \frac{P_{eff} \cdot t}{A} = \frac{P_{eff}}{v \cdot A} $$

Since \( t \) is inversely proportional to \( v \) (the feed rate), and the depth of the cut \( d \) is proportional to the energy density over time, the equation can be further refined to calculate \( d \):

$$ d \propto \frac{P_{eff}}{v \cdot A} $$

This equation shows that the cutting depth \( d \) is directly proportional to the effective power \( P_{eff} \) and inversely proportional to the product of the feed rate \( v \) and the area of the spot size \( A \).
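
As a Python sketch of how these quantities trade off (the effective power and spot size are the assumed values from above, not measurements):

import math

def energy_density(p_eff_w, feed_rate_m_s, spot_diameter_m):
    # E_burn = P_eff / (v * A), where A is the area of the focused spot
    area = math.pi * (spot_diameter_m / 2) ** 2
    return p_eff_w / (feed_rate_m_s * area)

slow = energy_density(66.0, 0.010, 32.5e-6)  # 10 mm/s feed rate
fast = energy_density(66.0, 0.040, 32.5e-6)  # 40 mm/s feed rate
print(f"slow/fast energy ratio: {slow / fast:.1f}x")  # 4.0x

Quadruple the feed rate and you deliver a quarter of the energy per unit area, which is why cut depth falls off so quickly with speed.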

So, if you want to cut effectively, you maximize your power, cut at the right speed, and get your beam as focused (small) as possible. To do this practically, you want to make sure you are cutting at the right focal distance and with the right alignment. You also want clean mirrors. The focal distance is determined by a ramp test. I’ll cover alignment below. Cleaning the mirrors increases \(T_m\).

Alignment

To align my laser, I couldn’t just use tape. First, you have to align for precision and then get accuracy by moving the tube. To get precision from mirror 1, you have to strike a target close to mirror 1 and then one farther away. There are many, many videos that walk you through the sequence (mirror 1-2 close, mirror 1-2 far, etc.). I want to focus on the math of precision.

The pulses will look like this:

Now, we can look at the x-dimension to see what point a straight line would intersect; call this \(x\).

Writing out the similar triangles gives:

For the x-coordinate:

$$ \frac{x_1 - x}{d} = \frac{x_2 - x_1}{\Delta d} $$

For the y-coordinate:

$$ \frac{y_1 - y}{d} = \frac{y_2 - y_1}{\Delta d} $$

So we can solve for the point a straight line would make when the two targets are spaced \( \Delta d \) apart, where \( \Delta d = d_{far} - d_{near} \) and \( d = d_{near} \) (a short code sketch after the variable list below implements this):

  1. For the x-coordinate:

$$ x = x_1 - (x_2 - x_1) \cdot \frac{d_{near}}{d_{far} - d_{near}} $$

  2. For the y-coordinate:

$$ y = y_1 - (y_2 - y_1) \cdot \frac{d_{near}}{d_{far} - d_{near}} $$

Where:

  • \( x \) and \( y \) are the coordinates of the true point where the laser needs to be positioned.
  • \(x_1, y_1\) are the coordinates where the laser hits when the target is near.
  • \(x_2, y_2\) are the coordinates where the laser hits when the target is far.
  • \( d_{near} \) is the distance from the laser to the target when it is near.
  • \( d_{far} \) is the distance from the laser to the target when it is far.
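
Here is that calculation as a small Python function (a sketch of the formulas above; the example numbers are made up):

def true_point(x1, y1, x2, y2, d_near, d_far):
    # Extrapolate the beam line back from the two hits toward the source:
    # the true point sits beyond the near hit, away from the far hit.
    k = d_near / (d_far - d_near)
    return x1 - (x2 - x1) * k, y1 - (y2 - y1) * k

# Near hit at (2, 1) mm, far hit at (5, 4) mm, targets 50 mm and 350 mm out.
print(true_point(2, 1, 5, 4, d_near=50, d_far=350))  # (1.5, 0.5)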

Plotting what this looks like shows the relationship between the true dot and the near and far dots; the true dot is on the other side. Here are the dots, where near is blue, far is black, and the red dot is the true dot that represents the laser moving in a straight line. These are extreme cases that would represent a pretty misaligned tool.
If I run a simulation with d = 5 cm and \(\Delta d\) = 30 cm with a max distance from the center of 10 mm, I get:

So this is odd: the red dot isn’t in between the black and blue dots. The line connecting those two dots is the path of the laser at its current orientation. I really want to think that the ideal spot would be in between these two dots. However, the intuition that the point (x, y) might average out and fall between (x1, y1) and (x2, y2) is based on a misunderstanding of how the alignment of a laser system works. In a laser alignment system, when you’re adjusting the laser to hit a specific point on a target at different distances, you’re effectively changing the angle of the beam’s trajectory.

The laser source does not move directly between (x1, y1) and (x2, y2), but rather pivots around a point that aligns the beam with the target points. Since the laser’s position is being adjusted to correct for the angle, not for the position between two points, the corrected point will lie on a line that is extrapolated backwards from (x1, y1) and (x2, y2) towards the laser source.

The resulting position (x, y) where the laser needs to be for the beam to be straight and hit the same point on the target at any distance will be on the extension of this line and not necessarily between the two points (x1, y1) and (x2, y2). This is due to the nature of angular adjustment rather than linear movement. The position (x, y) is essentially where the angles of incidence and reflection converge to keep the beam’s path consistent over varying distances. It’s pretty cool that two points give you this angle. In fact, the ideal point is located further back on the line of the laser beam’s path extended backward from the near and far positions, which geometrically cannot lie between the near and far positions on the target unless all three are equal. Fortunately, as the points get very close, you can just fudge around with the dials to get these on top of each other which is probably what most people do.

If the plots are closer to the center, it’s much easier to just not worry about the math. If I constrain the points to be 2 mm from the center:

Too complicated? Here are some basic rules from the math:
Just adjust so the shot at the farthest distance lands where the near shot hit. (The near point has less error.)

  1. Linear Alignment: The points are always in a straight line. This is because the red point is calculated based on the positions of the blue and black points. It represents the position where the laser should be to maintain its alignment with the target at both distances. The calculation creates a direct linear relationship between these three points.
  2. Relative Distances: The farther apart the blue and black points are, the farther the red point will be from both of them. This is because a greater distance between the near and far points means a larger angular adjustment is required for the laser, which results in a more significant shift in the red point’s position to maintain alignment.
  3. Ordering of Points: If the blue and black points are flipped (i.e., if the far point becomes nearer to the center than the near point), the red point will also shift its position accordingly. The ordering of the blue and black points relative to the circle’s center will determine which side of these points the red point will fall on.
  4. Proximity to the Center: When both blue and black points are close to the center, the red point will also be relatively closer to the center. This is because minor adjustments are needed when the target moves a shorter distance.
  5. Symmetry in Movement: If the blue and black points are symmetrically positioned around the center (but at different distances), the red point will also tend to be symmetrically positioned with respect to these points along the line they form.

What I did

Armed with the right theory, I had to move past shooting at tape, so I used this SVG from Ed Nisley to create these targets, and I 3D printed a holder for them. This is the jig that fits over the nozzle:

And the jig that fits in the 18mm holes.

I also made a holder for my reverse laser so I could use this in the forward direction:

Armed with all this, I built a calculator and you can poke at my R code.

Also, let me know if you want the STL or svg files for the alignment jigs.


“Room Temperature” Superconductor?

Note: these results are still new and will need to be replicated and further studied by the scientific community. This whole thing may be nonsense, but I did some thinking on the impact if it’s true.

Superconductor in action

Researchers from Korea University have synthesized a superconductor that works at ambient pressure and mild oven-like temperatures (like 260 deg F, not a room I want to be in) with what they call a modified lead-apatite (LK-99) structure. This is a significant breakthrough, as previous near-room-temperature superconductors required high pressures to function (the highest-temperature results before were at >1 million atmospheres), making them impractical for many applications. They are posturing for this to be a big deal: the authors published two papers about it concurrently, one with six authors and one with only three (a Nobel prize can be shared by at most three people).

You can read the paper here.

The modified lead-apatite (LK-99) structure is a type of crystal structure that achieves superconductivity from a minute structural distortion caused by a slight volume shrinkage (0.48%), not from external factors such as temperature and pressure. Their key innovation for letting electrons glide doesn’t come from low temperature or from squeezing; it comes from an internal tension that forms as the material forms, just like the tempered glass of a car windshield.

This structure in particular is a specific arrangement of lead, oxygen, and other atoms in a pattern similar to the structure of apatite, a group of phosphate minerals that you might find in your teeth or bones.

The “modified” part comes in when researchers introduce another element, in this case, copper (Cu2+ ions), into the structure. This slight change, or “modification,” causes a small shrinkage in the overall structure and creates a stress that leads to the formation of superconducting quantum wells (SQWs) in the interface of the structure. These SQWs are what allow the material to become a superconductor at room temperature and ambient pressure.

Superconducting Quantum Wells (SQWs) can be thought of as very thin layers within a material where electrons can move freely. These layers are so thin that they confine the movement of electrons to two dimensions. This confinement changes the behavior of the electrons, allowing them to form pairs and move without resistance, which is the key characteristic of superconductivity. In essence, SQWs are the “magic zones” in a material where the amazing phenomenon of superconductivity happens.

The development of a room-temperature superconductor that operates at ambient pressure would be a significant advancement in the field of physics. Superconductors have zero electrical resistance, meaning that they can conduct electricity indefinitely without losing any energy. This could revolutionize many applications, including power transmission, transportation, and computing. For example, it could lead to more efficient power grids, faster and more efficient computers, and high-speed magnetic levitation trains.

The cool thing is that the LK-99 material can be prepared in about 34 hours using basic lab equipment, making it a practical option for various applications such as magnets, motors, cables, levitation trains, power cables, qubits for quantum computers, THz antennas, and more.

Transportation

So, if the paper is true, how could this change transportation?

Electric Vehicles (EVs): Superconductors can carry electric current without any resistance, which means that electric vehicles could become much more efficient. The batteries in EVs could last longer, and the vehicles could be lighter because the heavy copper wiring currently used could be replaced with superconducting materials. This could lead to significant improvements in range and performance for electric cars, buses, and trucks.

Maglev Trains: Superconductors are already used in some magnetic levitation (maglev) trains to create the magnetic fields that lift and propel the train. Room-temperature superconductors could make these systems much more efficient and easier to maintain, as they wouldn’t require the cooling systems that current superconductors need. This could make maglev trains more common and could lead to faster, more efficient public transportation.

Aircraft: In the aviation industry, superconductors could be used to create more efficient electric motors for aircraft. This could lead to electric airplanes that are more feasible and efficient, reducing the carbon footprint of air travel.

Shipping: Electric propulsion for ships could also become more efficient with the use of superconductors, leading to less reliance on fossil fuels in the shipping industry.

Infrastructure: Superconducting materials could be used in the power grid to reduce energy loss during transmission. This could make electric-powered transportation more efficient and sustainable on a large scale.

Space Travel: In space travel, superconductors could be used in the creation of powerful magnetic fields for ion drives or other advanced propulsion technologies, potentially opening up the solar system to more extensive exploration.

The other field I want to explore is computing. Here are some quick thoughts on what I would want to explore at DARPA in an area like this:

Energy Efficiency: Superconductors carry electrical current without resistance, which means that they don’t produce heat. This could drastically reduce the energy consumption of computers and data centers, which currently use a significant amount of energy for cooling.

Processing Speed: Superconductors can switch between states almost instantly, which could lead to much faster processing speeds. This could enable more powerful computers and could advance fields that require heavy computation, like artificial intelligence and data analysis.

Quantum Computing: Superconductors are already used in some types of quantum computers, which can solve certain types of problems much more efficiently than classical computers. Room-temperature superconductors could make quantum computers more practical and accessible, potentially leading to a revolution in computing power.

Data Storage: Superconductors could also be used to create more efficient and compact data storage devices. This could increase the amount of data that can be stored in a given space and could improve the speed at which data can be accessed.

Internet Infrastructure: The internet relies on a vast network of cables to transmit data. Superconducting cables could transmit data with virtually no loss, improving the speed and reliability of internet connections.

Nanotechnology: The properties of superconductors could enable the development of new nanotechnologies in computing. For example, they could be used to create nanoscale circuits or other components, leveraging some cool physics. Superconductors expel magnetic fields, a phenomenon known as the Meissner effect. This property could be used in nanotechnology for creating magnetic field shields or for the precise control of magnetic fields at the nanoscale. Another application could leverage Josephson junctions: superconductors can form these structures, which act as a type of quantum mechanical switch. These junctions can switch states incredibly quickly, potentially allowing for faster processing speeds in nanoscale electronic devices. Finally, you could build extremely sensitive magnetic sensors with superconductors (SQUIDs, or Superconducting Quantum Interference Devices). At the nanoscale, these could be used for a variety of applications, including the detection of small magnetic fields, such as those used in hard drives.

So, we will see how the debate on this pans out. In the meantime, speculate on some lead mines and enjoy a dose of science-fueled optimism about the type of breakthroughs we may see in the near future.


All those wireless Mice! (from Taiwan)

If you are like us, you have too many mice and keyboards lying around. With five or six devices, and an equal number of receivers, you can just plug stuff in to find what matches. That assumes the peripherals are working and haven’t been lost. Can we interrogate the USB receivers and discover their associated hardware?

First, the findings: I tried four random USB receivers and found out their purpose and matching peripherals (Amazon Basics and Logitech) via Windows PowerShell commands I share below. The interesting thing is that each one was made by a different company from the northern part of Taiwan. These are huge companies that I’ve never heard of, each taking parts from the TSMC ecosystem and putting them into devices. The Amazon Basics companies were Pixart Imaging, Inc., MOSART Semiconductor Corporation, and Chicony Electronics.

Pixart Imaging, Inc

If you want to follow the tech details below, I’m eager to show how I figured out what these black boxes are for the three different receivers.

All three of these companies were close to each other on the north side of Taiwan.

What are the numbers that identify these receivers?

First, some background. Each device has a ClassGuid, a unique identifier assigned to a device class in the Windows operating system. A device class is a group of devices that have similar characteristics and perform similar functions. The ClassGuid provides a way for Windows to recognize and manage different types of devices connected to the system.

When you look into something small like this, the well goes deep. The Logitech part had external serial numbers and government spectrum identifiers; the Amazon Basics parts required me to use Windows PowerShell to learn about them.

The GUID format behind ClassGuid was originally defined by the Open Software Foundation (OSF) as part of the Distributed Computing Environment (DCE).

Today, the UUID specification is maintained by the Internet Engineering Task Force (IETF) as part of the Request for Comments (RFC) series. The current UUID specification can be found in RFC 4122, titled “A Universally Unique IDentifier (UUID) URN Namespace.”

RFC 4122 defines the structure and generation algorithms for UUIDs, ensuring that the generated identifiers are unique across different systems and over time. It covers various UUID versions that use different methods for generating unique values, such as time-based, random-based, or name-based algorithms.

Each device class in Windows has its own ClassGuid, which is typically a hexadecimal string enclosed in braces (e.g. {4d36e967-e325-11ce-bfc1-08002be10318}). The ClassGuid is used to associate a device driver with the corresponding device class, allowing the operating system to communicate with the device and perform necessary functions.

For example, when you connect a USB device to your computer, Windows uses the ClassGuid to determine which driver to use to communicate with the device. If the device is a Human Interface Device (HID), such as a mouse or keyboard, Windows will use the HID driver associated with the {745a17a0-74d3-11d0-b6fe-00a0c90f57da} ClassGuid. If the device is a USB mass storage device, Windows will use the storage driver associated with the {4d36e967-e325-11ce-bfc1-08002be10318} ClassGuid.

The ClassGuid for USB receivers, or USB dongles, will depend on the type of device.

For example, if you have a USB Wi-Fi adapter or a Bluetooth dongle, their ClassGuids will be either:

  • USB Wi-Fi adapter: {4d36e972-e325-11ce-bfc1-08002be10318} (Network adapters)
  • Bluetooth dongle: {e0cbf06c-cd8b-4647-bb8a-263b43f0f974} (Bluetooth radios)

However, if you have a USB receiver for a wireless mouse or keyboard, its ClassGuid will be that of a Human Interface Device (HID), which is {745a17a0-74d3-11d0-b6fe-00a0c90f57da}.

First Dongle, by Logitech

First, you can look at the device and read a couple of numbers (only by taking pictures with my iPhone and zooming in, then using its character recognition).

Looking at the outside, I see FCC ID JNZCU0010, which refers to a specific device authorized by the Federal Communications Commission (FCC) in the United States. The FCC ID is an alphanumeric code assigned to devices that have been tested and comply with the FCC’s requirements for radio frequency interference, safety, and other regulatory standards.

The FCC ID JNZCU0010 belongs to the Logitech Unifying Receiver, which is a wireless USB receiver that connects multiple compatible Logitech devices, such as keyboards and mice, to a single computer using a 2.4 GHz wireless connection. This receiver is particularly useful for reducing the number of USB ports occupied by input devices and for providing a seamless connection experience between compatible Logitech peripherals.

You can find more information about the device by searching the FCC’s equipment authorization database with the FCC ID. The database typically contains documents related to the device’s testing, internal and external photos, and user manuals. You can access the database here: https://www.fcc.gov/oet/ea/fccid

The “IC” code, in this case, “IC: 4418A-CU0010”, refers to an Industry Canada (IC) certification number for the device. Industry Canada is the Canadian government agency responsible for regulating and certifying radio frequency devices, ensuring that they meet the necessary requirements and do not cause harmful interference.

Similar to the FCC ID in the United States, the IC certification number identifies a specific device tested and approved for use in Canada. The IC number “4418A-CU0010” is associated with the same Logitech Unifying Receiver that the FCC ID JNZCU0010 refers to.

So, both the FCC ID JNZCU0010 and the IC number 4418A-CU0010 identify the Logitech Unifying Receiver, confirming that it has been tested and certified for use in both the United States and Canada. This is enough to learn all I need about this receiver. A quick Bing/Google tells you it is compatible with all Logitech Unifying products, which include mice and keyboards that have the Unifying logo displayed on them. You can connect up to six compatible keyboards and mice to one computer with a single Unifying receiver.

Active Interrogation for the unmarked receivers

While I’m a Linux user, I’ve learned a bit of Windows PowerShell since that is where the hardware is directly connected. Let’s use Windows to find out about the unmarked black boxes that Amazon uses. Hint: it’s going to take us into the crazy Taiwanese CMOS ecosystem.

The first command below uses the "Get-PnpDevice" cmdlet to get a list of all the Plug and Play (PnP) devices currently connected to your computer. The "-PresentOnly" parameter limits the results to only devices that are currently connected. The "Where-Object" cmdlet is then used to filter the results to only devices with a ClassGuid that begins with '{', which is the format of all ClassGuids.

The “Select-Object” cmdlet is used to select the “ClassGuid”, “FriendlyName”, “Manufacturer”, “Description”, and “DeviceID” properties of each device. The “-Unique” parameter ensures that each device is only listed once.

When you run this command, you will see a list of all the ClassGuids associated with their respective devices. The output will include the device’s FriendlyName, which is a human-readable name for the device; the Manufacturer, which is the company that produced the device; the Description, which is a brief description of the device’s function; and the DeviceID, which is a unique identifier for the device. This additional information should give you a better understanding of what each connected device is and what its purpose is.

You can paste these commands into powershell and learn a lot:

Get-PnpDevice -PresentOnly | Where-Object {$_.ClassGuid -like '{*}'} | Select-Object ClassGuid, FriendlyName, Manufacturer, Description, DeviceID -Unique

or to find what we are looking for:

Get-PnpDevice -PresentOnly | Where-Object {($_.ClassGuid -eq '{745a17a0-74d3-11d0-b6fe-00a0c90f57da}') -or ($_.ClassGuid -eq '{4d36e96f-e325-11ce-bfc1-08002be10318}' -and $_.Description -like '*USB Receiver*')} | Select-Object ClassGuid, FriendlyName, Manufacturer, Description, DeviceID -Unique

The last command below queries Win32_USBControllerDevice for USB devices and then retrieves detailed information on each device. The output will display the PNPDeviceID and the Description for each USB device.

You can then check the output for the relevant device and see if the model number (CU0010) is mentioned in the device description or the PNPDeviceID. If the model number is not explicitly mentioned, you might be able to find a unique identifier for the device that can help you confirm the model number through an online search or by checking the device documentation.

So if we run:

Get-WmiObject Win32_USBControllerDevice -Impersonation Impersonate | Foreach-Object { [Wmi]$_.Dependent } | Select-Object PNPDeviceID, Description

So I plug in a mystery receiver (bad idea) and get this output on my machine. Note the VID (vendor ID) and PID (product ID); you can look up parts based on those.

USB\VID_3938&PID_1059\6&208E681&0&3                USB Composite Device
USB\VID_3938&PID_1059&MI_00\7&E30261B&0&0000       USB Input Device
HID\VID_3938&PID_1059&MI_00\8&3483BAC7&0&0000      HID Keyboard Device
USB\VID_3938&PID_1059&MI_01\7&E30261B&0&0001       USB Input Device
HID\VID_3938&PID_1059&MI_01&COL01\8&C190CFE&0&0000 HID-compliant mouse
HID\VID_3938&PID_1059&MI_01&COL02\8&C190CFE&0&0001 HID-compliant consumer control device
HID\VID_3938&PID_1059&MI_01&COL03\8&C190CFE&0&0002 HID-compliant system controller
HID\VID_3938&PID_1059&MI_01&COL04\8&C190CFE&0&0003 HID-compliant vendor-defined device
HID\VID_3938&PID_1059&MI_01&COL05\8&C190CFE&0&0004 HID-compliant device

The USB Implementers Forum (USB-IF) assigns vendor ID 3938 to MOSART Semiconductor Corporation, a Taiwan-based company that designs and develops integrated circuits (ICs) such as consumer ICs, PC peripheral ICs, and wireless consumer ICs.
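
If you'd rather script this lookup than search by hand, the public usb.ids database (http://www.linux-usb.org/usb.ids) maps VID/PID pairs to vendor and product names. A minimal Python sketch; the parsing details are my own illustration, not part of the original workflow:

# Look up a VID/PID pair in the public usb.ids list.
# The file format: vendor rows look like "3938  <vendor name>",
# and device rows are tab-indented beneath their vendor.
import urllib.request

def lookup(vid, pid):
    with urllib.request.urlopen("http://www.linux-usb.org/usb.ids") as resp:
        lines = resp.read().decode("utf-8", errors="replace").splitlines()
    vendor = product = None
    in_vendor = False
    for line in lines:
        if not line.strip() or line.startswith("#"):
            continue
        if not line.startswith("\t"):                    # vendor row
            in_vendor = line[:4].lower() == vid.lower()
            if in_vendor:
                vendor = line[4:].strip()
        elif in_vendor and not line.startswith("\t\t"):  # device row
            entry = line.strip()
            if entry[:4].lower() == pid.lower():
                product = entry[4:].strip()
    return vendor, product

print(lookup("3938", "1059"))  # the mystery receiver above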

Through some googling, I can find out that this receiver belongs to my Amazon Basics keyboard:

Key                      Value
Brand                    Amazon Basics
Item model number        KS1-US
Operating System         Windows 7
Item Weight              1.05 pounds
Product Dimensions       5.61 x 1.13 x 17.83 inches
Item Dimensions LxWxH    5.61 x 1.13 x 17.83 inches
Color                    Black
Batteries                2 AAA batteries required (included)
Manufacturer             Amazon Basics
ASIN                     B07WV5WN7B
Country of Origin        China
Date First Available     November 11, 2019
Product Details

It's strange that the key silicon comes from Taiwan while the Country of Origin is listed as China; presumably that reflects where final assembly happens.

A random dongle

Let's try the next one. I connect a USB receiver with absolutely zero markings on it and find:

USB\VID_04F2&PID_1825\6&208E681&0&4           USB Input Device
HID\VID_04F2&PID_1825&COL01\7&295CC939&0&0000 HID-compliant mouse
HID\VID_04F2&PID_1825&COL02\7&295CC939&0&0001 HID-compliant vendor-defined device

The new device appears to be a USB Input Device with VID (Vendor ID) "04F2" and PID (Product ID) "1825". VID 04F2 belongs to Chicony Electronics (群光電子股份有限公司), a company that manufactures computer peripherals such as keyboards and mice. So where is Chicony located?

Just another 50-story building in a faraway place.

In this case, the USB Input Device seems to be a multi-function device, since it includes both an HID-compliant mouse and an HID-compliant vendor-defined device. It might be a mouse with additional features or a keyboard with an integrated touchpad. I had to set it aside and marvel at the world's CMOS supply chain.

Trying my last device:

USB\VID_093A&PID_2510\5&7993A9C&0&2 - USB Input Device
HID\VID_093A&PID_2510&COL01\6&5B1E42D&1&0000 - HID-compliant mouse
HID\VID_093A&PID_2510&COL02\6&5B1E42D&1&0001 - HID-compliant vendor-defined device
HID\VID_093A&PID_2510&COL03\6&5B1E42D&1&0002 - HID-compliant consumer control device
HID\VID_093A&PID_2510&COL04\6&5B1E42D&1&0003 - HID-compliant system controller

So the vendor ID 093A belongs to Pixart Imaging, Inc., and the product ID corresponds to an optical mouse.

Pixart Imaging is a Taiwan-based company, founded in 1998, that specializes in designing and manufacturing CMOS (Complementary Metal-Oxide-Semiconductor) image sensors and related imaging products. These components are commonly used in devices such as optical mice, digital cameras, webcams, and other consumer electronics.

One of the key products that Pixart Imaging is known for is its optical navigation sensors used in computer mice. These sensors replaced traditional mechanical ball-tracking mechanisms, allowing for more accurate and responsive cursor movement. The company’s optical sensors are widely used by different mouse manufacturers due to their high performance, low power consumption, and cost-effectiveness.

In addition to optical sensors, Pixart Imaging also offers a range of other products, such as capacitive touch controllers, fingerprint identification modules, and gesture recognition solutions. These components are utilized in a variety of applications, including smartphones, tablets, wearables, and IoT devices.

What is each of these entries?

  1. USB\VID_093A&PID_2510\5&7993A9C&0&2 – USB Input Device: This is a generic USB input device that can be used to transmit data between the connected device and the computer. It could represent a variety of peripherals, such as keyboards or game controllers.
  2. HID\VID_093A&PID_2510&COL01\6&5B1E42D&1&0000 – HID-compliant mouse: This entry represents a Human Interface Device (HID) compliant mouse. It follows the HID protocol to communicate with the computer, allowing for plug-and-play functionality and seamless interaction between the user and the computer.
  3. HID\VID_093A&PID_2510&COL02\6&5B1E42D&1&0001 – HID-compliant vendor-defined device: This entry represents a device that adheres to the HID protocol, but its specific function is defined by the vendor. It could be a custom input device or a specialized peripheral designed for a particular purpose.
  4. HID\VID_093A&PID_2510&COL03\6&5B1E42D&1&0002 – HID-compliant consumer control device: This is a Human Interface Device that is specifically designed for consumer electronics control, such as multimedia keyboards, remote controls, or other devices used to control media playback or volume.
  5. HID\VID_093A&PID_2510&COL04\6&5B1E42D&1&0003 – HID-compliant system controller: This entry represents a device that follows the HID protocol and serves as a system controller. It could be a device like a power management controller, system monitoring device, or another type of controller that helps manage various aspects of a computer system.

So, I was able to quickly find out this matched my Amazon Basics Mouse.

Time to catch my flight.


Quick(ish) Price Check on a Car

So, is it a good price?

With my oldest daughter heading off to college soon, we’ve realized that our family car doesn’t need to be as large as it used to be. We’ve had a great relationship with our local CarMax over the years, and we appreciate their no-haggle pricing model. My wife had her eyes set on a particular model: a 2019 Volvo XC90 T6 Momentum. The specific car she found was listed at $35,998, with 47,000 miles on the odometer.

But is the price good or bad? As a hacker/data scientist, I knew I could get the data to make an informed decision, and doing analysis at home is a great way to learn and use new technologies. The bottom line: my model predicts a price of $40,636, which puts the CarMax asking price about 11.4% below the prediction. Restricting the comparison to the specific trim, the predicted price drops to $38,666. So the price is probably fair. Now, how did I come up with those numbers?

Calculations

Armed with Python and an array of web scraping tools, I embarked on a mission to collect data that would help me determine a fair value for the car. I wrote a series of scripts to extract the relevant information (price, year, and mileage) from various websites. This required a significant amount of Python work to convert the HTML into a format that could be analyzed effectively.

Once I had amassed a decent dataset (close to 200 cars), I began comparing statistical techniques to find the most accurate pricing model. In this blog post, I'll detail my journey through plain linear regression and compare it to more modern data science methods, revealing which technique ultimately led us to the fairest car price.

First, I did some basic web searching. According to Edmunds, the average price for a 2019 Volvo XC90 T6 Momentum with similar mileage is between $33,995 and $43,998 and my $35,998 falls within this range.

As for how the Momentum compares to other Volvo options and similar cars, there are a few things to consider. The Momentum is one of four trim levels available for the 2019 XC90. It comes with a number of standard features, including leather upholstery, a panoramic sunroof, and a 9-inch touchscreen infotainment system. Other trim levels offer additional features and options.

The 2019 Volvo XC90 comes in four trim levels: Momentum, R-Design, Inscription, and Excellence. The R-Design offers a sportier look and feel, while the Inscription adds more luxury features. The Excellence is the most luxurious and expensive option, with seating for four instead of seven. The Momentum is the most basic.

In terms of similar cars, some options to consider might include the Audi Q7 or the BMW X5. Both of these SUVs are similarly sized and priced to the XC90.

To get there, I did some web scraping and data cleaning, and built a basic linear regression model alongside other modern data science methods. To begin collecting data, I decided (in 2 seconds) to focus on three primary sources: Google's search summary, Carvana, and Edmunds.

My first step was to search for Volvo XC90 on each of these websites. I then used the Google Chrome toolbar to inspect the webpage’s HTML structure and identify the <div> element containing the desired data. By clicking through the pages, I was able to copy the relevant HTML and put this in a text file, enclosed within <html> and <body> tags. This format made it easier for me to work with the BeautifulSoup Python library, which allowed me to extract the data I needed and convert it into CSV files.

Since the data from each source varied, I had to run several regular expressions on many fields to further refine the information I collected. This process ensured that the data was clean and consistent, making it suitable for my upcoming analysis.

Finally, I combined all the data from the three sources into a single CSV file. This master dataset provided a solid foundation for my pricing analysis and allowed me to compare various data science techniques in order to determine the most accurate and fair price for the 2019 Volvo XC90 T6 Momentum.

In the following sections, I’ll delve deeper into the data analysis process and discuss the different statistical methods I employed to make our car-buying decision.

First, data from Carvana looked like this:

<div class="tk-pane full-width">
    <div class="inventory-type carvana-certified" data-qa="inventory-type">Carvana Certified
    </div>
    <div class="make-model" data-qa="make-model">
        <div class="year-make">2020 Volvo XC90</div>
    </div>
    <div class="trim-mileage" data-qa="trim-mileage"><span>T6 Momentum</span> • <span>36,614
            miles</span></div>
</div>
<div class="tk-pane middle-frame-pane">
    <div class="flex flex-col h-full justify-end" data-qa="pricing">
        <div data-qa="price" class="flex items-end font-bold mb-4 text-2xl">$44,990</div>
    </div>
</div>

To pull the data out, I used the BeautifulSoup library to extract the relevant fields from the saved HTML file. The script (see "Carvana Scrape Code" at the end) searches for the specific <div> elements containing the year, make, trim, mileage, and price, cleans each value by stripping whitespace and commas, stores it in a dictionary, and finally compiles the dictionaries into a list and exports the data to a CSV file for further analysis.

I could then repeat this process with Google to get a variety of local sources.

One challenge with the Google results was that the saved HTML contained a lot of base64-encoded image data, so I wrote a bash script to strip those tags using sed (pro tip: learn awk and sed). See "Cleaner Code" at the end.

When working with the Google search results, I also had to take a slightly different parsing approach compared to Carvana and Edmunds. The Google results did not have a consistent HTML structure that could be easily parsed, so I focused on patterns in the text itself. Using regular expressions, I extracted the specific pieces of information (year, make, trim, mileage, and price) directly from the text. My scrape code is at the end.

Scraping Edmunds required both approaches: structural parsing plus text patterns.

All together, I got 174 records of used Volvo XC90s. I could easily get 10x that, since the scripts exist and I could mine Craigslist and other sources. With the data in hand, I can use R to explore it:

# Load the readxl package
library(readxl)
library(scales)
library(scatterplot3d)

# Read the data from data.xlsx into a data frame
df <- read_excel("data.xlsx")

df$Price<-as.numeric(df$Price)/1000

# Select the columns you want to use
df <- df[, c("Title", "Desc", "Mileage", "Price", "Year", "Source")]

# Plot Year vs. Price with labeled axes and formatted y-axis
plot(df$Year, df$Price, xlab = "Year", ylab = "Price ($ '000)",
     yaxt = "n")  # Don't plot y-axis yet

# Add horizontal grid lines
grid()

# Format y-axis as currency
axis(side = 2, at = pretty(df$Price), labels = dollar(pretty(df$Price)))

abline(lm(Price ~ Year, data = df), col = "red")

Armed with this data, we can fit a linear regression model.

This code snippet employs the scatterplot3d() function to show a 3D scatter plot that displays the relationship between three variables in the dataset. Additionally, the lm() function is utilized to fit a linear regression model, which helps to identify trends and patterns within the data. To enhance the plot and provide a clearer representation of the fitted model, the plane3d() function is used to add a plane that represents the linear regression model within the 3D scatter plot.

model <- lm(Price ~ Year + Mileage, data = df)

# Plot the data and model
s3d <- scatterplot3d(df$Year, df$Mileage, df$Price,
                     xlab = "Year", ylab = "Mileage", zlab = "Price",
                     color = "blue")
s3d$plane3d(model, draw_polygon = TRUE)

So, we can now predict the price of a 2019 Volvo XC90 T6 Momentum with roughly 47K miles. The model says $40,636, which makes the CarMax asking price of $35,998 about 11.4% below the prediction. (The script below uses 45,000 miles.)

# Create a new data frame with the values for the independent variables
new_data <- data.frame(Year = 2019, Mileage = 45000)

# Use the model to predict the price of a 2019 car with 45000 miles
predicted_price <- predict(model, new_data)

# Print the predicted price
print(predicted_price)

Other Methods

Ok, so now let's use "data science". Besides linear regression, there are several other techniques that can account for the multiple variables (year, mileage, price) in the dataset. Here are some popular ones:

Decision Trees: A decision tree is a tree-like model that uses a flowchart-like structure to make decisions based on the input features. It is a popular method for both classification and regression problems, and it can handle both categorical and numerical data.

Random Forest: Random forest is an ensemble learning technique that combines multiple decision trees to make predictions. It can handle both regression and classification problems and can handle missing data and noisy data.

Support Vector Machines (SVM): SVM is a powerful machine learning algorithm that can be used for both classification and regression problems. It works by finding the best hyperplane that separates the data into different classes or groups based on the input features.

Neural Networks: Neural networks are a class of machine learning algorithms that are inspired by the structure and function of the human brain. They are powerful models that can handle both numerical and categorical data and can be used for both regression and classification problems.

Gradient Boosting: Gradient boosting is a technique that combines multiple weak models to create a stronger one. It works by iteratively adding weak models to a strong model, with each model focusing on the errors made by the previous model.

All of these techniques can take multiple variables into account, and each has its strengths and weaknesses. The right choice depends on the specific nature of the problem and the data, so it's often a good idea to try several techniques and compare their performance.

I’m going to use random forest and a decision tree model.

Random Forest

# Load the randomForest package
library(randomForest)

# "Title", "Desc", "Mileage", "Price", "Year", "Source"

# Split the data into training and testing sets
set.seed(123)  # For reproducibility
train_index <- sample(1:nrow(df), size = 0.7 * nrow(df))
train_data <- df[train_index, ]
test_data <- df[-train_index, ]

# Fit a random forest model
model <- randomForest(Price ~ Year + Mileage, data = train_data, ntree = 500)

# Predict the prices for the test data
predictions <- predict(model, test_data)

# Calculate the mean squared error of the predictions
mse <- mean((test_data$Price - predictions)^2)

# Print the mean squared error
cat("Mean Squared Error:", mse)

The output from the random forest model indicates a mean squared error (MSE) of 17.14768 and a variance explained of 88.61%. A lower MSE indicates a better fit to the data, while a higher variance explained means the model accounts for a larger portion of the variation in the target variable.

Overall, an MSE of 17.14768 is reasonably low, suggesting a good fit to the training data, and explaining 88.61% of the variance is also a good sign.

However, the random forest method predicts a price of $37,276.54.

I also tried cross-validation to get a better understanding of the model's overall performance (MSE 33.890). Switching to a decision tree model pushed the MSE to 50.91. Plain linear regression works just fine.
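
For the curious, here is roughly what that model comparison looks like as a cross-validation harness. I did the analysis in R, but an equivalent sketch in Python/scikit-learn is below, assuming the combined dataset was exported to data.csv with Year, Mileage, and Price columns (Price in thousands):

# 5-fold cross-validation of the three model families discussed above.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("data.csv")                 # assumed export of the scraped data
X, y = df[["Year", "Mileage"]], df["Price"]

models = {
    "linear regression": LinearRegression(),
    "decision tree": DecisionTreeRegressor(random_state=123),
    "random forest": RandomForestRegressor(n_estimators=500, random_state=123),
}
for name, model in models.items():
    # scikit-learn reports error metrics negated, so flip the sign back
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: cross-validated MSE = {mse:.2f}")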

Adding the Trim

However, I was worried that I was comparing the Momentum to the higher trim options. So, to get the trim, I tried the following prompt in GPT-4 to map each listing's text to one of the four trims (or Unknown).

don't tell me the steps, just do it and show me the results.
given this list add, a column (via csv) that categorizes each one into only five categories Momentum, R-Design, Inscription, Excellence, or Unknown

That worked perfectly, and we can see that we have mostly Momentums.

Trim          Count   Percent
Excellence        0    0.00%
Inscription      68   39.53%
Momentum         87   50.58%
R-Design          8    4.65%
Unknown           9    5.23%
Frequency and Count of Cars

And this probably invalidates my analysis as Inscriptions (in blue) do have clearly higher prices:

Plot of Price By Year

We can see the average prices (in thousands). In 2019, Inscriptions cost less than Momentums? That is probably a small \(n\) problem, since we only have 7 Inscriptions and 16 Momentums in our data set for 2019.

Year    R-Design   Inscription   Momentum
2014    $19.99     NA            NA
2016    $30.59     $32.59        $28.60
2017    $32.79     $32.97        $31.22
2018    $37.99     $40.69        $33.23
2019    NA         $36.79        $39.09
2020    NA         $47.94        $43.16
Average Prices by Trim (in thousand dollars)

So, if we restrict the data set, what would the predicted price of the 2019 Momentum be? Just adding a filter and re-running the regression code above gives $38,666, which means we still have a good, reasonable price.

Quick Excursion

One last thing I'm interested in: does mileage or age matter more? Let's build a new model.

# Create Age variable
df$Age <- 2023 - df$Year

# Fit a linear regression model
model <- lm(Price ~ Mileage + Age, data = df)

# Print the coefficients
summary(model)$coef
             Estimate    Std. Error   t value     Pr(>|t|)
(Intercept)  61.34913    0.69084      88.80372    2.28E-144
Mileage      -0.00022    2.44E-05     -8.83869    1.18E-15
Age          -2.75459    0.27132      -10.1525    3.15E-19
Impact of Different Variables

Based on the regression results, we can see that both Age and Mileage have a significant effect on Price, as their p-values are very small (<0.05). However, we can also see that Age has a larger absolute t-score (-10.15) than Mileage (-8.84), indicating that Age may have a slightly greater effect on Price than Mileage. Additionally, the estimates show that for every one-year increase in Age, the Price decreases by approximately 2.75 thousand dollars, while for every one-mile increase in Mileage, the Price decreases by approximately 0.0002 thousand dollars (or 20 cents). That is actually pretty interesting.

This isn’t that far off. According to the US government, a car depreciates by an average of $0.17 per mile driven. This is based on a five-year ownership period, during which time a car is expected to be driven approximately 12,000 miles per year, for a total of 60,000 miles.

In terms of depreciation per year, it can vary depending on factors such as make and model of the car, age, and condition. However, a general rule of thumb is that a car can lose anywhere from 15% to 25% of its value in the first year, and then between 5% and 15% per year after that. So on average, a car might depreciate by about 10% per year.

Code

The code originally appeared inline in the blog post; I've moved it all to the end.

Carvana Scrape Code
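
A minimal sketch of the approach described earlier, assuming the saved-HTML structure shown in the Carvana snippet above (file names are illustrative, not from the original post):

# Parse the saved Carvana HTML and export year/make/trim/mileage/price to CSV.
import csv
import re
from bs4 import BeautifulSoup

with open("carvana.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

# The year/make, trim/mileage, and price nodes appear once per listing,
# so walking the three lists in parallel pairs them up.
year_makes = soup.select("div.year-make")
trim_miles = soup.select('div[data-qa="trim-mileage"]')
prices = soup.select('div[data-qa="price"]')

records = []
for ym, tm, pr in zip(year_makes, trim_miles, prices):
    year, make = ym.get_text(strip=True).split(" ", 1)
    spans = [s.get_text(" ", strip=True) for s in tm.find_all("span")]
    records.append({
        "Year": year,
        "Make": make,
        "Trim": spans[0],
        "Mileage": re.sub(r"\D", "", spans[1]),     # "36,614 miles" -> 36614
        "Price": re.sub(r"\D", "", pr.get_text()),  # "$44,990" -> 44990
    })

with open("carvana.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["Year", "Make", "Trim", "Mileage", "Price"])
    writer.writeheader()
    writer.writerows(records)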

Cleaner Code
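
The post describes this step as a bash one-liner built on sed to strip the base64-encoded images; here is an equivalent sketch in Python (the file names and the exact tag pattern are assumptions):

# Strip the huge base64 data-URI payloads out of saved search results
# so the files are small enough to inspect and parse.
import re

with open("google_raw.html", encoding="utf-8") as f:
    html = f.read()

# Roughly equivalent to: sed -E 's/src="data:image\/[^"]*"/src=""/g'
cleaned = re.sub(r'src="data:image/[^"]*"', 'src=""', html)

with open("google.html", "w", encoding="utf-8") as f:
    f.write(cleaned)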

Google Scrape Code
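
Since the Google results lacked consistent HTML structure, this pass is regex-only. A sketch under stated assumptions: the listing text takes the general form "2019 Volvo XC90 T6 Momentum ... 47,000 miles ... $35,998", and the file names are illustrative:

# Pull year/trim/mileage/price out of flat search-result text with regexes.
import csv
import re

with open("google.txt", encoding="utf-8") as f:
    text = f.read()

pattern = re.compile(
    r"(?P<year>20\d{2})\s+Volvo\s+XC90\s+(?P<trim>T\d\s+[A-Za-z-]+)"
    r".{0,120}?(?P<miles>[\d,]+)\s*mi(?:les)?"
    r".{0,120}?\$(?P<price>[\d,]+)",
    re.S)

rows = [{"Year": m["year"],
         "Trim": m["trim"],
         "Mileage": m["miles"].replace(",", ""),
         "Price": m["price"].replace(",", "")}
        for m in pattern.finditer(text)]

with open("google.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["Year", "Trim", "Mileage", "Price"])
    writer.writeheader()
    writer.writerows(rows)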

Edmunds Scrape Code
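
Edmunds needed both tricks: structural parsing to isolate each listing, then regexes to clean the fields. A sketch under the same caveats (the class name and file names are assumptions):

# Isolate listing blocks structurally, then regex the fields out of each.
import csv
import re
from bs4 import BeautifulSoup

with open("edmunds.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

listing_re = re.compile(
    r"(?P<year>20\d{2}) Volvo XC90 (?P<trim>[\w .-]+?)"
    r".*?(?P<miles>[\d,]+) miles.*?\$(?P<price>[\d,]+)", re.S)

records = []
for block in soup.find_all("div", class_="inventory-listing"):  # assumed class
    m = listing_re.search(block.get_text(" ", strip=True))
    if m:
        records.append({"Year": m["year"], "Trim": m["trim"].strip(),
                        "Mileage": m["miles"].replace(",", ""),
                        "Price": m["price"].replace(",", "")})

with open("edmunds.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["Year", "Trim", "Mileage", "Price"])
    writer.writeheader()
    writer.writerows(records)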


The (necessary) Luxury of Honesty and Vulnerability

Professionally, my goal was “work hard until something good happens” for many years. I had the luxury of the world’s best string of bosses. I had no idea how lucky I was to be learning from giants who poured wisdom into me, protected me and told me hard truths that shaped my character.

At DARPA I found a new luxury: intense, intentional and open honesty. I learned that as a PM, I could say “I don’t understand” 10x and not be seen as stupid. I could walk into any room as an open book and dig in to my questions. It was glorious and it actually made me smart on several topics. Honesty became my learning superpower.

This was such a contrast to my early years at MIT. I didn't feel like I belonged among the world's smartest people. I seemed to struggle in ways that others didn't. Classes were too hard. And in every conversation or class, once you stop understanding, it gets harder and harder to interject and reveal where your misunderstanding started.

I needed an offset strategy to succeed. I took notes in class without understanding them and found ways to learn outside of class: friends, online explanations, good books, and working extra problems. However, I had to act like a spy in 80s Berlin, developing sources to whom I could be truly honest and reveal the depth of my misunderstanding. I found those folks and they made all the difference. It worked, but it was really inefficient.

Real Conversation (AI generated art)

If only I had been able to say "I don't understand" in every context, to speak with one voice everywhere. I would have learned, and contributed, so much more.

All this made me think about the bigger picture: honesty and vulnerability are not just professional luxuries, they are moral and ethical values. They are essential traits that help us build meaningful relationships, grow as individuals, and ultimately live a good life.

Honesty is a powerful tool for building trust, in the workplace and in our personal lives. When we are honest with ourselves and others, we form stronger bonds and deeper connections, which helps us understand the perspectives and motivations of those around us and leads to more productive and meaningful interactions.

Vulnerability, on the other hand, allows us to be open and authentic; it's the only way to build trust. It makes us more approachable and genuine. When we are vulnerable, we can admit our weaknesses and share our struggles, which helps us connect with others, gain support, grow as individuals, and lead more fulfilling lives.

Honesty and vulnerability also build resilience. When we are honest with ourselves and others, we can confront difficult situations head-on and find solutions, leaving us better equipped to handle challenges and setbacks.

I'm writing this post because I had the chance to regress recently, when I left an aerospace engineering conference to join a group of cyber researchers. I felt a little out of my element as I switched contexts. One expert in particular wanted to make a strong impression as we started talking. The bar was loud. We had a mediocre conversation in which I didn't follow half of what they were saying, and they used the opportunity to throw lots of words at me. We both started looking for exit strategies.

We did talk about some really technical stuff (the limits of TEEs, quantum computing and cyber security, what an efficient market for exploits would look like, etc) but I really didn’t learn or contribute much because I didn’t stop the conversation when I didn’t understand. I wasn’t vulnerable and I didn’t push back in a way that would generate a conversation to remember. That would be a conversation that changed us, not just the opportunity to participate in a professional dance.

However, as I dove back into the AIAA conference, I used that disappointment to dive back into discussions. I was intentional about embracing the discomfort of "I don't get that, can you say that again?" and "I'm sorry, let's find somewhere quieter; what you are saying is important." The back side of these conversations led to incredible fun, relationship building, and some real learning.

If you find yourself confused, say it and say it early. If you forget someone's name, say that too. It's always better to choose honesty and vulnerability over putting on a front that both you and your counterparty will see through. It makes both of you smarter and opens the door to real relationships, ultimately paving the way to get big things done.


The Problem with Integrity

I think about Winston Churchill a lot. A successful writer, correspondent, painter, politician and businessman, he is known for his bold principled stand against Hitler. However, zoom in and a more complex narrative emerges.

After Neville Chamberlain negotiated the Munich Agreement in 1938, which sought to appease Nazi Germany by allowing them to take control of the Sudetenland region of Czechoslovakia, Winston Churchill famously said, “You were given the choice between war and dishonor. You chose dishonor, and you will have war.”

His words look wise now, but they were unwelcome at the time. Many people in Britain and other countries believed the agreement would prevent war and that Churchill's warnings were alarmist. He spent the late 1930s out of power in his "wilderness years," and was only brought back into government as First Lord of the Admiralty in September 1939, once the war had begun, before becoming Prime Minister in 1940.

He pushed through years of criticism and personally rallied a nation with his bold, counter-cultural stand, which led the Allied powers to victory. If we stopped there, we would have a tidy schoolbook story of a leader who did the right thing and was vindicated and honored.

Unfortunately for Sir Winston, shortly after the war, his government was defeated in the general election of July 1945. The British people were tired after six years of war and preferred the Labour Party’s program of social reforms. He found himself doubted, vindicated and then cast away.

Everyone and every organization wants integrity, but actually having it, keeping it and acting on it is a challenge. Nothing is simple when you find yourself at a different place than those around you. Holding a counter-cultural view means going against the dominant beliefs and values of the system you are working hard to support.

Yes, integrity can be dangerous. The word integrity doesn’t have meaning and power when everyone agrees. When integrity forces you to be different, it’s dangerous for you and others. You risk losing your friends, your job and your sleep.

Counter-cultural integrity threatens the status quo, and big organizations need broad buy-in to the status quo to get things done. The right thing may be good in the long run, but it can be disruptive now. Personal integrity requires standards that may not change with culture or a corporation's strategy for risk mitigation. Anyone who holds to an independent set of standards will eventually find themselves a problem in a rigid and ever-changing system.

Standing Alone

Worst of all, the road towards acting on integrity brings you into contact with the dark side. There is a temptation to be right and feel superior to the system you are in. You need the inspiration of the examples of Martin Luther King, Gandhi, Winston Churchill or Abraham Lincoln, but you can’t conflate your situation and your strength with theirs. You have to remain humble while not losing your convictions. You have to be continually open to being wrong. You have to push yourself to be flexible. You have to see and respect the other side.

The other temptation is to give in to self-pity and think that the world is against you. The professional world can be a dark, amoral, and heartless place. That is reality–true everywhere. No one is a unique and constant victim of unjustified persecution or mistreatment. There is no conspiracy, just people trying to make it all work and not lose their status, jobs or relationships.

The stress of finding yourself alone, against the crowd, is real. The only way to survive it is to have a trusted network of friends: not friends who just listen, affirm, and agree, but friends who hold you accountable and clarify your thoughts. A good friend turns a dark and lonely road into conviction that can confirm your individuality and authenticity. Most important, sharing your story can lead to positive change. There is no tighter community than like-minded individuals who support, refine, and validate each other's perspectives.

I’ve taken several counter-cultural stands in my life with a wide range of outcomes. All were painful. Every stand I’ve taken has resulted in some degree of lost friendships and increased pain. Some day I may be proud of these actions, but all of them resulted in a lost opportunity where I had to get off the boat and watch it sail on. If there is any pride in that, it’s drowned out by the sadness of it all.

I’ve learned that I’m not particularly brave or strong. However, I’m blessed with great community, a love of history and a deep care for others. Most of all, I feel convicted to protect others who trust and depend on me doing the right thing.

But this isn’t the movies. As I’ve post-processed tough stands, when I did the most good, I felt the worst inside. Taking a different road leaves you feeling alone and scared, self-conscious and unsure. Integrity put into practice in these scenarios makes part of you wish you didn’t have it. It’s not fun and it doesn’t feel courageous. When I’ve done the most right, the overwhelming emotion is sadness and insecurity.

Here I'm convicted and encouraged by two father-son chats. The first comes from Polonius in Hamlet, advising his son Laertes as he prepares to leave for France. Polonius urges his son to be honest with himself, to be true to his own values and beliefs, and to avoid the temptations and pitfalls of the world. He advises Laertes to "neither a borrower nor a lender be" and, "this above all: to thine own self be true, And it must follow, as the night the day, Thou canst not then be false to any man." If Laertes is true to himself, it follows that he cannot be false to others; he will avoid compromising himself for the sake of others or for the sake of fitting in.

The second is from Cicero. He wrote to his son, Marcus, that an unjust act will never profit him.

Stop. Just let the gravity of this settle. This may be the most counter-cultural message ever written. A thief can at least enjoy his spoils a little bit, right? A boss who complies with pressure to promote a lesser qualified person at the expense of her/his better judgment may get a promotion that helps their career, right?

No, Cicero says it won't help them. He believed in an absolute view of morality and integrity. He wrote that an unjust act is not only morally wrong, but will also ultimately harm the person who commits it and the society that allows it. He believed that living a virtuous life and making the right choices, even when difficult, is the only true path to happiness and fulfillment. In his letters Ad Familiares, Cicero wrote: "There is nothing more virtuous, nothing more in accord with duty, than to take one's stand for what is right." He also wrote in the same collection:

“What is morally right is not always politically expedient; and what is politically expedient is not always morally right.”

Cicero believed that true success and happiness come from living a virtuous life, and that this requires standing up for what is right, even when it may be difficult or unpopular. Even when it looks like an easy compromise, it will never profit you. Never. Even if you are lucky and some good results from your actions, you have harmed your soul.

The wisdom of history, a deep personal faith and a tight network of friends all give me confidence to do the right thing, no matter the cost. This is true even with full knowledge of just how dangerous, costly and lonely the road of counter-cultural integrity is.

I'm jealous of those who can compromise, make things work, and steer situations to a middle ground, but I have trouble here. This isn't about being brave or honorable so much as seeing no other option. Without deep reflection on my standards, and a commitment to hold to them, I wouldn't have individuality or authenticity.

Just as breathing is a necessary function for survival, holding to one’s standards and integrity is necessary for maintaining a sense of self and personal agency. Without it, one risks becoming a mere follower or conforming to the beliefs and values of others, losing their unique perspective and individuality. Holding to one’s standards and integrity can be challenging–it’s been the hardest thing I’ve had to do–but it is a fundamental aspect of being true to oneself.

It's fitting to have just passed Martin Luther King Day. MLK wrote:

“The ultimate measure of a man is not where he stands in moments of comfort and convenience, but where he stands at times of challenge and controversy.”

Character is revealed not in moments of ease, but in times of adversity, and when difficult choices and decisions are made. This quote reminds me that true strength and honor come from standing up for what is right, even when it is hard and uncomfortable, and that we should strive to be true to our principles and values, even in the face of opposition. All power is moral power and all strength requires the willingness to walk the hard road, even when it isn’t where you want to go.


Pairing Philosophers in 2023

Søren Kierkegaard and Friedrich Nietzsche are valued teachers of mine, and they generated many of the ideas bumping into each other in our culture today.

Søren Kierkegaard was a Danish philosopher, theologian, and social critic who is known for his contributions to the field of existentialism. He believed that the individual’s relationship to God was the most important aspect of human life, and that the search for meaning and purpose was an essential part of the human experience. Kierkegaard argued that the traditional institutions of society, such as the church and the state, were inadequate for helping individuals to find meaning and fulfillment in life, and he called for a return to a more personal and inward-looking approach to faith and spirituality.

Friedrich Nietzsche was a German philosopher who is known for his critiques of traditional values and his celebration of the individual. He argued that traditional morality, with its emphasis on self-denial and restraint, was destructive to the human spirit and hindered the development of truly great individuals. Nietzsche believed that people should embrace their own desires and passions, and strive to become what he called “overmen,” or individuals who had fully realized their own potential and lived life to the fullest.

These two philosophers define authenticity for me. Neither of them would have been comfortable in my church or in my society, but I can't escape how much I would love to host these two thinkers over a cup of coffee.

Two Gents talking

Authenticity is really hard because we can't escape our obsession with status, no matter how hard we try. It's better not to think about status too much, since focusing on it can compromise authenticity. Kierkegaard and Nietzsche are a good match because both were independent thinkers who didn't care about others' opinions, yet both were deeply wounded by the world's rejection.

One stark difference: Kierkegaard embraced faith, while Nietzsche rejected the idea of a greater meaning in life.

Both Kierkegaard and Nietzsche were obsessed with finding the truth, wherever that quest went, and were both deeply troubled by what they found and by the process of finding it.

Desiring truth rather than consistency is probably the hardest intellectual challenge, and it can be a lonely and troubling journey. Since I know I'm not wiser than the weight of history or the leaders of my faith community, I tend to side with tradition when I don't understand things. Yet I strive to overcome the temptation to prioritize consistency in my beliefs over new information that might challenge them. Consistency is a good default, but it can prevent us from fully understanding the world around us and making informed decisions. An open yet settled mind is a rare thing, and both men at once had and lacked one.

Nietzsche and Kierkegaard inspire me on this point. They were concerned with the nature of human existence and the meaning of life, and they both sought to fundamentally re-think the traditional Western philosophical tradition. This makes them good foils to consider what they might think about three significant developments in the modern world: the rise of populism, the decrease in organized religion, and the rise of artificial intelligence.

The Rise of Populism

Nietzsche and Kierkegaard were both critical of the values of the Enlightenment and the modern world, and they both argued that the modern world had lost touch with the deeper meanings and values of life. In this sense, they might both view the rise of populism with a certain degree of skepticism. Populism is often associated with a rejection of traditional political and social elites and a focus on the needs and concerns of ordinary people. Both Nietzsche and Kierkegaard would likely argue that this focus on the needs and desires of the masses can lead to a superficial and shallow understanding of the world, and they would both caution against a reliance on the “tyranny of the majority” as a guiding principle for society.

At the same time, however, both Nietzsche and Kierkegaard placed a strong emphasis on the importance of individuality and the need for individuals to be true to themselves and their own values. In this sense, they might both see the rise of populism as an opportunity for individuals to reclaim their own autonomy and agency, and to resist the homogenizing forces of modernity.

Nietzsche the populist?

The Decrease in Organized Religion

Both Nietzsche and Kierkegaard were deeply concerned with the role of religion in human life, and they both grappled with the question of how individuals can find meaning and purpose in the absence of traditional religious beliefs. Nietzsche was highly critical of traditional Christianity and other monotheistic religions, and he is known for his arguments against the existence of God and his rejection of traditional moral values. He argued that individuals should create their own values and meaning rather than relying on traditional sources of authority.

Kierkegaard, on the other hand, was deeply religious and saw faith as a central aspect of human life. He argued that belief in God was not a matter of reason, but rather a matter of the heart, and he developed the concept of the “leap of faith” to describe the idea that individuals must make a leap of faith in order to truly believe in something.

In the modern world, we are seeing a decline in organized religion and a shift away from traditional religious beliefs. Nietzsche might view this trend as a positive development, as he rejected traditional religious beliefs and saw them as a source of oppression and illusion. Kierkegaard, on the other hand, might view the decline in organized religion with concern, as he saw faith as a central aspect of human life and argued that individuals need a sense of transcendence and meaning beyond the material world.

The Rise of Artificial Intelligence

In the 21st century, we are seeing a rapid development of artificial intelligence and the increasing integration of technology into all aspects of our lives. Nietzsche and Kierkegaard would likely have very different perspectives on the rise of artificial intelligence.

Kierkegaard and AI (picture generated by Dall-E 2)

Nietzsche might view the development of artificial intelligence with a certain degree of skepticism, as he placed a strong emphasis on the value of human creativity and individuality. He might argue that the increasing reliance on artificial intelligence could lead to a dehumanization of society and a loss of the unique qualities that make humans special. But! Nietzsche was interested in the potential of technology to enhance human life and enable individuals to overcome their limitations, and he might have seen the development of artificial intelligence as a potential way to achieve this.

Kierkegaard, on the other hand, might have been more skeptical of the role of technology in society and could have seen it as a threat to human dignity and autonomy. He might have argued that the increasing reliance on technology was a symptom of a deeper spiritual malaise in modern society and could lead to a loss of meaning and purpose in life. (Good grief, how much I love Kierkegaard.)

Who Else?

All this had me thinking: what other pairs might make an interesting lens on society? Five other pairings would be super fun to meet up with:

Jean-Jacques Rousseau and John Locke

Best friends?

These two philosophers had very different views on the nature of the state and the role of the individual in society. Rousseau argued for the primacy of the common good and the need for the state to exert control over the lives of individuals, while Locke argued for the importance of individual rights and the need for limited government. Comparing these two philosophers could provide a useful framework for thinking about issues related to the balance between individual freedom and the role of the state in modern society.

Karl Marx and Adam Smith

These two philosophers had very different views on the nature of economic systems and the role of the state in regulating them. Marx argued for the abolition of private property and the need for a socialist economic system, while Smith argued for the importance of free markets and the role of self-interest in driving economic growth. Comparing these two philosophers could provide a useful framework for thinking about issues related to economic policy and the role of the state in the economy.

Michel Foucault and John Rawls

Foucault and Rawls on the March

These two philosophers had very different views on the nature of justice and the foundations of moral and political theory. They pretty much define the camps in the American left today. Foucault argued that power relations are a fundamental aspect of society (#BLM, Woke!) and that justice is not an objective concept, while Rawls argued for the importance of a social contract based on fairness and equality (think Clinton/Blair). Comparing these two philosophers could provide a useful framework for thinking about issues related to social justice and the foundations of political theory.

Thomas Hobbes and John Locke

Hobbes and Locke also disagreed about the nature of the state and the individual's place in it, but from a different angle: Hobbes argued for a strong, centralized state to maintain order and prevent anarchy, while Locke argued for individual rights and limited government. This pairing is a useful lens on the foundations of political authority and how much freedom we should trade for order.

Finally, Kant and Hegel!

Immanuel Kant, an 18th-century philosopher, has had the same level of influence on me as Kierkegaard; I consider myself a Kantian. I love the idea that the moral worth of an action is determined by the motivation behind it rather than the consequences it produces. An action is morally right if it is done out of a sense of duty or respect for the moral law, not as a means to some other end, and the moral law is universal, applying to all people regardless of their individual circumstances or desires.

Kantian ethicists argue that we have a moral duty to treat others with respect and to always act in accordance with moral principles, even when it is difficult or inconvenient to do so. They believe that this is the only way to create a just and moral society, and that failure to live up to these standards can have serious consequences for individuals and for society as a whole.

Georg Wilhelm Friedrich Hegel, a 19th-century philosopher, would be a great contrast to Kant. Some specific areas of disagreement between the two philosophers include:

The nature of history and the role of reason: Kant argued that human reason was a universal and timeless principle, while Hegel argued that reason was an inherent part of the historical process and that the world was shaped by the interplay of opposing forces. Hegel used organic metaphors and language to describe the way in which history unfolds and develops over time. For example, he referred to the process of historical development as a “world-historical process” and described the different periods of history as “stages” in the development of human consciousness.

The nature of the state and the role of the individual: Kant argued for the importance of individual rights and the need for limited government, while Hegel argued for the primacy of the state and the idea that individuals should be subservient to the state.

The nature of knowledge and the foundations of moral and political theory: Kant argued for the importance of reason and the a priori principles that structure our experience, while Hegel argued that knowledge was a product of the historical process and that the ultimate goal of human development was the realization of the “Absolute.”

How fun it would be to pair up all the philosophers mentioned above. They would probably find my company pretty boring, but it would be fun to tell them about the 1900s and answer their questions about what we believe today. Ah well, I get to read their books, write this stuff up, and, even better, talk to you about it.


Review: From Strength to Strength

Devote the back half of your life to serving others with your wisdom. Get old sharing the things you believe are most important. Excellence is always its own reward, and this is how you can be most excellent as you age.

Arthur C. Brooks

How do you get the most out of aging well? Cultivate gratitude, practice compassion, build relationships, and create beauty. I love the simplicity and truth of that.

In “From Strength to Strength: Finding Success, Happiness, and Deep Purpose in the Second Half of Life,” Arthur Brooks addresses a common problem faced by many successful individuals (who he calls “strivers”) as they enter the second half of their lives.

Yes, but I'm 46; should I care about this? Well, yes: according to most research, I'm past my professional prime. In the world of tech and science, the most common age for producing a magnum opus is the late 30s. The likelihood of a major discovery increases steadily through one's 20s and 30s and then declines through one's 40s, 50s, and 60s. Research shows that the likelihood of producing a major innovation at age 70 is approximately what it was at age 20: almost nonexistent.

My brand is innovation and innovators typically have an abundance of fluid intelligence. It is highest relatively early in adulthood and diminishes starting in one’s 30s and 40s. This is why tech entrepreneurs, for instance, do so well so early, and why older people have a much harder time innovating.

Crystallized intelligence, in contrast, is the ability to use knowledge gained in the past. Think of it as possessing a vast library and understanding how to use it. It is the essence of wisdom. Because crystallized intelligence relies on an accumulating stock of knowledge, it tends to increase through one’s 40s, and does not diminish until very late in life.

Careers that rely primarily on fluid intelligence tend to peak early, while those that use more crystallized intelligence peak later. For example, Dean Keith Simonton has found that poets—highly fluid in their creativity—tend to have produced half their lifetime creative output by age 40 or so. Historians—who rely on a crystallized stock of knowledge—don’t reach this milestone until about 60.

No matter what mix of intelligence your field requires, you can always endeavor to weight your career away from innovation and toward the strengths that persist, or even increase, later in life.

This book underscores the tragedy of "peaking early" and failing to grow and adjust to life's stages. But this is the lot of the striver: there will be a point where worldly accomplishment diminishes or even stops. What happens then, when the most difficult task becomes the daily struggle with a sense of failure and despondency?

To address this issue, Brooks suggests that it is necessary for individuals to find a deep purpose in their second half of life. This can be achieved through the cultivation of gratitude, the practice of compassion, the building of relationships, and the creation of beauty. By focusing on these actions, individuals can find a sense of fulfillment and purpose that will carry them through the second half of life and enable them to finish well.

One of the most compelling aspects of Brooks’ book is his emphasis on the importance of gratitude. He argues that cultivating gratitude allows individuals to find joy and purpose in their lives, even in the midst of challenges and setbacks. By focusing on the things we are thankful for, we can find meaning and fulfillment that is not dependent on external circumstances or accomplishments.

But gratitude is empty without the practice of compassion. By seeking to understand and care for others, we can find a sense of purpose and meaning that goes beyond our own individual accomplishments. Brooks makes it clear that this can be especially meaningful in the second half of life, as it allows us to contribute to the greater good and make a positive impact on the world around us.

In addition to cultivating gratitude and practicing compassion, Brooks also emphasizes the importance of building relationships. He argues that strong relationships with others can provide us with a sense of belonging and purpose that is essential for a fulfilling life. By investing in these relationships and seeking to connect with others, we can find meaning and joy in the second half of our lives.

Finally, Brooks suggests that creating beauty is another key way in which individuals can find purpose and meaning in the second half of life. Whether through art, music, or other creative endeavors, the act of creating beauty or building beautiful things allows us to connect with something greater than ourselves and find a sense of fulfillment and joy.

Winston Churchill Painting as a Pastime

On a personal note, I have to contrast Brooks' message with the story of the apostle Paul. Writing from prison, Paul emphasized the importance of pressing on toward the goal and finishing his course. This struggle is not unique to him; many people wrestle with a sense of failure and despondency in the second half of life. What would Paul think of the danger of peaking in one's career between the ages of 30 and 50, and the potential for despondency in the latter half of life?

Brooks suggests that in order to avoid this sense of failure, it is necessary for individuals to find a deep purpose in their second half of life. Paul would say it is also important for individuals to recognize that their identity should not be solely based on their accomplishments or successes. Instead, our identity should be rooted in our relationship with God and the gospel of Jesus Christ. This allows us to find a sense of purpose and meaning that is not dependent on our external circumstances or accomplishments.

I also found wisdom from a completely different angle: the varnasrama system of Hinduism, which divides our lives into four distinct stages of roughly 20-25 years each, with vanaprastha (वनप्रस्थ) being the all-important third stage.

After our youthful first stage (“figure out who I am”), in our early-20s we move to a second stage (“prove yourself”) that lasts until we are about 50 years of age. In the second stage, we are driven by the pursuit of pleasure, sex, money, and accomplishments. But by the third stage (“give back”), at around age 50, we begin to pull back from a focus on professional and social advancement. Instead, we become more interested in spirituality and faith.

The important transition, from stage 2 to stage 3, typically occurs around age 50 and can be difficult for many people, especially in Western societies, where there is often a strong emphasis on professional and social advancement. According to Arthur C. Brooks, this transition matters because it can lead to increased happiness and contentment, as well as better physical health. As people age, they tend to become wiser, with a greater ability to combine and express complex ideas, interpret the ideas of others, and use the knowledge they have gained throughout their lives. This "crystallized intelligence" can be put to good use by sharing wisdom with others and becoming more devoted to spiritual growth. It is important to let go of the things that once defined us in the eyes of the world and embrace this new stage in order to truly thrive in the latter half of adulthood. In summary, the older we get, the better we get at:

  • Combining and using complex ideas and expressing them to others
  • Interpreting the ideas others have (even if we didn’t create them ourselves)
  • Using the knowledge we have gained during our lives

As we approach the finish line of our journey, it is essential that we strive to finish well. This involves finding a deep purpose in the second half of life, cultivating gratitude, practicing compassion, building relationships, and creating beauty. By doing so, we can navigate the challenges and winds of life with a focus on the ultimate goal of glorifying God and finishing our course with joy and purpose.

“From Strength to Strength” is a thought-provoking and insightful book that offers valuable insights and strategies for finding success, happiness, and deep purpose in the second half of life. Whether you are just entering this phase of your journey or are well into the second half, this book offers valuable guidance and encouragement for navigating the challenges and winds of life with a focus on the ultimate goal of finishing well.
