Posts by Tim Booher

Review: Our Kids

Following a polarizing election that has left many in my community surprised and confused, Robert Putnam’s “Our Kids” provides a refreshing collection of research and engaging anecdotes told with generosity of spirit. Perhaps more than any other modern voice, Putnam excels at describing American society in a bipartisan way. He and his previous best-seller, “Bowling Alone,” have been influential in multiple administrations. His emphasis on improving society appeals to traditionalist conservatives who emphasize historic institutions and to progressives who value his call for a more proactive government.

Increasing economic inequality and decreasing upward mobility will hurt our children

While his previous work provided a critique of declining civic life, “Our Kids” fits nicely with the narrative of “Coming Apart” by Charles Murray and “Hillbilly Elegy” that describe the dynamics and effects of widening income disparity and decreasing economic mobility. Where Putnam differs is his emphasis on a topic unprecedented in its importance and urgency to a parent: what is happening to the dreams of our children?

To answer this question, he focuses on the state of upward mobility and its consequences for the future of our children through an emphasis on recent subtle but profound changes to three parts of society: family life, neighborhoods and schools. In all his analysis, he defines rich and poor simply: “rich” parents finished college; “poor” parents have high school degrees or less. In the end, he describes a radical reordering of social capital towards those with the resources, networks and know-how to use it. He labels this “an incipient class apartheid”. His narrative is a constant meta-analysis of academic studies along with contrasting portraits of high- and low-income families.

While “The Second Machine Age” is a much better book for understanding how we got here, Putnam makes a compelling and personal appeal to understand just how bleak life is, and is becoming, for those without the resources to buffer their children from life’s dangers. While the rich children profiled faced plenty of challenges (divorce, substance and physical abuse, academic struggles), their parents were able to change their environment to help, often with dramatically different outcomes than those of the poor children profiled. When life became difficult, their networks responded and their resources opened doors closed to the poor.


With the benefit of specific names and locations, he describes the differing ways that rich and poor experience school sports, obesity, maternal employment, single parenthood, financial stress, college graduation, church attendance, friendship networks and family dinners.

Every story sets the stage for a set of new statistics. An example: affluent kids with low high-school test scores are as likely to get a college degree (30%) as high-scoring kids from poor families (29%). Education is supposed to help level the playing field, but it is increasingly achieving the opposite.

Another example is that rich kids get almost 50 percent more nurturing time from their parents, when there used to be no class difference. In fact, many factors are diverging that used to be independent of class (e.g. church attendance rates and political engagement). Most troubling to me is that increasingly only the rich retain hope and pass it on to their children. While Horatio Alger marked the aspirations of a generation of upwardly mobile children at the beginning of the 20th century, Putnam makes the case that apathy and pessimism reign at the bottom of the social order at the end.

Some of the stories were heart-wrenching to a degree that distracts from the larger story. One subject, “Elijah”, was beaten by his father after the son was jailed for arson, thrown out of the house by his mother because of drinking and drugs, and unable to escape the lure of the streets. Such stories arouse my passion and compassion. They take up an elevated place in my mind, and leave me hungry for more information and for statistics to know how unique their stories are.

Current social dynamics are making it worse

Putnam makes it clear that things aren’t getting better. Everyone knows rich kids have advantages, but he shows that their advantages are large and growing with no bounds to stop the trend. He shows the growing gap through numerous “scissor charts”. One way to look at the diverging trends is through the Adverse Childhood Experiences Scale:

  1. Household adult humiliated or threatened you physically
  2. Household adult hit, slapped, or injured you
  3. Adult sexually abused you
  4. Felt no one in family loved or supported you
  5. Parents separated/divorced
  6. You lacked food or clothes or your parents were too drunk or high to care for you
  7. Mother/stepmother was physically abused
  8. Lived with an alcoholic or drug user
  9. Household member depressed or suicidal
  10. Household member imprisoned

Each of these factors is becoming more frequent in poor families, and he has both charts to convince the left side of our brain and stories to convince the right.

One chart he emphasizes heavily is the trend in family dinners. From the mid-1970s to the early 1990s family dinners became rarer in all social echelons, but since then that trend has leveled off for households led by college-educated parents and has only continued downward for families led by parents with a high school education or less.

Causes

In the end, he argues that inadequate empathy and weakened civic institutions are the primary causes. While “Bowling Alone” made a solid case for the decline of civic institutions, here he makes a strong argument that a growing failure of understanding is reinforced by self-selection and the dynamics of technology and productivity.

Current dynamics provide big advantages to children at the top and restrict the capability of those without resources to work their way up the social ladder. This has cascading effects and is a particularly difficult problem with long time constants. Rich parents have a growing edge in access to good day care, then to better schools with a full suite of extracurricular activities, and the spending gap on such activities has tripled. Malcolm Gladwell covered these dynamics well in Outliers.

Potential Improvements

While his reliance on anecdotes risks informing our emotions more than our minds, the biggest weakness in the book is the absence of a discussion of the political forces that shape the world he so aptly describes. The stories of the poor are heartbreaking. It is clear that Putnam was moved by the people he met, and the stories are moving, but how well do the statistics hold up? It is nice that he goes beyond raw data and listens to those who are otherwise voiceless in our society. By blending portraits of individual people with aggregate data, he gives us a remarkably clear picture of inequality in the United States. But is this clear picture an accurate picture? That said, I’m ready to avoid politics for a bit, and the dialogue is definitely enhanced by his refusal to fall into the neat bins of our current culture wars.

He never tackles the finances of the individuals involved. His reluctance to wade into politics prevents him from making politically inconvenient observations, such as the fact that rich kids still tend to grow up with two parents and poor kids don’t. For me, he also invites cognitive dissonance between the narrative of victimhood and entitlement and the gumption (maybe even intentional obliviousness) that characterized others who have risen above their class.

Most maddeningly, he omits a discussion of the political or economic forces driving the changes he laments. In particular, I don’t think any story about inequality is possible without describing the intersection of economics, productivity, globalization and technology.

Potential Solutions

I resonate with his emphasis on the importance of family dinners. His policy suggestions include expanded tax credits for the poor, greater access to quality day care and more money for community colleges. He also highlights increased investment in early education, expanded child care options and formal mentoring programs.

While logical, these sound a bit like the status quo and might be missing an emphasis on the moral foundations which provided the glue for the institutions whose importance he highlights.

In all, he provides a vivid description of the diverging life chances of children in rich and poor families. I’m hoping that the main impact of this book will be the many conversations it creates, and that the poignancy of its character-based narratives will force me to think of ways I can help counter the trends it describes.


Endurance and Altitude for Propeller-driven Aircraft

Lower altitude = higher endurance?

At first glance this didn’t make sense to me. Air Force pilots will be the first to tell you that you don’t hug the earth en route to maximize your time on station; it is harder slogging in thicker air. At higher altitudes you have lower air pressure and colder air temperatures: low pressure is bad for thrust, but low temperatures are more favorable. How does this all shake out? And what are the differences between jet- and prop-driven aircraft?

My initial hunch was that for jets, higher altitude requires more speed for level flight, so for every gram of fuel burned you would cover more range in more rarefied air, but endurance is really about turbine efficiency. Higher altitudes decrease the mass flow through the engine, but engine cycle performance is mostly correlated with the ratio of the maximum temperature achievable in the turbine, $T_{max}$, to the ambient temperature. $T_{max}$ is fixed by engine material properties, so the main way to increase efficiency with altitude is to decrease the ambient temperature. On a standard day, the isotherm begins around 36,089 feet, where the temperature remains close to -70 F for roughly another 30,000 feet. Around 70kft, the outside temperature actually begins to increase. Pressure decreases at a rate that is not subject to the same complex interplay between the sun’s incoming radiation and the earth’s outbound radiation, which means any altitude above 36kft should really not give you a performance advantage.

However, modern high-endurance unmanned platforms are propeller-driven aircraft. While I haven’t done the math to show how much more efficient these are when compared to jets, I want to explore how the efficiency of propeller-driven aircraft in particular is affected by altitude.

All aircraft share common interactions with the air, so I had to start with the basics of the airfoil. The lift coefficient is traditionally defined as the ratio of the lift force to the force that the dynamic pressure exerts over the planform area. Dynamic pressure (the difference between the stagnation and static pressures) is simply the pressure due to the motion of air particles. When applied against a given area, it is the force that must be overcome by lift for upward motion. If $L$ is the lift force, and $q$ is the dynamic pressure applied across a planform area $A$ for an air density of $\rho$, then by rearranging the lift equation we have,

$$ C_L = \frac{L}{q_{\infty}\,A}=\frac{2\,L}{\rho\,v^2 A}. $$

Solving for velocity gives us our first insight on how altitude is going to impact the performance of any airfoil:

$$ v = \sqrt{\frac{2\,L}{A\,C_L}}\,\sqrt{\frac{1}{\rho}}. $$

Since density rises as altitude falls, lower altitudes require lower velocity to generate the same lift force. But how does lower velocity equate to more endurance?

Using climatic data from MIL-STD-210C, under the conservative assumption of a high density that occurs 20% of the time, we have a basic $1/x$ relationship, with density decreasing dramatically with altitude.

From this alone, we can then plot the velocity needed to keep $C_L$ constant.
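To make this concrete, here is a minimal sketch (mine, not from the original post) that pairs a standard ISA density model with the rearranged lift equation to show how the required level-flight speed drops as density rises. The weight, wing area and $C_L$ are made-up numbers, and the ISA model stands in for the MIL-STD-210C data referenced above:

```python
import math

def isa_density(alt_ft):
    """Approximate ISA air density (kg/m^3), valid below the ~36,089 ft tropopause."""
    alt_m = alt_ft * 0.3048
    T0, p0, lapse, R, g = 288.15, 101325.0, 0.0065, 287.05, 9.80665
    T = T0 - lapse * alt_m
    p = p0 * (T / T0) ** (g / (R * lapse))
    return p / (R * T)

def speed_for_constant_CL(weight_N, area_m2, CL, rho):
    """Level-flight speed that keeps C_L fixed: V = sqrt(2 W / (rho A C_L))."""
    return math.sqrt(2.0 * weight_N / (rho * area_m2 * CL))

# Hypothetical airframe: 1,000 kg, 20 m^2 wing, flown at C_L = 0.8
W, A, CL = 1000 * 9.81, 20.0, 0.8
for alt_ft in (0, 10_000, 20_000, 30_000):
    rho = isa_density(alt_ft)
    v = speed_for_constant_CL(W, A, CL, rho)
    print(f"{alt_ft:>6} ft: rho = {rho:5.3f} kg/m^3, V = {v:5.1f} m/s")
```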

To understand the impact this has on endurance, we have to look at brake specific fuel consumption (BSFC). This is a measure of fuel efficiency within a shaft reciprocating engine. It is simply the rate of fuel consumption divided by the power produced, and can also be considered a power-specific fuel consumption. If we consume fuel at a rate $r$ (grams per second) for a given power $P$,

$$ \text{BSFC} = \frac{r}{P} = \frac{r}{\tau \, \omega}, $$

with $\tau$ as engine torque (N·m) and $\omega$ as engine speed (rad/s).

BSFC varies through complex interactions with engine speed and pressure.

To understand how this affects endurance, let’s just consider how much fuel we have divided by the rate we use it. Because fuel consumption for piston engines is proportional to power output, we use the average value method to predict the endurance of a propeller-driven aircraft. This is very simply looking at endurance as the fuel weight you have divided by the rate that fuel is spent, all the way to zero fuel. If $W_f$ is the weight of fuel available to be burned:

$$ E = \frac{\Delta W_f}{\dot W_f} = \frac{\Delta W_f}{(\text{BSFC}/\eta_{prop})\,D_{avg}\,V_{\infty}}, $$

where $\eta_{prop}$ is the propeller efficiency factor and $V_{\infty}$ is the free-stream airspeed. The speed for maximum endurance corresponds to flight at the minimum power required, and at that power we maximize our endurance at the appropriate velocity. For increased accuracy, let’s consider that fuel is burned continuously,
$$ E = \int_{W_2}^{W_1} \frac{\eta_{prop}}{\text{BSFC}} \frac{dW}{D\,V_{\infty}}. $$

If $C_L$ is constant and $W=L$, then we can substitute $V_{\infty} = \sqrt{2W/(\rho\,A\,C_L)}$,

$$ E = \frac{\eta_{prop}\,C_L^{3/2}}{\text{BSFC}\,C_D} \sqrt{\frac{\rho\,A}{2}} \int_{W_2}^{W_1} \frac{dW}{W^{3/2}} = \frac{\eta_{prop}\,C_L^{3/2}}{\text{BSFC}\,C_D} \sqrt{2\,\rho\,A} \left( W_2^{-1/2} - W_1^{-1/2} \right)$$

This tells us that if we want maximum endurance, we want high propeller efficiency, low BSFC, high density (both low altitude and low temperature), a high weight of fuel available, and a maximum value of the ratio $C_L^{3/2}/C_D$. Naturally, the higher density is going to directly assist, but the ratio $C_L^{3/2}/C_D$ is maximized when we minimize the power required,

$$ P_R = V_{\infty}\,D = V_{\infty}\,\frac{W}{C_L / C_D} = \sqrt{\frac{2\,W^3}{\rho\,A}}\,\frac{1}{C_L^{3/2}/C_D} = \frac{\text{constant}\,C_D}{C_L^{3/2}} $$

So we want to minimize $P_R$. For an assumed drag polar, the condition for minimizing $C_D/C_L^{3/2}$ is found by expressing the ratio in terms of the drag polar, taking the derivative, and setting it equal to zero:

$$ 3 C_{D_0}=k C_L^2 $$
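For completeness, here is that derivative worked out, assuming the standard parabolic drag polar $C_D = C_{D_0} + k\,C_L^2$ (the polar isn’t written out explicitly above):

$$ \frac{C_D}{C_L^{3/2}} = C_{D_0}\,C_L^{-3/2} + k\,C_L^{1/2}, \qquad \frac{d}{dC_L}\!\left(\frac{C_D}{C_L^{3/2}}\right) = -\frac{3}{2}\,C_{D_0}\,C_L^{-5/2} + \frac{1}{2}\,k\,C_L^{-1/2} = 0 \;\Rightarrow\; 3\,C_{D_0} = k\,C_L^2. $$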

This has the interesting result that the velocity which results in minimum power required has an induced drag three times higher than parasitic drag.
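To get a feel for how strongly density enters the endurance expression above, here is a rough numerical sketch; every aircraft number in it is hypothetical, and the only point is the trend with $\rho$:

```python
import math

def endurance_hours(eta_prop, bsfc_kg_per_kWh, CL, CD, rho, area_m2, W1_N, W2_N):
    """E = (eta/c) * (CL^1.5 / CD) * sqrt(2 rho A) * (W2^-1/2 - W1^-1/2),
    with c the fuel weight flow per unit power (N of fuel per watt-second)."""
    c = bsfc_kg_per_kWh * 9.81 / (1000.0 * 3600.0)   # kg/kWh -> N per W*s
    E_sec = (eta_prop / c) * (CL**1.5 / CD) * math.sqrt(2.0 * rho * area_m2) \
            * (W2_N**-0.5 - W1_N**-0.5)
    return E_sec / 3600.0

# Hypothetical aircraft: eta = 0.8, BSFC = 0.35 kg/kWh, CL = 1.0, CD = 0.05,
# 20 m^2 wing, 1,200 kg at takeoff burning down to 1,000 kg.
for rho in (1.225, 0.905, 0.653):                    # roughly sea level, 10 kft, 20 kft
    E = endurance_hours(0.8, 0.35, 1.0, 0.05, rho, 20.0, 1200 * 9.81, 1000 * 9.81)
    print(f"rho = {rho:5.3f} kg/m^3 -> endurance ~ {E:4.1f} h")
```

Everything else held fixed, endurance scales with $\sqrt{\rho}$, which is the low-altitude advantage the section is driving at.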

In all, it seems that platform endurance is a function of a number of design parameters and the air density, so in general the higher the air density, the higher the endurance. Please comment to improve my thoughts on this.


Review: Sapiens

Yuval Harari (יובל נח הררי) has written a scholarly, thought-provoking and crisply written survey of “big history” that he uses to preach his starkly different views of philosophy and economics. Dr. Harari teaches at the Hebrew University of Jerusalem, where he wrote Sapiens in Hebrew and accomplished the very idiomatic translation himself. Provocative and broad, he argues for a different perspective on history with a confidence and swagger that suggest he has it all figured out. Certainly, his very use of “Sapiens” for the title is in itself provocative. It reminds us that, long ago, the world held half a dozen species of human, of which only Homo sapiens survives. By highlighting the remarkably unpredictable path of history, he continually emphasizes that our present state is merely one of many possible, and he applies this both to the nature of our species and to our beliefs (what he calls “shared myths”). This contrasts with my worldview, which is the Christian narrative that we were uniquely created in God’s likeness and image and uniquely entrusted with a creation mandate.

Despite my differences, I appreciate any book that unleashes my imagination and challenges my worldview. Sapiens delivers, with surveys of the impact of language, agriculture, economics, and science and technology against the unique backdrop of our species. The final conclusion is that we have won. While we are no longer competing for dominance on the planet, our greatest danger is our own inability to govern ourselves and anticipate the impact of our technology. Thus, for all our advances, our victory may be Pyrrhic: we have spectacularly failed to use our powers to increase our well-being and happiness. Because of this, Dr. Harari predicts that we will vanish within a few centuries, either because we’ve gained such godlike powers as to become unrecognizable or because we’ve destroyed ourselves through environmental mismanagement.

From all this, I found Sapiens contains several new and powerful ideas: that Sapiens’ takeover of the planet is complete, that our greatest power is the ability to collectively create, believe and share ideas, that technology often enslaves us and can’t be resisted, and that the scientific revolution is the result of a willingness to admit ignorance.

We win! Sapiens now dominate the Planet

While the image below is not from Sapiens (it is from xkcd), it makes the point well that our species has dominated the planet. If you consider humans and our domesticated animals, our takeover is complete and the remaining wildlife is simply there to amuse us and give meaning to the pictures in our children’s books.

I wish he had calculated the date Sapiens “won”, the date when our species could basically do whatever we wanted with nature. While the industrial revolution might be seen by many as the period when we conquered the world, I suspect the rise of the agrarian economy is where we took a different route and established a dominant position over nature. For most of its history, Homo sapiens lived a nomadic lifestyle. The vast majority of our ancestors spent their lives hunting prey and gathering vegetation. Rather than settling in one area, they travelled to wherever food was plentiful. Around 12,000 years ago, however, that all changed, and that is when our population started to explode. On a small patch of land, farmers could grow a mass of edible plants. The consequent increase in the food supply meant that human societies could sustain much higher populations, but that required the systems we use today: money, government, war and politics.

However, the systems that provide us this dominance may provide the means for our downfall. Sapiens reminds us that we may be collectively acting in a way that is harmful to our future as a species by establishing such a dominant position on our planet without a matching increase in our wisdom or ability to govern. I often worry that we have developed several generations of technology but are morally equivalent to our ancestors of thousands of years ago. What politician today has the gravitas or ethics of Cicero? Clearly, our iPads don’t make us better people.

This wouldn’t be a problem, except that our interdependent economic systems and the explosion of the money supply (i.e. leverage) make our society dependent on high expectations about the future, while bank interdependencies have increased systemic risk to a level unprecedented in history.

Sapiens’ power is our ability to share and jointly believe in ideas

Sapiens have accomplished this domination because we can uniquely cooperate in very large numbers across vast distances. This is Harari’s most strident theme: the physical un-reality of all our ideas. He claims that all large-scale human cooperation is based on myths, which he highlights are fiction. He describes shared concepts like the nation, money, and human rights as fabrications that have no relation to our biological reality as they don’t exist objectively. He claims that the fact that we share and ascribe such a lofty status to fictions is our most unique characteristic as a species.

While he remarks that we tend to over-value the reality of all our ideas, he reserves his sharpest criticism for religion and its role in forming a shared human story. He covers Zoroastrian sacred texts, the Book of Genesis and the Popol Vuh. To him, gossip forms the most fundamental bond between local groups, but larger groups require religion, which can sweep past trifling details and unite nations. In his narrative, religion was the necessary glue for human society until the 19th century, when scientific knowledge was able to create a standardized set of world beliefs. However, he notes that, without religion, there is no basis for many of the values we hold dear, such as human rights.

This denigration of the reality of our ideas and institutions is one place where Harari overplays his hand. I believe our ideas and institutions, what Harari calls myths, have a complex ontological status. While nationhood, the Pythagorean Theorem and the fundamental equality of human beings before the law are all non-physical notions formed in human brains, to label them all as fictions, as Harari does, without distinguishing more carefully diminishes the entire book.

Technology and Progress are a Mixed Blessing

When he talks about technology and the Scientific Revolution, Harari is in an area I’m much more familiar with. He makes clear that our obsession with technology is a modern phenomenon. To him, the relationship between science and technology is itself a recent development, and when Bacon connected the two in the early seventeenth century it was a revolutionary idea.

While our current society looks at the blessings of technology as an absolute win, Harari highlights the darker shadow behind technical advances. Namely, despite appearances, consumers don’t have a real choice about adopting new technology. Someone can decide to forgo email and credit cards, but they will not be able to participate in the modern economy. New technology thus creates addictions and what he calls a “luxury trap”.

For example, examine the rise of farming. Agriculture increased the amount of available food, yet the result of prosperity was not happiness but “population explosions and pampered elites.” Farmers worked harder than foragers and had a worse diet and poorer health, but the foragers had to adapt to the new economy. The surplus went to the privileged few, who used it to oppress. “The Agricultural Revolution,” Harari says, “was history’s biggest fraud.”

At the end of the book, Harari expresses an ambivalence about what we consider today to be a species-wide increase in well-being. “Unfortunately,” he says, “the Sapiens regime on earth has so far produced little that we can be proud of.” While I’m personally in awe of quantum electrodynamics, the modern financial system and anesthesia, Harari is arguing that living better has not made us more content. Citing recent research in psychology, he states that happiness “depends on the correlation between objective conditions and subjective expectations.” He cites the example of one person winning the lottery while another becomes disabled. While one will most likely experience short-term happiness and the other depression, research has shown that both will be about equally happy in a year, even though their circumstances remain different.

More worrying, our current dependence on technology may be our downfall. Not only are we interconnected to an unprecedented degree, but we are also addicted to growth and to near-impossible expectations that technology will increase productivity and make scarce resources abundant. While this has historically staved off Malthusian collapse as population has grown, Harari persuasively argues that history is notoriously difficult to predict.

We might be at the beginning of the end of our species

At DARPA, we are starting to understand and experiment with human-machine symbiosis. Recently, we have not only wired a sense of touch from a mechanical hand directly into the brain, but have also figured out how to connect the brain to sensors that feel natural. Sapiens highlights the transplantation of ears onto mice and some of the fascinating and terrifying implications of stem cell manipulation.

Harari notes that for the first time in history, “we will see real changes in humans themselves – in their biology, in their physical and cognitive abilities”. History reveals that while we have enough imagination to invent new technologies, we are unable to foresee their consequences. Harari states:

It was the same with the agricultural revolution about 10,000 years ago. Nobody sat down and had a vision: ‘This is what agriculture is going to be for humankind and for the rest of the planet.’ It was an incremental process, step by step, taking centuries, even thousands of years, which nobody really understood and nobody could foresee the consequences.

The Scientific Revolution is the result of an Admission of Ignorance

Harari takes a stab at what caused the scientific revolution: a willingness to admit ignorance. Before the modern scientific era, the State (the King) and the Church were the source of all truth. There was no excuse to be ignorant. With pith and awe, Harari describes the Scientific Revolution as the point in history when “humankind admits its ignorance and begins to acquire unprecedented power.”

It is worth quoting him here. He writes:

“But modern science differs from all previous traditions of knowledge in three critical ways:

a) The willingness to admit ignorance. Modern science is based on the Latin injunction ignoramus – ‘we do not know’. It assumes that we don’t know everything. Even more critically, it accepts that the things that we think we know could be proven wrong as we gain more knowledge. No concept, idea or theory is sacred and beyond challenge.

b) The centrality of observation and mathematics. Having admitted ignorance, modern science aims to obtain new knowledge. It does so by gathering observations and then using mathematical tools to connect these observations into comprehensive theories.

c) The acquisition of new powers. Modern science is not content with creating theories. It uses these theories in order to acquire new powers, and in particular to develop new technologies.

The Scientific Revolution has not been a revolution of knowledge. It has been above all a revolution of ignorance. The great discovery that launched the Scientific Revolution was the discovery that humans do not know the answers to their most important questions.”

Also, this scientific progress, he asserts, was fueled by the twin forces of imperialism and capitalism.

He writes:

“What forged the historical bond between modern science and European imperialism? Technology was an important factor in the nineteenth and twentieth centuries, but in the early modern era it was of limited importance. The key factor was that the plant-seeking botanist and the colony-seeking naval officer shared a similar mindset. Both scientist and conqueror began by admitting ignorance – they both said, ‘I don’t know what’s out there.’ They both felt compelled to go out and make new discoveries. And they both hoped the new knowledge thus acquired would make them masters of the world.”

While I find his views fascinating here, they still don’t match my worldview. I consider that the grand vision of using the scientific method to gain mastery over the physical world arose from a long-standing Christian vision, dating back at least to St. Augustine in the fourth century, of nature as the second book through which God made himself known to humanity (the first being the Bible). Galileo justified science as an attempt to know the mind of God through his handiwork.

Missing this connection is only possible by refusing to remove the lens of currently accepted groupthink. This is where Harari disappoints. He defers too much to current orthodoxies, often resisting the logic of his own arguments for fear of affronting feminists, or avoiding conclusions that would criticize his gay lifestyle, vegan sensibilities or postmodern worldview. There seems to be an inner conflict between the author’s freethinking scientific mind and a fuzzier worldview hobbled by political correctness.

In any case, I find Sapiens breathtaking in its scope and fascinating in its perspective.


There are some great quotes from the book here.


July 4th Home Project: Thermostat Bracket

This post is about how to use design tools to save time and build nice stuff at home using computer controlled machines (CNC). In addition to describing my design process, I’ve also included the helpful references I found along the way.

Our old thermostat was too small for our wall. I could have replaced the drywall, but I needed a starter project to help me understand 3D CNC work with wood. Replacing the drywall would have taken a good bit of time because of the lath behind the old thermostat. The bracket took a long time because I had to learn wood CNC and spent way too long finishing the surface. In the end, this was a great use of a wood CNC machine. It would have been difficult to get the corners right and route out the pocket inside. Additionally, I could prototype and correct parts of my design with the rapid iteration that CNC machines provided.

We have a programmable thermostat with z-wave, the 2gig CT100 Z-Wave Touch Screen Programmable Thermostat. It works perfectly and is easy to control with our Mi Casa Verde VeraLite Home Controller. This gives us the ability to set our temperature from our phones or do nest-like things like learn our patterns and adjust temperature. We can also set up multiple thermostats to regulate temperature throughout the different regions of our house.

In case you are working with the CT100 or the VeraLite, you might find the following links helpful:

Design

I designed the bracket in Fusion 360. I’m still figuring out how to use Fusion, but it is a computer-aided design application for creating 3D digital prototypes including design, visualization and simulation. Fusion 360 is easy to use and it provides the ability to go from design, render, analysis and production in one tool. Most important, it is free for hobbyists.

The design was pretty straightforward. It is a one inch offset with fillets that matched the radius of the CT100. One problem with CNC routing is that I tend to design features that take advantage of the CNC features and this tends to lead to more curves. I just had to get the measurements right. I shouldn’t need to do this, but I used a laser cutter to cut out the frame from a piece of cardboard to check the fit. I’m glad I did, because I hadn’t accounted for some of the curves and the opening was too small. In general, I love using the laser-cutter to prototype designs. The prototype let me see how the final design would look on the wall. This would have been helpful to test different designs. Chrissy and I tend to like 18th-century English and 19th-century neoclassic millwork, but I didn’t put too much thought into this design, partly because I could change it so easily.

Here is the final, dimensioned, design:

Screenshot 2016-07-03 10.16.37

Construction

I found a piece of scrap plywood at TechShop that I cut on the ShopBot buddy.

ShopBot Buddy

To cut the workpiece I used the 1/4″ High Speed Steel Two Flute Downcut. You can see the purchase page here. As this was my first cut, I had to understand the definitions and the different cutter parameters to build the tool in fusion.

For the High Speed Steel Two Flute Downcut, the parameters are:

  • CED: 1/4
  • CEL: 1
  • SHK: 1/4
  • OAL: 3

Here are some terms that helped me:

CED: CED is abbreviated for cutting edge diameter or the width of the cut the tool should make through the work piece. CED has a tolerance in thousandths of an inch or .xxx decimal places.

CEL: CEL is abbreviated for cutting edge length and is the maximum thickness of the material it can cut. CEL has a tolerance in hundredths of an inch or .xx decimal places.

SHK: SHK is abbreviated for shank diameter and is the nominal size of the shank which should match the collet size of the spindle the tool will be used in. SHK has tolerance in the ten-thousandths of an inch or .xxxx decimal places.

OAL: OAL is abbreviated for overall length and is the total nominal length of the tool from end to end. OAL has a tolerance in hundredths of an inch or .xx decimal places.

HSS: High Speed Steel, typical applications in Non-Abrasive Plastic, Solid Wood & Aluminum where keen edges perform best. High Speed Steel tools perform well in hand routing applications where a tough core is necessary to prevent tool breakage.

Carbide Tipped: Used for a variety of applications in composite woods, hardwoods, abrasive plastics and composite plastics to hard aluminum. Limited by geometry in some applications due to the brazed nature of the tool. Carbide Tipped tools work well in hand routing applications due to the tough HSS core and hard carbide cutting edges.

Solid Carbide: Typically used for the widest variety of applications including man-made board, solid wood, abrasive plastics, and some aluminums. Solid Carbide does not deflect, allowing the tool to be fed at higher feedrates than PCD or straight insert cutters, decreasing cycle times. Solid tools also have a major edge-keenness advantage thought possible only in HSS until a few years ago.

Chipload: Chipload is simply defined as the thickness of a chip which is formed during the machining of a material. Chipload is critical because if the chip is the proper size, the chip will carry away the heat, promoting long tool life. If the chip is too small, the heat is transferred to the cutting tool, causing premature dulling. Too high of a chipload will cause an unsatisfactory edge finish or part movement.

The most important reason to understand cutter parameters is to set the correct feed rate, which is a combination of RPM and cutting speed. In order to get this right, I consulted this reference from ShopBot and read up on end mills in general at Makezine. I was also able to incorporate some information from Destiny Tool that was helpful to verify my settings.

These links also helped:
* hardwood cutting data from Onsrud
* A great video tutorial from ShopBot

After understanding endmills, I had to get everything in the right format for the machine. I found the open sbp reference to be very helpful and the command reference also taught me how to understand the resultant g-code.

I summarized my research below:

Name                          | SB#   | Onsrud Series | Cut   | Chip Load per leading edge | Flutes | Feed rate (ips) | RPM    | Max Cut
1/4″ Downcut Carbide End Mill | 13507 | 57-910        | 1 x D | 0.005-0.007                | 2      | 3.0-4.2         | 18,000 |
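As a sanity check on the table, the standard chip-load relation (chip load = feed rate ÷ (RPM × flutes)) reproduces the listed feed-rate range. A minimal sketch with the values above plugged in:

```python
def feed_rate_ips(rpm, flutes, chip_load_in):
    """Chip load = feed / (RPM x flutes), so feed = RPM x flutes x chip load."""
    inches_per_minute = rpm * flutes * chip_load_in
    return inches_per_minute / 60.0   # convert to inches per second

# Values from the table above: 18,000 RPM, 2 flutes, 0.005-0.007" chip load
for chip_load in (0.005, 0.006, 0.007):
    print(f'chip load {chip_load:.3f}" -> {feed_rate_ips(18000, 2, chip_load):.1f} ips')
```

At 18,000 RPM with two flutes this gives 3.0 to 4.2 ips, matching the table.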

You can see the final product here:

20160703_180847


Satisfiability modulo theories and their relevance to cyber-security

Cybersecurity and cryptanalysis are fields filled with logic puzzles, math and numerical techniques. One of the most interesting technical areas I’ve worked in goes by the name of satisfiability modulo theories (SMT) and their associated solvers. This post provides a layman’s introduction to SMT and its applications to computer science and the modern practice of hacking: learning how to break code.

Some background theory

Satisfiability modulo theories are a type of constraint-satisfaction problem that arises in many places, from software and hardware verification to static program analysis and graph problems. They apply where logical formulas can describe a system’s states and their associated transformations. If you look under the hood of most tools used today for computer security, you will find they are based on mathematical logic as the calculus of computation. The most common constraint-satisfaction problem is propositional satisfiability (commonly called SAT), which aims to decide whether a formula composed of Boolean variables, formed using logical connectives, can be made true by choosing true/false values for its variables. In this sense, those familiar with Integer Programming will find a lot of similarities with SAT. SAT has been widely used in verification, artificial intelligence and many other areas.

As powerful as SAT problems are, what if, instead of Boolean constraints, we use arithmetic more generally to build our constraints? Often constraints are best described as linear relationships among integer or real variables. In order to understand and rigorously treat the sets involved, a background theory of the domain is combined with propositional satisfiability to arrive at satisfiability modulo theories (SMT).

The satisfiability modulo theories problem is a decision problem for logical formulas with respect to combinations of background theories expressed in classical first-order logic with equality. An SMT solver decides the satisfiability of propositionally complex formulas in theories such as arithmetic and uninterpreted functions with equality. SMT solving has numerous applications in automated theorem proving, in hardware and software verification, and in scheduling and planning problems. SMT can be thought of as a form of the constraint satisfaction problem and thus a certain formalized approach to constraint programming.

The solvers developed under SMT have proven very useful in situations where linear constraints and other types of constraints are required, with artificial intelligence and verification often presented as exemplars. An SMT solver can solve a SAT problem, but not vice-versa. SMT solvers draw on some of the most fundamental areas of computer science, as well as a century of symbolic logic. They combine the problem of Boolean satisfiability with domains (such as those studied in convex optimization and term-manipulating symbolic systems). Implementing such solvers requires grappling with decision procedures, the completeness and incompleteness of logical theories, and complexity theory.

The process of SMT solving is a procedure for finding a satisfying assignment for a quantifier-free formula $F$ with predicates over a certain background theory $T$. Alternatively, the SMT solving process can show that no such assignment exists. An assignment of all variables that satisfies these constraints is the model $M$. $M$ satisfies $F$ when $F$ evaluates to $\text{true}$ under the given background theory $T$. In this sense, $M$ entails $F$ under theory $T$, which is commonly expressed as $ M \vDash_T F$. If theory $T$ is not decidable, then the underlying SMT problem is undecidable and no solver can exist: decidability means that, given a conjunction of constraints in $T$, there exists a procedure of finitely many steps that can test the existence of a satisfying assignment for these constraints.
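To make the notation concrete, here is a tiny example using the Python bindings of the Z3 SMT solver (Z3 is my choice here; the post doesn’t commit to a particular solver). The constraints form a formula $F$ in linear integer arithmetic, and the solver either returns a model $M$ with $M \vDash_T F$ or reports that none exists:

```python
from z3 import Ints, Solver, sat

x, y = Ints('x y')                    # integer variables in linear arithmetic
s = Solver()
s.add(x > 2, y < 10, x + 2*y == 7)    # the formula F
if s.check() == sat:                  # satisfiable: a model M exists
    print(s.model())                  # e.g. [y = 0, x = 7]
```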

Ok, that is a lot of jargon. What is this good for?

SMT solvers have been used since the 1970s, albeit in very specific contexts, most commonly theorem proving (cf. ACL2 for some examples). More recently, SMT solvers have been helpful in test-case generation, model-based testing and static program analysis. To make this more concrete, let’s consider one of the best-known operations research problems: job-shop scheduling.

There are $n$ jobs to complete, each composed of $m$ tasks with different durations, one on each of $m$ machines. The start of a new task can be delayed indefinitely, but you can’t stop a task once it has started. For this problem, there are two kinds of constraints: precedence and resource constraints. Precedence specifies that one task has to happen before another, and the resource constraint specifies that no two different tasks requiring the same machine are able to execute at the same time. If you are given a total maximum time $max$ and the duration of each task, the problem is to decide whether a schedule exists in which the end time of every task is less than or equal to $max$ units of time. The duration of the $j$th task of job $i$ is $d_{i,j}$ and each task starts at $t_{i,j}$.

I’ve solved this kind of problem before with heuristics such as simulated annealing, but you can encode the solution to this problem in SMT using the theory of linear arithmetic. First, you have to encode the precedence constraint:

$$ t_{i,j+1} \geq t_{i,j} + d_{i,j} $$

This states that the start time of task $j+1$ must be greater than or equal to the start time of task $j$ plus its duration. The resource constraint ensures that jobs don’t overlap. Between job $i$ and job $i'$ this constraint says:

$$ (t_{i,j} \geq t_{i',j}+d_{i',j}) \vee (t_{i',j} \geq t_{i,j} + d_{i,j}) $$

Lastly, each start time must be non-negative, $ t_{i,1} \geq 0 $, and the end time of the last task must be less than or equal to $max$, i.e. $t_{i,m} + d_{i,m} \leq max$. Together, these constraints form a logical formula that combines logical connectives (conjunction, disjunction and negation) with atomic formulas in the form of linear arithmetic inequalities. This is the SMT formula, and the solution is a mapping from the variables $t_{i,j}$ to values that make this formula $\text{true}$.
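A minimal sketch of that encoding, using Z3’s Python API on a made-up two-job, two-machine instance (both the solver choice and the numbers are mine, not the post’s):

```python
from z3 import Int, Solver, Or, sat

# Toy instance: 2 jobs x 2 machines; d[i][j] = duration of task j of job i
d = [[2, 1],
     [3, 1]]
max_time = 8
t = [[Int(f"t_{i}_{j}") for j in range(2)] for i in range(2)]  # start times

s = Solver()
for i in range(2):
    s.add(t[i][0] >= 0, t[i][1] >= 0)            # non-negative start times
    s.add(t[i][1] >= t[i][0] + d[i][0])          # precedence within a job
    s.add(t[i][1] + d[i][1] <= max_time)         # finish by the deadline
for j in range(2):                               # tasks sharing machine j must not overlap
    s.add(Or(t[0][j] >= t[1][j] + d[1][j],
             t[1][j] >= t[0][j] + d[0][j]))

if s.check() == sat:
    print(s.model())                             # one feasible schedule
```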

So how is this relevant to software security?

Since software uses logical formulas to describe program states and transformations between them, SMT has proven very useful to analyze, verify or test programs. In theory, if we tried every possible input to a computer program, and we could observe and understand every resultant behavior, we would know with certainty all possible vulnerabilities in a software program. The challenge of using formal methods to verify (exploit) software is to accomplish this certainty in a reasonable amount of time and this generally distills down to clever ways to reduce the state space.

For example, consider dynamic symbolic execution. In computational mathematics, algebraic or symbolic computation is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. This is in contrast to scientific computing which is usually based on numerical computation with approximate floating point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that are manipulated as symbols.

The software that performs symbolic calculations is called a computer algebra system. At the beginning of computer algebra, circa 1970, when common algorithms were translated into computer code, they turned out to be highly inefficient. This motivated the application of classical algebra in order to make it effective and to discover efficient algorithms. For example, Euclid’s algorithm had been known for centuries to compute polynomial greatest common divisors, but directly coding this algorithm turned out to be inefficient for polynomials over infinite fields.

Computer algebra is widely used to design the formulas that are used in numerical programs. It is also used for complete scientific computations, when purely numerical methods fail, like in public key cryptography or for some classes of non-linear problems.

To understand some of the challenges of symbolic computation, consider basic associative operations like addition and multiplication. The standard way to deal with associativity is to consider that addition and multiplication have an arbitrary number of operands, that is, that $a + b + c$ is represented as $"+"(a, b, c)$. Thus $a + (b + c)$ and $(a + b) + c$ are both simplified to $"+"(a, b, c)$. However, what about subtraction, say $(a − b + c)$? The simplest solution is to rewrite systematically, so $(a + (-1)\cdot b + c)$. In other words, in the internal representation of the expressions, there is no subtraction nor division nor unary minus, outside the representation of the numbers. Newspeak for mathematical operations!

A number of tools used in industry are based on symbolic execution (cf. CUTE, KLEE, DART, etc.). What these programs have in common is that they collect explored program paths as formulas and use solvers to identify new test inputs with the potential to guide execution into new branches. SMT applies well to this problem because, instead of relying on the random walks of fuzz testing, “white-box” fuzzing combines symbolic execution with conventional fuzz testing to expose the interactions of the system under test. Of course, directed search can be much more efficient than random search.

However, as helpful as white-box testing is in finding subtle security-critical bugs, it doesn’t guarantee that programs are free of all the possible errors. This is where model checking helps out. Model checking seeks to automatically check for the absence of categories of errors. The fundamental idea is to explore all possible executions using a finite and sufficiently small abstraction of the program state space. I often think of this as pruning away the state spaces that don’t matter.

For example, consider the statement $a = a + 1$ and a Boolean variable $b$ that tracks the truth of the predicate $a == a_{old}$. The abstraction is essentially a relation between the current and new values of $b$. SMT solvers are used to compute the relation by proving theorems, as in:

Proving the validity of $$ a == a_{old} \rightarrow a+1 \neq a_{old} $$ is equivalent to checking the unsatisfiability of the negation, $ a == a_{old} \wedge a + 1 == a_{old} $.

The theorem says that if the current value of $b$ is $\text{true}$, then after executing the statement $ a = a + 1$ the value of $b$ will be $\text{false}$. Now, if $b$ is $\text{false}$, then neither of these conjectures is valid:

$$
a \neq a_{old} \rightarrow a + 1 == a_{old}
$$
or
$$
a \neq a_{old} \rightarrow a + 1 \neq a_{old}
$$
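These three proof attempts are easy to reproduce with an SMT solver. Here is a sketch using Z3’s Python bindings, where validity is checked by asserting the negation and looking for unsatisfiability (the helper name `valid` is mine):

```python
from z3 import Int, Implies, Not, Solver, unsat

a, a_old = Int('a'), Int('a_old')

def valid(conjecture):
    """A formula is valid iff its negation is unsatisfiable."""
    s = Solver()
    s.add(Not(conjecture))
    return s.check() == unsat

print(valid(Implies(a == a_old, a + 1 != a_old)))  # True: if b held, it is false afterwards
print(valid(Implies(a != a_old, a + 1 == a_old)))  # False
print(valid(Implies(a != a_old, a + 1 != a_old)))  # False: nothing known when b was false
```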

In practice, each SMT solver will produce a model for the negation of the conjecture. In this sense, the model is a counter-example of the conjecture, and when the current value of $b$ is false, nothing can be said about its value after the execution of the statement. The end result of these three proof attempts is then used to replace the statement $a = a + 1$ by:

 if b then
   b = false;
 else
   b = *;
 end

A finite state model checker can now be used on the Boolean program and will establish that $b$ is always $\text{true}$ when control reaches this statement, verifying that calls to lock() are balanced with calls to unlock() in the original program. The original loop:

do {
 lock ();
 old_count = count;
 request = GetNextRequest();
 if (request != NULL) {
  unlock();
  ProcessRequest(request);
  count = count + 1;
 }
}
while (old_count != count);
unlock();

becomes:

do {
 lock ();
 b = true;
 request = GetNextRequest();
 if (request != NULL) {
   unlock();
   ProcessRequest(request);
   if (b) b = false; else b = *;
 }
}
while (!b);
unlock();

Application to static analysis

Static analysis tools work the same way as white-box fuzzing or directed search while ensuring the feasibility of the program by, in turn, checking the feasibility of program paths. However, static analysis never requires actually running the program and can therefore analyze software libraries and utilities without instantiating all the details of their implementation. SMT applies to static analysis because solvers accurately capture the semantics of most basic operations used by mainstream programming languages. While this fits nicely for functional programming languages, it isn’t always a perfect fit for languages such as Java, C#, and C/C++, which all use fixed-width bitvectors as the representation for values of type int. In this case, the accurate theory for int is two’s-complement modular arithmetic. Assuming a bit-width of 32 bits, the maximal positive 32-bit integer is $2^{31}-1$, and the smallest negative 32-bit integer is $-2^{31}$. If both low and high are $2^{30}$, low + high evaluates to $2^{31}$, which is treated as the negative number $-2^{31}$. The presumed assertion 0 ≤ mid < high therefore does not hold. Fortunately, several modern SMT solvers support the theory of “bit-vectors,” accurately capturing the semantics of modular arithmetic.

Let’s look at an example from a binary search algorithm:

int binary_search(int[] arr, int low, int high, int key) {
  assert (low > high || 0 <= low < high);
  while (low <= high) {
    // Find middle value
    int mid = (low + high)/2;
    assert (0 <= mid < high);
    int val = arr[mid];
    // Refine range
    if (key == val) return mid;
    if (val < key) low = mid+1;
    else high = mid-1;
  }
  return -1;
}
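A bit-vector solver exposes the overflow directly. Here is a sketch using Z3’s Python API (my own illustration, not part of the original example); I use an arithmetic right shift in place of the division by two to keep the bit-vector semantics simple:

```python
from z3 import BitVecs, Solver, Not, And, sat

low, high = BitVecs('low high', 32)       # 32-bit two's-complement integers

s = Solver()
s.add(low >= 0, high >= 0, low < high)    # the intended precondition (signed comparisons)
mid = (low + high) >> 1                   # the buggy midpoint; >> is an arithmetic shift in Z3
s.add(Not(And(mid >= 0, mid < high)))     # ask for an input that breaks the assertion

if s.check() == sat:
    print(s.model())                      # e.g. low and high near 2**30, where low + high wraps negative
```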

 

Summary

SMT solvers combine SAT reasoning with specialized theory solvers either to find a feasible solution to a set of constraints or to prove that no such solution exists. Linear programming (LP) solvers are designed to find feasible solutions that are optimal with respect to some optimization function. Both are powerful tools that can be incredibly helpful to solve hard and practical problems in computer science.

One of the applications I follow closely is symbolic-execution based analysis and testing. Perhaps the most famous commercial tool that uses dynamic symbolic execution (aka concolic testing) is the SAGE tool from Microsoft. The KLEE and S2E tools (both of which are open-source tools, and use the STP constraint solver) are widely used in many companies including HP Fortify, NVIDIA, and IBM. Increasingly these technologies are being used by many security companies and hackers alike to find security vulnerabilities.

 

 


Basement Framing with the Shopbot

Framing around bulkheads is painful. It is hard to get everything straight and aligned. I found the ShopBot to be very helpful. There were three problems I was trying to solve: (1) getting multiple corners straight across 30 feet, (2) having nearly no time, and (3) basic pine framing would sag over a 28″ run.

In fairness, the cuts did take a lot of time (about 2.5 hours of cutting), but I could do other work while the ShopBot milled out the pieces. I also had several hours of prep and installation, so I’m definitely slower than a skilled carpenter would be, but probably came out ahead by using this solution. Plus, the result is definitely straighter and more accurate. I especially need this, because my lack of skill means that I don’t have the bag of tricks available to deal with non-straight surfaces.

First, Autodesk Revit makes drawing ducts easy as part of an overall project model. The problem was that, the way the ducts were situated, the team working on the basement couldn’t simply make a frame that went all the way to the wall, because of an awkwardly placed door.

I was able to make a quick drawing in the model and print out frames on the shopbot. They only had to be aligned vertically which was easy to do with the help of a laser level.

second-ducts-v4

These were easy to cut out while I also had to make some parts for my daughter’s school project.
20160613_210346


DIY Roulette Wheel and Probability for Kids

My daughter had to make a “probability game” for her fifth grade math class. She has been learning javascript and digital design so we worked together to design and build a roulette wheel for her class.

First, she drew out a series of sketches and we talked them over. She wanted a 2-foot-diameter wheel in 0.5-inch-thick plywood, but after some quick calculations we decided on a $1/4$ inch thick wheel with an 18″ diameter. I had to talk her into adding pegs and a skateboard ball bearing from Amazon for $2. The inner diameter is $1/4$ inch, so I also bought a package of dowels for $5 to make the pegs. I also bought a 1/2 sheet of plywood (of which I used about 1/3) and some hardware from Home Depot.

She wanted 10 sections with combinations of the following outcomes: Small, Large and Tiny prizes as well as two event outcomes: Spin Again and Lose a Spin. Each student would have at most three turns. We had the following frequencies (out of 10):

Outcome Frequency
Small 3
Large 2
Tiny 1
Lose a spin 3
Spin Again 1

This led to a ~~fun~~ (frustrating) discussion of Monte Carlo code, conditional probabilities and cumulative probabilities. Good job, teacher! We got to answer questions like:

  • What is the probability of getting a large prize in a game (three spins)?
  • What is the probability you get no prize?
  • What is the expected number of spins?

She really threw the math for a loop with the Spin Again and Lose a Spin options. We had to talk about systems with a random number of trials. My favorite part was exposing her to true randomness. She was convinced the wheel was biased because she got three larges in a row. I had to teach her that true random behavior was more unbalanced than her intuition might lead her to believe.

In order to understand a problem like this, it is all about the state space. There are four possible outcomes: three different prizes or no prize. To explain the effect the spin skips have on the outcomes, I had to make the diagram below. Each column represents one of the three spins, each circle represents a terminal outcome and each rectangle represents a result of a spin.

Drawing1

From this, we can compute the probabilities for each of the 17 outcomes:

#    Spin 1   Spin 2   Spin 3   Prob
1    $P_L$                      0.200
2    $P_S$                      0.300
3    $P_T$                      0.100
4    L        $P_L$             0.060
5    L        $P_S$             0.090
6    L        $P_T$             0.030
7    L        L                 0.090
8    L        S                 0.030
9    S        $P_L$             0.020
10   S        $P_S$             0.030
11   S        $P_T$             0.010
12   S        L                 0.030
13   S        S        $P_L$    0.002
14   S        S        $P_S$    0.003
15   S        S        $P_T$    0.001
16   S        S        L        0.003
17   S        S        S        0.001

I would love to find a more elegant solution, but the strange movements of the state-space left me with little structure I could exploit.

And we can add these to get the event probabilities and (her homework) to generate the expected values of prizes she needs to bring when 20 students are going to play the game:

Outcome   Probability   Expected Value   Ceiling
$P_L$     28%           5.64             6
$P_S$     42%           8.44             9
$P_T$     14%           2.82             3
NP        16%           3.10             4

We can also get the probabilities for the number of spins:

Count Probability
One spin 0.600
Two 0.390
Three 0.010

Simulation

When the probability gets hard . . . simulate, and let the law of large numbers work this out.
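Here is a minimal Monte Carlo sketch under my reading of the game tree above (a prize ends the game, “Lose a Spin” forfeits the next turn, “Spin Again” simply continues within the three-turn limit); this is a stand-in, not her original code:

```python
import random
from collections import Counter

# Wheel: 10 slots -> Small 3, Large 2, Tiny 1, Lose a spin 3, Spin again 1
WHEEL = ['Small'] * 3 + ['Large'] * 2 + ['Tiny'] + ['Lose'] * 3 + ['Again']

def play(max_turns=3):
    """One game: a prize ends it, 'Lose' forfeits the next turn, 'Again' just continues."""
    turn = 1
    while turn <= max_turns:
        spin = random.choice(WHEEL)
        if spin in ('Small', 'Large', 'Tiny'):
            return spin                      # prize won, game over
        turn += 2 if spin == 'Lose' else 1   # 'Lose' burns this turn and the next
    return 'No prize'

trials = 1_000_000
counts = Counter(play() for _ in range(trials))
for outcome in ('Large', 'Small', 'Tiny', 'No prize'):
    print(outcome, round(counts[outcome] / trials, 3))
```

With a million trials it should land close to the 28/42/14/15 percent split shown in the tables.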

This demonstrated the probability of getting a prize was:

Outcome   Probability   Expected Value   Ceiling
$P_L$     28%           5.64             6
$P_S$     42%           8.46             9
$P_T$     14%           2.82             3
NP        15%           3.08             4

Design

So I took her designs and helped her write the following code to draw the wheel in Adobe Illustrator. This didn’t take long to write, because I had written similar code to make a series of clocks for my 5 year old to teach him how to tell time. The code was important to auto-generate the designs, because we must have tried 10 different iterations of the game.

Which produced this Adobe Illustrator file that I could laser-cut:

spinner2
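The Illustrator script itself isn’t reproduced above, but the idea is easy to sketch. Here is a rough Python stand-in (mine, not the script she wrote) that lays out a 10-sector wheel with the same outcome frequencies and writes it to an SVG file; the labels, radius and file name are arbitrary:

```python
import math

LABELS = ['Small', 'Large', 'Small', 'Lose a spin', 'Tiny',
          'Lose a spin', 'Small', 'Spin Again', 'Large', 'Lose a spin']
R, CX, CY = 200, 250, 250            # wheel radius and center, in SVG units

def sector_path(i, n=10):
    """SVG path for sector i: center -> rim -> 36-degree arc -> back to center."""
    a0, a1 = 2 * math.pi * i / n, 2 * math.pi * (i + 1) / n
    x0, y0 = CX + R * math.cos(a0), CY + R * math.sin(a0)
    x1, y1 = CX + R * math.cos(a1), CY + R * math.sin(a1)
    return f'M {CX},{CY} L {x0:.1f},{y0:.1f} A {R},{R} 0 0 1 {x1:.1f},{y1:.1f} Z'

parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="500" height="500">']
for i, label in enumerate(LABELS):
    parts.append(f'<path d="{sector_path(i)}" fill="none" stroke="black"/>')
    mid = 2 * math.pi * (i + 0.5) / 10
    tx, ty = CX + 0.6 * R * math.cos(mid), CY + 0.6 * R * math.sin(mid)
    parts.append(f'<text x="{tx:.1f}" y="{ty:.1f}" font-size="12" text-anchor="middle">{label}</text>')
parts.append('</svg>')

with open('spinner.svg', 'w') as f:
    f.write('\n'.join(parts))
```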

From here, I designed a basic structure in Fusion 360. I cut the base and frame from $1/2$ inch birch plywood with a $1/4$ inch downcut endmill on a ShopBot.

A render:

the wheel v19

And a design:

2016-06-09 (1)

If you want the Fusion file, request it in the comments and I’ll post it.

Please let me know if you have any questions and I’ll share my design. Next up? We are going to print a new wheel to decide who washes the dishes! Kids get double the frequency.


Weaponizing the Weather

“Intervention in atmospheric and climatic matters . . . will unfold on a scale difficult to imagine at present. . . . this will merge each nation’s affairs with those of every other, more thoroughly than the threat of a nuclear or any other war would have done.” — J. von Neumann

Disclaimer: This is just me exploring a topic that I’m generally clueless on, explicitly because I’m clueless on it. My views and the research discussed here have nothing to do with my work for the DoD.

Why do we care?

Attempting to control the weather is older than science itself. While it is common today to perform cloud seeding to increase rain or snow, weather modification has the potential to prevent damaging weather from occurring, or to provoke damaging weather as a tactic of military or economic warfare. This scares all of us, including the UN, which banned weather modification for the purposes of warfare in response to US actions in Vietnam to induce rain and extend the East Asian monsoon season (see Operation Popeye). Unfortunately, this hasn’t stopped Russia and China from pursuing active weather modification programs, with China’s generally regarded as the largest and most active. Russia is famous for sophisticated cloud seeding in 1986 to prevent radioactive rain from the Chernobyl reactor accident from reaching Moscow; see China Leads the Weather Control Race and China plans to halt rain for Olympics to understand the extent of China’s efforts in this area.

The Chinese have been tinkering with the weather since the late 1950s, trying to bring rain to the desert terrain of the northern provinces. Their bureau of weather modification was established in the 1980s and is now believed to be the largest in the world. It has a reserve army of 37,000 people, which might sound like a lot until we consider the scale of an average storm. The numbers that describe weather are big: at any instant there are approximately 2,000 thunderstorms in progress, and every day there are 45,000 thunderstorms, which contain some combination of heavy rain, hail, microbursts, wind shear, and lightning. The energy involved is staggering: a tropical storm can have an energy equal to 10,000 one-megaton hydrogen bombs. A single cloud contains about a million pounds of water, so a mid-size storm would contain about 3 billion pounds of water. Anyone who ever figures out how to control all this mass and energy would make an excellent Bond villain.

The US government has conducted research in weather modification as well. In 1970, then ARPA Director Stephen J. Lukasik told the Senate Appropriations Committee: “Since it now appears highly probable that major world powers have the ability to create modifications of climate that might be seriously detrimental to the security of this country, Nile Blue [a computer simulation] was established in FY 70 to achieve a US capability to (1) evaluate all consequences of a variety of possible actions … (2) detect trends in the global circulation which foretell changes … and (3) determine if possible, means to counter potentially deleterious climatic changes … What this means is learning how much you have to tickle the atmosphere to perturb the earth’s climate.” Sounds like a reasonable program for the time.

Military applications are easy to think up. If you could create a localized cloud layer, you could degrade the performance of ground-based and airborne IRSTs, particularly in the long-wave. (Cloud droplet mean diameter is typically 10 to 15 microns.) You could send hurricanes toward your adversary or increase the impact of an all-weather advantage. (Sweet.) You could also pursue more subtle effects, such as conditioning the atmosphere to favor your own communications technology or degrading the environment into a state less favorable for an adversary’s communications or sensors. Another key advantage would be to make the environment unpredictable. Future ground-based sensing and fusing architectures, such as multi-static and passive radars, rely on a correctly characterized environment that could be upset by both intense and unpredictable weather.

Aside from military uses, climate change (both the perception and the fact of it) may drive some nations to seek engineered solutions. Commercial interests would welcome the chance to make money cleaning up the mess they made money making. And how are we going to sort out and regulate that without options and a deep understanding? Many of these proposals could have dual civilian and military purposes, since they originate in Cold War technologies. As the science advances, will we be able to prevent their renewed use as weapons? Could the future hold climatological conflicts, just as cyber warfare has been used to presage invasion, as recently seen between Ukraine and Russia? If so, climate influence would be a way for a large state to exert influence on smaller states.

Considering all of this, it would be prudent to have a national security policy that accounts for weather modification and manipulation. Solar radiation management, called albedo modification, is considered to be a potential option for addressing climate change and one that may get increased attention. There are many research opportunities that would allow the scientific community to learn more about the risks and benefits of albedo modification, knowledge which could better inform societal decisions without imposing the risks associated with large-scale deployment. According to Carbon Dioxide Removal and Reliable Sequestration (2015) by the National Academy of Sciences, there are several hypothetical, but plausible, scenarios under which this information would be useful. They claim (quoting them verbatim):

  1. If, despite mitigation and adaptation, the impacts of climate change still become intolerable (e.g., massive crop failures throughout the tropics), society would face very tough choices regarding whether and how to deploy albedo modification until such time as mitigation, carbon dioxide removal, and adaptation actions could significantly reduce the impacts of climate change.
  2. The international community might consider a gradual phase-in of albedo modification to a level expected to create a detectable modification of Earth’s climate, as a large-scale field trial aimed at gaining experience with albedo modification in case it needs to be scaled up in response to a climate emergency. This might be considered as part of a portfolio of actions to reduce the risks of climate change.
  3. If an unsanctioned act of albedo modification were to occur, scientific research would be needed to understand how best to detect and quantify the act and its consequences and impacts.

What has been done in the past?

Weather modification was limited to magic and prayers until the 18th century when hail cannons were fired into the air to break up storms. There is still an industrial base today if you would like to have your own hail cannon. Just don’t move in next door if you plan on practicing.

(Not so useful) Hail Cannons

Despite their use on a large scale, there is no evidence in favor of the effectiveness of these devices. A 2006 review by Jon Wieringa and Iwan Holleman in the journal Meteorologische Zeitschrift summarized a variety of negative and inconclusive scientific measurements, concluding that “the use of cannons or explosive rockets is waste of money and effort”. In the 1950s to 1960s, Wilhelm Reich performed cloudbusting experiments, the results of which are controversial and not widely accepted by mainstream science.

However, during the Cold War the US government committed to an ambitious experimental program named Project Stormfury for nearly 20 years (1962 to 1983). The DoD and NOAA attempted to weaken tropical cyclones by flying aircraft into them and seeding them with silver iodide. The proposed modification technique involved artificial stimulation of convection outside the eye wall through seeding with silver iodide. The artificially invigorated convection, it was argued, would compete with the convection in the original eye wall, lead to reformation of the eye wall at larger radius, and thus produce a decrease in the maximum wind. Since a hurricane’s destructive potential increases rapidly as its maximum wind becomes stronger, a reduction as small as 10% would have been worthwhile. Modification was attempted in four hurricanes on eight different days. On four of these days, the winds decreased by between 10 and 30%. The lack of response on the other days was interpreted to be the result of faulty execution of the experiment or poorly selected subjects.

These promising results have, however, come into question because recent observations of unmodified hurricanes indicate: 1) that cloud seeding has little prospect of success because hurricanes contain too much natural ice and too little supercooled water, and 2) that the positive results inferred from the seeding experiments in the 1960s probably stemmed from an inability to discriminate between the expected effect of human intervention and the natural behavior of hurricanes. The legacy of this program is today’s large global infrastructure that routinely flies to inject silver iodide to cause localized rain, with over 40 countries actively seeding clouds to control rainfall. Unfortunately, we are still pretty much helpless in the face of a large hurricane.

That doesn’t mean the Chinese aren’t trying. In 2008, China assigned 30 airplanes, 4,000 rocket launchers, and 7,000 anti-aircraft guns in an attempt to stop rain from disrupting the 2008 Olympics, shooting various chemicals into the air at any threatening clouds in the hope of shrinking raindrops before they reached the stadium. Due to the difficulty of conducting controlled experiments at this scale, there is no way to know if this was effective. (Yes, this is the country that routinely bulldozes entire mountain ranges to make economic regions.)

But the Chinese aren’t the only ones. In January, 2011, several newspapers and magazines, including the UK’s Sunday Times and Arabian Business, reported that scientists backed by Abu Dhabi had created over 50 artificial rainstorms between July and August 2010 near Al Ain. The artificial rainstorms were said to have sometimes caused hail, gales and thunderstorms, baffling local residents. The scientists reportedly used ionizers to create the rainstorms, and although the results are disputed, the large number of times it is recorded to have rained right after the ionizers were switched on during a usually dry season is encouraging to those who support the experiment.

While we would have to understand the technology very well first and have a good risk mitigation strategy, I think there are several promising technical areas that merit further research.

What are the technical approaches?

So while past experiments are hard to learn much from and far from providing the buttons to control the weather, there are some promising technologies I’m going to be watching. I was able to find five distinct technical approaches:

  1. Altering the available solar energy by introducing materials to absorb or reflect sunshine
  2. Adding heat to the atmosphere by artificial means from the surface
  3. Altering air motion by artificial means
  4. Influencing the humidity by increasing or retarding evaporation
  5. Changing the processes by which clouds form and causing precipitation by using chemicals or inserting additional water into the clouds

In these five areas, I see several technical applications that are both interesting and have some degree of potential utility.

Modeling

Below is the 23-year accuracy of the U.S. GFS, the European ECMWF, the U.K. Government’s UKMET, and a model called CDAS, which has never been modified and so serves as a “constant.” As you would expect, model accuracy is gradually increasing (1.0 is 100% accurate). Weather models are limited by computation and the scale of input data: for a fixed amount of computing power, the smaller the grid (and the more accurate the prediction), the shorter the time horizon for predictions. As more sensors are added and fused together, accuracy will keep improving.

Weather prediction requires satellite and radar imagery at very small scales. Current systems achieve an effective observation spacing of around 5 km. Radar data are only available out to a fairly short distance from the coast, and satellite wind measurements can only resolve detail on about a 25 km scale. Over land, radar data can be used to help predict small-scale, short-lived detail.

Weather Model Accuracy over Time

Modeling is important because understanding is necessary for control. With increased accuracy, we can find weather’s leverage points and feedback loops, which would let us apply the least amount of energy where it matters most; interacting with weather on a macro scale is both cost-prohibitive and extremely complex.

Ionospheric Augmentation

Over-the-horizon radars (commonly called OTHR) can see targets hundreds of miles away because they aren’t limited to line of sight like conventional microwave radars. They accomplish this by bouncing signals off the ionosphere, but this requires a sufficiently dense ionosphere, which isn’t always there. The ionosphere is ionized by solar radiation, which is strongest when the earth is tilted toward the sun in summer. To compensate for this, artificial ionospheric mirrors could bounce HF signals more consistently and precisely over broader frequencies. Tests have shown that these mirrors could theoretically reflect radio waves with frequencies up to 2 GHz, nearly two orders of magnitude higher than waves reflected by the natural ionosphere. This could have significant military applications, such as low frequency (LF) communications, HF ducted communications, and increased OTHR performance.

This concept has been described in detail by Paul A. Kossey, et al. in a paper entitled “Artificial Ionospheric Mirrors.” The authors describe how one could precisely control the location and height of the region of artificially produced ionization using crossed microwave beams, which produce atmospheric breakdown. The implications of such control are enormous: one would no longer be subject to the vagaries of the natural ionosphere but would instead have direct control of the propagation environment. Ideally, these artificial mirrors could be rapidly created and then would be maintained only for a brief operational period.

Local Introduction of Clouds

There are several methods for seeding clouds. The best-known dissipation technique for cold fog is to seed it from the air with agents that promote the growth of ice crystals. These include dropping pyrotechnics on top of existing clouds, penetrating clouds with pyrotechnics and liquid generators, shooting rockets into clouds, and working from ground-based generators. Silver iodide is frequently used to cause precipitation, and effects usually appear in about thirty minutes. Limited success has been noted in fog dispersal and improving local visibility through the introduction of hygroscopic substances.

However, all of these techniques seem like a very inexact science, and thirty minutes remains far from the timescale needed for clouds on demand. From my brief look at it, we are just poking around at cloud formation. For the local introduction of clouds to be useful in military applications, there has to be a suite of techniques robust to changing weather. More research might uncover chain reactions that trigger massive cloud formations, and real research could help this area emerge from pseudo-science, of which there is plenty. This Atlantic article titled Dr. Wilhelm Reich’s Orgasm-Powered Cloudbuster is pretty amusing and pretty indicative of the genre.

A cloud gun that taps into an “omnipresent libidinal life force responsible for gravity, weather patterns, emotions, and health”

Fog Removal

OK, so no one can make clouds appear on demand in a wide range of environments, but is the technology any better when it comes to removing fog? The best-known dissipation technique is heating, because a small temperature increase is usually sufficient to evaporate fog. Since heating over a very wide area usually isn’t practical, the next most effective technique is hygroscopic seeding, which uses agents that absorb water vapor. It is most effective when done from the air but can also be done from the ground. Optimal results require advance information on fog depth, liquid water content, and wind.

In the 20th century several methods were proposed to dissipate fog. One is to burn fuel along the runway to heat the fog layer and evaporate the droplets; it was used in Great Britain during World War II to allow British bombers returning from Germany to land safely in fog. Helicopters can dissipate fog by flying slowly across the top surface and mixing warm, dry air into it: the downwash of the rotors forces air from above into the fog, where it mixes, lowering the humidity and causing the fog droplets to evaporate. Tests were carried out in Florida and Virginia, and in both places cleared areas were produced in the helicopter wakes. Seeding with polyelectrolytes puts electric charges on drops and has been shown to cause drops to coalesce and fall out. Other techniques that have been tried include high-frequency (ultrasonic) vibrations, heating with lasers, and seeding with carbon black to alter the radiative properties [1].

However, experiments have confirmed that large-scale fog removal would require exceeding the power density exposure limit of $100 \frac{\text{watt}}{\text{m}^2}$ and would be very expensive. That doesn’t mean a smaller-scale capability isn’t possible: field experiments with lasers have demonstrated the ability to dissipate warm fog at an airfield with zero visibility, and generating $1 \frac{\text{watt}}{\text{cm}^2}$, approximately the US power density exposure limit, raised visibility to a quarter of a mile in 20 seconds. Most efforts have focused on increasing the runway visibility range at airports, since airlines lose millions of dollars every year to fog on the runway. This thesis examines the issue in depth.

Emerging Enabling Technologies

In looking at this topic, I was able to find several interesting technologies that may develop and make big contributions to weather research.

Carbon Dust

Just as a black tar roof easily absorbs solar energy and subsequently radiates heat during a sunny day, carbon black also readily absorbs solar energy. When dispersed in microscopic form in the air over a large body of water, the carbon becomes hot and heats the surrounding air, thereby increasing the amount of evaporation from the water below. As the surrounding air heats up, parcels of air will rise and the water vapor contained in the rising air parcel will eventually condense to form clouds. Over time the cloud droplets increase in size as more and more water vapor condenses, and eventually they become too large and heavy to stay suspended and will fall as rain. This technology has the potential to trigger localized flooding and bog down troops and their equipment.

Nanotech

Want to think outside the box? Smart materials based on nanotechnology are currently being developed with processing capability. They could adjust their size to optimal dimensions for a given fog seeding situation and even make continual adjustments. They might also enhance their dispersal qualities by adjusting their buoyancy, by communicating with each other, and by steering themselves within the fog. If successful, they will be able to provide immediate and continuous effectiveness feedback by integrating with a larger sensor network and could also change their temperature and polarity to improve their seeding effects.

If we combine this with high-fidelity models, things get very interesting. If we can model and understand the leverage points of a weather system, nano-clouds may be able to have a dramatic effect. Nanotechnology also offers possibilities for creating simulated weather: a cloud, or several clouds, of microscopic computer particles, all communicating with each other and with a larger control system, could mimic the signatures of specific weather patterns if tailored to the parameters of weather models.

High power lasers

The development of directed radiant energy technologies, such as microwaves and lasers, could provide new possibilities. Everyone should hate firing rockets and chemicals into the atmosphere. The advent of ultrashort laser pulses and the discovery of self-guided ionized filaments (see Braun et al., 1985) might provide the opportunity. Jean-Pierre Wolf has used ultrashort laser pulses to create lightning and cue cloud formation. Prof. Wolf says, “We did it on a laboratory scale, we can already create clouds, but not on a macroscopic scale, so you don’t see a big cloud coming out because the laser is not powerful enough and because of a lot of technical parameters that we can’t yet control,” from this CNN article.

What now?

So we have all the elements of a scientific discipline and could use a national strategy in this area that covers the ethics, policy, technology, and military employment doctrine. The military and civilian communities already invest heavily in sensors and modeling of weather effects. These should be coupled with feasible excitation mechanisms to create a tight innovation loop. Again, this area is sensitive and politically charged, but there is a clear need to pull together sensors, processing capability, and excitation mechanisms to ensure we have the right responses and capabilities. With such a dubious and inconclusive past, is there a potential future for weather modification? I think we have a responsibility to pursue knowledge even in areas where the ethical boundaries are not well established. Ignorance is never a good strategy. Just because we might open Pandora’s box doesn’t mean that a less morally responsible nation or group won’t get there first. We can always abstain from learning a new technology, but if we are caught by surprise, we won’t have the knowledge to develop a good counter-strategy.

References

  1. http://csat.au.af.mil/2025/volume3/vol3ch15.pdf
  2. http://www.wired.com/2009/12/military-science-hack-stormy-skies-to-lord-over-lightning/
  3. Prospects for Weather Modification

  1. Dumbai, M.A., et al. “Organic heat-transfer agents in chemical industry.” Khimicheskaya Promyshlennost 1 (1990): 10-15.

Kids Lego table: Case study in Automation for Design


Motivation

I had to upgrade the Lego table I made when my kids were much smaller. It needed to be higher and include storage options. Since I’m short on time, I used several existing automation tools to both teach my daughter the power of programming and explore our decision space. The goals were to stay low-cost and make the table as functional as possible in the shortest time possible.

Lauren and I had fun drawing the new design in SketchUp. I then went to the Arlington TechShop and built the frame easily enough from a set of 2x4s. To keep things low-cost and quick, we decided to use the IKEA TROFAST storage bins. We were inspired by lots of designs online, such as this one:

lego-table-example

However, the table I designed was much bigger and built with simple right angles and a nice dado angle bracket to hold the legs on.

table_with_bracket

The hard part was figuring out the right arrangement of bins underneath the table. Since my background is in optimization, I thought about setting up a two-dimensional knapsack problem, but decided on brute-force enumeration since the state space was really small. I built two scripts: one in Python to enumerate the state space and sort the results, and one in JavaScript, or ExtendScript, to automate Adobe Illustrator and give me a good way to visually consider the options. (ExtendScript just looks like an old, ES3, version of JavaScript to me.)

So what are the options?

There are two TROFAST bins I found online. One costs \$3 and the other \$2. Sweet. You can see their dimensions below.

options

They are both the same height, so we just need to work out the row. We can orient each TROFAST bin on its short or long dimension, so we have four different options across the two bins:

Bin     Short Side (cm)  Long Side (cm)
Orange  20               30
Green   30               42

First, Lauren made a set of scale drawings of the designs she liked, which let us think about options. Her top-left drawing ended up being our final design.

lauren designs

I liked her designs, but it got me thinking: what would all the feasible designs look like? We decided to tackle that question, since she is learning JavaScript.

Automation

If we ignore depth and height, we have only three widths $[20, 30, 42]$ plus the null option of $0$ length. With these lengths we can find the maximum number of bins that fit in the available width of $112.4\ \text{cm}$. Projects like this always have me wondering how best to combine automation with intuition. I’m skeptical of technology and aware that it can be a distraction and inhibit intuition. It would have been fun to cut out the options at scale or just to make sketches, and we ended up doing those as well. Because I’m a recreational programmer, it was fairly straightforward to enumerate and explore feasible options, and fun to show my daughter some programming concepts.

$$ \left\lfloor \frac{112.4}{20} \right\rfloor = 5 $$

So there are $4^5$ or $1{,}024$ total options from the Cartesian product. Brute-force enumeration over a set this small is cheap, and fortunately we have $\text{itertools.product}$ in Python, so we can get all our possible options easily in one command:

itertools.product([0,20,30,42], repeat=5)

and we can restrict the results to feasible combinations, and even to solutions that don’t waste more than 15 cm. To glue Python and Illustrator together, I use JSON to store the data, which I can then open in Illustrator ExtendScript to print out the feasible results.
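
For reference, here is a minimal sketch of what the Python half of that glue could look like. The 15 cm waste threshold comes from the paragraph above; the file name layouts.json and the de-duplication of reorderings are my own illustrative choices, not necessarily what we did.

import itertools
import json

TABLE_WIDTH = 112.4        # interior width of the table in cm
MAX_WASTE = 15.0           # discard layouts that leave more than 15 cm unused
WIDTHS = [0, 20, 30, 42]   # empty slot, orange short side, orange long / green short, green long

feasible = set()
for combo in itertools.product(WIDTHS, repeat=5):
    used = sum(combo)
    if used <= TABLE_WIDTH and TABLE_WIDTH - used <= MAX_WASTE:
        # treat layouts that differ only by order as the same option
        feasible.add(tuple(sorted(combo, reverse=True)))

# sort the snuggest layouts first, then hand them to the Illustrator script as JSON
layouts = sorted(feasible, key=lambda c: TABLE_WIDTH - sum(c))
with open("layouts.json", "w") as f:
    json.dump([list(c) for c in layouts], f, indent=2)

print(len(layouts), "feasible layouts")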

results

Later, I added some colors for clarity and picked the two options I liked:

options

These both minimized the number of bin styles, were symmetric, and used the space well. I took these designs forward into the final design. Now to build it.

final_design

Real Math

But wait: brute-force enumeration? Sorry, yes, I didn’t have much time when we did this, but there are much better ways to do it. Here are two approaches:

Generating Functions

If your options are 20, 30, and 40 (note the 42 cm bin is treated as 40 here), then what you do is compute the coefficients of the infinite series

$$(1 + x^{20} + x^{40} + x^{60} + …)(1 + x^{30} + x^{60} + x^{90} + …)(1 + x^{40} + x^{80} + x^{120} + …)$$

I always find it amazing that polynomials happen to have the right structure for the kind of enumeration we want to do: the powers of x keep track of our length requirement, and the coefficients count the number of ways to get a given length. When we multiply out the product above we get

$$1 + x^{20} + x^{30} + 2 x^{40} + x^{50} + 3 x^{60} + 2 x^{70} + 4 x^{80} + 3 x^{90} + 5 x^{100} + …$$

This polynomial lays out the answers we want “on a clothesline”. E.g., the last term tells us there are 5 configurations with length exactly 100. If we add up the coefficients above (or just plug in “x = 1”) we have 23 configurations with length less than 110.
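
As a quick sanity check on those coefficients, here is a short Python sketch that computes the same numbers with the standard coin-change recurrence instead of multiplying out the series by hand (keeping the 20/30/40 simplification used in this section):

# coeff[n] is the coefficient of x^n in the product of series above,
# i.e. the number of unordered bin configurations with total length n.
MAX_LEN = 110
PARTS = (20, 30, 40)

coeff = [0] * (MAX_LEN + 1)
coeff[0] = 1                      # the empty configuration
for part in PARTS:                # one pass per factor 1/(1 - x^part)
    for n in range(part, MAX_LEN + 1):
        coeff[n] += coeff[n - part]

print(coeff[100])        # 5 configurations of length exactly 100
print(sum(coeff[:101]))  # 23 configurations of length at most 100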

If you also want to know what the configurations are, then you can put in labels: say $v$, $t$, and $f$ for twenty, thirty, and forty, respectively. A compact way to write $1 + x^{20} + x^{40} + x^{60} + \cdots$ is $1/(1 - x^{20})$. The labelled version is $1/(1 - v x^{20})$. Okay, so now we compute

$$1/((1 – v x^{20})(1 – t x^{30})(1 – f x^{40}))$$

truncating after the $x^{100}$ term. In Mathematica the command to do this is

Normal@Series[1/((1 - v x^20) (1 - t x^30) (1 - f x^40)), {x, 0, 100}]

with the result

$$1 + v x^{20} + t x^{30} + (f + v^2) x^{40} + t v x^{50} + (t^2 + f v + v^3) x^{60} + (f t + t v^2) x^{70} + (f^2 + t^2 v + f v^2 + v^4) x^{80} + (t^3 + f t v + t v^3) x^{90} + (f t^2 + f^2 v + t^2 v^2 + f v^3 + v^5) x^{100}$$

Not pretty, but when we look at the coefficient of $x^{100}$, for example, we see that the 5 configurations are ftt, ffv, ttvv, fvvv, and vvvvv.

Time to build it

Now it is time to figure out how to build this. I figured out I had to use $1/2$ inch plywood. Since I do woodworking in metric, this is a thickness of 0.472 in, or 1.19888 cm.

 $31.95 / each Sande Plywood (Common: 1/2 in. x 4 ft. x 8 ft.; Actual: 0.472 in. x 48 in. x 96 in.)

or at this link

So the dimensions are the side thickness $s$ and interior thickness $i$, with shelf width $k$. Each shelf is $k = 20 - 0.5 \times 2\ \text{cm} = 19\ \text{cm}$ wide. All together, we know:

$$w = 2\,s+5\,k+4\,i $$

and the board thickness is $t$, where $t \leq \min(s, i)$.

which gives us:

Symbol  Width (cm)
s       1.20
i       3.75
k       19.00
w       112.40
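
Plugging those numbers back into the width equation confirms the fit:

$$ w = 2(1.20) + 5(19.00) + 4(3.75) = 2.4 + 95.0 + 15.0 = 112.4\ \text{cm} $$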

Code

The code I used is below:



The Hierarchical Dirichlet Process Hidden Semi-Markov Model

In my work at DARPA, I’ve been exposed to hidden Markov models in applications as diverse as temporal pattern recognition for speech, handwriting, and gestures, musical score following, and bioinformatics. My background is in stochastic modeling and optimization, and hidden Markov models are a fascinating intersection between that background and my more recent work with machine learning. Recently, I’ve come across a new twist on the Markov model: the Hierarchical Dirichlet Process Hidden Markov Model.

What is a Markov model?

Say in DC, we have three types of weather: (1) sunny, (2) rainy and (3) foggy. Let’s assume for the moment that the weather doesn’t change from rainy to sunny in the middle of the day. Weather prediction is all about trying to guess what the weather will be like tomorrow based on a history of observations of weather. If we assume the days preceding today give us a good prediction for today, we need the probability for each state change:

$$ P(w_n | w_{n-1}, w_{n-2},\ldots, w_1) $$

So, if the last three days were sunny, sunny, foggy, we know that the probability that tomorrow would be rainy is given by:

$$ P(w_4 = \text{rainy}| w_3 = \text{foggy}, w_2 = \text{sunny}, w_1 = \text{sunny}) $$

This all works very well, but the state space grows very quickly: just based on the above, we would need $3^4$ histories. To fix this, we make the Markov assumption that everything really depends on the previous state alone, or:

$$ P(w_n | w_{n-1}, w_{n-2},\ldots, w_1) \approx P(w_n| w_{n-1}) $$

which allows us to calculate the joint probability of weather in one day given we know the weather of the previous day:

$$ P(w_1, \ldots, w_n) = \prod_{i=1}^n P(w_i| w_{i-1})$$

and now we only have nine numbers to characterize statistically.
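
As a toy illustration of those nine numbers, here is a short Python sketch with a made-up transition matrix (the probabilities below are invented for this example, not estimated from data):

import numpy as np

STATES = ["sunny", "rainy", "foggy"]
# Hypothetical transition matrix: row i gives P(w_n | w_{n-1} = STATES[i]);
# each row sums to 1. These nine numbers are purely illustrative.
P = np.array([
    [0.80, 0.05, 0.15],   # from sunny
    [0.20, 0.60, 0.20],   # from rainy
    [0.20, 0.30, 0.50],   # from foggy
])

def chain_probability(first, rest):
    """P(w_2, ..., w_n | w_1 = first) under the Markov assumption."""
    prob, prev = 1.0, STATES.index(first)
    for state in rest:
        cur = STATES.index(state)
        prob *= P[prev, cur]
        prev = cur
    return prob

# Probability the next three days are sunny, foggy, rainy given today is sunny.
print(chain_probability("sunny", ["sunny", "foggy", "rainy"]))  # 0.8 * 0.15 * 0.3 = 0.036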

What is a hidden Markov model?

In keeping with the example above, suppose you were locked in a room and asked about the weather outside, and the only evidence you have is whether or not the ceiling drips from the rain outside. We are still in the same world with the same assumptions, and the probability of each weather sequence is still given by:

$$ P(w_1, \ldots, w_n) = \prod_{i=1}^n P(w_i| w_{i-1})$$

but we have to account for the fact that the actual weather is hidden from you. We can do that using Bayes’ rule, where $u_i$ is true if the ceiling drips on day $i$ and false otherwise:

$$P(w_1, \ldots, w_n | u_1,\ldots,u_n)=\frac{P(u_1,\ldots,u_n | w_1, \ldots, w_n)\,P(w_1, \ldots, w_n)}{P(u_1,\ldots,u_n)}$$

Here the probability $P(u_1,\ldots,u_n)$ is the prior probability of seeing a particular sequence of ceiling-leak events, such as $\{\text{True}, \text{False}, \text{True}\}$. With this, you can answer questions like:

Suppose the day you were locked in it was sunny. The next day the ceiling leaked. Assuming that the prior probability of the ceiling leaking on any day is 0.5, what is the probability that the second day was rainy?

In a plain Markov model the states are directly visible to the observer, so the state transition probabilities are the only parameters. By contrast, in a hidden Markov model (HMM) the state is not directly visible, but the output, which depends on the state, is visible: the system being modeled is assumed to be a Markov process with unobserved (hidden) states. Each state has a probability distribution over the possible output tokens, so the sequence of tokens generated by an HMM gives some information about the sequence of states. In this context, ‘hidden’ refers to the state sequence through which the model passes, not to the parameters of the model; the model is still referred to as a ‘hidden’ Markov model even if these parameters are known exactly.
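
To make the locked-room question above concrete, here is a minimal, self-contained Python sketch. The drip likelihoods $P(\text{drip} | w_2)$ and the sunny-day transition row are invented for illustration; they are not taken from this post.

# Hypothetical numbers, for illustration only (not from the post).
p_next_given_sunny = {"sunny": 0.80, "rainy": 0.05, "foggy": 0.15}    # P(w_2 | w_1 = sunny)
p_drip_given_weather = {"sunny": 0.10, "rainy": 0.80, "foggy": 0.30}  # P(drip | w_2)

# Bayes' rule: P(w_2 | drip, w_1 = sunny) is proportional to
# P(drip | w_2) * P(w_2 | w_1 = sunny).
unnormalized = {w: p_drip_given_weather[w] * p_next_given_sunny[w]
                for w in p_next_given_sunny}
total = sum(unnormalized.values())
posterior = {w: round(p / total, 3) for w, p in unnormalized.items()}
print(posterior)  # {'sunny': 0.485, 'rainy': 0.242, 'foggy': 0.273}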

OK, so what is a Hierarchical Dirichlet Process Hidden Semi-Markov Model?

Hidden Markov models are generative models where the joint distribution of observations and hidden states, or equivalently both the prior distribution of hidden states (the transition probabilities) and conditional distribution of observations given states (the emission probabilities) are modeled. Instead of implicitly assuming a uniform prior distribution over the transition probabilities, it is also possible to create hidden Markov models with other types of prior distributions. An obvious candidate, given the categorical distribution of the transition probabilities, is the Dirichlet distribution, which is the conjugate prior distribution of the categorical distribution.

In fact, it is possible to use a Dirichlet process in place of a Dirichlet distribution. This type of model allows for an unknown and potentially infinite number of states. It is common to use a two-level Dirichlet process, similar to the previously described model with two levels of Dirichlet distributions. Such a model is called a hierarchical Dirichlet process hidden Markov model, or HDP-HMM for short or it is also called the “Infinite Hidden Markov Model”.

The Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) is a natural Bayesian nonparametric extension of the traditional HMM. The single parameter of this distribution (termed the concentration parameter) controls the relative density or sparseness of the resulting transition matrix. By using the theory of Dirichlet processes it is possible to integrate out the infinitely many transition parameters, leaving only three hyperparameters which can be learned from data. These three hyperparameters define a hierarchical Dirichlet process capable of capturing a rich set of transition dynamics. The three hyperparameters control the time scale of the dynamics, the sparsity of the underlying state-transition matrix, and the expected number of distinct hidden states in a finite sequence.

This is really cool. If you formulate an HMM with a countably infinite number of hidden states, you would have infinitely many parameters in the state transition matrix. The key idea is that the theory of Dirichlet processes can implicitly integrate out all but the three hyperparameters which define the prior over transition dynamics. It is also possible to use a two-level prior Dirichlet distribution, in which one Dirichlet distribution (the upper distribution) governs the parameters of another Dirichlet distribution (the lower distribution), which in turn governs the transition probabilities. The upper distribution governs the overall distribution of states, determining how likely each state is to occur; its concentration parameter determines the density or sparseness of states. Such a two-level prior distribution, where both concentration parameters are set to produce sparse distributions, might be useful for example in unsupervised part-of-speech tagging, where some parts of speech occur much more commonly than others; learning algorithms that assume a uniform prior distribution generally perform poorly on this task. The parameters of models of this sort, with non-uniform prior distributions, can be learned using Gibbs sampling or extended versions of the expectation-maximization algorithm.

So how can we use this?

A common problem in speech recognition is segmenting an audio recording of a meeting into temporal segments corresponding to individual speakers. This problem is often called speaker diarization. It is particularly challenging since you don’t know the number of people participating in the meeting, and modified HDP-HMMs have been very effective at achieving state-of-the-art speaker diarization results.

Other interesting applications of HDP-HMMs include modeling otherwise intractable linear dynamical systems, which describe dynamical phenomena as diverse as human motion, financial time series, maneuvering targets, and the dance of honey bees (see this paper for more details). Results have shown that the HDP-HMM can identify periods of higher volatility in the daily returns of the IBOVESPA stock index (Sao Paulo Stock Exchange). Most interesting to me was the application of HDP-HMMs to a set of six dancing honey bee sequences, aiming to segment the sequences into distinct dances.

You can see some other cool motion capture examples here.
