Posts by:

Tim Booher

Satisfiability modulo theories and their relevance to cyber-security

Cybersecurity and cryptanalysis are fields filled with logic puzzles, math and numerical techniques. One of the most interesting technical areas I’ve worked in goes by the name of satisfiability modulo theories (SMT) and their associated solvers. This post provides a layman’s introduction to SMT and its applications to computer science and the modern practice of breaking code.

Some background theory

Satisfiability modulo theories are a type of constraint-satisfaction problem that arises in many places, from software and hardware verification to static program analysis and graph problems. They apply wherever logical formulas can describe a system’s states and their associated transformations. If you look under the hood of most tools used today for computer security, you will find they are based on mathematical logic as the calculus of computation. The most common constraint-satisfaction problem is propositional satisfiability (commonly called SAT), which aims to decide whether a formula composed of Boolean variables, formed using logical connectives, can be made true by choosing true/false values for its variables. In this sense, those familiar with integer programming will find a lot of similarities with SAT. SAT has been widely used in verification, artificial intelligence and many other areas.

As powerful as SAT is, what if instead of Boolean constraints we used arithmetic to build more general constraints? Often constraints are best described as linear relationships among integer or real variables. To understand and rigorously treat the sets involved, a background theory of the domain is combined with propositional satisfiability to arrive at satisfiability modulo theories (SMT).

The satisfiability modulo theories problem is a decision problem for logical formulas with respect to combinations of background theories expressed in classical first-order logic with equality. An SMT solver decides the satisfiability of propositionally complex formulas in theories such as arithmetic and uninterpreted functions with equality. SMT solving has numerous applications in automated theorem proving, in hardware and software verification, and in scheduling and planning problems. SMT can be thought of as a form of the constraint satisfaction problem and thus a certain formalized approach to constraint programming.

The solvers developed under SMT have proven very useful in situations where linear constraints and other types of constraints are required, with artificial intelligence and verification often presented as exemplars. An SMT solver can solve a SAT problem, but not vice-versa. SMT solvers draw on some of the most fundamental areas of computer science, as well as a century of symbolic logic. They combine the problem of Boolean satisfiability with domain theories (such as those studied in convex optimization and term-manipulating symbolic systems). Implementing these solvers requires engaging with the decision problem, the completeness and incompleteness of logical theories, and complexity theory.

The process of SMT solving is a procedure for finding a satisfying assignment for a quantifier-free formula $F$ with predicates over a certain background theory $T$; alternatively, the SMT solving process can show that no such assignment exists. An assignment of all variables that satisfies these constraints is the model $M$. $M$ satisfies $F$ when $F$ evaluates to $\text{true}$ under the given background theory $T$. In this sense, $M$ entails $F$ under theory $T$, which is commonly expressed as $M \vDash_T F$. If theory $T$ is not decidable, then the underlying SMT problem is undecidable and no solver can exist: for the problem to be decidable, there must exist a procedure of finitely many steps that, given a conjunction of constraints in $T$, can test the existence of a satisfying assignment for those constraints.
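To make this concrete, here is a tiny illustration, assuming the Z3 Python bindings (the post does not commit to a particular solver), of handing a quantifier-free formula $F$ over linear integer arithmetic to a solver and reading off a model $M$:

from z3 import Int, Solver, sat

x, y = Int('x'), Int('y')
s = Solver()
s.add(x + y > 5, x < 2, y >= 0)   # the formula F
if s.check() == sat:              # the theory is decidable, so check() terminates
    print(s.model())              # a satisfying assignment M, e.g. [x = 1, y = 5]
else:
    print('no satisfying assignment exists')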

Ok, that is a lot of jargon. What is this good for?

SMT solvers have been used since the 1970s, albeit in very specific contexts, most commonly theorem proving (cf. ACL2 for some examples). More recently, SMT solvers have been helpful in test-case generation, model-based testing and static program analysis. To make this more concrete, let’s consider one of the most well-known operations research problems: job-shop scheduling.

Suppose there are $n$ jobs to complete, each composed of $m$ tasks with different durations, one on each of $m$ machines. The start of a new task can be delayed indefinitely, but you can’t stop a task once it has started. For this problem, there are two kinds of constraints: precedence and resource constraints. A precedence constraint specifies that one task has to happen before another, and a resource constraint specifies that no two tasks requiring the same machine can execute at the same time. Given a total maximum time $max$ and the duration of each task, the problem is to decide whether a schedule exists where the end time of every task is less than or equal to $max$ units of time. The duration of the $j$th task of job $i$ is $d_{i,j}$ and each task starts at $t_{i,j}$.

I’ve solved this kind of problem before with heuristics such as simulated annealing, but you can encode the solution in SMT using the theory of linear arithmetic. First, you have to encode the precedence constraint:

$$ t_{i,j+1} \geq t_{i,j} + d_{i,j} $$

This states that the start time of task $j+1$ must be greater than or equal to the start time of task $j$ plus its duration. The resource constraint ensures that jobs don’t overlap on a machine. Between job $i$ and job $i'$ this constraint says:

$$ (t_{i,j} \geq t_{i',j}+d_{i',j}) \vee (t_{i',j} \geq t_{i,j} + d_{i,j}) $$

Lastly, each start time must be non-negative, $t_{i,1} \geq 0$, and the end time of the last task must be less than or equal to $max$, i.e. $t_{i,m} + d_{i,m} \leq max$. Together, these constraints form a logical formula that combines logical connectives (conjunction, disjunction and negation) with atomic formulas in the form of linear arithmetic inequalities. This is the SMT formula, and a solution is a mapping from the variables $t_{i,j}$ to values that make the formula $\text{true}$.
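Here is a minimal sketch of that encoding, assuming the Z3 Python bindings (the post does not name a solver), for a toy instance with two jobs, two machines and $max = 8$:

from z3 import Int, Solver, Or, sat

max_time = 8
d = {(1, 1): 2, (1, 2): 1,            # d[i, j]: duration of task j of job i
     (2, 1): 3, (2, 2): 1}
t = {(i, j): Int('t_%d_%d' % (i, j)) for (i, j) in d}

s = Solver()
for i in (1, 2):
    s.add(t[i, 1] >= 0)                      # start times are non-negative
    s.add(t[i, 2] >= t[i, 1] + d[i, 1])      # precedence within job i
    s.add(t[i, 2] + d[i, 2] <= max_time)     # finish by max_time
for j in (1, 2):                             # task j of each job runs on machine j
    s.add(Or(t[1, j] >= t[2, j] + d[2, j],
             t[2, j] >= t[1, j] + d[1, j]))  # no overlap on the shared machine

if s.check() == sat:
    print(s.model())                         # one feasible schedule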

So how is this relevant to software security?

Since software uses logical formulas to describe program states and the transformations between them, SMT has proven very useful for analyzing, verifying and testing programs. In theory, if we tried every possible input to a computer program, and we could observe and understand every resulting behavior, we would know with certainty all possible vulnerabilities in the program. The challenge of using formal methods to verify (or exploit) software is to reach this certainty in a reasonable amount of time, and this generally distills down to clever ways to reduce the state space.

For example, consider dynamic symbolic execution. In computational mathematics, algebraic or symbolic computation refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. This is in contrast to scientific computing, which is usually based on numerical computation with approximate floating-point numbers; symbolic computation emphasizes exact computation with expressions containing variables that are manipulated as symbols.

The software that performs symbolic calculations is called a computer algebra system. At the beginning of computer algebra, circa 1970, when common algorithms were translated into computer code, they turned out to be highly inefficient. This motivated the application of classical algebra in order to make it effective and to discover efficient algorithms. For example, Euclid’s algorithm had been known for centuries to compute polynomial greatest common divisors, but directly coding this algorithm turned out to be inefficient for polynomials over infinite fields.

Computer algebra is widely used to design the formulas that are used in numerical programs. It is also used for complete scientific computations, when purely numerical methods fail, like in public key cryptography or for some classes of non-linear problems.

To understand some of the challenges of symbolic computation, consider basic associative operations like addition and multiplication. The standard way to deal with associativity is to consider that addition and multiplication have an arbitrary number of operands, that is, $a + b + c$ is represented as "+"(a, b, c). Thus $a + (b + c)$ and $(a + b) + c$ are both simplified to "+"(a, b, c). However, what about subtraction, say $a - b + c$? The simplest solution is to rewrite it systematically as $a + (-1)\cdot b + c$. In other words, in the internal representation of the expressions, there is no subtraction, division or unary minus outside the representation of the numbers. Newspeak for mathematical operations!
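SymPy, for example, stores expressions exactly this way (a quick check, offered as an illustration; the post speaks about computer algebra systems in general):

from sympy import symbols, srepr

a, b, c = symbols('a b c')
print(srepr(a - b + c))
# prints something like: Add(Symbol('a'), Mul(Integer(-1), Symbol('b')), Symbol('c'))
# i.e. an n-ary "+" with the subtraction rewritten as multiplication by -1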

A number of tools used in industry are based on symbolic execution (cf. CUTE, KLEE, DART, etc.). What these tools have in common is that they collect explored program paths as formulas and use solvers to identify new test inputs with the potential to guide execution into new branches. SMT applies well to this problem because, instead of the random walks of fuzz testing, "white-box" fuzzing combines symbolic execution with conventional fuzz testing to expose the interactions of the system under test. Of course, directed search can be much more efficient than random search.

However, as helpful as white-box testing is in finding subtle security-critical bugs, it doesn’t guarantee that programs are free of all the possible errors. This is where model checking helps out. Model checking seeks to automatically check for the absence of categories of errors. The fundamental idea is to explore all possible executions using a finite and sufficiently small abstraction of the program state space. I often think of this as pruning away the state spaces that don’t matter.

For example, consider the statement $a = a + 1$ and a Boolean variable $b$ that abstracts the predicate $a == a_{old}$. The abstraction of the statement is essentially a relation between the current and new values of $b$. SMT solvers are used to compute this relation by proving theorems, such as:

$$ a == a_{old} \rightarrow a+1 \neq a_{old} $$

Proving this is equivalent to checking the unsatisfiability of the negation, $ a == a_{old} \wedge a + 1 == a_{old} $.

The theorem says that if the current value of $b$ is $\text{true}$, then after executing the statement $a = a + 1$, the value of $b$ will be $\text{false}$. Now, if $b$ is $\text{false}$, then neither of these conjectures is valid:

$$
a \neq a_{old} \rightarrow a + 1 == a_{old}
$$
or
$$
a \neq a_{old} \rightarrow a + 1 \neq a_{old}
$$

In practice, the SMT solver will produce a model for the negation of each conjecture. In this sense, the model is a counterexample to the conjecture, and when the current value of $b$ is false, nothing can be said about its value after the execution of the statement. The end result of these three proof attempts is then used to replace the statement $a = a + 1$ by:

 if b then
   b = false;
 else
   b = *;
 end
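A sketch of those three proof attempts, assuming the Z3 Python bindings (any SMT solver with linear integer arithmetic would do), where each conjecture is proved valid by showing its negation is unsatisfiable:

from z3 import Int, Solver, Implies, Not, unsat

a, a_old = Int('a'), Int('a_old')

def valid(conjecture):
    s = Solver()
    s.add(Not(conjecture))          # valid iff the negation is unsatisfiable
    return s.check() == unsat

print(valid(Implies(a == a_old, a + 1 != a_old)))   # True: if b was true, b becomes false
print(valid(Implies(a != a_old, a + 1 == a_old)))   # False
print(valid(Implies(a != a_old, a + 1 != a_old)))   # False: so b becomes unknown (*)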

A finite state model checker can now be used on the Boolean program and will establish that $b$ is always $\text{true}$ when control reaches this statement, verifying that calls to lock() are balanced with calls to unlock() in the original program. For example, this loop:

do {
 lock ();
 old_count = count;
 request = GetNextRequest();
 if (request != NULL) {
  unlock();
  ProcessRequest(request);
  count = count + 1;
 }
}
while (old_count != count);
unlock();

becomes:

do {
 lock ();
 b = true;
 request = GetNextRequest();
 if (request != NULL) {
   unlock();
   ProcessRequest(request);
   if (b) b = false; else b = *;
 }
}
while (!b);
unlock();

Application to static analysis

Static analysis tools work the same way as white-box fuzzing or directed search, checking the feasibility of program paths, but they never require actually running the program and can therefore analyze software libraries and utilities without instantiating all the details of their implementation. SMT applies to static analysis because solvers can accurately capture the semantics of most basic operations used by mainstream programming languages. While this fits nicely for functional programming languages, it isn’t always a perfect fit for languages such as Java, C#, and C/C++, which all use fixed-width bitvectors to represent values of type int. In this case, the accurate theory for int is two’s-complement modular arithmetic. Assuming a bit-width of 32 bits, the maximal positive 32-bit integer is $2^{31}-1$, and the smallest negative 32-bit integer is $-2^{31}$. In the binary search example below, if both low and high are $2^{30}$, low + high evaluates to $2^{31}$, which is treated as the negative number $-2^{31}$. The presumed assertion 0 ≤ mid < high therefore does not hold. Fortunately, several modern SMT solvers support the theory of “bit-vectors,” accurately capturing the semantics of modular arithmetic.

Let’s look at an example from a binary search algorithm:

int binary_search(int[] arr, int low, int high, int key) {
  assert (low > high || 0 <= low < high);
  while (low <= high) {
    // Find middle value
    int mid = (low + high)/2;
    assert (0 <= mid < high);
    int val = arr[mid];
    // Refine range
    if (key == val) return mid;
    if (val > key) low = mid+1;
    else high = mid-1;
  }
  return -1;
}
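A sketch of how an SMT solver finds this overflow, assuming the Z3 Python bindings and the bit-vector theory (the post does not show the actual query):

from z3 import BitVec, Solver, sat

low, high = BitVec('low', 32), BitVec('high', 32)
mid = (low + high) / 2            # 32-bit two's-complement arithmetic, as in the source

s = Solver()
s.add(low >= 0, high >= low)      # the intended precondition
s.add(mid < 0)                    # can the assertion 0 <= mid be violated?

if s.check() == sat:
    m = s.model()
    print('counterexample:', m[low], m[high])   # e.g. low = high = 2**30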

 

Summary

SMT solvers combine SAT reasoning with specialized theory solvers either to find a feasible solution to a set of constraints or to prove that no such solution exists. Linear programming (LP) solvers are designed to find feasible solutions that are optimal with respect to some optimization function. Both are powerful tools that can be incredibly helpful to solve hard and practical problems in computer science.

One of the applications I follow closely is symbolic-execution-based analysis and testing. Perhaps the most famous commercial tool that uses dynamic symbolic execution (aka concolic testing) is the SAGE tool from Microsoft. The KLEE and S2E tools (both of which are open source and use the STP constraint solver) are widely used in many companies including HP Fortify, NVIDIA, and IBM. Increasingly these technologies are being used by many security companies and hackers alike to find security vulnerabilities.

 

 


Basement Framing with the Shopbot

Framing around bulkheads is painful. It is hard to get everything straight and aligned, and I found the ShopBot to be very helpful. There were three problems I was trying to solve: (1) getting multiple corners straight across 30 feet, (2) having nearly no time to do it, and (3) basic pine framing would sag over a 28″ run.

In fairness, the cuts did take a lot of time (about 2.5 hours of cutting), but I could do other work while the ShopBot milled out the pieces. I also had several hours of prep and installation, so I’m definitely slower than a skilled carpenter would be, but probably better off using this solution. Plus, I think the result is definitely straighter and more accurate. I especially need this, because my lack of skill means that I don’t have the bag of tricks available to deal with non-straight surfaces.

First, Autodesk Revit makes drawing ducts easy as part of an overall project model. The problem was that, the way the ducts were situated, the team working on the basement couldn’t simply make a frame that went all the way to the wall because of an awkwardly placed door.

I was able to make a quick drawing in the model and print out frames on the ShopBot. They only had to be aligned vertically, which was easy to do with the help of a laser level.

second-ducts-v4

These were easy to cut out while I also had to make some parts for my daughter’s school project.


DIY Roulette Wheel and Probability for Kids

My daughter had to make a “probability game” for her fifth grade math class. She has been learning JavaScript and digital design, so we worked together to design and build a roulette wheel for her class.

First, she drew out a series of sketches and we talked them over. She wanted a 2-foot-diameter wheel out of 0.5-inch-thick plywood, but after some quick calculations we decided on a $1/4$ inch thick wheel with an 18″ diameter. I had to talk her into adding pegs and a skateboard ball bearing from Amazon for \$2. The bearing’s inner diameter is $1/4$ inch, so I also bought a package of dowels for \$5 to make the pegs. I also bought a 1/2 sheet of plywood (of which I used about 1/3) and some hardware from Home Depot.

She wanted 10 sections with combinations of the following outcomes: Small, Large and Tiny prizes as well as two event outcomes: Spin Again and Lose a Spin. Each student would have at most three turns. We had the following frequencies (out of 10):

Outcome Frequency
Small 3
Large 2
Tiny 1
Lose a spin 3
Spin Again 1

This led to a ~~fun~~ (frustrating) discussion of Monte Carlo code, conditional probabilities and cumulative probabilities. Good job teacher! We got to answer questions like:

  • What is the probability of getting a large prize in a game (three spins)?
  • What is the probability you get no prize?
  • What is the expected number of spins?

She really threw the math for a loop with the Spin Again and Lose a Spin options. We had to talk about systems with a random number of trials. My favorite part was exposing her to true randomness. She was convinced the wheel was biased because she got three larges in a row. I had to teach her that true random behavior was more unbalanced than her intuition might lead her to believe.

In order to understand a problem like this, it is all about the state space. There are four possible outcomes: three different prizes or no prize. To explain the effect the spin skips have on the outcomes, I had to make the diagram below. Each column represents one of the three spins, each circle represents a terminal outcome and each rectangle represents a result of a spin.

Drawing1

From this, we can compute the probabilities for each of the 17 outcomes:

Path Spin 1 Spin 2 Spin 3 Prob
1 $P_L$ 0.200
2 $P_S$ 0.300
3 $P_T$ 0.100
4 L $P_L$ 0.060
5 L $P_S$ 0.090
6 L $P_T$ 0.030
7 L L 0.090
8 L S 0.030
9 S $P_L$ 0.020
10 S $P_S$ 0.030
11 S $P_T$ 0.010
12 S L 0.030
13 S S $P_L$ 0.002
14 S S $P_S$ 0.003
15 S S $P_T$ 0.001
16 S S L 0.003
17 S S S 0.001

I would love to find a more elegant solution, but the strange movements of the state-space left me with little structure I could exploit.

And we can add these to get the event probabilities and (her homework) to generate the expected values of prizes she needs to bring when 20 students are going to play the game:

Outcome Probability Expected Value Ceiling
$P_L$ 28% 5.64 6
$P_S$ 42% 8.44 9
$P_T$ 14% 2.82 3
NP 16% 3.10 4

We can also get the probabilities for the number of spins:

Count Probability
One spin 0.600
Two 0.390
Three 0.010

Simulation

When the probability gets hard . . . simulate, and let the law of large numbers work this out.
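Here is a minimal sketch of such a simulation (not her original script), assuming the rules implied by the outcome table above: a prize ends the game, Lose a Spin forfeits the next turn, Spin Again just keeps the game going, and each student has at most three turn slots.

import random
from collections import Counter

# 10 equal sections: Small x3, Large x2, Tiny x1, Lose-a-spin x3, Spin-again x1
WHEEL = ['Small'] * 3 + ['Large'] * 2 + ['Tiny'] + ['lose'] * 3 + ['again']

def play(turns=3):
    slot = 1
    while slot <= turns:
        result = random.choice(WHEEL)
        if result == 'lose':
            slot += 2            # this turn plus the forfeited one
        elif result == 'again':
            slot += 1            # keep spinning
        else:
            return result        # a prize ends the game
    return 'No prize'

N = 1_000_000
counts = Counter(play() for _ in range(N))
print({outcome: round(c / N, 3) for outcome, c in counts.items()})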

This demonstrated the probability of getting a prize was:

Outcome Probability Expected Value Ceiling
$P_L$ 28% 5.64 6
$P_S$ 42% 8.46 9
$P_T$ 14% 2.82 3
NP 15% 3.08 4

Design

So I took her designs and helped her write the following code to draw the wheel in Adobe Illustrator. This didn’t take long to write, because I had written similar code to make a series of clocks for my 5 year old to teach him how to tell time. The code was important to auto-generate the designs, because we must have tried 10 different iterations of the game.

Which produced this Adobe Illustrator file that I could laser-cut:

spinner2

From here, I designed a basic structure in Fusion 360. I cut the base and frame from $1/2$ inch birch plywood with a $1/4$ inch downcut endmill on a ShopBot.

A render:

the wheel v19

And a design:


If you want the fusion file, request in comments and I’ll post.

Please let me know if you have any questions and I’ll share my design. Next up? We are going to print a new wheel to decide who washes the dishes! Kids get double the frequency.


Weaponizing the Weather

“Intervention in atmospheric and climatic matters . . . will unfold on a scale difficult to imagine at present. . . . this will merge each nation’s affairs with those of every other, more thoroughly than the threat of a nuclear or any other war would have done.” — J. von Neumann

Disclaimer: This is just me exploring a topic that I’m generally clueless on, explicitly because I’m clueless on it. My views and the research discussed here has nothing to do with my work for the DoD.

Why do we care?

Attempting to control the weather is older than science itself. While it is common today to perform cloud seeding to increase rain or snow, weather modification has the potential to prevent damaging weather from occurring, or to provoke damaging weather as a tactic of military or economic warfare. This scares all of us, including the UN, which banned weather modification for the purposes of warfare in response to US actions in Vietnam to induce rain and extend the East Asian monsoon season (see Operation Popeye). Unfortunately, this hasn’t stopped Russia and China from pursuing active weather modification programs, with China’s generally regarded as the largest and most active. Russia famously used sophisticated cloud seeding in 1986 to prevent radioactive rain from the Chernobyl reactor accident from reaching Moscow; see China Leads the Weather Control Race and China plans to halt rain for Olympics to understand the extent of China’s efforts in this area.

The Chinese have been tinkering with the weather since the late 1950s, trying to bring rains to the desert terrain of the northern provinces. Their bureau of weather modification was established in the 1980s and is now believed to be the largest in the world. It has a reserve army of 37,000 people, which might sound like a lot, until we consider the scale of an average storm. The numbers that describe weather are big. At any instant there are approximately 2,000 thunderstorms in progress, and every day there are 45,000 thunderstorms, which contain some combination of heavy rain, hail, microbursts, wind shear, and lightning. The energy involved is staggering: a tropical storm can have an energy equal to that of 10,000 one-megaton hydrogen bombs. A single cloud contains about a million pounds of water, so a mid-size storm would contain about 3 billion pounds of water. If anyone ever figures out how to control all this mass and energy, they would make an excellent Bond villain.

The US government has conducted research in weather modification as well. In 1970, then-ARPA Director Stephen J. Lukasik told the Senate Appropriations Committee: “Since it now appears highly probable that major world powers have the ability to create modifications of climate that might be seriously detrimental to the security of this country, Nile Blue [a computer simulation] was established in FY 70 to achieve a US capability to (1) evaluate all consequences of a variety of possible actions … (2) detect trends in the global circulation which foretell changes … and (3) determine if possible, means to counter potentially deleterious climatic changes … What this means is learning how much you have to tickle the atmosphere to perturb the earth’s climate.” Sounds like a reasonable program for the time.

Military applications are easy to think up. If you could create a localized cloud layer, you could decrease the performance of ground-based and airborne IRSTs, particularly in the long-wave. (Cloud droplet mean diameter is typically 10 to 15 microns.) You could send hurricanes toward your adversary or increase the impact of an all-weather advantage. (Sweet.) You could also pursue more subtle effects, such as tuning the atmosphere to favor your own communications technology or degrading the environment to a state less optimal for an adversary’s communications or sensors. Another key advantage would be to make the environment unpredictable. Future ground-based sensing and fusing architectures such as multi-static and passive radars rely on a correctly characterized environment that could be impacted by both intense and unpredictable weather.

Aside from military uses, climate change (both perception and fact) may drive some nations to seek engineered solutions. Commercial interests would welcome the chance to make money cleaning up the mess they made money making. And how are we going to sort out and regulate that without options and deep understanding? Many of these proposals could have dual civilian and military purposes, as they originate in Cold War technologies. As the science advances, will we be able to prevent their renewed use as weapons? Could the future hold climatological conflicts, just as we’ve seen cyber warfare used to presage invasion between Ukraine and Russia? If so, climate influence would be a way for a large state to exert influence on smaller states.

Considering all of this, it would be prudent to have a national security policy that accounts for weather modification and manipulation. Solar radiation management, called albedo modification, is considered to be a potential option for addressing climate change and one that may get increased attention. There are many research opportunities that would allow the scientific community to learn more about the risks and benefits of albedo modification, knowledge which could better inform societal decisions without imposing the risks associated with large-scale deployment. According to Carbon Dioxide Removal and Reliable Sequestration (2015) by the National Academy of Sciences, there are several hypothetical, but plausible, scenarios under which this information would be useful. They claim (quoting them verbatim):

  1. If, despite mitigation and adaptation, the impacts of climate change still become intolerable (e.g., massive crop failures throughout the tropics), society would face very tough choices regarding whether and how to deploy albedo modification until such time as mitigation, carbon dioxide removal, and adaptation actions could significantly reduce the impacts of climate change.
  2. The international community might consider a gradual phase-in of albedo modification to a level expected to create a detectable modification of Earth’s climate, as a large-scale field trial aimed at gaining experience with albedo modification in case it needs to be scaled up in response to a climate emergency. This might be considered as part of a portfolio of actions to reduce the risks of climate change.
  3. If an unsanctioned act of albedo modification were to occur, scientific research would be needed to understand how best to detect and quantify the act and its consequences and impacts.

What has been done in the past?

Weather modification was limited to magic and prayers until the 18th century when hail cannons were fired into the air to break up storms. There is still an industrial base today if you would like to have your own hail cannon. Just don’t move in next door if you plan on practicing.

(Not so useful) Hail Cannons

Despite their use on a large scale, there is no evidence for the effectiveness of these devices. A 2006 review by Jon Wieringa and Iwan Holleman in the journal Meteorologische Zeitschrift summarized a variety of negative and inconclusive scientific measurements, concluding that “the use of cannons or explosive rockets is waste of money and effort”. In the 1950s and 1960s, Wilhelm Reich performed cloudbusting experiments, the results of which are controversial and not widely accepted by mainstream science.

However, during the Cold War the US government committed to an ambitious experimental program named Project Stormfury for nearly 20 years (1962 to 1983). The DoD and NOAA attempted to weaken tropical cyclones by flying aircraft into them and seeding them with silver iodide. The proposed modification technique involved artificial stimulation of convection outside the eye wall through seeding with silver iodide. The artificially invigorated convection, it was argued, would compete with the convection in the original eye wall, lead to reformation of the eye wall at a larger radius, and thus produce a decrease in the maximum wind. Since a hurricane’s destructive potential increases rapidly as its maximum wind becomes stronger, a reduction as small as 10% would have been worthwhile. Modification was attempted in four hurricanes on eight different days. On four of these days, the winds decreased by between 10 and 30%. The lack of response on the other days was interpreted to be the result of faulty execution of the experiment or poorly selected subjects.

These promising results have, however, come into question because more recent observations of unmodified hurricanes indicate: 1) that cloud seeding has little prospect of success because hurricanes contain too much natural ice and too little supercooled water, and 2) that the positive results inferred from the seeding experiments in the 1960s probably stemmed from an inability to discriminate between the expected effect of human intervention and the natural behavior of hurricanes. The legacy of this program is the large global infrastructure that today routinely flies to inject silver iodide to cause localized rain, with over 40 countries actively seeding clouds to control rainfall. Unfortunately, we are still pretty much helpless in the face of a large hurricane.

That doesn’t mean the Chinese aren’t trying. In 2008, China assigned 30 airplanes, 4,000 rocket launchers, and 7,000 anti-aircraft guns in an attempt to stop rain from disrupting the 2008 Olympics by shooting various chemicals into the air at any threatening clouds in the hopes of shrinking rain drops before they reached the stadium. Due to the difficulty of conducting controlled experiments at this scale, there is no way to know if this was effective. (Yes, this is the country that routinely bulldozes entire mountain ranges to make economic regions.)

But the Chinese aren’t the only ones. In January, 2011, several newspapers and magazines, including the UK’s Sunday Times and Arabian Business, reported that scientists backed by Abu Dhabi had created over 50 artificial rainstorms between July and August 2010 near Al Ain. The artificial rainstorms were said to have sometimes caused hail, gales and thunderstorms, baffling local residents. The scientists reportedly used ionizers to create the rainstorms, and although the results are disputed, the large number of times it is recorded to have rained right after the ionizers were switched on during a usually dry season is encouraging to those who support the experiment.

While we would have to understand the technology very well first and have a good risk mitigation strategy, I think there are several promising technical areas that merit further research.

What are the technical approaches?

So while past experiments are hard to learn much from and far from providing the buttons to control the weather, there are some promising technologies I’m going to be watching. There are five different technical approaches I was able to find:

  1. Altering the available solar energy by introducing materials to absorb or reflect sunshine
  2. Adding heat to the atmosphere by artificial means from the surface
  3. Altering air motion by artificial means
  4. Influencing the humidity by increasing or retarding evaporation
  5. Changing the processes by which clouds form and causing precipitation by using chemicals or inserting additional water into the clouds

In these five areas, I see several technical applications that are both interesting and have some degree of potential utility.

Modeling

Below is the 23-year accuracy of the U.S. GFS, the European ECMWF, the U.K. Government’s UKMET, and a model called CDAS which has never been modified, to serve as a “constant.” As you would expect, model accuracy is gradually increasing (1.0 is 100% accurate). Weather models are limited by computation and the scale of input data: for a fixed amount of computing power, the smaller the grid (and the more accurate the prediction), the shorter the time horizon for predictions. As more sensors are added and fused together, accuracy will keep improving.

Weather prediction requires satellite and radar imagery at very small scales. Current accuracy corresponds to an effective observation spacing of around 5 km. Radar data are only available out to a fairly short distance from the coast, and satellite wind measurements can only resolve detail on about a 25 km scale. Over land, data from radar can be used to help predict small-scale and short-lived detail.

Weather Model Accuracy over Time

Modeling is important, because understanding is necessary for control. With increased accuracy, we can understand weather’s leverage points and feedback loops. This knowledge is important, because increased understanding would enable applying the least amount of energy where it matters most. Interacting with weather on a macro scale is both cost prohibitive and extremely complex.

Ionospheric Augmentation

Over-the-horizon radars (commonly called OTHR) have the potential to see targets hundreds of miles away because they aren’t limited by line of sight like conventional microwave radars. They accomplish this by bouncing signals off the ionosphere, but this requires a sufficiently dense ionosphere that isn’t always there. The ionosphere is ionized by solar radiation, which is stronger in summer when the hemisphere is tilted toward the sun. To compensate, artificial ionospheric mirrors could bounce HF signals more consistently and precisely over broader frequencies. Tests have shown that these mirrors could theoretically reflect radio waves with frequencies up to 2 GHz, which is nearly two orders of magnitude higher than waves reflected by the natural ionosphere. This could have significant military applications such as low frequency (LF) communications, HF ducted communications, and improved OTHR performance.

This concept has been described in detail by Paul A. Kossey, et al. in a paper entitled “Artificial Ionospheric Mirrors.” The authors describe how one could precisely control the location and height of the region of artificially produced ionization using crossed microwave beams, which produce atmospheric breakdown. The implications of such control are enormous: one would no longer be subject to the vagaries of the natural ionosphere but would instead have direct control of the propagation environment. Ideally, these artificial mirrors could be rapidly created and then would be maintained only for a brief operational period.

Local Introduction of Clouds

There are several methods for seeding clouds. The best-known dissipation technique for cold fog is to seed it from the air with agents that promote the growth of ice crystals. These include dropping pyrotechnics on top of existing clouds, penetrating clouds with pyrotechnics and liquid generators, shooting rockets into clouds, and working from ground-based generators. Silver iodide is frequently used to cause precipitation, and effects are usually seen in about thirty minutes. Limited success has been noted in fog dispersal and improving local visibility through the introduction of hygroscopic substances.

However, all of these techniques seem like a very inexact science, and thirty minutes remains far from the timescales needed for clouds on demand. From my brief look at it, we are just poking around in cloud formation. For the local introduction of clouds to be useful in military applications, there would have to be a suite of techniques robust to changing weather. More research might be able to effect chain reactions that cause massive cloud formations, and real research could help the field emerge from the pseudo-science of which there is plenty. This Atlantic article titled Dr. Wilhelm Reich’s Orgasm-Powered Cloudbuster is pretty amusing and pretty indicative of the genre.

A cloud gun that taps into an “omnipresent libidinal life force responsible for gravity, weather patterns, emotions, and health”

Fog Removal

Ok, so no one can make clouds appear on demand in a wide range of environments, but is technology better when it comes to removing fog? The best-known dissipation technique is heating, because a small temperature increase is usually sufficient to evaporate fog. Since heating over a very wide area usually isn’t practical, the next most effective technique is hygroscopic seeding, which uses agents that absorb water vapor. This technique is most effective when accomplished from the air but can also be accomplished from the ground. Optimal results require advance information on fog depth, liquid water content, and wind.

In the 20th century several methods were proposed to dissipate fog. One is to burn fuel along the runway, heating the fog layer and evaporating droplets; it was used in Great Britain during World War II to allow British bombers returning from Germany to land safely in fog. Helicopters can dissipate fog by flying slowly across the top surface and mixing warm, dry air into the fog: the downwash of the rotors forces air from above into the fog, where it mixes, lowering the humidity and causing the fog droplets to evaporate. Tests were carried out in Florida and Virginia, and in both places cleared areas were produced in the helicopter wakes. Seeding with polyelectrolytes causes electric charges to develop on drops and has been shown to cause drops to coalesce and fall out. Other techniques that have been tried include the use of high-frequency (ultrasonic) vibrations, heating with lasers, and seeding with carbon black to alter the radiative properties [1].

However, experiments have confirmed that large-scale fog removal would require exceeding the power density exposure limit of $100 \frac{\text{watt}}{m^2}$ and would be very expensive. That doesn’t mean that capability on a smaller scale isn’t possible: field experiments with lasers have demonstrated the capability to dissipate warm fog at an airfield with zero visibility. Generating $1 \frac{\text{watt}}{cm^2}$, which is approximately the US large power density exposure limit, raised visibility to one quarter of a mile in 20 seconds. Most efforts have focused on increasing the runway visibility range at airports, since airlines lose millions of dollars every year due to fog on the runway. This thesis examines the issue in depth.

Emerging Enabling Technologies

In looking at this topic, I was able to find several interesting technologies that may develop and make big contributions to weather research.

Carbon Dust

Just as a black tar roof easily absorbs solar energy and subsequently radiates heat during a sunny day, carbon black also readily absorbs solar energy. When dispersed in microscopic form in the air over a large body of water, the carbon becomes hot and heats the surrounding air, thereby increasing the amount of evaporation from the water below. As the surrounding air heats up, parcels of air will rise and the water vapor contained in the rising air parcel will eventually condense to form clouds. Over time the cloud droplets increase in size as more and more water vapor condenses, and eventually they become too large and heavy to stay suspended and will fall as rain. This technology has the potential to trigger localized flooding and bog down troops and their equipment.

Nanotech

Want to think outside the box? Smart materials based on nanotechnology are currently being developed with processing capability. They could adjust their size to optimal dimensions for a given fog seeding situation and even make continual adjustments. They might also enhance their dispersal qualities by adjusting their buoyancy, by communicating with each other, and by steering themselves within the fog. If successful, they will be able to provide immediate and continuous effectiveness feedback by integrating with a larger sensor network and could also change their temperature and polarity to improve their seeding effects.

If we combine this with high-fidelity models, things can get very interesting. If we can model and understand the leverage points of a weather system, nano-clouds may be able to have a dramatic effect. Nanotechnology also offers possibilities for creating simulated weather. A cloud, or several clouds, of microscopic computer particles, all communicating with each other and with a larger control system, could mimic the signatures of specific weather patterns if tailored to the parameters of weather models.

High power lasers

The development of directed radiant energy technologies, such as microwaves and lasers, could provide new possibilities; everyone should hate firing rockets and chemicals into the atmosphere. The advent of ultrashort laser pulses and the discovery of self-guided ionized filaments (see Braun et al., 1995) might provide the opportunity. Jean-Pierre Wolf has used ultrashort laser pulses to create lightning and cue cloud formation. “We did it on a laboratory scale, we can already create clouds, but not on a macroscopic scale, so you don’t see a big cloud coming out because the laser is not powerful enough and because of a lot of technical parameters that we can’t yet control,” Prof. Wolf says in this CNN article.

What now?

So we have all the elements of a scientific discipline and could use a national strategy in this area that includes ethics, policy, technology and military employment doctrine. The military and civilian communities already invest heavily in sensors and modeling of weather effects. These should be coupled with feasible excitation mechanisms to create a tight innovation loop. Again, this area is sensitive and politically charged, but there is a clear need to pull together sensors, processing capability and excitation mechanisms to ensure we have the right responses and capabilities. With such a dubious and inconclusive past, is there a potential future for weather modification? I think we have a responsibility to pursue knowledge even in areas where the ethical boundaries are not well established. Ignorance is never a good strategy. Just because we might open Pandora’s box doesn’t mean that a less morally responsible nation or group won’t get there first. We can always abstain from learning a new technology, but if we are caught by surprise, we won’t have the knowledge to develop a good counter-strategy.

References

  1. http://csat.au.af.mil/2025/volume3/vol3ch15.pdf
  2. http://www.wired.com/2009/12/military-science-hack-stormy-skies-to-lord-over-lightning/
  3. Prospects for Weather Modification

  [1] Dumbai, M. A., et al. "Organic heat-transfer agents in chemical industry." Khimicheskaya Promyshlennost 1 (1990): 10-15.

Kids Lego table: Case study in Automation for Design


Motivation

I had to upgrade the Lego table I made when my kids were much smaller. It needed to be higher and include storage options. Since I’m short on time, I used several existing automation tools to both teach my daughter the power of programming and explore our decision space. The goals were to stay low-cost and make the table as functional as possible in the shortest time possible.

Lauren and I had fun drawing the new design in SketchUp. I then went to the Arlington TechShop and built the frame easily enough from a set of 2x4s. To stay low-cost and quick, we decided to use the IKEA TROFAST storage bins. We were inspired by lots of designs online, such as this one:

lego-table-example

However, the table I designed was much bigger and built with simple right angles and a nice dado angle bracket to hold the legs on.

table_with_bracket

The hard part was figuring out the right arrangement of bins underneath the table. Since my background is in optimization, I thought about setting up a two-dimensional knapsack problem, but decided to do brute-force enumeration since the state space was really small. I built two scripts: one in Python to enumerate the state space and sort the results, and one in JavaScript, or ExtendScript, to automate Adobe Illustrator and give me a good way to visually consider the options. (ExtendScript just looks like an old, ES3, version of JavaScript to me.)

So what are the options?

There are two TROFAST bins I found online. One costs \$3 and the other \$2. Sweet. You can see their dimensions below.

options

They both are the same height, so we just need to determine how to make the row work. We could arrange each TROFAST bin on the short or long dimension so we have 4 different options for the two bins:

Bin Short Side (cm) Long Side (cm)
Orange 20 30
Green 30 42

First, Lauren made a set of scale drawings of the designs she liked, which allowed us to think about options. Her top-left drawing ended up being our final design.

lauren designs

I liked her designs, but it got me thinking about what all the feasible designs would look like, and we decided to tackle that since she is learning JavaScript.

Automation

If we ignore the depth and height, we then have only three options $[20,30,42]$ with the null option of $0$ length. With these lengths we can find the maximum number of bins if the max length is $112.4 \text{cm}$. Projects like this always have me wondering how to best combine automation with intuition. I’m skeptical of technology and aware that it can be a distraction and inhibit intuition. It would have been fun to cut out the options at scale or just to make sketches and we ended up doing those as well. Because I’m a recreational programmer, it was fairly straightforward to enumerate and explore feasible options and fun to show my daughter some programming concepts.

$$ \left\lfloor
\frac{112.4}{20}
\right\rfloor = 5 $$

So there are $4^5$ or $1,024$ total options from a Cartesian product. A brute-force enumeration would be $O(n^3)$, but fortunately we have $\text{itertools.product}$ in Python, so we can get all our possible options easily in one command:

itertools.product([0,20,30,42], repeat=5)

and we can restrict results to feasible combinations and even solutions that don’t waste more than 15 cm. To glue Python and Illustrator together, I use JSON to store the data which I can then open in Illustrator Extendscript and print out the feasible results.
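A sketch along the lines of the Python enumeration described above (not the original script):

import itertools
import json

MAX_LEN = 112.4             # maximum row length in cm
MAX_WASTE = 15              # allowed unused length in cm
OPTIONS = [0, 20, 30, 42]   # no bin, orange short side, orange long/green short, green long

feasible = []
for combo in itertools.product(OPTIONS, repeat=5):
    total = sum(combo)
    if MAX_LEN - MAX_WASTE <= total <= MAX_LEN:
        feasible.append(combo)

feasible.sort(key=sum, reverse=True)
with open('feasible.json', 'w') as f:   # handed to the Illustrator ExtendScript
    json.dump(feasible, f)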

results

Later, I added some colors for clarity and picked the two options I liked:

options

These both minimized the number of bin styles, were symmetric and used the space well. I took these designs forward into the final design. Now to build it.

final_design

Real Math

But wait, rote enumeration? Sorry, yes, I didn’t have much time when we did this, but there are much better ways to do it. Here is one approach:

Generating Functions

If your options are 20, 30, and 40 (using 40 in place of 42 to keep the numbers round), then what you do is compute the coefficients of the infinite series

$$(1 + x^{20} + x^{40} + x^{60} + …)(1 + x^{30} + x^{60} + x^{90} + …)(1 + x^{40} + x^{80} + x^{120} + …)$$

I always find it amazing that polynomials happen to have the right structure for the kind of enumeration we want to do: the powers of x keep track of our length requirement, and the coefficients count the number of ways to get a given length. When we multiply out the product above we get

$$1 + x^{20} + x^{30} + 2 x^{40} + x^{50} + 3 x^{60} + 2 x^{70} + 4 x^{80} + 3 x^{90} + 5 x^{100} + …$$

This polynomial lays out the answers we want “on a clothesline”. E.g., the last term tells us there are 5 configurations with length exactly 100. If we add up the coefficients above (or just plug in “x = 1”) we have 23 configurations with length less than 110.

If you also want to know what the configurations are, then you can put in labels: say $v$, $t$, and $f$ for twenty, thirty, and forty, respectively. A compact way to write $1 + x^{20} + x^{40} + x^{60} + \ldots$ is $1/(1 - x^{20})$. The labelled version is $1/(1 - v x^{20})$. Okay, so now we compute

$$1/((1 - v x^{20})(1 - t x^{30})(1 - f x^{40}))$$

truncating after the $x^{100}$ term. In Mathematica the command to do this is

Normal@Series[1/((1 - v x^20) (1 - t x^30) (1 - f x^40)), {x, 0, 100}]

with the result

$$1 + v x^{20} + t x^{30} + (f + v^2) x^{40} + t v x^{50} + (t^2 + f v + v^3) x^{60} + (f t + t v^2) x^{70} + (f^2 + t^2 v + f v^2 + v^4) x^{80} + (t^3 + f t v + t v^3) x^{90} + (f t^2 + f^2 v + t^2 v^2 + f v^3 + v^5) x^{100}$$

Not pretty, but when we look at the coefficient of $x^{100}$, for example, we see that the 5 configurations are ftt, ffv, ttvv, fvvv, and vvvvv.

Time to build it

Now it is time to figure out how to build this. I figured out I had to use $1/2$ inch plywood. Since I do woodworking in metric, this is a dimension of 0.472 in or 1.19888 cm.

 $31.95 / each Sande Plywood (Common: 1/2 in. x 4 ft. x 8 ft.; Actual: 0.472 in. x 48 in. x 96 in.)

or at this link

So the dimensions of this are the side thickness $s$ and interior divider thickness $i$, with shelf width $k$. Each shelf is $k = 20 - 0.5 \times 2 \,\text{cm} = 19 \,\text{cm}$ wide. All together, we know:

$$w = 2\,s+5\,k+4\,i $$

and the board thickness is $t$ where $t < [s, i]$.

which gives us:

Dimension Width (cm)
s 1.20
i 3.75
k 19.00
w 112.40

Code

The code I used is below:

References


The Hierarchical Dirichlet Process Hidden Semi-Markov Model

In my work at DARPA, I’ve been exposed to hidden Markov models in applications as diverse as temporal pattern recognition such as speech, handwriting, gesture recognition, musical score following, and bioinformatics. My background is in stochastic modeling and optimization, and hidden Markov models are a fascinating intersection between my background and my more recent work with machine learning. Recently, I’ve come across a new twist on the Markov model: the Hierarchical Dirichlet Process Hidden Markov Model.

What is a Markov model?

Say in DC, we have three types of weather: (1) sunny, (2) rainy and (3) foggy. Let’s assume for the moment that the weather doesn’t change from rainy to sunny in the middle of the day. Weather prediction is all about trying to guess what the weather will be like tomorrow based on a history of observations of weather. If we assume the preceding days give a good prediction for today, we need, for each possible history, the probability of the next state:

$$ P(w_n | w_{n-1}, w_{n-2},\ldots, w_1) $$

So, if the last three days were sunny, sunny, foggy, we know that the probability that tomorrow would be rainy is given by:

$$ P(w_4 = \text{rainy}| w_3 = \text{foggy}, w_2 = \text{sunny}, w_1 = \text{sunny}) $$

This all works very well, but the state space grows very quickly: just based on the above, we would need $3^4$ histories. To fix this we make the Markov assumption that everything really depends on the previous state alone, or:

$$ P(w_n | w_{n-1}, w_{n-2},\ldots, w_1) \approx P(w_n| w_{n-1}) $$

which allows us to calculate the joint probability of a sequence of weather states from the one-day transition probabilities:

$$ P(w_1, \ldots, w_n) = \prod_{i=1}^n P(w_i| w_{i-1})$$

and now we only have nine numbers to characterize statistically.
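A small sketch (with illustrative numbers, not anything measured) of such a three-state chain and the product above:

import numpy as np

# states: 0 = sunny, 1 = rainy, 2 = foggy; row i holds P(tomorrow | today = i)
P = np.array([[0.8, 0.05, 0.15],
              [0.2, 0.60, 0.20],
              [0.2, 0.30, 0.50]])

def sequence_prob(states, p0):
    # P(w_1, ..., w_n) = P(w_1) * prod_i P(w_i | w_{i-1})
    prob = p0[states[0]]
    for prev, cur in zip(states, states[1:]):
        prob *= P[prev, cur]
    return prob

# e.g. the probability of sunny, sunny, foggy, rainy
print(sequence_prob([0, 0, 2, 1], p0=np.ones(3) / 3))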

What is a hidden Markov model?

In keeping with the example above, suppose you were locked in a room and asked about the weather outside, and the only evidence you have is whether the ceiling drips or not from the rain outside. We are still in the same world with the same assumptions, and the probability of each state sequence is still given by:

$$ P(w_1, \ldots, w_n) = \prod_{i=1}^n P(w_i| w_{i-1})$$

but we have to factor in that the actual weather is hidden from you. We can do that using Bayes’ rule, where $u_i$ is true if the ceiling drips on day $i$ and false otherwise:

$$P(w_1, \ldots, w_n | u_1,\ldots,u_n)=\frac{P(u_1,\ldots,u_n | w_1, \ldots, w_n)\,P(w_1, \ldots, w_n)}{P(u_1,\ldots,u_n)}$$

Here the probability $P(u_1,\ldots,u_n)$ is the prior probability of seeing a particular sequence of ceiling-leak events, for example $\{\text{True}, \text{False}, \text{True}\}$. With this, you can answer questions like:

Suppose the day you were locked in it was sunny. The next day the ceiling leaked. Assuming that the prior probability of the ceiling leaking on any day is 0.5, what is the probability that the second day was rainy?

So while plain Markov models have states that are directly visible to the observer, making the state transition probabilities the only parameters, in a hidden Markov model (HMM) the state is not directly visible; only the output, which depends on the state, is visible. A hidden Markov model is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. Each state has a probability distribution over the possible output tokens, so the sequence of tokens generated by an HMM gives some information about the sequence of states. In this context, ‘hidden’ refers to the state sequence through which the model passes, not to the parameters of the model; the model is still referred to as a ‘hidden’ Markov model even if these parameters are known exactly.
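A small sketch of this kind of question (with made-up emission probabilities and the transition matrix from the earlier sketch): given that day 1 was sunny and the ceiling dripped on day 2, filter the probability that day 2 was rainy:

import numpy as np

# transition matrix: rows/cols = sunny, rainy, foggy
P = np.array([[0.8, 0.05, 0.15],
              [0.2, 0.60, 0.20],
              [0.2, 0.30, 0.50]])
p_drip = np.array([0.1, 0.8, 0.3])   # hypothetical P(ceiling drips | weather)

prior = P[0]                          # P(w_2 | w_1 = sunny)
posterior = p_drip * prior            # Bayes' rule, unnormalized
posterior /= posterior.sum()
print('P(day 2 rainy | drip) =', round(posterior[1], 3))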

OK, so what is a Hierarchical Dirichlet Process Hidden Semi-Markov Model?

Hidden Markov models are generative models where the joint distribution of observations and hidden states, or equivalently both the prior distribution of hidden states (the transition probabilities) and conditional distribution of observations given states (the emission probabilities) are modeled. Instead of implicitly assuming a uniform prior distribution over the transition probabilities, it is also possible to create hidden Markov models with other types of prior distributions. An obvious candidate, given the categorical distribution of the transition probabilities, is the Dirichlet distribution, which is the conjugate prior distribution of the categorical distribution.

In fact, it is possible to use a Dirichlet process in place of a Dirichlet distribution. This type of model allows for an unknown and potentially infinite number of states. It is common to use a two-level Dirichlet process, similar to the previously described model with two levels of Dirichlet distributions. Such a model is called a hierarchical Dirichlet process hidden Markov model, or HDP-HMM for short; it is also called the “infinite hidden Markov model”.

The Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) is a natural Bayesian nonparametric extension of the traditional HMM. The single parameter of this distribution (termed the concentration parameter) controls the relative density or sparseness of the resulting transition matrix. By using the theory of Dirichlet processes it is possible to integrate out the infinitely many transition parameters, leaving only three hyperparameters which can be learned from data. These three hyperparameters define a hierarchical Dirichlet process capable of capturing a rich set of transition dynamics. The three hyperparameters control the time scale of the dynamics, the sparsity of the underlying state-transition matrix, and the expected number of distinct hidden states in a finite sequence.

This is really cool. If you formulate an HMM with a countably infinite number of hidden states, you would have infinitely many parameters in the state transition matrix. The key idea is that the theory of Dirichlet processes can implicitly integrate out all but the three parameters which define the prior over transition dynamics. It is also possible to use a two-level prior Dirichlet distribution, in which one Dirichlet distribution (the upper distribution) governs the parameters of another Dirichlet distribution (the lower distribution), which in turn governs the transition probabilities. The upper distribution governs the overall distribution of states, determining how likely each state is to occur; its concentration parameter determines the density or sparseness of states. Such a two-level prior distribution, where both concentration parameters are set to produce sparse distributions, might be useful for example in unsupervised part-of-speech tagging, where some parts of speech occur much more commonly than others; learning algorithms that assume a uniform prior distribution generally perform poorly on this task. The parameters of models of this sort, with non-uniform prior distributions, can be learned using Gibbs sampling or extended versions of the expectation-maximization algorithm.

So how can we use this?

A common problem in speech recognition is segmenting an audio recording of a meeting into temporal segments corresponding to individual speakers; this problem is often called speaker diarization. It is particularly challenging because you don’t know the number of people participating in the meeting, and modified HDP-HMMs have been very effective at achieving state-of-the-art speaker diarization results.

Other interesting applications of HDP-HMMs include modeling otherwise intractable linear dynamical systems that describe phenomena as diverse as human motion, financial time series, maneuvering targets, and the dance of honey bees. (See this paper for more details.) Results have shown that the HDP-HMM can identify periods of higher volatility in the daily returns of the IBOVESPA stock index (Sao Paulo Stock Exchange). Most interesting to me was the application of HDP-HMMs to a set of six dancing honey bee sequences, aiming to segment the sequences into distinct dances.

You can see some other cool motion capture examples here.


Review: Abundance

Humanity is now entering a period of radical transformation in which technology has the potential to significantly raise the basic standards of living for every man, woman and child on the planet.

The future can be a scary place

It can be easy to develop a gloomy view of the future. Malthus was the first public voice to compare population growth with the world’s diminishing resources and arrive at the conclusion that our days were numbered. Jared Diamond has argued well that we are gorging ourselves way past sustainability and flirting with our own collapse. Other books I’ve read recently, including A Short History of Nearly Everything and Sapiens, take a long view of history and masterfully explain how humans came to dominate the planet and how we are now in the midst of an unprecedented experiment with our ecosystem, the world economy and even our own biology.

Add this to the angst in my conservative evangelical community, which is beset by rapid culture change1, secularization and a nearly complete societal swap of an epistemology grounded in transcendent (i.e. God’s) design for a fluid soup of cultural opinion and emotion. But pessimism isn’t limited to my crowd; it’s practiced well on both sides of the aisle, with jeremiads about income inequality, environmental destruction and corporate power and malfeasance arriving daily from both the Clinton and Sanders camps.2

Economically, the risks are also very real. The 2008 financial crisis highlighted the systemic risk, addiction to growth and optimistic future projections that are baked into our system. Just as our epistemology now rests on emotion, it seems that our economic theory does as well. It is becoming increasingly difficult to track all of the bubbles and capital misallocations that have resulted from seven years of ZIRP, NIRP and QE. How much longer can we print money before the serial, or parallel, and long-overdue day of reckoning arrives? In 2008-9, while the equity markets went down, the bond markets compensated. What if next time there is a concurrent bond market and equity collapse? By some calculations, interest rates are at seven-hundred-year lows and a third of Europe is now at negative rates. The high-yield market is precarious; if it falls, treasuries will get bid to the stratosphere, and at some point you have to get a real return, which is a long way down from the market’s current position.

And technology seems to make it all worse. Communication, information and transportation technology pulls us all together into one collective mush controlled by the market and the state, as we all slavishly let world fashion trends define what we see in the mirror. Everything from the climate to the markets is influenced by a common mass of humanity participating in the same economic dance. What we are left with is an ersatz diversity based on skin color and political preference, instead of the truly distinct cultures that existed before the communication and global transportation revolutions of the last 100 years.

What this perspective misses is that technology has saved our bacon many times and it might just do it again. Mr. Diamandis, the chairman and chief executive of the X Prize Foundation and the founder of more than a dozen high-tech companies, boldly makes the case that the glass is not just half full; it is about to become much bigger. He lays out the argument in his latest book, Abundance.

Technology to the rescue

How awesome would it be if technology were about to solve the challenges posed by overpopulation, food, water, energy, education, health care and freedom? If we look carefully back instead of nervously forward, technology has clearly made some amazing contributions. Take one of the most talked-about societal problems, one that is driving a lot of the progressive tax-policy discussion: income inequality. Here Diamandis’s discussion of poverty is especially insightful.

If you look at the data, the number of people in the world living in absolute poverty has fallen by more than half since the 1950s. At the current rate of decline it will reach zero by around 2035. Groceries today cost 13 times less than 150 years ago in inflation-adjusted dollars. In short, the standard of living has improved: 95% of Americans now living below the poverty line have not only electricity and running water but also internet access, a refrigerator and a television—luxuries that Andrew Carnegie’s millions couldn’t have bought at any price a century ago.

You can make other comparisons such as information wealth. I’m eager to plot when the average citizen gained near information parity with the president. (I’m thinking that a basic citizen with an iPhone today has more access to information than George Bush had when he started his presidency.) And who would have dreamed that a family could consolidate their GPS, video camera, library and photo-albums in 112 grams in their pocket?

Through a mix of sunny-side-up data and technical explanation, Diamandis makes a good point that a focus on immediate events and bad news often blinds us to long-term trends and good news. A nice surprise of the book is that he doesn’t just preach the technology gospel; he delves into our cognitive biases, bringing Daniel Kahneman into the mix and explaining how our modern analytical minds aren’t wired to see the beautiful wake behind us, but rather focus on the potentially choppy waters ahead. While prudence is always advised, Diamandis makes the case that the resultant pessimism is easy to overstate and can diminish our potential.

Through many historical examples, he makes the point that massive good results when technology transforms a scarce quantity into a plentiful one. One fun example is aluminum. In the Atlantic, Sarah Lascow describes how, although aluminum is the most common metal in the Earth’s crust, it binds tightly to other elements and was consequently very scarce. It wasn’t until 1825 that anyone was able to produce even a sample of aluminum, and even that wasn’t pure. Napoleon honored guests by setting their table places with aluminum silverware, even over gold. It is a fascinating story that two different chemists3 figured out how to use cryolite—an aluminum compound—in a solution that, when shot through with electricity, would produce pure aluminum. The data show the resulting price drop: from \$12 a pound in 1880, to \$4.86 in 1888, to 78 cents in 1893 and, by the 1930s, to just 20 cents a pound. And technology leads to more exciting technology in unanticipated ways. In 1903, the Wright Brothers used aluminum to build a lightweight and strong crankcase for their aircraft, which further connected the scientific community around the world to make even more rare things plentiful.

Diamandis certainly plays his hand well and I’m inclined to side with him on many of his arguments. I’ll always side with the definite optimists before I join the scoffer’s gallery. After all, the pessimists were the cool kids in school, but it is the nerds who get things done. I’m a big believer that engineers are the ultimate creators of all wealth, and here Diamandis is preaching to the choir.

The case for abundance from technology

To summarize his argument, he makes four basic points:

First, we are both individually and collectively terrible at predicting the future, particularly when it comes to technology, which often exceeds our expectations in producing wealth. He claims technologies in computing, energy, medicine and a host of other areas are improving at such an exponential rate that they will soon enable breakthroughs we now barely think possible. Yes, we don’t have HAL, jet-packs and our moon base in 2015, but we do have rapid DNA sequencing, instant access to the world’s information and weapons that can burn up whole cities in under a second.

Second, these technologies have empowered do-it-yourself innovators to achieve startling advances — in vehicle engineering, medical care and even synthetic biology — with scant resources and little manpower, so we can stop depending on big corporations or national laboratories.

Third, technology has created a generation of techno-philanthropists (think Bill Gates or Mark Zuckerberg) who are pouring their billions into solving seemingly intractable problems like hunger and disease and not hoarding their wealth robber-baron style.

Fourth, “the rising billion.” These are the world’s poor, who are now (thanks again to technology) able to lessen their burdens in profound ways and start contributing. “For the first time ever,” Diamandis says, “the rising billion will have the remarkable power to identify, solve and implement their own abundance solutions.”

Ok, should we bet the farm on this?

Diamandis is banking on revolutionary changes from technology and, from my perspective, expectations are already sky high. (Really, P/E ratios close to 100 for companies like Amazon and Google?) In fairness, by a future of abundance he doesn’t mean luxury, but rather a future "providing all with a life of possibility". While that sounds great, for those of us in the West this might just be a reversion to the mean from the advances of the last 100 years.

However, I loved the vision he mapped out. Will there be enough food to feed a world population of 20 billion? What about 50 billion? Diamandis tells us about “vertical farms” within cities with the potential to provide vegetables, fruits and proteins to local consumers on a mass scale. Take that Malthus.

While he does a good job of lining up potential technical solutions with major potential problems, he doesn’t address what I consider the elephant in the room: are we developing morally in a way that will lead us to use technology for the broad benefit of the world? Markets are pretty uncaring instruments, and I would at least like to hear the case that the future’s bigger pie will be broadly shared. As it is, I’m pretty unconvinced.

Also, his heroes are presented as pure goodness and their stories are a bit hagiographic for my tastes. For example, Dean Kamen’s water technology is presented as an imminent leap forward while in reality it is widely considered far too expensive for widespread adoption. While he exalts the impact of small groups of driven entrepreneurs, how much can they actually do without big corporations to scale their innovations? In all his case studies the stories are very well told, but the take-away is not quite convincing against the backdrop of such a strong desire for technology to guide us into a future of global abundance. And even though he acknowledges the magnitude of our global problems, and hints in places at the complexity of overcoming them, he doesn’t address the fact that these systems can have negative exponential feedback loops as well. In my view, technology is just an amoral accelerator that requires moral wisdom.

No, but you should read this book anyway


In all, this was a great read and his perspective is interesting, insightful and inspiring. It forces us to at least consider the possibility that the half-full glass might actually overflow thanks to technology, as it certainly has in the past. Who can argue against hoping for more “radical breakthroughs for the benefit of humanity”? All considered, this book is a great resource for leaders, technologists and anyone in need of some far too scarce good news.


  1. Ravi Zacharias writes that “The pace of cultural change over the last few decades has been unprecedented in human history, but the speed of those changes has offered us less time to reflect on their benefits.” 
  2. Consider that about 30 percent of the world’s fish populations have either collapsed or are on their way to collapse. Or, global carbon emissions rose by a record 5.9 percent in 2010, a worrisome development considering that the period was characterized by slow economic growth. 
  3. Charles Martin Hall was 22 when he figured out how to create pure globs of aluminum. Paul Héroult was 23 when he figured out how to do the same thing, using the same strategy, that same year. Hall lived in Oberlin, Ohio; Héroult lived in France. 

Some tax-time automation

I often struggle to find the right balance between automation and manual work. As it is tax time, and Chase bank only gives you 90 days of statements, I find myself every year going back through our statements to find any business expenses and do our overall financial review for the year. In the past I’ve played around with MS Money, Quicken, Mint and kept my own spreadsheets. Now, I just download the statements at the end of the year, use Acrobat to combine them, and use Ruby to massage the combined PDF into a spreadsheet.1

To do my analysis I need everything in CSV format. After getting one PDF, I end up looking at the structure of the document, which looks like:

Earn points [truncated] and 1% back per $1 spent on all other Visa Card purchases.

Date of Transaction Merchant Name or Transaction Description $ Amount
PAYMENTS AND OTHER CREDITS
01/23 -865.63
AUTOMATIC PAYMENT - THANK YOU

PURCHASES
12/27  AMAZON MKTPLACE PMTS AMZN.COM/BILL WA  15.98
12/29  NEW JERSEY E-ZPASS 888-288-6865 NJ  25.00
12/30  AMAZON MKTPLACE PMTS AMZN.COM/BILL WA  54.01

0000001 FIS33339 C 2 000 Y 9 26 15/01/26 Page 1 of 2

I realize that I want all lines that have a number like MM/DD followed by some spaces and a bunch of text, followed by a decimal number and some spaces. In regular expression syntax, that looks like:

/^(\d{2}\/\d{2})\s+(.*)\s+(\d+\.\d+)\s+$/

which is literally just a way of describing to the computer where my data are.

Using Ruby, I can easily extract my expenses as CSV:
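The script itself is embedded in the original post; the version below is a minimal sketch of the same idea, assuming the combined PDF has already been dumped to plain text (for example with pdftotext) as statements.txt, and with the trailing-whitespace requirement relaxed slightly. The file names, the output header, and that tweak are my assumptions, not the original script.

require 'csv'

# statements.txt is assumed to be a plain-text dump of the combined PDF
# (e.g. produced with pdftotext). Matches lines like:
#   "12/27  AMAZON MKTPLACE PMTS AMZN.COM/BILL WA  15.98"
# i.e. a MM/DD date, a description, and a decimal amount.
LINE = /^(\d{2}\/\d{2})\s+(.*?)\s+(\d+\.\d+)\s*$/

CSV.open('expenses.csv', 'w') do |csv|
  csv << %w[date description amount]
  File.foreach('statements.txt') do |line|
    next unless (m = line.match(LINE))
    csv << [m[1], m[2].strip, m[3]]
  end
end

Lines that don’t fit the pattern, such as the statement’s page-footer noise or the negative automatic-payment line, simply fail to match and are skipped.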

Boom. Hope this helps some of you who might otherwise be doing a lot of typing. Also, if you want to combine PDFs on the command line, you can use PDFtk thus:

pdftk file1.pdf file2.pdf cat output -

  1. The manual download takes about 10 minutes. When I get some time, I’m up for the challenge of automating this eventually with my own screen scraper and web automation using some awesome combination of Ruby and Capybara. I also use PDFtk to combine PDF files. 

Review: History of the World in Six Glasses by Tom Standage


I love history, but raw history can be a bit boring, so I look for books that peer into the past with a different lens or narrative. Here, Tom Standage tells a popular history of the world through six beverages: beer, wine, spirits, coffee, tea and Coca-Cola. Full of the anecdotes and stories that liven up an otherwise dry subject, the book gave me a fresh perspective on the otherwise unrecognized history behind my drinks. The fact that water is so essential to our survival provides the necessary justification to put our drinks at the center of history. By introducing each beverage chronologically, he allows each one to tell the story of a period through local stories, global processes, and connections.

One of the first conclusions was that our beverages are much more than a means to satisfy our thirst or sweet tooth. The six glasses surveyed served as medicines, currency, social equalizers, revolutionary substances, status indicators, and nutritional supplements.

While a good book and an engaging read, I wouldn’t say my worldview was challenged or much expanded by this book. Books like this would make a fascinating magazine article (like one of those crazy long articles in the Atlantic), and I feel the story of each glass was stretched to fill a book. To save you the time, I tried to hit the highlights below and allow you to read something much more interesting like Sapiens or Abundance (review forthcoming).

Beer

“In both cultures [Egypt and Mesopotamia], beer was a staple foodstuff without which no meal was complete. It was consumed by everyone, rich and poor, men and women, adults and children, from the top of the social pyramid to the bottom. It was truly the defining drink of these first great civilizations.” (Page 30)

Standage begins by discussing the history of beer while presenting the story of the domestication of cereal grains, the development of farming, early migrations, and the development of river valley societies in Egypt and Mesopotamia. He talks of beer as a discovery rather than an invention, and how it was first used alternately as a social drink with a shared vessel, as a form of edible money, and as a religious offering. As urban water supplies became contaminated, beer also became a safer drink. Beer became equated with civilization and was the beverage of choice from cradle to the grave. By discussing global processes such as the increase of agriculture, urban settlement, regional trade patterns, the evolution of writing, and health and nutrition, Standage provides the needed global historical context for the social evolution of beer.

Wine

Thucydides: “the peoples of the Mediterranean began to emerge from barbarism when they learned to cultivate the olive and the vine.” (52-53)

Standage introduces wine through a discussion of early Greek and Roman society. Wine is initially associated with social class as it was exotic and scarce, being expensive to transport without breakage. The masses drank beer. Wine conveyed power, prestige, and privilege. Wine then came to embody Greek culture and became more widely available. It was used not only in the Symposium, the Greek drinking parties, but also medicinally to clean wounds and as a safer drink than water. Roman farmers combined Greek influence with their own farming background through viticulture, growing grapes instead of grain which they imported from colonies in North Africa. It became a symbol of social differentiation and a form of conspicuous consumption where the brand of the wine mattered. With the fall of the Roman Empire, wine continued to be associated with Christianity and the Mediterranean. Global processes highlighted here include the importance of geography, climate and locale, long distance trade, the rise and fall of empires, the movement of nomadic peoples, and the spread of religion.

Spirits

“Rum was the liquid embodiment of both the triumph and the oppression of the first era of globalization.” (Page 111)

First, I needed this book to force me to consider the difference between beer, wine and spirits. Here is how I keep it straight:

As far as I can tell, there are three big divisions in the world of adult beverages: beers, wines, and spirits. These typically contain between 3% and 40% alcohol by volume (ABV).

Beer (alcohol content: 4%-6% ABV generally) and wine (alcohol content: 9%-16% ABV) are alcoholic beverages produced by fermentation.

Beer is generally composed of malted barley and/or wheat and wine is made using fermented grapes. Simple enough. Also remember that Ales are not Lagers. Ale yeasts ferment at warmer temperatures than do lager yeasts. Ales are sometimes referred to as top fermented beers, as ale yeasts tend to locate at the top of the fermenter during fermentation, while lagers are referred to as bottom-fermenting by the same logic.

Beer and wine have low alcohol content. (And I only drink these.) So, while they are alcoholic drinks, they aren’t included in the general definition of ‘liquor’, which is just a term for drinks with ABVs higher than about 16 percent.

To be clear, fermentation is a metabolic process that converts sugar to acids, gases or alcohol. It occurs in yeast and bacteria, but also in oxygen-starved muscle cells, as in the case of lactic acid fermentation. Fermentation is also used more broadly to refer to the bulk growth of microorganisms on a growth medium, often with the goal of producing a specific chemical product.

By the way, it was news to me that Champagne is just a specific variant of wine. More specifically, Champagne is a sparkling (carbonated) wine produced from grapes grown in the Champagne region of France following rules that demand secondary fermentation of the wine in the bottle to create carbonation.

Now, back to this section of the book. Whisky, Rum, Brandy, Vodka, Tequila are all what we call ‘Spirits’ or ‘Liquor’ and they can really crank up the ABV.

Spirits (aka liquor or distilled beverages) are prepared using distillation. Distillation is just further processing of a fermented beverage to purify it and remove diluting components like water. This increases the proportion of alcohol content, which is why they are also commonly known as ‘hard liquor’. Distilled beverages like whisky may have up to 40% ABV. (wow)

This was strange for me. I’ve always considered wine production to be the highest art in beverage production, but you can think of distilled spirits as a more “refined” counterpart of the more “crude” fermented beverages.

Standage focuses less on the basic content above and gives us the history that got us here. He explains that the process of distillation originated with the Arabs in Cordoba, to allow the miracle medicine of distilled wine to travel better. He talks of how this idea was spread via the new printing press, leading to the development of whiskey and, later, brandy. Much detail is provided on the spirits, slaves, and sugar connection, where rum was used as a currency for slave payment. Sailors drank grog (watered-down rum), which helped to alleviate scurvy.

He argues that rum was the first globalized drink of oppression. Its popularity in the colonies, where there were few other alcoholic beverage choices, led to distilling in New England. This, he argues, began the trade wars which resulted in the Molasses Act, the Sugar Act, the boycotts of imports, and a refusal to pay taxes without representation. Indeed, he wonders whether it was rum rather than tea that started the American Revolution. He also discusses the impact of the Whiskey Rebellion. The French fur traders’ use of brandy, the British use of rum, and the Spanish use of pulque all point to how spirits were used to conquer territory in the Americas. Spirits became associated not only with slavery, but also with the exploitation and subjugation of natives on five continents as colonization and mercantilist economic theory were pursued.

For completeness, I wanted to summarize the difference between the different spirits out there.

Vodka is the simplest of spirits and consists almost entirely of water and ethanol. It’s distilled many times to a very high proof, removing almost all impurities, and then watered down to desired strength. Since just about all impurities are removed, I was surprised to find out that it can be made from just about anything. Potatoes, grain, or a mixture are most common. Flavored vodkas are made by adding flavors and sugars after the fact when the liquor is bottled.

Whiskey (which includes Scotches, rye, and bourbons) is specifically made from grain and is aged in wood casks. The grain is mixed with water and fermented to make beer and then distilled. (Yes, whiskey is first beer; a surprise to me.) The liquor comes out of the still white and is very much like vodka. The color is imparted by aging in wood casks. Different types of whiskey are distinguished by the grain they are made of, how they are aged, and specific regional processes. Scotches are from Scotland, are made mostly with barley, and are smoky from the way the barley is kiln-dried. Bourbons are made from at least half corn and are aged in charred barrels which impart caramel and vanilla flavors. Rye is made from rye, and there are plenty more variations.

Gin, like the others made with grain, starts its life as beer, which is then distilled to a high proof like vodka. Aromatic herbs, including juniper berries and often gentian, angelica root, and a host of secret flavorings depending on the brand, are added to the pure spirit. The liquor is then distilled again. The second distillation leaves behind heavy, bitter molecules which don’t vaporize readily, capturing only the lighter aromatics.

Rum is made by fermenting and distilling cane sugar. Traditionally made from less refined sugar, it retains aromas of the sugar cane. Originally it was an inadvertent byproduct of making sugar, as runoff from the refinery quickly fermented. Like whiskey, some rums are aged, giving them an amber color. And, like other spirits, there are regional variations with slightly different processes.

Brandy is a spirit distilled from fruits, most commonly grapes.

Agave liquors, including tequila, mezcal, and sotol, are made from fermented sugars from the agave, a relative of aloes.

Coffee (my favorite beverage)

Europe’s coffeehouses functioned as information exchanges for scientists, businessmen, writers and politicians. Like modern web sites. (Page 152)

Standage presents the history of coffee from its origins in the Arab world to Europe, addressing the initial controversy that the beverage generated in both locations. As a new and safe alternative to alcoholic drinks and water, some argued that it promoted rational enquiry and had medicinal qualities. Women felt threatened by it, however, arguing that due to its supposed deleterious effect on male potency, “The whole race is in danger of extinction.” Coffeehouses were places where men gathered to exchange news where social differences were left at the door. Some establishments specialized in particular topics such as the exchange of scientific and commercial ideas. Governments tried to suppress these institutions, since coffeehouses promoted freedom of speech and an open atmosphere for discussion amongst different classes of people–something many governments found threatening.

I had a weak appreciation for coffee’s economic impact. Whole empires were built on coffee, and coffeehouses formed the first stock exchanges. The Arabs had a monopoly on beans, while the Dutch were middlemen in the trade and then set up coffee plantations in Java and Suriname. The French began plantations in the West Indies and Haiti.

Tea

The story of tea is the story of imperialism, industrialization and world domination one cup at a time. (Page 177)

The author discusses the historic importance of tea in China as initially a medicinal good and then as a trade item along the Silk Routes with the spread of Buddhism. It became a national drink during the Tang dynasty, reflecting the prosperity of the time. Easy to prepare, its medicinal qualities were known to kill bacteria that cause cholera, typhoid, and dysentery. Though it fell from favor during Mongol rule, it had already spread to Japan, where the tea ceremony evolved as a sign of status and culture. Tea was introduced into Europe before coffee but was more expensive, and so initially denoted luxury and was used mainly as a medicinal drink. By the 18th century, Britain was won over by tea thanks in part to the role played by the British East India Trading company. Power plays in India and China as opium was traded for tea increased the economic might of the British empire abroad. Marriages, tea shops for women, tea parties, afternoon tea, and tea gardens all evolved as part of high culture. And yet, tea also showed up amongst the working class and played a role in factory production through the introduction of tea breaks. Tea also played a role in reducing waterborne diseases since the water had to be boiled first. This directly increased infant survival rates, and thus increased the available labor pool for the industrial revolution. The marketing of tea and tea paraphernalia provided additional evidence of the emergence of consumerism in England. Tea drinking in nations of the former British empire continues to this day. Tea helps to explain the global processes of trade through the Silk Routes and via later technologies such as railroads and steamships. Standage also highlights the role of tea in disease prevention, the Industrial revolution, the Rise of the West, and imperialism.

Coke

To my mind, I am in this damn mess as much to help keep the custom of drinking Cokes as I am to help preserve the million other benefits our country blesses its citizens with . . . (Page 253)

As with the other drinks Standage discusses, I was surprised to learn that Coca-Cola was initially a medicinal beverage. Soda water could be found in the soda fountains in apothecaries as early as 1820. John Pemberton, in Atlanta, Georgia, in 1886, developed a medicinal concoction using French wine, coca (from the Incas), and kola extract. However, he needed a non-alcoholic version because of the temperance movement, and thus Coca-Cola was born. Thanks to advertising and marketing using testimonials, a distinctive logo, and free samples, the syrup became profitable when added to existing soda fountains. By 1895 it was a national drink. Legal controversy forced it to let go of medicinal claims and left it as “delicious and refreshing.” Further challenges to the drink included the end of Prohibition, the Great Depression, and the rise of Pepsi.

With World War II, America ended isolationism and sent out 16 million servicemen with Coke in their hands. Coke sought to increase soldier morale by supplying a familiar drink to them abroad. To cut down on shipping costs, only the syrup was shipped, and bottling plants were set up wherever American servicemen went. Quickly, Coke became synonymous with patriotism. After the war came accusations of “Coca-colonization” from French communists in the midst of the Cold War. The company responded by arguing that “coca cola was the essence of capitalism”, representing a symbol of freedom, since Pepsi had managed to get behind the “iron curtain.” Ideological divides continued as Coca-Cola was marketed in Israel while the Arab world became dominated by Pepsi. Coca-Cola represents the historical trend of the past century towards increased globalization, and its history raises reader awareness of global processes of industrialization, mass transportation, mass consumerism, global capitalism, conflict, the Cold War, and ideological battles.

Water?

Standage concludes the book by posing the question of whether water will be the next drink whose story will need to be told. He cites not only the bottled water habit of the developed world, but the great divide in the world being over access to safe water. He also notes water’s role as the root of many global conflicts.


Review: How we Got to Now


Steven Johnson loves making broad, interdisciplinary connections. He describes the complex evolution of technology and the interactions of events leading to our modern world, with an emphasis on understanding the true nature and role of innovation. In 289 pages he surveys history through the lens of the causal factors behind science and technology, in an engaging narrative. He takes you through such diverse places as the secret chambers within the pyramids of Giza and the foul trenches of the sewers of old Chicago. The journey covers six thematic areas: glass, cold, sound, cleanliness, time and light.

Each of these six facets of modern life is described through its causes and practical impact (refrigeration, clocks, and eyeglass lenses, to name a few). Johnson highlights the role that hobbyists, amateurs, and entrepreneurs play over centuries on the non-linear path to discovery. Of course, he loves to highlight surprising stories of accidental genius and brilliant mistakes: from the French publisher who invented the phonograph before Edison but forgot to include playback, to the Hollywood movie star who helped invent the technology behind Wi-Fi and Bluetooth.

The book is strongest when Johnson examines unexpected connections between seemingly unrelated fields: how the invention of air-conditioning enabled the largest migration of human beings in the history of the species, to cities such as Dubai or Phoenix; how pendulum clocks helped trigger the industrial revolution; and how clean water and air filtration made it possible to manufacture computer chips.

I enjoyed his weird and amusing examples more than his causal analysis, which is notoriously hard to make conclusive. And Johnson is of course too reductive in a book for such a lay audience, but he always leaves the door open a crack for reasonable disagreements and his arguments are intriguing. I especially enjoyed the strange interconnected tales of how the things that we take for granted were developed. Johnson calls these interconnections “hummingbird effects”, which he describes as: “An innovation, or cluster of innovations, in one field end up triggering changes that seem to belong to a different domain altogether.”

The first innovation Johnson covers is glass. He traces it from its initial discovery in the Libyan Desert to the final perfection of its manufacture in applications as broad as microscopes and mirrors. Through all of this he discusses the interconnections at each step of incremental innovation, always underscoring that the right pieces must be in place before anyone can put a new technology together. For the transformation of silicon dioxide into glass to take off, furnace building and the segregation of the Venetian glassblowers to the island of Murano had to occur concurrently. Once he pulls a thread, the connections start flying.

Johnson, for example, presents the printing press, which made books readily available, which in turn resulted in many people realizing that they were farsighted and could therefore not read them. This created a market for spectacles, and the spectacle makers who experimented with lenses went on to invent the microscope and telescope, which in turn altered our concept of the microscopic world and the cosmos. Glass also led to better mirrors, which altered one’s view of self, bringing the story full circle; this had a good bit to do with the introspection that characterized the Renaissance and initiated the artistic style of self-portraiture.

Threads like this make for interesting storytelling, but I cringe at the implicit assignment of causation his arguments imply. However, it is exciting and insightful to see a smart, polymathic researcher present an opinion. The stories alone are worth reading the book. Each thread is plausible, even if none of them are conclusive.

For example, with “Cold”, Johnson highlights Frederic Tudor’s troubled but ultimately successful obsession with bringing ice from the frozen lakes and ponds of New England to the tropics. Ice eventually led to refrigeration and to changes in living patterns in the US, and now in much of the rest of the world, because tropical climates were made more habitable. Cold is also the story of frozen food and how this has changed eating habits and daily routine. While I’m less confident in the causality claims made, I found incredible value in the contrasting case studies of Frederic Tudor and Willis Carrier. Each was successful, but with different ingredients and in different ways.

In a sense, “How We Got to Now” is a stellar history book regarding the technical development of six different areas, and a mediocre explanation of how we actually got to now. For that, I recommend you give it a priority that lands it somewhere in the middle of your stack.

 
