
Evolution, Faith and Modernity

Tonight, I just finished Francis Collins’ book “The Language of God” where he lays out the basic facts of genetics and the human genome, denounces Creationism and rejects Intelligent Design theory, rebukes Richard Dawkins, and generally sets a tone for reasonableness between Christians and scientists.

These are disjoint topics, and while he covers a lot of territory, he doesn’t provide sufficient depth in any one area to change minds on either side of the debate. His goal is clearly to get Christians to think and to synchronize their beliefs with modern science.

There is much that is compelling in this book. As a Christian, I want to be honest and consistent, not just with others, but with myself. I don’t want to hold on to beliefs that are not in agreement with my principles and don’t derive from what I consider to be authority.

So what determines what I believe? Logic and trust, experience and faith. At a basic level, my beliefs are the result of the information I’ve received and how I’ve processed it. While this sounds decidedly materialist, as a Christian I count the Holy Spirit as an important input. Looking back, I would have to put these in the following order:

  • External Conversations (especially honest conversations with friends) – this is why you should surround yourself with the very best people, and listen to them
  • Internal Conversations (reflections: time with books, praying, writing posts like this)

In these, I certainly consider arguments of reason to be critical, but I’m sufficiently aware that I have neither the time nor ability to form all my beliefs from my logic alone. Some would say this is a lack of moral courage, “think for yourself, Tim”, but I hold to a classical view of faith, extending lots of trust to the organizations I join to teach me the right things. This doesn’t mean I turn my mind off in church, but I approach things there with trust. Even in technical lectures, I’m generally there trusting the professor, not scoffing at her equations. I’m there trying to figure out what they are saying, under the trust that the school has vetted the professor and the scientific community has vetted the textbook. Perhaps this is best summarized with a “trust, but verify” mindset.

Here we get to the heart of Dr Collins’ book. We can’t derive everything from first principles. For me, I would say only a small fraction of my beliefs are from first principles; other things just ‘seem’ to work and I trust experience. Other things I just trust other folks on. Take a statement like “computers read and process information”. I believe this. I use computers all the time. I’ve even built logic out of Boolean circuit components and worked through the physical chemistry of n- and p-type transistor junctions, but at some level I just trust that x86 processors work, even if at some point long ago, I thought through how an ALU works.

We conservative Christians have a problem. We love the consistency, products and output of science, but the science of origins has taken on a theology all its own. In particular, there is now a vocal group of public intellectuals claiming they are creatures of reason and that faith and trust have no place, deriving all beliefs and forming moral judgments from the scientific method and falsifiable data. Their most popular argument is an appeal to fairness: why are your beliefs superior to those of ancient sun-worshipers or crazy people when you have no data to bring to the table? To oppose them counters currently accepted notions of equality. (The argument goes: “Who opposes equality but bigots and elitists? And if you don’t oppose equality, then how can you say your faith is more valid than someone else’s? Only data are objective. Faith is not.”)

Christians want to trust the scientific community and love the Christian scientific heritage, but our faith is precious to us: we have experienced God and His forgiveness, and we place trust in His specific (i.e. Bible) and general revelation (i.e. experience of the natural world). From our own inability to control our own moral state and actions, we know we need accountability, and we find great comfort in Biblical and ecclesiastical answers to the big questions. I also find comfort in not needing to arbitrate all the answers myself. Both the history of the Church and the Christian community I have are there to teach me and help me navigate life.

I value all these things, but what do I believe and why? Several weeks ago, it was helpful for me to fill out an Excel spreadsheet with my beliefs. I put statements like “We live in a causal world” next to “God created the world” and categorized them by my level of certainty. I’m sure there is a better list, but I put a checkbox to see if each of the following categories supported a specific belief:

  • Basic Reason
  • Testimony of Natural World
  • Personal Experience
  • Bible
  • Historical Evidence
  • Trusted Friends
  • Scientific Community
  • The Church

I’m sure this is a poor list, but I wanted to get started. So for something like “I exist” or “my wife is an amazing woman”, I would check personal experience and basic reason: I both know these to be true intuitively and I can give you lots of evidence why. For “the soul is immortal” I check off the Church, the Bible, and trusted friends. Wow, much to argue about here, but this was just an experiment to get me thinking.

Now, I’m not a philosopher, but I’m interested in Dr Collins’ central question: how can modern Christians accept authority from the Bible, the Church, and the scientific community?

To make this work, Collins argues that faith (specifically Christian faith) is reasonable for a modern, smart scientist; that the current consensus of the scientific community regarding origins is a “hands off” process of natural selection; and that the Christian way to synchronize scripture with this is to accept that (1) God started things but didn’t guide them, (2) certain parts of scripture are “clearly” poetic and not intended to be taken literally, and (3) faith should occupy the smallest part possible in your understanding of the natural world, while at least allowing for the possibility of miracles.

In short: trust the “scientific” part of your mind as the primary arbiter of your beliefs, but allow for faith as well, at least where it is reasonable. Then place these two systems of belief in separate spheres where they can each answer their respective questions.

At first glance, this seems excellent. Can I really confine science and religion to operate in largely separate spheres, the natural and the supernatural, so that most instances of supposed conflict are actually misunderstandings or misapplications of one or the other? To Collins, the error is when ‘faith trumps science’ or when ‘science trumps faith’. His ideal is an egalitarian view: two healthy determinants of belief, both equal and valid.

Can I take control of scripture and start discounting the parts that don’t seem to make sense to me as poetry? Can I trust the scientists to tell me what to believe on origins like I trust the doctor to tell me what medicine to take? Would separating my faith in God and science be a peaceful coexistence, or would it be more like one hand on the oven and another in the freezer?

While I found his dialogue pretty convincing, his broad-brush approach left a lot of issues unresolved. Accepting this book requires accepting the following conclusions, which I still can’t accept:

  • Adam and Eve were not the first people. “Genetic evidence shows that humans descended from a group of several thousand individuals who lived about 150,000 years ago.” He presents options such as accepting they were two individuals chosen from many to represent humanity, or that the names Adam and Eve were a symbol for humanity. My biggest issue with this is that Paul believed in a literal Adam and Eve (cf Romans 5 and 1 Corinthians 15), so to accept this is to say that Paul might have been right on spiritual matters but didn’t understand origins, or was a “product of his time”. This is a radical departure from traditional hermeneutics.
  • Death pre-existed the fall. He claims the death that is discussed in the Bible resulting from Adam’s sin is a spiritual death. This is contrary to what I’ve been taught, but I’m willing to consider it.
  • God was only involved in the smallest, initial component of creation. He implies the Creator must have been ‘clumsy’ to have to keep intervening throughout geologic time to make his creatures turn out right. Collins finds it more elegant to confine God to setting things up and then taking a hands off approach. However, this contradicts even a poetic reading of Genesis, and is much closer to a blind watchmaker than I’m comfortable with.

This is all so disappointing, because I wanted this book to define my views on this issue, but I can’t get there. Francis went through the CS Lewis program several years before I did and we have several friends in common. He is clearly in the Christian camp, but while he wants the benefits of dogma, he tries hard to avoid dogmatism at every turn. Most disappointing, he writes in the end that he shares his faith “without the desire to convert or proselytize you” because he sees value in all faiths. What is more hollow (and logically inconsistent) than someone who doesn’t sufficiently believe his faith should apply to others? Throughout the book, he is always hedging and tries very hard to steer clear of making any claims of Christian moral superiority. God is reduced (without Collins meaning to do so) to little more than the author of natural laws. And the end result of his logic is to make the Universe appear, to the objective observer, to be unsupervised.

Despite his stature and appeal to the authority of the scientific community, he never really gets me to molecules-to-man evolution, for which Collins provides no new arguments that I could find. While I admire his defense of the Kantian tradition, where the empirical and the spiritual happily coexist, this book doesn’t clear up my confusion. He merely confirmed what I already knew: a lot of smart people, historical Christians, and the vast majority of academics and scientists believe that evolution was the process by which man and woman were formed. While he is in favor of a semi-literal interpretation of most of the Bible, he makes only halfhearted attempts to convince the reader of his position and, astoundingly, never explains exactly what he thinks Scripture is and how he extracts truth from it.

One key takeaway for me was the importance of working this out. As a Christian and a modern man, I need to have a thought-out position on this that is logically consistent and reflects my principles and key tenets. So, if I’m not with him, am I ready to join the Institute for Creation Research and head off to the Creation Museum to sort this all out for me?

As much as I found his position unsatisfying, I’m even more uncomfortable with the young earth creationists. They insert certainty where it doesn’t belong. They can’t explain the age of starlight, the consistent results of carbon/radioactive dating, ice layers, or even tree rings that contradict their age of the earth. Moreover, they stand in opposition to the scientific community. Period. Science is a community obsessed with truth, and its members are incentivized by data-driven arguments, especially those that are novel and iconoclastic. While it is unfortunate that many scientists rule out the possibility of a God-created worldview, they would at least have to admit it if the evidence supported a young earth; alas, it does not. You can find scientific-looking articles, but the ones I’ve seen neither use real data nor are written by folks I would call real scientists. While I deplore the appeals to authority that much current scientific debate follows, the “creation research” I can find does not withstand basic scrutiny, other than its ability to make the true point that no one knows what happened at the beginning of time. Starting with (and staying with) the bias that any conclusions reached must not interfere with a current set of interpretations of Genesis might be a valid framework of belief, but we should not call that process scientific discovery.

I’m also inclined to believe that Genesis is not meant to teach scientific information. I read passages such as Psa 139:13, “you knit me together in my mother’s womb”, as containing moral truth (e.g. God personally created us, not random forces), but I do not conclude from that passage that the creation of life involves the mechanics of knitting. I believe in the fundamental truth of the Bible, but I don’t think, for example, that we should read that the sun stopped in the middle of the sky and delayed going down about a full day and start revising astronomy. Yes, that was a miracle, but in the end, I have to synthesize the specific and general revelations and believe that the world we can explain is constant, consistent and causal. Creation was itself a miracle, after all. Clinging to my interpretation of scripture when the evidence is contrary is to ignore the testimony of general revelation. The only way to hold this position is to accept that God deliberately created “clues” in the data of the world that are inconsistent with reality. Yes, the possibility exists that the world could look old and actually be young, but is this consistent with general revelation and with how God works? If we are pushed to a place where our best argument is that the natural world could be manipulated to be different than reality, we have traded the regularity of the natural world for something completely chaotic, and we need to remember what Sherlock Holmes said on this:

‘It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.’

In the end, I have to go with what I know: God is good, the world is real, and I’m not God. The synthesis of that, for me, is what Tim Keller calls “the messy approach”: admitting that I just don’t know what happened at the beginning. My faith tells me Adam and Eve were real and willingly sinned. The testimony of the natural world tells me the world is old. Can I rename my views on this the “humble, faithful and honest approach”? I’m open to new data, but I can’t find any comfort in another view. Since I’m basically with Tim Keller on this, I’ll give you his quote:

The fact is, the one that most people consider the most conservative, which is the young-Earth, six-day creation, has all kinds of problems with the text, as we know. If it’s really true, then you have problems of contradictions between Genesis 1 and 2. … I don’t like the theory that these are two somewhat contradictory creation stories that some editor stuck together…I think therefore you’ve got a problem with how long are the days before the sun shows up in the fourth day. You have problems really reading the Bible in a straightforward way with a young-Earth, six 24-hour day theory. You’ve got some problems with the theistic evolution, because then you have to ask yourself, “Was there no Adam and Eve? Was there no Fall?” So here’s what I like-the messy approach, which is I think there was an Adam and Eve. I think there was a real Fall. I think that happened. I also think that there also was a very long process probably, you know, that the earth probably is very old, and there was some kind of process of natural selection that God guided and used, and maybe intervened in. And that’s just the messy part. I’m not a scientist. I’m not going to go beyond that.

If you’ve made it this far, I leave you with a quote from C.S. Lewis who was so foundational to Collins’ faith. In the meantime, I’ve got work to do and a God to serve . . .

“If by saying that man rose from brutality you mean simply that man is physically descended from animals, I have no objection. But it does not follow that the further back you go the more brutal—in the sense of wicked or wretched—you will find man to be.”

“For long centuries God perfected the animal form which was to become the vehicle of humanity and the image of Himself. He gave it hands whose thumb could be applied to each of the fingers, and jaws and teeth and throat capable of articulation, and a brain sufficiently complex to execute all the material motions whereby rational thought is incarnated. The creature may have existed for ages in this state before it became man: it may even have been clever enough to make things which a modern archaeologist would accept as proof of its humanity. But it was only an animal because all its physical and psychical processes were directed to purely material and natural ends. Then, in the fullness of time, God caused to descend upon this organism, both on its psychology and physiology, a new kind of consciousness which could say ‘I’ and ‘me,’ which could look upon itself as an object, which knew God, which could make judgments of truth, beauty, and goodness, and which was so far above time that it could perceive time flowing past.”

“I do not doubt that if the Paradisal man could now appear among us, we should regard him as an utter savage, a creature to be exploited or, at best, patronised. Only one or two, and those the holiest among us, would glance a second time at the naked, shaggy-bearded, slow spoken creature: but they, after a few minutes, would fall at his feet.” — C.S. Lewis, “The Problem of Pain”

Further Reading

  1. An article about understanding the Pope’s views on this issue
  2. An article on death before the fall (from Collins’ organization, Biologos)
  3. An atheist critique of Francis Collins (wow, Sam Harris is harsh!)
  4. A creationist critique of The Language of God
  5. Another creationist’s critique of The Language of God

Amana 7200TW Door Latch Replacement

Quick post, just to help others who might find themselves in a similar situation: a front-load washer with a door latch that doesn’t close.

Big trouble . . . the door latch broke on our Amana front loader. The first thing I do when I need to fix something is look for ownership so I can decide which part to get. A quick check of Appliance411 showed me that Amana is owned by Maytag, which has been part of Whirlpool since 2006. This problem bothered my wife for months as she put heavy stuff in front of the washer. Between extensive travel and work, I only found out tonight that I needed to get this fixed, stat.

First thing to do . . . get an exact part number, so I took a picture of the model number from the back.

[Photo: model number plate on the back of the washer]

Yes, I could read this and type it into search engines, or I could just push it through an online OCR tool like http://www.onlineocr.net/ and get:

CLOTHES WASHER AMANA APPLIANCES: BENTON HARBOR, MI, USA ASSEA MODEL NO. NFW7200TW 120AV. -60Hz 5.5 SERIAL NO. 11590928PA 2009.01

Several months ago, we ordered this, but appliancepartspros sent us a slightly different part and I had to modify it with an angle grinder to remove the bevel.

[Photo: Whirlpool door lever, part 34001260 / AP4044741]

In the end this didn’t work at all, which was incredibly frustrating. So tonight, I did a new web search and found this video on YouTube, which convinced me to replace the receiver, which looks like this:

[Screenshot: the door latch receiver]

I found this part cheaper on amazon here and found a nice summary of all parts here.

Importantly, I found the rosetta stone below that let me know that the best part number to search on was 8182634.

Part Number 8182634 (AP3837611) replaces 34001265, 8181700, 1094190, AH972231, DC64-00519B, EA972231, PS972231.

They also had another good video here and I confirmed using Appliance Compatibility Tool that “Yes- This high quality, original factory replacement part is compatible with NFW7200TW10.”

I didn’t want to be delayed by missing the correct strike, so I found the Whirlpool 8181651 Door Strike here (it seemed the cheapest) and quickly bought it for ~$5.

The whole fix took about 30 minutes. The most difficult part was keeping track of the screws. Some of the front screws were really tight and I had to use a drill. The top took a lot of tugging and pulling to get off.


Shellshock: Bashing Bash for Fun and Profit

The latest fundamental computer security vulnerability, termed Shellshock, was discovered by vulnerability researcher Stephane Chazelas (a Linux shell expert living in the UK). It allows arbitrary code execution on Linux or Mac computers through a crafted environment variable. If you haven’t already, you need to patch your system(s), and you might be hearing a lot more about this in the near future.

Since many programs run the Bash shell in the background, a number of devices (the “internet of things”) may be vulnerable. I’m used to really complicated exploits, but this is really a one-liner:
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
Because of this simplicity, there is massive speculation about the imminent arrival of a worm that can traverse multiple vulnerable systems.

Why does this matter?

Bash is everywhere — particularly in things that don’t look like computers (like pacemakers or cars). It is a Unix-like shell, present on nearly every Linux system. Since its creation in 1989, bash has evolved from a simple terminal-based command interpreter to many other fancy uses, particularly as Linux became present in our phones, cars and refrigerators.

An exploit that operates at this level will be lurking in all various and sundry sorts of software, both local and remote. Embedded devices often have web-enabled front-ends that shuttle user input back and forth via bash shells; for example, routers, SCADA/ICS devices, medical equipment, and all sorts of web-connected gadgets are likely to be exposed. Additionally, Linux distributions and Mac OS are both vulnerable, even though the major attack vectors identified up to this point are HTTP requests and CGI scripts.

This all happens because bash does not stop after processing a function definition: it continues to parse and execute shell commands that follow it. Especially problematic is that environment variables with arbitrary names can be used as carriers for a malicious function definition containing trailing commands. In particular, this enables network-based exploitation and therefore propagation on a large scale. To get a feel for how easily this can propagate, see the example below, where a simple wget (just a request for a web page) executes the exploit in one line:

wget -U "() { test;};/usr/bin/touch /tmp/VULNERABLE" myserver/cgi-bin/test

How does it work?

The NIST vulnerability database gives the flaw 10 out of 10 in terms of severity and provides the short, but dense, description:

GNU Bash through 4.3 processes trailing strings after function definitions in the values of environment variables, which allows remote attackers to execute arbitrary code via a crafted environment, as demonstrated by vectors involving the ForceCommand feature in OpenSSH sshd, the mod_cgi and mod_cgid modules in the Apache HTTP Server, scripts executed by unspecified DHCP clients, and other situations in which setting the environment occurs across a privilege boundary from Bash execution.
Authentication: Not required to exploit
Impact Type: Allows unauthorized disclosure of information; Allows unauthorized modification; Allows disruption of service

Let’s unpack this, because it didn’t make sense to me on a first read.

The key insight is that Bash supports exporting not just shell variables, but also shell functions, to other bash instances. Bash uses an environment variable named by the function name, with a value beginning with “() {”, to propagate function definitions. What should happen is that bash stops after processing the function definition; instead, it continues to parse and execute shell commands after the definition. To make this concrete, assume that you set an environment variable:

VAR=() { ignored; }; /bin/exploit_now

This will execute /bin/exploit_now when the environment is imported into the bash process. The fact that an environment variable with an arbitrary name can be used as a carrier for a malicious function definition containing trailing commands makes this vulnerability particularly severe; it enables network-based exploitation.

But, how could I be vulnerable?

At first blush, why should you care? You are not giving shell access to strangers. To characterize the initial vulnerability space, let’s look at web applications. Web applications connect nearly all the networks in existence and form a large part of our digital economy, but the big problem is that web applications aren’t limited to web browsers. Your router at home has a web server, as do other embedded devices (such as my thermostat).

When one requests a web page via the HTTP protocol, a typical request looks like this:

GET /path?query-param-name=query-param-value HTTP/1.1
Host: www.mysite.com
Custom: some header value

The CGI specification maps all parts of the request to environment variables, and with Apache httpd, the magic string “() {” can appear in any of the values above. An environment variable with an arbitrary name can therefore carry a nefarious function definition, enabling network exploitation.
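
For example, a request like the one below (hypothetical host and path) arrives in the CGI script’s environment as HTTP_USER_AGENT; on an unpatched server, bash imports the “function” and runs the trailing command:

GET /cgi-bin/test HTTP/1.1
Host: www.mysite.com
User-Agent: () { :; }; /usr/bin/touch /tmp/VULNERABLE

This is exactly the vector the wget one-liner above exercises, since wget’s -U flag sets the User-Agent header.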

A (little) bit deeper

To understand a little more of what happens, consider the following:

$ env x='() { :;}; echo pwned' bash -c "echo this is a test"

The result above is because the parsing of function definitions from strings (which in this case are environment variables) can have wider effects than intended. Vulnerable systems interpret arbitrary commands that occur after the termination of the function definition. This is due to insufficient (or non-existent) constraints in the determination of acceptable function-like strings in the environment.

If the function defined in x does some sneaky, underhanded work, there is no way that bash can check if the return value of function x is real. Notice the function is empty above. An unchecked return value can lead to script injection. Script injection leads to privilege escalation, and privilege escalation leads to root access. To fix this, most patches will disable the creation of x as a function.

Good luck, and I hope this doesn’t overly burden the well-meaning folks out there who just want computers to work securely so we can do our jobs. Right now the race is on. The vulnerability was found yesterday; exploits showed up this morning. Metasploit already has a plugin. I hope your patch gets there before the exploits do. In any case, the smart money is on the arrival of a full-blown worm in the next several days. Get ready.

Some deeper (better) reading:

  • https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/
  • http://www.theregister.co.uk/2014/09/24/bash_shell_vuln/

Playing with Matched Filters

During my time on the red team, we continually discussed the role of matched filters in everything from GPS to fire control radars. While I’m having a blast at DARPA where I work in cyber, I wanted to review an old topic and put MATLAB’s phased array toolbox to the test. (Yes, radar friends this is basic stuff. I’m mostly writing this to refresh my memory and remember how to code. Maybe a fellow-manager might find this review helpful, but if you are in this field, there won’t be anything interesting or new below.)

Why use matched filters?

Few things are more fundamental to RADAR performance than the fact that probability of detection increases with increasing signal to noise ratio (SNR). For a deterministic signal in white Gaussian noise (as good an assumption as any for background noise, though the noise does not need to be Gaussian for a matched filter to work), the SNR can be maximized at the receiver by using a filter matched to the signal.

One thing that always confused me about matched filters was that they really aren’t a type of filter, but more of a framework that aims to reduce the effect of noise which results in a higher signal to noise ratio. One way I’ve heard this described is that the matched filter is a time-reversed and conjugated version of the signal.

The math helps to understand what is going on here. In particular, I want to derive the fact that the peak instantaneous signal power divided by the average noise power at the output of a matched filter is equal to twice the input signal energy divided by the input noise power spectral density, regardless of the waveform used by the radar.

Suppose we have some signal $r(t) = s(t) + n(t)$ where $n(t)$ is the noise and $s(t)$ is the signal. The signal is finite, with duration $T$, and let’s assume the noise is white Gaussian noise with spectral height $N_0/2$. If the aggregated signal is input into a filter with impulse response $h(t)$ and the resultant output is $y(t)$, you can write the signal and noise outputs ($y_s$ and $y_n$) in the time domain:

$$ y_s(t) = \int_0^t s(u) h(t-u)\,du $$
$$ y_n(t) = \int_0^t n(u) h(t-u)\,du $$

Since we want to maximize the SNR, we expand the above:

$$\text{SNR} = \frac{y_s^2(t)}{E\left[y_n^2(t) \right]} = \frac{ \left[ \int_0^t s(u) h(t-u)\,du \right]^2}{\text{E}\left[ \left( \int_0^t n(u) h(t-u)\,du \right)^2 \right]}$$

The denominator can be expanded:

$$\text{E} \left[y_n^2(t) \right] = \text{E}\left[ \int_0^t n(u) h(t-u)\,du \int_0^t n(v) h(t-v)\,dv \right] $$

Or

$$ \int_0^t \int_0^t E [ n(u) n(v) ] h(t-u) h(t-v) du\,dv $$

We can further simplify this by invoking the standard white noise model, $E[n(u)n(v)] = \frac{N_0}{2}\,\delta(u-v)$:

$$ E[y_n^2] = \frac{N_0}{2} \int_0^t \int_0^t \delta(u-v) h(t-u) h(t-v)\, du\,dv $$

Which simplifies nicely to:

$$ \frac{N_0}{2} \int_0^t h^2 (t-u)\, du $$

Now all together we get:

$$ SNR = \frac{ \left[ \int_0^t s(u) h(t-u)\,du \right]^2 }{\frac{N_0}{2} \int_0^t h^2 (t-u)\, du } $$

In order to further simplify, we employ the Cauchy-Schwarz Inequality which says, for any two points (say $A$ and $B$) in a Hilbert space,

$$ \langle A, B \rangle^2 \leq |A|^2 |B|^2 \text{,}$$

and equality holds only when $A = k\,B$ where $k$ is a constant. Applying this with $q(u) = h(t-u)$, we can bound the numerator:

$$ \left| \int_0^t s(u)\,q(u) du \right|^2 \leq \int_0^t s^2(u) du \int_0^t q^2(u) du $$

and equality is achieved when $k\,s(u) = q(u)$.

If we pick $h(t-u)$ to be equal to $k\,s(u)$, we can write our optimal SNR as:

$$ SNR^{\text{opt}} (t) = \frac{k^2 \left[ \int_0^t s^2 (u)\, du \right]^2 }{ \frac{N_0 k^2}{2} \int_0^t s^2(u)\, du } = \frac{\int_0^t s^2(u)\, du }{ N_0/2 }$$

Since $s(t)$ always has a finite duration $T$, then SNR is maximized by setting $t=T$ which provides the well known formula:
$$SNR^{\text{opt}} = \frac{\int_0^T s^2(u) du}{N_0/2} = \frac{2 \epsilon}{N_0}$$
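
As a quick numerical sanity check (a plain MATLAB sketch of my own, with an assumed test pulse, not anything from a book or toolbox), the time-reversed conjugate filter handily beats a same-length mismatched filter in output peak SNR, just as the Cauchy-Schwarz argument predicts:

fs = 1e6; T = 1e-4;
t = (0:1/fs:T-1/fs)';
s = exp(1j*2*pi*(5e4*t + 5e8*t.^2));     % an LFM-style test pulse
% For unit-variance complex white input noise, the output noise power at
% full overlap is sum(|h|^2), so the peak output SNR is:
peakSNR = @(h) max(abs(conv(s,h)).^2) / sum(abs(h).^2);
hm = conj(flipud(s));                    % matched filter: time-reversed conjugate
hu = ones(size(s));                      % mismatched filter: plain integrator
fprintf('matched: %.1f dB, mismatched: %.1f dB\n', ...
    10*log10(peakSNR(hm)), 10*log10(peakSNR(hu)));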

So, what can we do with matched filters?

Let’s look at an example that compares the results of matched filtering with and without spectrum weighting. (Spectrum weighting is often used with linear FM waveforms to reduce sidelobes.)

The simplest pulse compression technique I know is simply shifting the frequency linearly throughout the pulse. For those not familiar with pulse compression, a little review might be helpful. One fundamental issue in designing a good radar system is its capability to resolve small targets at long ranges with scant separation. This requires high energy, and the easiest way to do that is to transmit a longer pulse with enough energy to detect a small target at long range. However, a long pulse degrades range resolution. We can have our cake and eat it too if we encode a frequency change in the longer pulse. Hence, frequency or phase modulation of the signal is used to achieve a high range resolution when a long pulse is required.

The capabilities of short-pulse and high range resolution radar are significant. For example, high range resolution allows resolving more than one target with good accuracy in range without using angle information. Other applications of short-pulse and high range resolution radar include clutter reduction, glint reduction, multipath resolution, target classification, and Doppler tolerance.

The LFM pulse in particular has the advantage of greater bandwidth while keeping the pulse duration short and the envelope constant. A constant-envelope LFM pulse has an ambiguity function similar to that of the square pulse, except that it is skewed in the delay-Doppler plane. Slight Doppler mismatches for the LFM pulse do not change the general shape of the pulse and reduce the amplitude only slightly, but they do appear to shift the pulse in time.

Before going forward, I wanted to establish the math of an LFM pulse. With a center frequency of $f_0$ and chirp slope $b$, we have a simple expression for the intra-pulse phase (in cycles):

$$
\phi (t) = f_0 \, t + b\,t^2
$$

Taking the derivative of the phase function gives the instantaneous frequency:

$$ \omega_i (t) = f_0 + 2\,b\,t. $$

For a chirp pulse on the interval $[0, T_p]$, $\omega_i(0) = f_0$ is the minimum frequency and $\omega_i(T_p) = f_0 + 2b\,T_p$ is the maximum frequency. The sweep bandwidth is then $2\,b\,T_p$, and if the unit pulse is $u(t)$, a single pulse can be described as:

$$ S(t) = u(t) e^{j 2 \pi (f_0 t + b t^2)} \text{.}$$

I learn by doing, so I created a linear FM waveform with a duration of 0.1 milliseconds, a sweep bandwidth of 100 kHz, and a pulse repetition frequency of 5 kHz. Then I added noise to the linear FM pulse and filtered the noisy signal using a matched filter, observing how the matched filter behaves with and without spectrum weighting.
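
Here is a sketch of the waveform setup, assuming MATLAB’s Phased Array System Toolbox (the property names follow phased.LinearFMWaveform, matching the object dump later in this post; the sample rate is my assumption):

% Linear FM waveform: 0.1 ms pulse, 100 kHz sweep, 5 kHz PRF.
fs = 1e6;                          % assumed sample rate
hwav = phased.LinearFMWaveform( ...
    'SampleRate', fs, ...
    'PulseWidth', 1e-4, ...
    'SweepBandwidth', 1e5, ...
    'PRF', 5e3);
sig = step(hwav);                  % samples for one pulse repetition interval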

Which produces the following chirped pulse,

[Figure: the chirped LFM pulse]

From here, we create two matched filters: one with no spectrum weighting and one with a Taylor window.
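
A sketch of the two filters, again assuming the Phased Array System Toolbox (getMatchedFilter and phased.MatchedFilter, with the Taylor window selected via the SpectrumWindow property):

coeff = getMatchedFilter(hwav);    % matched filter coefficients for the waveform
hmf  = phased.MatchedFilter('Coefficients', coeff);  % no spectrum weighting
hmfw = phased.MatchedFilter('Coefficients', coeff, ...
    'SpectrumWindow', 'Taylor');   % Taylor-weighted version
y = step(hmf, sig);                % filtered output (likewise step(hmfw, ...))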

We can then see the signal input and the matched filter output:

[Figure: matched filter input and output]

To really see how this works we need to add some noise:

% Create the signal and add noise.
sig = step(hwav);
rng(17)
x = sig + 0.5*(randn(length(sig),1) + 1j*randn(length(sig),1));

And we can see the impact noise has on the original signal:

[Figure: the LFM pulse with noise added]

and the final output (both with and without a Taylor window):

[Figure: matched filter output, with and without the Taylor window]

The Ambiguity Function

While it is cool to see the matched filter working, my background is more in stochastic modeling, and my interest is in the radar ambiguity function — a much more comprehensive way to examine the performance of a matched filter. The ambiguity function is a two-dimensional function of time delay and Doppler frequency, $\chi(\tau,\nu)$, showing the distortion of a returned pulse, after the receiver matched filter, due to the Doppler shift of the return from a moving target. It is the time response of a filter matched to a given finite energy signal when the signal is received with a delay $\tau$ and a Doppler shift $\nu$ relative to the nominal values expected by the filter:

$$
|\chi ( \tau, \nu)| = \left| \int_{-\infty}^{\infty} u(t)\,u^* (t + \tau)\, e^{j 2 \pi \nu t}\, dt \right| \text{.}
$$
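
The toolbox can compute this numerically with ambgfun; here is a sketch using the same parameters as the object dump below (my arrangement of the calls):

% Ambiguity function of one LFM pulse (Phased Array System Toolbox).
wav = phased.LinearFMWaveform('SampleRate', 2e5, 'PulseWidth', 5e-5, ...
    'PRF', 1e4, 'SweepBandwidth', 1e5);
x = step(wav);
[afmag, delay, doppler] = ambgfun(x, wav.SampleRate, wav.PRF);
surf(delay*1e6, doppler/1e3, afmag, 'LineStyle', 'none');
xlabel('Delay (\mus)'); ylabel('Doppler (kHz)');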

What is the ambiguity function of an uncompressed pulse?

For an uncompressed, rectangular, pulse the ambiguity function is relatively simple and symmetric.

[Figure: ambiguity function of an uncompressed rectangular pulse]

What does the ambiguity function look like for the LFM pulse described above?

If we compare two pulses, each with a duty cycle of one (PRF is 20 kHz, and pulse width is 50 µs), we can see their differing ambiguity functions:

[Figure: comparison of the two pulses' ambiguity functions]

If we look at the ambiguity function of an LFM pulse with the following properties:

        SampleRate: 200000
        PulseWidth: 5e-05
               PRF: 10000
    SweepBandwidth: 100000
    SweepDirection: 'Up'
     SweepInterval: 'Positive'
          Envelope: 'Rectangular'
      OutputFormat: 'Pulses'
         NumPulses: 5

then we can see how complex the surface is:

[Figure: 3D surface of the LFM pulse train's ambiguity function]

References

  • http://www.ece.gatech.edu/research/labs/sarl/tutorials/ECE4606/14-MatchedFilter.pdf
  • Matlab help files

One Time Pad Cryptography

This was much harder than it should have been. While this is certainly the most trivial post on crypto-math on the webs, I wanted to share my MATLAB xor code in the hope that I save someone else’s time. It is a basic property of cryptography that a one time pad must be used only once. An example like this makes it very concrete:

Suppose you are told that the one time pad encryption of the message “attack at dawn” is 09e1c5f70a65ac519458e7e53f36 (the plaintext letters are encoded as 8-bit ASCII and the given ciphertext is written in hex). What would be the one time pad encryption of the message “attack at dusk” under the same OTP key?

Let $m_0$ be the message “attack at dawn” and $m_1$ the message “attack at dusk”, with $c_0$, $c_1$ the corresponding ciphertexts. If $p$ is the one-time pad (OTP) that encrypts the message, we then have:

$$ c_0 = m_0 \oplus p$$

So we can obtain the one-time pad by performing an XOR of the ciphertext with the plaintext:

$$ p = c_0 \oplus m_0 \text{.}$$

This enables us to encrypt the new message without using the OTP explicitly:

$$c_1 = m_1 \oplus p = m_1 \oplus \left(c_0 \oplus m_0 \right) = c_0 \oplus (m_0 \oplus m_1) \text{.}$$

You could truncate down to only the characters that are different, but since I’m writing a script for this, I didn’t bother.

In python this would be super short:

def strxor(s1, s2):
    return ''.join(chr(ord(a) ^ ord(b)) for a, b in zip(s1, s2))

strxor(strxor("6c73d5240a948c86981bc294814d".decode('hex'), "attack at dawn"), "attack at dusk").encode('hex')

>>> '6c73d5240a948c86981bc2808548'

But the government won’t allow me to have Python on my laptop, so I need to use MATLAB.
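
Since the MATLAB version was the whole point, here is a minimal sketch of the same XOR juggling (my reconstruction of the script, using hex2dec/dec2hex for the hex handling):

function c1_hex = otp_swap(c0_hex, m0, m1)
% Given ciphertext c0 = m0 XOR p (as a hex string), forge the ciphertext
% of m1 under the same one-time pad p: c1 = c0 XOR (m0 XOR m1).
c0 = hex2dec(reshape(c0_hex, 2, []).').';   % hex string -> row vector of bytes
c1 = bitxor(c0, bitxor(double(m0), double(m1)));
c1_hex = lower(reshape(dec2hex(c1, 2).', 1, []));
end

% otp_swap('6c73d5240a948c86981bc294814d', 'attack at dawn', 'attack at dusk')
% returns '6c73d5240a948c86981bc2808548', matching the Python output above.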

Some Helpful Links

  • My course slides: http://spark-university.s3.amazonaws.com/stanford-crypto/slides/02-stream-v2-annotated.pdf
  • Another Solution: http://crypto.stackexchange.com/questions/10534/how-to-decode-an-otp-message
  • Some good general info: http://www.binaryvision.nl/3075/statistical-test/

Cryptography

We are swimming in a sea of information. Without encryption, this whole operation would be a very public event. Encryption enables our communications to be anonymous or secure and makes e-commerce and private communications possible. Because of this, I registered for Dan Boneh’s cryptography course. At this point, I’ve only watched one lecture, but I have some initial thoughts (and some code) that I wanted to jot down.

At its most simple, encryption is used to convert data into a form that a third party can’t read. There are multiple reasons for wanting to do this, but the most common is a desire for security or privacy. In a more comprehensive sense, the aim of cryptography is to construct and analyze protocols that overcome the influence of adversaries and that address various aspects of information security such as data confidentiality, data integrity, authentication, and non-repudiation. Modern cryptography is heavily based on mathematical theory and assumptions about the capability of current computer hardware. No code is infinitely secure; every code is theoretically possible to break, but a “secure” code is considered infeasible to break by any known practical means. The study of encryption is of high interest to me because it intersects many of my current interests: the disciplines of mathematics, computer science, and electrical engineering.

Dan Boneh’s initial lecture covered the traditional overview, definition and history of cryptography. His overview was rich with basic applications of cryptography and its role in our lives today.

The concept of encryption and decryption requires some extra information to encode and decode the signal. Though this information takes many forms, it is commonly known as the key. There are cases where the same key can be used for both encryption and decryption (a shared key), while in other cases encryption and decryption require different keys (such as a public/private key arrangement), and this is one way to organize existing techniques.

He provides a strong admonishment to use “standard cryptographic primitives” that have withstood public scrutiny, and makes the point that without the necessary peer review by a very large community of hundreds of people over many, many years, one can’t trust a cryptographic implementation. For this reason he admonishes the student to never trust an implementation based on proprietary primitives. (The student is left to wonder what exactly a cryptographic primitive is.)

He highlights that cryptography has its limitations, and even a secure cryptographic channel does not guarantee a secure implementation. It was helpful that he followed this statement up with what exactly an insecure implementation is by surveying how to break different ciphers. He mentions a known-insecure Blu-ray protection standard called AACS and promises a forthcoming discussion of the mechanics of its compromise.

From here, he discusses applications such as private elections and auctions and also the mechanism of digital signatures. He ends the lecture by discussing some of the “magic” recent developments in encryption such as homomorphic encryption, where operations can be accomplished on encrypted data without decryption. (See the DARPA PROCEED program.) This has fascinating applications, such as the ability to query a database without providing an observer of the database (before, during or after) any insight into the nature of the query.

He closes with a discussion stating that any cryptographic implementation has three requirements: a precise specification of a threat model, a proposed construction, and a proof that breaking the construction under the threat model will solve an underlying hard problem.

The next lecture was my favorite. Here, Boneh surveyed the history of cryptography, which included a lot of the codes you play with in school, such as symmetric and substitution ciphers, along with a discussion of how to break these using frequency and pattern recognition techniques. (Mostly based on known distributions of letters in the underlying language.)

He then introduces interesting ciphers such as the Caesar cipher. In the Caesar cipher each letter of the alphabet is shifted along some number of places; for example, in a Caesar cipher of shift 3, A would become D, B would become E, Y would become B, and so on. He then moves on to more complex ciphers such as the Vigenère cipher (a simple form of polyalphabetic substitution developed in Rome in the 16th century) and an interesting discussion of rotor machines (the most famous of which was the German Enigma). The Vigenère cipher consists of several Caesar ciphers in sequence with different shift values. He ended the lecture with a quick description of modern encryption techniques.

I always enjoy these introductory lectures of a course — I can generally follow everything and am excited about the material. A lot of my friends in college would not go to the first lecture, but I never missed it. It was always good to have at least one lecture where I could follow along. This sounds like it will be an interesting ride.

Enough of the discussion; let’s see how this works. As discussed above, the Vigenère cipher produces encrypted ciphertext from an input plaintext message using a key and a matrix of substitution alphabets. With the MATLAB below, you can generate the Vigenère square, also known as the tabula recta; this table can be used for encryption and decryption. It consists of the alphabet written out 26 times in different rows, each alphabet shifted cyclically to the left compared to the previous alphabet, corresponding to the 26 possible Caesar ciphers. At different points in the encryption process, the cipher uses a different alphabet from one of the rows. The alphabet used at each point depends on a repeating keyword.

Let’s try this with MATLAB (see code below). If I use a key of ‘Ceasar knows something you do not’ and a secret message of Titus Lucretius Carus (a Roman epicurean epic poet), we get:

encrypt('Titus Lucretius Carus', 'Ceasar knows something you do not')
VMTLSQKDPE KHLFLGTYBE
decrypt('VMTLSQKDPE KHLFLGTYBE', 'Ceasar knows something you do not')
TITUS LUCRETIUS CARUS

It works!
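
The original encrypt/decrypt source didn’t survive the move to this site, so here is a minimal standard Vigenère sketch in its place (my reconstruction, not the original code; note it passes non-letters through without advancing the key, so it won’t exactly reproduce the spacing in the output above):

function out = vigenere(msg, key, dir)
% Standard Vigenere cipher over A-Z; dir = +1 encrypts, -1 decrypts.
% Non-letters pass through unchanged and do not advance the key.
msg = upper(msg);
key = upper(key(isletter(key)));
out = msg; k = 1;
for i = 1:numel(msg)
    if isletter(msg(i))
        shift = key(k) - 'A';
        out(i) = char(mod(msg(i) - 'A' + dir*shift, 26) + 'A');
        k = mod(k, numel(key)) + 1;
    end
end
end

With this, encrypt(msg, key) is vigenere(msg, key, +1) and decrypt(msg, key) is vigenere(msg, key, -1).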


Review: Boys Should Be Boys: 7 Secrets to Raising Healthy Sons

From a one-star review on Amazon:

The content was obvious and the tone was judgmental. The complete lack of nuance is painful. Apparently receiving an MD over 25 years ago makes this Dr. Laura-style author an expert in child psychology? Let’s leave the psychology topics to those professionally trained in that discipline.

I’ve always enjoyed Meg Meeker’s books, and her latest is no exception. She’s very practical and conversational, but she brings together a refreshing mix of social conservatism and practical medical know-how. Every chapter is focused on concrete advice for parents to become more effective in crafting the virtue (and therefore well-being) of our sons. As the quote above shows, she presents a perspective that is out of sync with aspects of modern culture, and in particular with modern medicine’s trend of hyper-specialization and its unwritten rule to leave moral judgments out of medical advice. We all know this trend has risks: a specialist is going to miss the whole-person concept that is critical to understand as we tackle a problem as complex as parenting. Forcing those who give us medical and parenting advice to be materialists forces life’s great questions out of the discussion. A materialist view misses the most important dynamics in developing character and sons who become men of virtue.

Meg Meeker explains that boys need the freedom to explore and test their limits, even if this means some scrapes, bruises and difficult moments. She tries to strike the balance between helicopter and laissez-faire parenting. She slices through these two extremes with a simple call to engage: to double the time we spend with our boys all the while loving them enough to force them to grow in difficult and engaging situations.

It is out of the tension between caring too little and caring too much that she weaves her plan for an ideal father. In many ways, I find her book more interesting for what it says not to do than for what it says to do. She reminds us of the danger in letting our boys be cast adrift into a toxic mix of video games, ersatz online relationships, and a hyper-sexualized culture that emphasizes an individual’s emotion over an external and fixed framework of morality.

She makes it clear that there’s no substitute for personal time and attention. She paints the ideal parent as always engaged and aware of what their children are doing in a manner that doesn’t dictate the details of their life but does pour compassion and love into their schedule while allowing them to grow and develop in natural situations. In reading her book, she makes it clear that to avoid the harmful influences of society, we as fathers have to be committed and focused to protecting them in fostering the right environment which allows them to develop in healthy ways.

In 12 chapters she starts with a review of the problem and then goes over seven areas of focus. Here, in brief, they are:

  • Know how to encourage your son. One fault is babying and spoiling him. But another is being so harsh that you lose communication with your son and destroy his sense of self worth. We’ll look at how to strike the right balance.
  • Understand what your boys need. Guess what? It’s not another computer game; it’s you. We’ll look at how to get the most of your time with your son.
  • Recognize that boys were made for the outdoors. Boys love being outside. A healthy boy needs that sense of adventure— and the reality check that the outdoors gives him.
  • Remember that boys need rules. Boys instinctively have a boy code. If you don’t set rules, however, they feel lost.
  • Acknowledge that virtue is not just for girls. Boys should, indeed, be boys—but boys who drink, take drugs, and have sex outside of marriage aren’t “normal” teenagers, they have been abnormally socialized by our unfortunately toxic culture. Today, my practice as a pediatrician has to deal with an epidemic of serious, even life-threatening, problems—physical and psychological—that were of comparatively minor concern only forty years ago. A healthy boy strives after virtues like integrity and self-control. In fact, it is virtues like these that make a boy’s transition to manhood possible.
  • Learn how to teach your son about the big questions in life. Many parents shy away from this, either because they are uncomfortable with these questions themselves, or want to dismiss them as unimportant or even pernicious, or because they don’t want to “impose” their views on their children. But whatever one’s personal view, your son wants to know— and needs to know—why he’s here, what his purpose in life is, why he is important. Boys who don’t have a well-grounded understanding on these big questions are the most vulnerable to being led astray into self-destructive behaviors.
  • Remember, always, that the most important person in your son’s life is you.

In the second chapter, she addresses how to deal with peer pressure, with a particular emphasis on how toxic our culture is for boys and their identities. She goes on from this to discuss boys’ natural tendencies and how helpful rough and dangerous activities can be. This is exactly the natural state of boys’ development. She points out that neighborhood games played by boys of different ages force them to learn important life lessons which they can’t learn anywhere else.

In the fourth chapter, she explores the role between electronics, virtual worlds and the influence they have on the development of young boys. I’d recommend to all parents of young children read Parmy Olson’s book “We are Anonymous” to better understand how amazingly toxic (and captivating) the underbelly of the internet is for children. (Something we all know, but she brings it forward in vivid detail.)

In an interesting turn, she then explores the societal animosity towards teenage boys. I just finished Sheryl Sandberg’s Lean In, in which she makes an excellent case for societal biases against women leaders in business. By contrast, Meeker makes an excellent point that society presents a self-reinforcing feedback loop that casts teenage boys as moody, depressed and angry. She makes some excellent points that have particular poignancy coming from a medical professional who is used to dealing with teenage boys: it is okay to be depressed and moody, and this is not a cause for alarm or overreaction from parents. The solution is more old-fashioned than our modern and hyper-specialized world wants: more time and attention, in the context of a strategic perspective.

However, the focus of our increased time and attention is the subject of the next chapter. Chapter 6 talks about practical ways to build self-confidence and mental health in our boys. She talks about the critical importance of the father’s blessing, something that always proved far too elusive for me. She describes the feeling of true accomplishment as a powerful emotional resource builder. (I think it’s helpful to contrast true accomplishment with the common empty fawning praise and declarations of how special my children are that they find in school.)

Particularly convicting is her clarion call to live my life in an exemplary way that sets the right standards for my son. I want to model the virtues that he should have, and she challenges us to picture our son at the age of 25 and to foster those virtues we desire in him — much in the same way Dan Allender’s Bold Love tells us to carefully and tirelessly pursue love with the cunning of a fox.

She moves on in the next chapter to discuss why so many men are merely aged adolescents: They never got through the transition from being a boy to becoming a man. She diagnoses this, in her clinical way, as the result of the absence of a father’s guidance.

She matches this with the next chapter, which talks about the importance of faith and of the knowledge of an external God to whom boys feel accountable. She describes how faith in God helps children to have a well of hope to draw from as life gets tough, to develop an understanding of love that is more than a pleasurable act between bodies, and to understand the importance of truth and accountability as well as the critical importance of repentance, forgiveness and grace to a young child. Here, as in the rest of the book, she makes it clear that this doesn’t mean simply dropping off your son at church and hoping he finds God — she calls fathers again to be the best that they can be, for themselves but also for their sons, and to model the ideal behavior.

My biggest criticism of her book is the way she remains generic towards faith. While a Judeo-Christian concept of God has been foundational to a historical US worldview, I think she should be more honest in explaining the particular faith she holds and its critical bearing on our sons’ eternal destiny. A general “faith” without conviction is not what we want our sons to have. Is she really advocating to teach our sons about Islam? She remains neutral on how God is defined — without question — to reach a broader audience. But her own faith of Christianity claims exclusivity, and I found it disappointing that she avoided this.

Her book culminates in the 11th Chapter where she calls us to ideate the core virtues we want our sons to have that will ensure they make the transition from boy to man. She emphasizes virtues we all want in our children such as integrity, courage, humility, meekness, and kindness. She doesn’t just introduce these as words but fully fleshes them out into concepts and practical steps to build them in our sons.

She ends the book with 10 tips to remember and a call to double whatever time we currently are investing in our sons. Here are the 10 tips:

  1. Know that you change his world
  2. Raise him from the inside out (worry about his inner life and the outer life will follow)
  3. Help his masculinity to explode
  4. Help him find purpose and passion (other than being a video game master)
  5. Teach him to serve (this is where Church can come in handy)
  6. Insist on self-respect
  7. Persevere
  8. Be his hero
  9. Watch, then watch again (pay close attention to what is going on in his life)
  10. Give him the best of yourself (not just the leftovers)


GE GXULQR Twist and Lock Kitchen or Bath Filtration System Replacement Filter Won’t Fit

If you ever have to replace your under the sink GE filter (specifically the GE GXULQR), be ready for a frustrating ride.

It looks like this:

[Photo: the GE GXULQR filter]

You can find the manual here and the ge product page here.

Although it is billed as the “Twist and Lock” replacement filter, the inside female hex socket sometimes rotates out of alignment when you remove the old filter. If this happens, there is no easy fix. Moreover, since the receiving head is generally under a sink or in some place you can’t easily access, it is very frustrating when you are pushing into the plastic receiver and the new filter doesn’t fit. (It is extra frustrating if you have a cast on a broken right hand, carpal tunnel in your left hand, and can’t bend your back due to a herniated L5/S1.)

[Photo: the filter head/receiver]

The instructions only say to “Push filter into the filter head/bracket. Turn filter 1/4 turn to the right until it stops. The top surface of the filter will be flush with the bottom of the filter head/bracket when fully installed.” This only works if everything is properly lined up. The solution is to re-align the receiver with a new tool made from the old filter. By cutting off the head of the old filter (I used an angle grinder) and hammering in a flathead screwdriver, you get a custom alignment tool that looks like this:

[Photo: the custom alignment tool made from the old filter head]

How far should you rotate it? Enough to get the socket to line up so the flanges align. For me, this meant the top of the hex was flat. The biggest challenge is knowing how hard to turn. I still don’t know the internal mechanism, and since the casing is all plastic, I didn’t want to break it. I had to rotate it pretty hard before it turned. This was a little scary because water started to leak out from inside. However, after my wife rotated in the cartridge (two hands were necessary, and a cast doesn’t help), everything seems to be working fine.

Hopefully, this spares you some frustration.


Gradient Descent


It has been a while since I've studied optimization, but gradient descent is always good to brush up on. Most optimization involves derivatives. Often known as the method of steepest descent, gradient descent works by taking steps proportional to the negative of the gradient of the function at the current point.

Mathematically, gradient descent for two parameters, say \( \theta_0 \) and \( \theta_1 \), is defined by repeated updates of:

$$ \theta_j := \theta_j - \alpha \frac{\partial J(\theta_0, \theta_1)}{\partial \theta_j} $$

for the linear hypothesis:

$$ h_\theta(x) = \sum_{i=1}^{n} \theta_i x_i = \theta^{T} x $$

Here \( \alpha \) is the learning rate. If \( \alpha \) is very large, we take huge steps downhill; if it is small, we take baby steps. Too small, and it might take too long to get our answer; too large, and we might step past the solution and never converge. Besides the pitfall of picking \( \alpha \), you need a cost function that is differentiable, and convexity is what guarantees that the minimum gradient descent finds is the global one.
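
For reference, the cost function \( J \) is never defined above; in the standard linear-regression setup this notation suggests, it is the squared-error cost over the \( m \) training examples:

$$ J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 $$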

Often for linear regression, we use batch gradient descent. In machine learning terminology, batch gradient descent uses all training examples: every single step computes the gradient over every residual in the training set, rather than over one example or a small sample.
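
As a minimal sketch of what that looks like (NumPy, with a toy dataset; the function name, learning rate, and iteration count are all placeholder choices, not anything canonical):

import numpy as np

def batch_gradient_descent(X, y, alpha=0.1, iters=1000):
    """Batch gradient descent for linear regression with squared-error cost."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        residuals = X @ theta - y            # uses ALL m training examples
        gradient = (X.T @ residuals) / m     # partial of J w.r.t. each theta_j
        theta -= alpha * gradient            # simultaneous update of every theta_j
    return theta

# Toy data: y = 2 + 3x plus a little noise.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
X = np.column_stack([np.ones(100), x])       # x_0 = 1 carries the intercept
y = 2 + 3 * x + rng.normal(scale=0.1, size=100)
print(batch_gradient_descent(X, y))          # approximately [2, 3]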

Why is this relevant to machine learning? For linear regression there happens to be a closed-form analytical solution, the normal equations below, but computing it involves inverting a potentially very large matrix and can be computationally infeasible.

$$ \theta = (X^T X)^{-1} X^T y $$
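
For completeness, here is that closed form computed directly (a sketch reusing the toy data above; note that solving the linear system is preferred to forming the inverse explicitly):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
X = np.column_stack([np.ones(100), x])
y = 2 + 3 * x + rng.normal(scale=0.1, size=100)

# Normal equations: (X^T X) theta = X^T y.
# np.linalg.solve is more stable and cheaper than np.linalg.inv.
theta = np.linalg.solve(X.T @ X, X.T @ y)
print(theta)  # approximately [2, 3]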

Gradient descent gets used because it is a numerical method that generally works, is easy to implement, and is a very generic optimization technique. Analytical solutions, by contrast, are strongly tied to the model, so implementing them can be wasted effort if you plan to generalize or change your models in the future. They are sometimes less efficient than their numerical approximations, and sometimes they are simply harder to implement.

To sum up, gradient descent is preferable over an analytical solution if:

  • you are considering changes to the model: generalizations, more complex terms, regularization, or other modifications
  • you need a generic method because you do not know much about the future of the code and the model
  • the analytical solution is more expensive computationally, and you need efficiency
  • the analytical solution requires more memory or processing time than you have or want to use
  • the analytical solution is hard to implement and you need simple, easy code

FlexEvents: WOD 3 Simulation

During a recent CrossFit competition, Flex on the Mall, we had to complete a team chipper in which we could pick the order of team members in order to minimize our workout time. The workout was simple: 50 burpees, 40 over-the-box jumps, and 30 kettlebell snatches. Each exercise had to be accomplished in serial: no one could start an exercise until the previous team member had finished it. This meant everyone waited while the first person started. As simple as this was, I got confused when figuring out the optimal order for team members: should the slowest go first or last?

At the time, I thought the best strategy would be to have the fastest go first, to prevent the scenario where the fast folks were waiting and unable to help the team. I was focused on the idea that you didn't want anyone waiting, so clear out the fast people first. My partners wanted the slowest to go first, because the slower participants could rest, go super slow, and not affect the final score.

They were correct and I was wrong, but it didn't make sense at the time because I was viewing the whole thing as a linear operation where order didn't matter, while waiting on someone would definitely slow down the overall time. It turns out that if you put the slowest last, no one is waiting, but the clock keeps running on the slowest person's time, when you otherwise could have hidden that slow time behind everyone else's work. The workout came down to the following critical path: the time it took to do everyone's burpees plus the last participant's time on the remaining events.

However, this is only true if the time it took to do burpees was significantly more than the other events and there was not a significant difference in fitness between team members. After the competition, I wanted to understand the dynamics of this workout and build a quick model to understand where these assumptions held true and hone my intuition for stuff like this.

It turns out the worst thing to do is have the slowest person go last. The reason is really simple: you are putting them in the critical path. In fact, an optimal strategy is to have the slowest always go first. Assume all four members have different speeds, and work from expected values of their workout times. Let's say the four members have the following expected completion times (in notional units), where person 1 is the fastest and each successive participant is slower in all events.

Note: These are totally made up numbers and have nothing to do with our team . . . says the man whose wife now routinely kicks his scores to the curb.

person       1    2    3    4
burpees      9   10   15   17
box jumps    7    8   13   15
KB snatch    5    7   11   13


In this case, I wrote some Matlab to look through all 4! = 24 permutations. (Remember: 1 is the fastest, 4 is the slowest.)

[Plot: total workout times for all 24 team orderings]

This is such a simple event that you can see the basic building blocks without much introspection: 21 is the fastest individual time through the sequence and 45 is the slowest. If the team were comprised entirely of person 1, they could complete the whole thing in 48. As this fictional team stands, burpees alone account for 51 regardless of order, but if the fastest goes at the end you only add 17 to the total time (person 1's remaining 12, plus a short stall waiting on person 2), versus adding 28 if the slowest person goes last.

A few more things here. The math is simple enough that it should probably be worked with variables instead of specific numbers; a better analysis could show how much variation you could tolerate between athletes before the dynamics above stop applying. Maybe another lunch break for that.
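
As a start, the critical-path observation can be written symbolically (my notation, not from the original write-up): if \( b_i \), \( j_i \), and \( k_i \) are the burpee, box-jump, and kettlebell times of the athlete in position \( i \), then

$$ T \geq \sum_{i=1}^{4} b_i + j_4 + k_4 $$

since the last athlete cannot start burpees until everyone ahead has finished theirs. Equality holds when burpees dominate, which is exactly why you want the athlete with the smallest \( j + k \) in the last slot.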

OK, after some more thought, I decided to look at several scenarios and show the optimal strategies.

A team member is much slower on burpees, but faster on the other events.

In this case, it still makes sense for her to go last. The burpee total is fixed regardless of order, but speed in the other two events determines the best order. It helps me to assign a color to each team member and see all 24 strategies on one plot. The lightest shade is member one, who has a burpee time of 40 compared to 4 for the others but is faster than everyone in the other two events. In the plot below, the fastest combinations are at the top, where you can see that all the fastest outcomes have member one going last.

[Plot: all 24 orderings when member one is much slower on burpees but faster on the other events]

Burpee times equal KB snatch times

Suppose their event times look like this, where everyone's KB snatch times equal their burpee times:

burpee times:    1   2   3   4
box jump times:  2   2   2   2
KB snatch times: 1   2   3   4

Then you get a truly random-looking pattern: all outcomes are equal regardless of order. So in this case it doesn't matter what order you do the workout in, even though 4 is much slower than 1. Interesting and counter-intuitive.

[Plot: all 24 orderings colored by team member, with identical total times]

My code is below for anyone interested.
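
(What follows is a Python re-implementation sketch rather than the original Matlab; the times come from the table at the top of the post, and the model assumes a person cannot start an exercise until the previous person has finished that same exercise. The function and variable names are my own.)

from itertools import permutations

# Notional event times per person: (burpees, box jumps, KB snatches).
times = {
    1: (9, 7, 5),
    2: (10, 8, 7),
    3: (15, 13, 11),
    4: (17, 15, 13),
}

def team_time(order):
    # prev_finish[e]: when the previous athlete finished exercise e
    prev_finish = [0, 0, 0]
    for person in order:
        finish = []
        ready = 0  # when this athlete finishes their previous exercise
        for exercise, t in enumerate(times[person]):
            start = max(ready, prev_finish[exercise])
            ready = start + t
            finish.append(ready)
        prev_finish = finish
    return prev_finish[-1]  # last athlete's KB finish is the total time

# Enumerate all 4! = 24 orders, fastest total first.
for total, order in sorted((team_time(p), p) for p in permutations(times)):
    print(order, total)

For these numbers it prints 68 for slowest-first (4, 3, 2, 1) and 79 for slowest-last (1, 2, 3, 4), which matches the critical-path reasoning above.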
