A while back I talked about being cross-discipline associative. I took a similar approach when I asked what time travel could teach us about testing. Let’s see how this works with another domain entirely.
The idea of these posts is to try to associate between various disciplines and correlate ideas from one with the other. I find this interesting both as a way of learning new things and as a prompt to think about my own discipline more. So here I’ll do that in the context of testing and test tools by considering particle physics.
Setting the Mood
I know there’s a risk that these types of posts will seem insanely idiotic, so let me just set a bit of the context right out of the starting gate.
In the scientific disciplines of the world today, we have powerful tools: mental tools such as mathematics and the scientific method, and technological tools like computers and telescopes. With the help of these tools, scientists have pieced together a lot of knowledge about the universe. This certainly applies to the study of the smallest constituents of matter, the subatomic particles that make up everything we see, hear, feel, and touch.
To understand anything about this discipline, you really have to first understand the material conditions that are fundamental to particle physics research.
Just as in the software testing field, there are various tools used. One of the key tools is a particle accelerator. These accelerator tools are used to create the basis of experimental data upon which all theorizing and decision-making about elementary particles is built. These tools, as you can imagine, have had to be continually refined and improved over the course of time.
This was in service of making sure that practitioners within the field kept achieving results. Here, “achieving results” meant finding techniques for accelerating larger and larger numbers of particles to higher and higher energy.
Right there you could stop and say “That’s pretty much the discipline in a nutshell. Build better tools that can do more and then figure out the results.” In fact, many people seem to treat testing like that, don’t they? This has been in large part why the industry is in danger of falling to the technocrat testers.
But you, gentle reader, know there’s more to understanding the testing discipline, right? Well, the same applies to particle physics. So let’s slowly wade ourselves into the deep end of the pool.
Start with the Basics
As the discipline was emerging, before it was even possible to explain any of the basic physical principles needed to understand how experimental particle physics was done, certain fundamental conventions of how to describe measurements had to be set. Essentially this is the question of what system of measurement units to use. There are, of course, many ways and systems. But some will be less effective than others at conveying information in a concise way. Sounds like what we deal with a lot in the software testing world, right?
Consider Your Abstractions
Particle physicists ended up deciding on a preferred set of units. These units were chosen to take advantage of basic features of two approaches to describing the universe: special relativity and quantum mechanics. This allowed the practitioners to, at least as much as possible, get rid of constant values that depended on the choice of measurement units. This worked by letting those practitioners choose units in such a way that any such constants could be set equal to one. In other words, this is an abstraction. Abstractions come up all the time for us testers. In fact, choosing the right abstraction for a given problem is one of the key skills not only of the modern developer, but the modern tester as well.
Still with me? We’re going to need to wade deeper into the pool.
Sources of Truth
A fundamental postulate of special relativity is that space and time are linked together so that the speed of light is always constant, no matter which reference frame it’s measured in. This is what makes the subject paradoxical from the point of view of everyday experience. The most common example of this paradoxical nature is that if you try to move at high speed in the same direction as a beam of light, then no matter how fast you go, the light will always be moving away from you at the same speed.
The equations of special relativity simplify when units of measurement for space and time are chosen to be such that the speed of light is equal to one. For example, one way of doing this is to note that light travels roughly 300,000 kilometers in a second. This means light travels about a foot in a nanosecond (one-billionth of a second). As a result of this, measuring lengths in feet and times in nanoseconds makes the value of the speed of light about equal to one. Who cares? Well, setting the speed of light equal to one determines the choice of units used to measure time in terms of the choice of units used to measure space, and vice versa. To me this is a reduction of sources of truth, somewhat akin to making sure elaborated requirements and test cases act as the same source in BDD-style approaches. Special relativity linked space and time into a single source of truth; choosing the units was a way to express that source of truth.
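Just to make that “foot in a nanosecond” claim concrete, here’s a quick sanity check in Python. This is my own back-of-the-envelope sketch, not anything from the physics side itself; the constant values are standard.

    # Check that the speed of light really is about one foot per nanosecond.
    C_M_PER_S = 299_792_458     # speed of light in meters per second (exact by definition)
    M_PER_FOOT = 0.3048         # meters in one foot (exact by definition)
    NS_PER_S = 1e9              # nanoseconds in one second

    c_feet_per_ns = C_M_PER_S / M_PER_FOOT / NS_PER_S
    print(f"c is about {c_feet_per_ns:.3f} feet per nanosecond")  # roughly 0.98, close to 1

Picking units where that number comes out close to one is exactly the kind of abstraction choice described above.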
Probably the most famous equation related to special relativity is the E = mc² equation relating energy (E), mass (m), and the speed of light (c). Note that using units in which the speed of light is set equal to one simplifies this to E = m. That means energy and mass become equal in the context described by this equation. Again, you may ask: who cares? Well, as a result, particle physicists use the same units to measure energy and mass.
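To see why measuring mass in energy units is convenient, here’s another small sketch of my own, converting the electron’s mass into electron volts via E = mc². The constants are standard values; the rounding is mine.

    M_ELECTRON_KG = 9.109e-31     # electron rest mass in kilograms
    C_M_PER_S = 299_792_458       # speed of light in meters per second
    J_PER_EV = 1.602e-19          # joules in one electron volt

    energy_joules = M_ELECTRON_KG * C_M_PER_S**2
    energy_mev = energy_joules / J_PER_EV / 1e6
    print(f"electron rest energy is about {energy_mev:.3f} MeV")  # about 0.511 MeV

That 0.511 MeV figure is how physicists routinely quote the electron’s mass, which is the E = m simplification in action.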
While special relativity links together the way spatial dimensions and the time dimension are measured, quantum mechanics links together energy and time measurements. Again, notice how we’re reducing sources of truth? Or, at the very least, using simplified means of expression. In any event, this can get pretty involved so let’s simplify a bit.
For purposes of this explanation, just know that there is a fundamental mathematical entity of quantum theory that is called the Hamiltonian. This is an operator on state vectors. A state vector basically describes the current state of the universe at a given time. So what the Hamiltonian can do is transform a given state vector into a new one. Operating on a specific state vector at a given time, the Hamiltonian tells you how the state vector will change during some infinitesimal additional time period. In addition, if the state vector corresponds to a state of the universe with a well-defined energy, the Hamiltonian tells you what this energy is. Explaining this kind of stuff is good practice for dealing with a particular business domain and making sure it is understandable enough for the purposes at hand. It’s a good exercise for testers and developers to break down a domain and see whether they can explain it well.
All of the details don’t so much matter here. What does matter is that you consider this: if the Hamiltonian simultaneously describes the energy of a state vector as well as how fast the state vector is changing with time, then the implication is that the units in which you measure energy and the units in which you measure time are linked together. So, for example, if you change your unit of time from seconds to half-seconds, the rate of change of the state vector will double and so will the energy.
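The post doesn’t write the equation down, but the standard textbook form of this statement, the Schrödinger equation, makes the link explicit. I’m adding it here only as a reference point:

    i\hbar \frac{d}{dt}\,\lvert \psi(t) \rangle = H\,\lvert \psi(t) \rangle, \qquad H\,\lvert \psi_E \rangle = E\,\lvert \psi_E \rangle

The same operator H appears both as the thing that generates change in time and as the thing whose eigenvalues are energies, which is why the units of energy and time end up tied together. (The constant ħ there is the reduced form of the Planck’s constant discussed next; the two differ only by a factor of 2π.)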
When two variables — in this case, energy and time — are directly proportional to each other, there must be a constant that relates them. The constant that relates time units and energy units is called Planck’s constant. We don’t have to get into the details of the constant here. Suffice it to say that particle physicists choose their units so as to make Planck’s constant equal to one. Are we still asking: who cares? Well, this fixes the units of time in terms of the units of energy, and vice versa. Again, notice that sometimes you have to pick and choose what needs to be presented and understood. Many times you simply want to make sure that people have the domain concepts and terms down — like Planck’s constant, for example — just so further discussion can continue. It’s not necessary — at least here — to give you all the details of who Max Planck was or how he derived his constant and so forth. These kinds of tradeoffs happen a lot in testing.
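If you want to see the bookkeeping, here’s a small sketch of my own using Planck’s constant expressed in electron-volt seconds. Whether you use h or the reduced constant ħ only shifts things by a factor of 2π; I’m using h here because it matches the rough numbers quoted later in this post. The function name is just mine for illustration.

    H_EV_S = 4.136e-15    # Planck's constant in electron-volt seconds

    def time_unit_for_energy(energy_ev):
        """Time (in seconds) that corresponds to a given energy once h is set to one."""
        return H_EV_S / energy_ev

    print(time_unit_for_energy(1.0))   # about 4e-15 seconds: the time unit matching one eV
    print(time_unit_for_energy(2.0))   # double the energy, halve the time unit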
Simplify Abstractions
So let’s just make sure we understand our abstraction: with the choices of the speed of light and Planck’s constant, distance units are related to time units, and time units are related to energy units, which in turn, are related to mass units. That’s a pretty nice simplification. But how do we express this simplification? How do particle physicists use this simplification to talk about what they do? We’re wading a bit deeper into the pool here.
The standard convention of particle physics is to express everything in energy units. This means that theorists, who were often doing the expressing, had to just pick a single measurement unit, one that determined how energies would be expressed.
It was known to the theorists that the experimentalists had a lot of success in measuring energies as electron volts. An electron volt, abbreviated eV, is the energy an electron picks up as it moves between two metal plates that have a voltage difference of one volt between them. Well, the theorists didn’t want to fix something that wasn’t broken, so they decided to use this unit as well.
Once you’ve chosen to measure energies and masses in units of eV, then the choice of constants described earlier means that time and space — which are measured in inverse units to energy — are measured in “inverse electron volts” or eV⁻¹.
In the units we’re talking about here, the unit of distance is the inverse electron volt, which in more conventional units would be about a micron (about 10⁻⁶ meters, which is about a millionth of a meter). Time is also measured in inverse electron volts. This unit of time is extremely short, roughly 4 × 10⁻¹⁵ seconds.
So what’s the key point to take away from this? Since energies are measured in eV and distance in eV⁻¹, particle physicists tend to think of distances and energies interchangeably, with one being the inverse of the other.
That can all seem a little abstract so to drive it home a bit, let’s use an example. The energy corresponding to the mass of a proton, for example, is expressed as 1 GeV. The “G” is giga and this means a billion electron volts. Since this energy is a billion times larger than an electron volt, the corresponding distance will be one billion times smaller, about 10⁻¹⁵ meters.
This is powerful stuff because it means you can think of this energy as being the characteristic size of the proton. Hey, see what we did there! We used a specific example to refine a rule. That’s very much like how we attempt to understand business domain rules by framing our understanding as test conditions driven by data conditions.
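As another sketch of mine, here’s that energy-to-distance conversion written out as code. The conversion factor is just Planck’s constant times the speed of light, expressed in eV and meters; the function name is my own.

    H_EV_S = 4.136e-15              # Planck's constant in electron-volt seconds
    C_M_PER_S = 2.998e8             # speed of light in meters per second
    HC_EV_M = H_EV_S * C_M_PER_S    # about 1.2e-6 eV * meters

    def length_scale_meters(energy_ev):
        """Distance scale corresponding to an energy scale, with h and c set to one."""
        return HC_EV_M / energy_ev

    print(length_scale_meters(1.0))   # about 1e-6 m: one eV corresponds to roughly a micron
    print(length_scale_meters(1e9))   # about 1e-15 m: one GeV, roughly the size of a proton

One function, two scales: the “interchangeable” thinking described above falls right out of the arithmetic.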
The Scale of Activities
So, with all that being said, particle physicists were able to refer to their investigations as involving either very short distance scales or very high energy scales. Typical physical processes under study involved something that happened at some particular approximate distance or approximate energy, and this was said to be the distance or energy “scale” under study.
And that lets theorists and experimentalists reason about the tools, taking us back to where we started. I said that achieving results meant finding techniques for accelerating large numbers of particles to high energy. With our newfound knowledge, let’s unpack that statement a bit.
In particle accelerators the total energy of the particles you are colliding together sets the energy scale that you can study. Investigating shorter and shorter distances requires higher and higher energies. What this means is that at any given time the fundamental limit on the experimental information you can gather about the elementary particles comes from the technological limits on the energies of particles in your experiments. Certainly sounds like how we have to consider specific tools to find specific classes of bugs, right? And then choosing the “scale” — perhaps unit, perhaps integration, perhaps acceptance — that helps us gather information in the most effective way.
But are we sure we understand this process yet? I said “investigating shorter and shorter distances.” But … investigating how? I may know the limits of what I can do now but what exactly am I doing? And what am I observing? A critical question of value that, much like in the software testing field, is often not answered or only answered incompletely.
This requires discussion of the fundamental technique.
Fundamental Techniques
The fundamental experimental technique of particle physics is to bring two particles close together and then watch what happens. The simplest way to do this is to begin by producing a beam of energetic particles in one way or another. Then you accelerate the particles to high energy in some sort of accelerator. The beam of high-energy particles is then aimed at a fixed target. Crucially, as part of this setup, you must use a detector of some sort to see what particles come out of the region where the beam hits the target.
Early on, experimentalists had to rely on naturally occurring sources of particles. A good example there would be radioactivity. So experimentalists could get a source of radium, the radioactive decay of which produces so-called alpha particles (helium nuclei). This radioactive source provided energies of about 4 MeV. That’s “M” as in “mega” (million). Early experiments along these lines, around 1910, were done with a zinc sulfide screen acting as the detector. This worked because the screen flashed when hit by the radioactive particles.
As an overall test tool and test setup, this was a bit lacking. Eventually a refinement was made with the use of cloud chambers. These chambers were capable of very quickly reducing the pressure inside of the enclosed space. When this happened, water vapor condensed along the tracks of energetic particles traveling through the chamber.
From a test tool observation standpoint, this was fantastic. Being able to see the tracks of all charged particles involved in a collision with a target provided a great deal more information about what happened than that provided by the flashes seen on zinc sulfide screens.
But we were still relying on naturally occurring radiation, and this was pretty weak stuff, overall. However, another example of this same technique was using naturally occurring cosmic rays.
Most cosmic rays are caused by energetic protons hitting the upper atmosphere, creating a shower of pions, muons, and electrons that make up most of what experimenters can observe at ground level. The energy of these interactions could be in the range of a few hundred MeV. So the good news is that the energies were much higher. The bad news is that you had to sit there and wait for particles to come barging out of the sky and then sort out what was happening when those particles ended up hitting targets. Speaking of which, what were these targets?
These were various forms of detectors that were taken to mountaintops or sent up in balloons to get as many of the most-energetic collisions as possible.
So the problem here is that the test tooling, while relatively simple, required elaborate setup procedures. And, even then, we were still at the whim of what we managed to capture from naturally occurring sources. We had little to no control over what we were getting. You must see some corollaries to the software testing industry with that, right?
To be sure, there were successes in terms of discovering new elementary particles. But consider the time frames: the positron in 1932, the muon in 1937, and charged pions and kaons in 1947. Yes, it was progress, but the discoveries were few and far between, to say the least.
As a means of testing, it was critical to gain access to much more intense high-energy particle beams whose energy and direction could be precisely controlled.
And — once again — we’re back to where we started. I said that achieving results meant finding techniques for accelerating large numbers of particles to high energy. With the above further refinement of our knowledge, let’s unpack that a bit and wade even deeper.
Focus on the Tools
Now that the problem is defined, we can focus a bit more on the tools. And that’s always the trick, right? Make sure you understand what you’re solving before you start providing a lot of tools to solve it.
The first true particle accelerator was designed and built by John Cockcroft and Ernest Walton at Cambridge in 1930. This machine was able to accelerate a beam of protons to 200 keV. By 1932, they had reconfigured their accelerator to send the beam through a sequence of accelerating stages, reaching a final energy of 800 keV.
The year 1931 saw the appearance of two additional accelerator designs that could reach similar energies. One worked by building up an electrostatic charge, designed by Robert Van de Graaff. The other design, by Rolf Wideroe, used a radio-frequency alternating voltage.
The alternating voltage design was adapted by Ernest Lawrence and others at Berkeley, who constructed the first “cyclotron” in 1931. In a cyclotron, the particle beam is bent by a magnetic field and travels in a circle, being accelerated by an alternating voltage each time it goes around the circle. Lawrence’s first cyclotron reached an energy of 80 keV and by mid-1931 he had built one that could produce a beam energy of over 1 MeV.
Notice here that we were still barely approaching what we could get with those early radium experiments and nowhere near the upper limits of the cosmic ray experiments.
But it didn’t take us long to start closing the gap by refining our test tool knowledge. Lawrence’s first devices had a diameter of only eleven inches. Yet by late 1932, Lawrence had a 27-inch cyclotron producing a 4.8-MeV beam, and in 1939 a 60-inch one with a 19-MeV beam.
As you can imagine, these devices were starting to become much more expensive in terms of construction and operation, not to mention maintenance. The main reason for the expense was that as the machines got more powerful, they required larger and larger magnets to bend the higher and higher energy beams into a circle. By 1940, Lawrence had a 184-inch diameter machine that could reach 100 MeV. But it cost about $1.4 million! We can relate cost to utility in our testing field, right? Sometimes that cost comes not in the form of dollar values for the tool itself but the time and effort involved in supporting and/or maintaining it.
Another interesting thing occurred though: the observer effect. As the energies in these tests got higher and higher, the effects of special relativity started to kick in. This made it harder to control and observe the system under test. A large part of this had to do with the fact that in special relativity, the faster you go, the more mass you get, which means the more energy is required to move that mass. This fact meant that the test tool design of the cyclotron had to change to that of a synchrocyclotron. In this tool, the frequency of the accelerating voltage changed as the particles were accelerated.
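The post above doesn’t give the formula, but the standard relativistic picture is easy to sketch: the revolution frequency of a particle circling in a magnetic field drops as the Lorentz factor grows, so a fixed-frequency accelerating voltage falls out of step with the beam. Here’s my own rough illustration of how fast that factor grows; the function name is mine.

    import math

    def lorentz_gamma(beta):
        """Lorentz factor for a particle moving at a fraction beta of the speed of light."""
        return 1.0 / math.sqrt(1.0 - beta * beta)

    # The revolution frequency in a cyclotron scales as 1/gamma, so the faster the beam,
    # the more the accelerating voltage has to drop its frequency to stay in step.
    for beta in (0.1, 0.5, 0.9, 0.99):
        g = lorentz_gamma(beta)
        print(f"beta={beta:<5} gamma={g:6.2f} relative frequency={1.0 / g:5.2f}")

That shifting frequency is exactly what the synchrocyclotron design accounts for.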
By November 1946, Lawrence was using his 184-inch diameter machine to produce a beam with an energy of 195 MeV. Although the cosmic ray physicists were still ahead at actually discovering particles, this situation was soon to change. After the discovery of charged pions in cosmic rays in 1947, the neutral pion was discovered at Lawrence’s lab in 1949.
But the test tools were reaching their limit.
Tools Have Their Limits
Higher-energy accelerators could not be built using a single magnet. Instead, it became necessary to use a doughnut-like ring of smaller magnets. This design was called a synchrotron. In 1952, one such device — called the Cosmotron — achieved an energy of 3 GeV. Another such device, built in 1954 — and called the Bevatron — had a proton beam of 6.2 GeV. These were topped by another device built in 1957 that achieved 10 GeV. Even this was topped in 1959 by a device that operated at an energy of 26 GeV and then up to 33 GeV.
You can see the progression. The latter part of the 1950s was a key period of innovation in test tool approach and design. The 1960s continued this trend, albeit at a slower pace because, again, cost came into play. The early 1960s saw a 70 GeV machine. It wasn’t until 1972 that we got up to around 200 GeV and then, by 1976, about 500 GeV.
But even with all this, there were still limitations in the testing method.
In high-energy accelerators, the beam particles carry a great deal of momentum, and by the law of conservation of momentum, this must be conserved in a collision. As a result, most of the energy of the collision goes into the large total momentum that the products of the collision have to carry. What this means is that the actual energy available to produce new particles grows only as the square root of the beam energy. So that 500 GeV proton accelerator we ended up at? Well, it could provide only about 30 GeV for new particle production. I bet you’ve had testing tools, or even strategies, that — metaphorically speaking — promised 500 GeV but only gave you 30 GeV.
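That square-root claim is standard collision kinematics rather than anything specific to this post, but it’s easy to check. Here’s a sketch of mine that reproduces the 500-GeV-in, roughly-30-GeV-out figure for a beam hitting a stationary proton; the function name is my own.

    import math

    PROTON_MASS_GEV = 0.938   # proton rest mass in GeV, with c set to one

    def fixed_target_available_energy(beam_energy_gev):
        """Approximate energy available for making new particles when a beam hits a
        stationary proton. Valid when the beam energy is much larger than the proton mass."""
        return math.sqrt(2.0 * beam_energy_gev * PROTON_MASS_GEV)

    print(fixed_target_available_energy(500.0))   # about 30.6 GeV, matching the figure above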
Modify the Strategy
This limitation required a change in the test strategy. Physicists realized that if you could collide two accelerator beams head on, the net momentum would be zero, so none of the energy in the beams would be wasted. The problem with this is that the density of particles in a beam is quite low, making collisions of particles in two intersecting beams rather rare.
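Using the same kinematics as the sketch above, the head-on case is dramatic: with two equal beams the net momentum is zero and essentially all of the energy is available. A quick comparison, again my own sketch:

    import math

    PROTON_MASS_GEV = 0.938   # proton rest mass in GeV, with c set to one

    beam_gev = 250.0                                # two beams of 250 GeV each
    head_on = 2.0 * beam_gev                        # all 500 GeV available in the collision
    fixed_target = math.sqrt(2.0 * (2.0 * beam_gev) * PROTON_MASS_GEV)  # same 500 GeV in one beam

    print(f"head-on collider: about {head_on:.1f} GeV available")
    print(f"fixed target:     about {fixed_target:.1f} GeV available")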
This notion of colliders, rather than just accelerators, led to construction of these new kinds of test tools in the 1970s and 1980s. These things could be huge, though. For one example, the LEP (Large Electron Positron) collider was built in a tunnel that was 27 kilometers, or 16 miles, in circumference.
The LEP began operation in 1989 at a total energy of 91.2 GeV, and operated until November 2000, when it was finally shut down after having reached a total energy of 209 GeV. Wait … what? Aren’t those energy levels much lower than what we already talked about?
Indeed. Most of the test tool machines I talked about here were proton accelerators. There was a parallel development of electron accelerators (synchrotrons) going on since the 1950s as well. A competing test tool!
Electron accelerators were less popular than the proton ones, since they had to be run at lower energy. Also, unlike protons, electrons are not strongly interacting particles. This means they could not directly be used to study the strong interaction of particles, which is where interesting physics was likely to happen. The reason electron synchrotrons run at lower energies is that when the paths of high-energy electrons are bent into a circle by magnets, the electrons give off large amounts of X-ray synchrotron radiation. As a result, energy must be continually pumped back into the electron beam. Like any test tool, this one comes with trade-offs, right?
In any event, going back to the LEP design, at 209 GeV, the particles in the LEP lost about two percent of their energy to synchrotron radiation each time they went around the ring. The collider approach itself was good: far less energy was being wasted carrying away momentum in the collisions. However, running the machine used an amount of electrical power about forty percent as large as that used by an average city. Doubling the energy of a ring the size of LEP increases the power needed by a factor of 16. Thus you see the problem with electron-style test tools.
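That factor of 16 is the fourth-power scaling of synchrotron radiation loss with beam energy, which the post quotes but doesn’t spell out. Here’s a rough sketch of my own, holding the beam current fixed; the function name and parameters are mine.

    def relative_synchrotron_power(energy_ratio, radius_ratio=1.0):
        """Rough relative scaling of synchrotron radiation power for an electron ring:
        the energy lost per turn grows as energy**4 / radius."""
        return energy_ratio**4 / radius_ratio

    print(relative_synchrotron_power(2.0))        # 16x the power for double the energy, same ring
    print(relative_synchrotron_power(2.0, 2.0))   # 8x if you also double the size of the ring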
Okay, but what about proton test tools? What happens when the strategy is changed from accelerating to colliding using these tools? Notice how we’re talking about using particular strategies with particular tools.
One of the first of these colliders was built in 1971 and ran until 1983. This was a proton-proton collider and reached a total energy of 63 GeV. The next major advance was in 1981 with a proton–antiproton collider with a total energy of 540 GeV. A test tool called the Tevatron became operational in 1983 and began doing physics in 1987 with an energy of 1.8 TeV. (That “T” is for “tera” and thus “trillion.”)
1.8 TeV?!? Wow! Now we’re getting up there, huh? However, to get up there required the accelerator to use superconducting magnets. These were necessary to achieve the very high magnetic fields required to bend the trajectory of the beam into a circle that was six kilometers, or nearly four miles, in circumference.
There were a lot of developments along the years but of course the one most people are probably familiar with these days is the Large Hadron Collider (LHC). The LHC went into the 27 kilometer tunnel that had been used by LEP. The LHC uses more than one thousand superconducting dipole magnets for the purpose of keeping particles circling. Each of those magnets has a very high magnetic field strength of 8.4 tesla. This gives a total collision energy of about 14 TeV.
And, in line with that notion of rising costs, the entire project cost about $13 billion and requires about $1 billion per year to operate. That’s a REALLY expensive testing tool.
Closing the Journey
So has thinking about particle physics taught us anything about testing? Or vice versa?
I suppose one point you could take from this post is that detector technology (test tools) made huge advances during this entire period of time as detectors grew into ever larger and more complex instruments using very sophisticated electronics and many layers of different particle-detection technologies.
As you can imagine, as these projects got larger, they required larger teams to be involved in the design, construction, and operation of each of these huge tools, whose price tag could be quite sizable. This cost limited the number of detectors that could be built. Further, the social organization of experimental particle physics changed as larger and larger numbers of physicists were working on smaller and smaller numbers of experiments.
Interestingly, perhaps, the scientific community started agile, began to get a bit more waterfall, and then finally had to become a bit “microservice” oriented — and back to agile — as they had to accommodate a complex domain with complex tools.
Incidentally, you might be interested to know that there are really only two viable ways of getting to even higher energies: a bigger collider/accelerator ring or higher magnetic fields. The energy of a ring scales linearly with its size and the magnetic field, so you could double the energy of the LHC to 28 TeV either by building a ring twice as large or by finding a way to make magnets with twice the field strength. Both of those approaches, as you can imagine, are not just very hard to do but also very costly.
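That linear scaling is simple enough to put into one last tiny sketch. The function below is mine, just restating the rule in the paragraph above.

    LHC_ENERGY_TEV = 14.0   # the LHC's total collision energy, used as a baseline

    def scaled_collider_energy_tev(size_factor=1.0, field_factor=1.0):
        """Collision energy if the ring is scaled in size and/or magnetic field strength,
        assuming the energy grows linearly with both (as stated above)."""
        return LHC_ENERGY_TEV * size_factor * field_factor

    print(scaled_collider_energy_tev(size_factor=2.0))    # 28 TeV with a ring twice as large
    print(scaled_collider_energy_tev(field_factor=2.0))   # 28 TeV with magnets twice as strong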
I hope this post was an interesting diversion for you but I also hope it did have something useful to say. If nothing else, I hope it gave you some appreciation for how we software testers share some of the same problems that particle physicists have dealt with throughout their careers as well.