Fairly recently, I started reading up on quantum mechanics (QM) to brush up on my understanding of the topic and, quite surprisingly, I’ve found it rife with analogies to my typical area of interest: software development. The one that stands out particularly well relates to the very basics of QM and the way they were widely misunderstood for many decades. What’s really amusing here is that while the majority of physicists seem to have been easily fooled by how the world operates at the quantum level, any contemporary half-decent software engineer, faced with problems of a very similar nature, typically doesn’t exhibit folly of this magnitude.
We are not uncovering the Grand Scheme of Things every day, of course; what I’m saying is that we seem to be much less likely to come up with certain extremely bad answers to all the why? questions we encounter constantly in our work. Even the really hard ones (“Why-oh-why it doesn’t work?!”) are rarely different in this regard.
Thus I dare to say that we would not be so easily tricked by some of the “bizarre” phenomena that fooled many of the early QM researchers. In fact, they turn out to be perfectly reasonable (and rather simple) if we look at them with a programmer’s mindset. The hard part, of course, is discovering that such a perspective applies here, instead of quickly jumping to “intuitive” but wrong conclusions.
To see how tempting that jump can be, we should now look at one simple experiment with light and mirrors, and try to decipher its puzzling results.
The setup is not very complicated. We have one light source, two detectors and two pairs of mirrors. One pair consists of standard, fully reflective mirrors. The other pair consists of half-silvered ones; they reflect only half of the light, letting the other half pass through without changing its direction.
We arrange this equipment as shown in the following picture. Here, the yellow lines depict the path the light takes after being emitted from the source, somewhere beyond the left edge.
Source of this and subsequent images
But in this experiment, we are not letting out a continuous ray of light. Instead, we send out individual photons. We know (from previous observations) that half-silvered mirrors still behave correctly in this scenario: they simply reflect a photon about 50% of the time. Normal mirrors, obviously, always reflect all the photons.
Knowing this, we would expect both detectors to go off with roughly equal frequency. What we find in practice is that only detector 2 ever registers any photons, while no particle whatsoever reaches detector 1, at any time. (This is illustrated by the dashed line.)
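That naive expectation can be made concrete with a short simulation. Below is a sketch of the “photon as a classical particle” model, where every encounter with a half-silvered mirror is an independent coin flip (the detector labels and photon count are, of course, just illustrative):

```python
import random

random.seed(0)  # fixed seed, so the run is reproducible

# Naive "classical particle" model: whichever route the photon takes,
# the second half-silvered mirror sends it to one of the two detectors
# with probability 1/2 each.
counts = {1: 0, 2: 0}
for _ in range(10_000):
    detector = 1 if random.random() < 0.5 else 2
    counts[detector] += 1

print(counts)  # roughly 5000 photons at each detector
```

This model predicts both detectors firing about equally often; the real apparatus, as described above, never fires detector 1 at all, so the model must be missing something fundamental.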
At this point we might want to perform a sanity check, to see whether we are really dealing with individual particles (rather than waves that can interfere and thus cancel themselves out). So, we block out one of the paths:
and now both detectors are going off, but never simultaneously. This indicates that our photons are indeed localized particles, as they appear in only one place at a time. Yet, for some weird, inexplicable reason, they never show up at detector 1 if we remove the barrier.
There are all sorts of peculiar conclusions we could come up with already, including the possibility that a photon somehow takes both ways at once and that this affects the results we observe. Let’s try not to go crazy just yet, though. Surely we can establish which of the two paths is actually being taken; it’s just a matter of adding a sensor:
So we do just that, and we turn on the machinery again. What we find, however, is far from a definite answer. Actually, it’s the total opposite: both detectors are going off now, just like in the previous setup – but we haven’t blocked anything this time! We just wanted to take a sneak peek and learn which paths our photons are actually taking.
But as it turns out, we are now preventing the phenomenon from occurring at all… What the hell?!
Okay, let’s depart for now from the confusing realms of physics and return to our usual field of interest.
Imagine that you are struggling to find the cause of an atypical, unexpected bug in your program. After trying several approaches and wrestling with the problem for some time, you narrow it down to a piece of multi-threaded code. It appears that a race condition might be happening there, but you are not quite sure: the behavior is consistent, albeit buggy. Whatever the real circumstances are, it seems like a good idea to gather more information.
In this spirit, you sprinkle the code with logging statements, making sure that you always record the thread ID along with every log entry. This approach seems reasonable: if there is indeed a race condition, you will know not only when it blows up, but also roughly what scheduling order ultimately caused the problem to surface. Armed with this knowledge, it should be significantly easier to hunt the bug down.
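For concreteness, here is a minimal sketch of the kind of bug in question: a non-atomic read-modify-write on shared state, with an artificial delay standing in for real work (the names here are made up purely for illustration):

```python
import threading
import time

counter = 0

def unsafe_increment():
    # Classic lost-update race: read, compute, write back -- non-atomically.
    global counter
    value = counter        # read the shared state
    time.sleep(0.1)        # stand-in for real work; widens the race window
    counter = value + 1    # write back -- may clobber another thread's update

threads = [threading.Thread(target=unsafe_increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 1, not 2: both threads read 0, so one increment was lost
```

Note that inserting a log call between the read and the write-back adds I/O and internal locking of its own, which shifts each thread’s timing – sometimes just enough that the problematic interleaving stops occurring altogether.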
So, you run your program again… and the problem is gone. Everything seems to be working fine now. Somehow, the bug has mysteriously disappeared.
A natural conclusion would be that the bug is consciously evading detection… except that no serious programmer would ever think something like this. We might sometimes jokingly refer to “compiler not liking us today” or “restarting the program too few times for the bug to go away already” but we’re perfectly aware of the real reasons why the code is not working correctly. Not precisely, of course – as this insight is gained by debugging – but we know there’s nothing truly mysterious here: no phenomenon that deliberately escapes discovery or is affected by mere observation.
As it seems, this is exactly the sort of perspective that the early quantum physicists lacked. The idea behind the (still) popular Copenhagen interpretation of QM basically proclaims that yes, observation does affect experimental results through the nigh-magical event of wavefunction collapse. Before it happens, we can only talk about probabilities for a particle to be in some specific state (which includes position, velocity, energy, etc.). The act of measurement settles it, however, and the probability “concentrates” in the experimentally established area.
The reason why it sounds unintuitive, confusing and weird is precisely because it is confusing and weird. Besides postulating a strange universe where probabilities (i.e. mathematical constructs) somehow have real designata, it also casually violates several important physical principles: locality, determinism, a certain type of symmetry, and Liouville’s theorem.
All this stems mostly from a simple misunderstanding – one that any decent software developer would avoid when faced with the analogous concurrency problem outlined above. It’s not hard to deduce how examining the race condition can make it disappear: by adding logging instructions, we introduce additional delays into our program. We therefore also change its execution characteristics, up to the point where the system scheduler handles it so differently that the hazardous behavior simply doesn’t appear anymore.
Log calls are not a magical apparatus for inspecting how the program runs; they are its inherent part. No sharp boundaries can be drawn between “observing code” and “observed code”, for the latter is simply interleaved with the former.
Very analogously, it’s silly to postulate boundaries between the experimental equipment and the “things” we’re trying to observe. Both are parts of the same physical universe and they will influence each other, especially at microscopic scales of space and energy. Hence you cannot “just look”, because the very act of “looking” is a physical process (quantum entanglement, in the case of our photons), contributing to the joint phenomenon of the whole experimental setup. A setup that includes not only the photons but also the additional sensor. Or the experimenter who checks it. Or the laboratory. Or the rest of the universe, for that matter.
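Incidentally, once the photon is treated as taking both paths at once, the dark detector stops being mysterious: standard quantum-mechanical bookkeeping sums a complex amplitude over every possible path and squares the result. A minimal sketch, assuming the common convention that each reflection multiplies the amplitude by i and each half-silvered mirror scales it by 1/√2:

```python
from math import sqrt

R_HALF = 1j / sqrt(2)  # reflection off a half-silvered mirror
T_HALF = 1 / sqrt(2)   # transmission through a half-silvered mirror
R_FULL = 1j            # reflection off a fully reflective mirror

# Each detector is reachable by two paths; their complex amplitudes are
# added first, and only then is the probability computed.
to_det1 = R_HALF * R_FULL * R_HALF + T_HALF * R_FULL * T_HALF
to_det2 = R_HALF * R_FULL * T_HALF + T_HALF * R_FULL * R_HALF

print(abs(to_det1) ** 2)  # 0.0 -- the two paths cancel: detector 1 stays dark
print(abs(to_det2) ** 2)  # ~1.0 -- the paths reinforce: every photon lands here
```

Blocking a path – or entangling the photon with a sensor – removes one of the two terms from each sum, so the cancellation disappears and detector 1 starts clicking: exactly what the experiment shows.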
The logger is simply part of the program.
So by this notion, we could really attribute to ourselves a kind of wisdom that the cutting-edge physicists of the 20th century lacked. But before you proclaim yourself a second Einstein, bear in mind that the whole thing is largely counterfactual.
The reason why we can even suggest this analogy is because we are routinely programming devices powerful enough for such acute concurrency problems to surface. We typically take this fact for granted. However, the enormous progress in the realm of computing over the last several decades would be largely impossible without sufficient understanding of the world at nanometer scales, and below. Even if it’s still relatively crude and full of misconceptions, we owe it a great deal already.
Not to mention that we might soon owe it even more…