What an amusing coincidence. The dust had barely settled after Apple’s mishap with an extraneous goto fail; in Secure Transport (the SSL/TLS library used widely by OS X and iOS), which made it ripe for exploits, before we heard another similar story. This time it was about multiple goto cleanup; statements that lay at the heart of a vulnerability in GnuTLS, a rough equivalent of OpenSSL on various Linux distributions.
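For reference, Apple’s bug boiled down to a single duplicated line. Below is a minimal, self-contained sketch of its shape; the function names are illustrative stand-ins, not the actual Secure Transport code:

```c
/*
 * A minimal sketch of the "goto fail" bug's shape. The function names
 * are stand-ins, not Apple's actual Secure Transport API.
 */
#include <stdio.h>

static int hash_update(void)      { return 0; }   /* pretend: succeeds */
static int hash_final(void)       { return 0; }   /* pretend: succeeds */
static int verify_signature(void) { return -1; }  /* pretend: signature is BAD */

static int verify_server_key_exchange(void)
{
    int err;

    if ((err = hash_update()) != 0)
        goto fail;
        goto fail;   /* the duplicated line: always taken, whatever err is */

    if ((err = hash_final()) != 0)         /* unreachable from here on */
        goto fail;
    if ((err = verify_signature()) != 0)   /* the check that actually matters */
        goto fail;

fail:
    /* resource cleanup would go here */
    return err;
}

int main(void)
{
    /* Prints 0 ("success") even though the signature was never verified. */
    printf("err = %d\n", verify_server_key_exchange());
    return 0;
}
```

Because the second goto fail; is unconditional and err still holds the last success code, the function reports success while skipping the signature check entirely.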
That goto! Everyone warns it cannot be trusted, it shouldn’t be used, it’s basically the work of the devil. Yet, some people just don’t listen, and we can witness firsthand all the woes that befall them. Will it serve as a warning for the others? Can we hope for at least this much?…
The answer is no. Not only can’t we, but we shouldn’t. Despite what many would proclaim, these two fiascoes are hardly about the goto statement. It’s very tempting, of course, to pigeonhole them into that box, but the truth is more complex than that.
The goto statement is only one example, drawn from a certain class of programming techniques and language features. What they all have in common is that they overstep the safety guarantees a language offers, however elaborate or scarce those guarantees may be. In return, such techniques offer additional capabilities, exceeding the usual “power level” available to developers.
Pretty much every language hides something of this kind. It may be goto in C. It could be reflection in Java. It might be metaclasses or import hooks in Python. Even the pure and graceful Haskell has the infamous unsafePerformIO, the use of which potentially throws all the normal guarantees of correctness out of the window.
What all those features have in common is their inevitability. You may try to ignore them, or write them off as bad practices at best. But come a sufficiently complex program, an amply convoluted system, a problem hairy enough, and they suddenly become adequate, acceptable – or even necessary. The malevolent goto, to stick with the most prominent example, becomes all but mandatory whenever C code mixes error checking with handling resources that require proper cleanup.
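The canonical shape of that idiom looks roughly like this – a sketch with illustrative names, not code from either of the libraries in question:

```c
/*
 * The standard C error-handling/cleanup idiom: acquire resources one
 * by one, and on any failure jump to a single exit point that releases
 * exactly what has been acquired so far. Names are illustrative.
 */
#include <stdio.h>
#include <stdlib.h>

int process_file(const char *path)
{
    int err = -1;
    FILE *f = NULL;
    char *buf = NULL;

    f = fopen(path, "rb");
    if (!f)
        goto cleanup;   /* nothing to release yet */

    buf = malloc(4096);
    if (!buf)
        goto cleanup;   /* must close f, but there is no buf to free */

    if (fread(buf, 1, 4096, f) == 0)
        goto cleanup;   /* both resources need releasing */

    /* ... actual processing would go here ... */
    err = 0;            /* success */

cleanup:
    free(buf);          /* free(NULL) is a no-op, so this is safe */
    if (f)
        fclose(f);
    return err;
}
```

Every failure path converges on one exit point that releases whatever was acquired up to that moment; the goto-free alternatives are deeply nested conditionals or cleanup code duplicated before every return, which is precisely where bugs like to creep in.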
Therefore, the only reasonable solution is to learn to cope with it all, and just be extra careful. Translated to software engineering practices, this means taking every precaution we can reasonably afford to minimize the risk of bugs. Most of this stuff is pretty simple, really. It doesn’t even have to involve any gymnastics with automated testing, every developer’s “favorite” discipline. Apple’s fiasco, for one, would have been prevented by any tool that looks for unreachable code; that’s not really asking a lot.
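Clang, for instance, has shipped such a check for years (GCC’s -Wunreachable-code flag, by contrast, has reportedly been a no-op since around GCC 4.5). Assuming the earlier sketch is saved as goto_fail.c, compiling it with that warning enabled produces a diagnostic along these lines (exact location elided):

```
$ clang -Wunreachable-code -c goto_fail.c
goto_fail.c:...: warning: code will never be executed [-Wunreachable-code]
```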
So if we could learn anything from those high profile failures, it’s that the basic stuff, the low hanging fruit – like code reviews and static analyzers – actually helps, for little to no cost. I’m sure such a conclusion is much more valuable than yet another tirade from the You Should Not Code Like That department.