I’ve had a peculiar kind of awful realization after listening to a C++ talk earlier today. The speaker (Bjarne Stroustrup, actually) went over a few defining components of the language, before he took a much longer stop at templates. In C++14, templates are back in the spotlight because of the new constraints feature, intended to make working with (and especially debugging) templates much more pleasant.
Everyone using C++ templates is probably well accustomed to the cryptic, excessively long error messages that the compiler spits out whenever you make even the slightest mistake. Because of the duck-typing semantics of template arguments, those messages always expose the internals of a particular template’s implementation. If, for example, you worked with the STL implementation from Visual C++, you would recognize the internal
`__rb_tree` symbol; it appeared very often if you misused the
`set` containers. Its appearance was at best only remotely helpful in locating the issue – especially when buried inside a multi-line behemoth of an error message.
Until constraints (or “concepts lite”, as they are dubbed) improve the situation, this remains arguably the worst part of the C++ language. But alas, C++ is not the only language offering such a poor user experience. As a matter of fact, there is a whole class of languages that are exactly like that – and they won’t change anytime soon.
Yes, I’m talking about the so-called scripting languages in general, and Python in particular. The analogies are striking, too, once you see past the superficial differences.
Take the aforementioned duck typing as an example. In Python, it is one of the core semantic tenets, a cornerstone of the language’s approach to polymorphism. In current C++, it is precisely the cause of page-long, undecipherable compiler errors. You just don’t know whether it’s a duck before you tell it to quack, which usually happens somewhere deep inside the template code.
But wait! Python et al. also have those “compiler errors”. We just call them stacktraces and have interpreters format them in a much nicer, more readable way.
Of course, unlike template-related errors in C++, stacktraces tend to be actually helpful. I posit, however, that it’s mostly because we have learned to expect them. Studying Python or any other scripting language, we’re inevitably exposed to them at a very early stage, with a steady learning curve that corresponds to the growing complexity of our code.
This is totally different than having the compiler literally throw its innards at you when you try to sort a list of integers.
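The Python analogue of that failure mode can be sketched in a few lines (the class names here are invented for illustration): duck typing happily accepts any object until, somewhere deep inside a function, the missing method finally gets called.

```python
class Duck:
    def quack(self):
        return "quack"

class Dog:  # no quack() method
    pass

def make_noise(animals):
    # No type checks up front – classic duck typing...
    return [animal.quack() for animal in animals]

# ...so the failure only surfaces deep inside the call:
try:
    make_noise([Duck(), Dog()])
except AttributeError as e:
    print(e)  # 'Dog' object has no attribute 'quack'
```

The traceback points into `make_noise`, not at the call site where the wrong object was handed over – the same displacement that makes template errors so unpleasant.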
What I find the most interesting in this whole intellectual exercise is to examine what solutions are offered by both sides of the comparison.
Dynamic languages propose coping mechanisms, at best. You are advised to liberally blanket your code with automated tests, so that failing to quack is registered before the duck (er, code) goes live. While some rudimentary static analysis and linting is typically available, you generally cannot tell whether your code works at even the most basic level until you actually run it.
Now, have you ever unit-tested the template instantiation process that the C++ compiler performs for any of your own templates? Yeah, I thought so. Except for the biggest marvels of template metaprogramming, this may not be something that even crosses your mind. Instead, the established industry practice is simply “don’t do template metaprogramming”.
But obviously, we want to use dynamic languages, and some of us probably want to do template metaprogramming. (Maybe? Just a few? Anyone?…) Since these clearly appear to be similar problems, it’s not very surprising that the remedies start to look somewhat alike. C++ is getting concepts in order to impose some rigidity on the currently free-form template arguments. Python is not as far along that path yet, but the signs are evident, with the recent adoption of enums (that I’ve fretted about) as the most prominent example.
If I’m right here, it will be curious to see what lies at the end of this road. In any case, it will probably turn out to have already been invented fifty years ago in Lisp.
If you use a powerful HTML templating engine – like Jinja – you will inevitably notice a slow creep of more and more complicated logic entering your templates. Contrary to what many may tell you, this is not inherently bad. Views can be complex, and keeping that complexity contained within templates is often better than letting it seep into controller code.
But logic, if not trivial, requires testing. Exempting it by saying “That’s just a template!” doesn’t really cut it. It’s a pretty crappy excuse, at least in Flask/Jinja, where you can easily import your template macros into Python code:
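A minimal sketch of what that looks like (the `greet` macro and the in-memory loader are made up for illustration; in a real Flask app you would load templates from the application’s environment instead): Jinja exposes a template’s macros as plain Python callables through the template’s `module` attribute.

```python
from jinja2 import Environment, DictLoader

# Hypothetical template defining a single macro
env = Environment(loader=DictLoader({
    "macros.html": "{% macro greet(name) %}<b>Hello, {{ name }}!</b>{% endmacro %}",
}))

# The template's `module` attribute exposes its macros as regular callables
macros = env.get_template("macros.html").module
print(macros.greet("world"))  # <b>Hello, world!</b>
```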
When writing a fully featured test suite, though, you would probably want some more leverage. Importing those macros by hand in every test gets stale rather quickly and leaves behind a lot of boilerplate code.
Fortunately, this is Python. We have world-class tools to combat repetition and verbosity, second only to Lisp macros. There is no reason we couldn’t write tests for our Jinja templates in a clean and concise manner:
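One way to get that leverage is a sketch like the following (the metaclass, environment and template names are all invented for illustration): a metaclass picks up a `TEMPLATE` attribute and attaches that template’s macros to the test class, so tests can call them like ordinary methods.

```python
import unittest
from jinja2 import Environment, DictLoader

# Hypothetical in-memory environment standing in for the real template loader
ENV = Environment(loader=DictLoader({
    "macros.html": "{% macro tag(name) %}<{{ name }}></{{ name }}>{% endmacro %}",
}))

class JinjaTestMeta(type):
    """Attach the macros of the class's TEMPLATE as test class attributes."""
    def __new__(mcs, name, bases, attrs):
        cls = super().__new__(mcs, name, bases, attrs)
        if attrs.get("TEMPLATE"):
            module = ENV.get_template(attrs["TEMPLATE"]).module
            for attr in dir(module):
                if not attr.startswith("_"):
                    setattr(cls, attr, staticmethod(getattr(module, attr)))
        return cls

class JinjaTestCase(unittest.TestCase, metaclass=JinjaTestMeta):
    TEMPLATE = None

class TagMacroTests(JinjaTestCase):
    TEMPLATE = "macros.html"

    def test_tag(self):
        self.assertEqual(str(self.tag("div")), "<div></div>")
```

Each test class only names its template; the importing boilerplate lives in one place.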
The `JinjaTestCase` base class, implemented in this gist, provides evidence that a little `__metaclass__` can go a long way :)
Consider a typical piece of jQuery code that pieces DOM nodes together from concatenated HTML strings, and then strings a long sequence of method calls, one after another, on the same `$(...)` object.
Such code usually demonstrates two patterns at once, and one of them is almost decisively bad: you shouldn’t build up DOM nodes from glued-together markup. To get more concise and maintainable code, it’s better to use one of the client-side templating engines.
The second pattern, however, is hugely interesting. Most often called method chaining, it also goes by the more glamorous name of fluent interface. The idea behind it is pretty simple:
Whenever a method mostly mutates the object’s state, it should return the object itself.
Prime examples of methods that do this are setters: simple functions whose pretty much only purpose is to alter the value of some property stored as a field inside the object. When augmented with support for chaining, they start to work very pleasantly with a few other common patterns, such as builders in Java.
Take, for example, a piece of code constructing a Protocol Buffer message without its
`Builder`’s fluent interface: every field gets a separate `builder.setSomething(...);` statement, with the `builder` variable spelled out each and every time. The equivalent that takes advantage of method chaining simply strings all those setter calls together, finishing with `.build()`.
The chained version may not be shorter by pure line count, but it’s definitely easier on the eyes without all those repetitions of the (completely unnecessary)
`builder` variable. We could even say that the whole Builder pattern is almost completely hidden thanks to method chaining. And undoubtedly, this is a very good thing, as that pattern is mostly a compensation for deficiencies of the Java programming language.
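In Python terms, the same contrast can be sketched with a hypothetical builder class (`PersonBuilder` and its fields are invented for illustration; only the `return self` lines matter):

```python
class PersonBuilder:
    def __init__(self):
        self._fields = {}

    def set_name(self, name):
        self._fields["name"] = name
        return self  # <- this one line enables chaining

    def set_email(self, email):
        self._fields["email"] = email
        return self

    def build(self):
        return dict(self._fields)

# Without chaining: the builder variable is repeated on every line
builder = PersonBuilder()
builder.set_name("John")
builder.set_email("john@example.com")
person = builder.build()

# With chaining: the same calls, minus the repetition
person = PersonBuilder().set_name("John").set_email("john@example.com").build()
```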
By now you’ve probably figured out how to implement method chaining. In derivatives of the C language, it amounts to putting a
`return this;` statement at the end of the method’s body, and possibly changing the method’s return type from
`void` to the class itself – by value, by pointer, or by reference.
It’s true that this may slightly obscure the implementation of a fluent class for people unfamiliar with the pattern. But the cost comes with the great benefit of making the usage clearer – which is almost always much more important.
Plus, if you are lucky enough to program in Python instead, you may just roll out a decorator ;-)
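Such a decorator might look like this – a sketch with invented names (`chained`, the `Query` class): it wraps a method so that the method always returns `self`, letting callers chain at will without touching the method bodies.

```python
import functools

def chained(method):
    """Make a mutating method return self, enabling fluent chaining."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        method(self, *args, **kwargs)
        return self
    return wrapper

class Query:
    def __init__(self):
        self.filters = []

    @chained
    def where(self, condition):
        self.filters.append(condition)

q = Query().where("age > 21").where("active")
print(q.filters)  # ['age > 21', 'active']
```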
Even though it’s not a part of the Zen of Python, there is a widely accepted principle in the Python community that reads:
It’s better to ask for forgiveness rather than permission.
For code, it generally means trying to do something and seeing whether it succeeded, rather than checking whether it should succeed and only then doing it. The first approach simply attempts the operation and handles any failure after the fact; it’s preferred over the more cautious alternative of verifying all preconditions up front.
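In code, the two styles might look like this (the `config` dictionary and `DEFAULT_TIMEOUT` are made up for illustration):

```python
DEFAULT_TIMEOUT = 30
config = {"retries": 3}  # hypothetical settings dict, no "timeout" key

# EAFP: just try it, and ask for forgiveness if it fails
try:
    timeout = config["timeout"]
except KeyError:
    timeout = DEFAULT_TIMEOUT

# versus LBYL ("look before you leap"): ask for permission first
if "timeout" in config:
    timeout = config["timeout"]
else:
    timeout = DEFAULT_TIMEOUT
```

Besides reading more directly, the EAFP variant avoids the race between checking and acting that the LBYL version quietly assumes away.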
What I realized recently is that there is a much more general level of software development where the exact same principle applies. It’s not just about individual pieces of code, or catching exceptions versus checking preconditions. It extends to functions, modules, classes and packages – but more importantly, to the whole process of adding features, extending functionality or even conceiving totally new projects.
Taking a somewhat simplistic view, this process can be thought of as having two extremes. On one end, there is the concept of pure “waterfall”. Every bit of work has to fit into a predefined set of phases, and no later phase can start before the previous one has completely finished. In this setting, design must be done upfront, and all knowledge about the future product has to be gathered before coding starts.
Even from this short description, it’s hard to stop thinking about all the hilarious ways the waterfall approach can end badly. Curiously, though, there are still companies that not only follow it but flaunt the fact publicly.
The other end is sometimes called “cowboy coding”, an almost derogatory term for churning out code without regard to pretty much any methodology whatsoever. As with waterfall, it sometimes (very rarely) works, but more by accident than by design. And like pure waterfall, it doesn’t really exist in nature either, as long as more than one programmer is involved.
If we frame the spectrum like this, with the two completely opposite points at the edges, the natural tendency is to search for the middle ground. But if we subscribe to the titular principle of asking for forgiveness rather than permission, the right answer will clearly be shifted towards the “lean” side.
What is permission here? That’s a green light from design, architecture, requirements, UX and similar standpoints. You wouldn’t mind having all these before you start coding, but they are not absolutely crucial. Without them, you can still do things: code, prototype, explore, and see how things pan out.
Even if you end up ripping it all apart, or maybe even fail to produce anything notable, you will still accrue useful knowledge. Say you have to throw away most of the week’s work because you’ve learned you need to do it differently. So what? With the insight you have gained along the way, it may very well take just a few hours now.
And so thou shalt be forgiven.
Many hardships constantly plague the hard-working IT folk. For example, you may find out that the nearest fridge is out of your favorite drink! But among the so-called First World Problems, there are a few that constitute actual, you know, problems – even if you should consider yourself very lucky to be facing them.
The prime example is this peculiar situation when you’re standing bathed in glaring sunlight, while your body is clearly telling you it’s the middle of the night – or the other way around. I’m talking, of course, about the physiological condition known as jet lag.
It’s a pretty stubborn beast, too. There isn’t really a way to avoid it, short of reverting to old-fashioned and slow means of transport. And you cannot count on having much innate adaptation to deal with it, either. That would have required our savannah ancestors to hop continents in a few hours on a regular basis, which – as far as we know – is not something they used to do.
Nevertheless, the symptoms of jet lag and their severity may vary greatly by person, so most of the advice about dealing with it will not necessarily work for everyone. But among the dubious statements you can find around the Internet, there are also some solid facts here and there.
Although it’s not technically the cause of jet lag, spending many long hours in a closed, cramped, crowded space (also known as “taking a flight”) is certainly a factor that can make its symptoms worse. To reduce that impact, there are a few things you can do.
The air in an aircraft’s cabin is typically very dry. In such an environment, you will lose body water faster than on the ground and eventually experience dehydration. I suspect this is also the reason why many people experience jet lag as a condition similar to a hangover.
The solution is, obviously, to drink more. You should never refuse a cup of water whenever the flight attendant offers you one, and you may actually want to actively bug them for even more. This isn’t likely to win you friends but it’s totally worth it, and also conveniently coincides with the next point.
While on the plane, your options to move around are naturally quite limited. It’s never a bad idea, however, to get up, take a few steps and do some stretching. But what’s more important is to actually move around way before stepping on board.
As you probably know very well, the average airliner has a hermetically sealed cabin that keeps the air inside at a much higher pressure than the surrounding atmosphere it soars through. Unfortunately, the pressure cannot be kept too high, lest the strain on the fuselage’s integrity become too severe and risk a rupture. As a result, the density of air (and thus oxygen) in the cabin is usually comparable to that at about 2 kilometers above sea level.
Unless you live at least that high up and are used to reduced oxygen pressure, you may experience some adverse effects which are curiously similar to the effects of jet lag. Though moving into some elevated area to make your breathing and circulation more efficient is certainly an option, the less radical way is simply to exercise regularly to achieve a comparable effect.
This hint is a bit more controversial, as some people will recommend staying up for the whole trip. Personally, I envy those who are even in a position to consider this an option. But even then, getting into a more relaxed state by closing your eyes works as an acceptable substitute, should you choose to do so.
The practical effect of moving rapidly to a far away time zone is to make your “day” artificially shorter or longer. Coping with that change therefore requires a temporary adjustment period with altered sleep pattern. To put it simply, you will need to either sleep when you normally don’t, or stay awake when you normally do.
While this is really a matter of your typical habits, I’d wager that for the average reader – who is likely no stranger to coding sessions that drag well into the night – the latter would be the path of less resistance. Note, however, that especially on eastward flights the total duration of “daytime” may stretch to 30 hours or more. Taking a nap while still on the plane would then serve to cushion the blow of having the “evening” moved well past its normal time.
A vast majority of code deals with logical conditions by using three dedicated operators: not (negation), and (conjunction), and or (alternative). These are often written as `!`, `&&` and `||`, respectively, in C-like programming languages.
In principle, this is sufficient. Coupled with true and false, it’s enough to encode any boolean function of zero, one or two arguments. But language designers didn’t seem to be concerned with minimalism here, because it’s possible to replace those three operators with just one of two binary functions: NAND (negated conjunction) or NOR (negated alternative), each of which is functionally complete on its own.
If you can’t immediately see how, start with deriving negation first.
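For instance, everything can be rebuilt from NAND alone – a quick sketch in Python (the function names are invented; `nand(p, p)` gives negation, and the rest follows):

```python
def nand(p, q):
    return not (p and q)

def not_(p):
    return nand(p, p)              # not p  ==  p NAND p

def and_(p, q):
    return not_(nand(p, q))        # AND is just a negated NAND

def or_(p, q):
    return nand(not_(p), not_(q))  # De Morgan: p or q == not((not p) and (not q))
```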
So we already have some redundancy for the sake of readability. While it’s surely a bad idea to try and incorporate all 22 aforementioned functions, aren’t there at least a few others that would make sense as operators in their own right?
I can probably think of one: the material implication (→). It may seem like a weird choice at first, but there are certain common scenarios where such an operator would make things more explicit.
Imagine for a second that some mainstream language (like Java) has been enhanced with a
`=>` operator that acts as implication. A straightforward usage would be, say, a `NameMatcher` class that compares optional names, with conditions along the lines of `name != null => name.equals(other)`: the right-hand side only needs to hold when the left-hand side is true.
Many situations involving “optionals” could take advantage of logical implication as an operator. Also note how, in this case, the alternatives do not look very appealing at all. One could use an
`if` statement to produce an equivalent construct,
but this makes a trivial one-liner suddenly look quite involved. We could also expand the implication using the equivalence law p → q ≡ ¬p ∨ q and write `!a || b` directly.
The reader would then have to perform the opposite transformation anyway, in order to restore the real meaning hidden behind the non-obvious
`!` and `||` operators. Finally, we could be a little more creative – hiding the expansion behind a well-named helper, for instance –
and capture the intent almost perfectly… At least until along comes someone clever and “simplifies” the expression back into the bare `!a || b` form.
Does any language actually have an implication operator? Not surprisingly, the answer is yes – but it’s most likely a language you wouldn’t want to code in. Older and scripting versions of Visual Basic had the
`Imp` operator, intended to evaluate the logical connective →.
Besides provoking a few chuckles with its hilarious name, its usefulness was limited by the fact that it wasn’t short-circuiting: both arguments were always evaluated, even if the first one turned out false. You may notice that in our
`NameMatcher` example, such behavior would produce a
`NullPointerException` when one of the names is
`null`. This is also the reason why implication implemented as an ordinary function would not work in most languages, for the arguments are all evaluated before the function’s code even executes.
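A Python sketch makes the problem concrete (the `imp` helper and the name-checking call are invented for illustration): the eager version has the correct truth table but evaluates both sides, so the only way to recover short-circuiting inside a function is to pass the conclusion lazily.

```python
# Implication as a plain function: the truth table is right...
def imp(p, q):
    return (not p) or q

# ...but both arguments are evaluated eagerly, so something like
# imp(name is not None, name.startswith("J")) still blows up when
# name is None – exactly like VB's Imp. A lazy variant takes the
# conclusion as a callable instead:
def imp_lazy(p, q):
    return (not p) or q()

name = None
print(imp_lazy(name is not None, lambda: name.startswith("J")))  # True
```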
As a fact of life, in bigger projects you often cannot just delete something – be it a function, method, class or module. Replacing all its usages with whatever the new recommendation is – if any! – is typically outside of your influence, capabilities or priorities. By no means should this be treated as a lost cause, though; any codebase would quickly be overwhelmed by kludges if there were no way to jettison them.
To reconcile those two opposing needs – compatibility and cleanliness – the typical approach involves a transition period. During that time, the particular piece of API is marked as deprecated, which is a slightly theatrical term for ‘obsolete’ and ‘not intended for new code’. How effective this is depends strongly on the target audience – for publicly available APIs, someone will always wake up and start screaming when the transition period ends.
For in-project interfaces, however, the blow can be effectively cushioned by using certain features of the language, IDE, source control, continuous integration, and so on. As an example, Java has the
`@Deprecated` annotation that can be applied to methods or classes.
If the symbol is then used somewhere else, it produces a compiler warning (and a visual cue in most IDEs). These warnings can be suppressed, of course, but that is something you need to do explicitly, through a complementary language construct.
So I had this idea to try and add a similar mechanism to Python. One part of it is already present in the standard library: we have the
`warnings` module and the built-in
`DeprecationWarning` category. These warnings can be ignored, suppressed, caught or even turned into errors.
They are also pretty powerful, as they make it possible to deprecate certain code paths and not just symbols, which can be useful when introducing new meanings for function parameters, among other things. At the same time, using them is irritatingly imperative and adds clutter:
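Here is roughly what that imperative style looks like (`get_total` is a made-up example): every deprecated function has to issue the warning itself, by hand, at the top of its body.

```python
import warnings

def get_total(items):
    # the deprecation notice, repeated verbatim in every obsolete function
    warnings.warn("get_total() is deprecated; use sum() instead",
                  DeprecationWarning, stacklevel=2)
    return sum(items)

# Note: DeprecationWarning is ignored by default in CPython,
# so a plain caller will not even see the message.
```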
And in this particular case, it also doesn’t work as intended, for reasons that will become apparent later on.
What we’d like instead is something similar to the annotation approach available in Java:
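A first attempt could be a call-time decorator – a sketch with invented names (`deprecated`, `get_total`). Note that it only registers calls, not other kinds of usage such as imports or bare references, which is exactly where the trouble starts:

```python
import functools
import warnings

def deprecated(func):
    """Warn whenever the wrapped function is *called* (calls only!)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        warnings.warn("%s is deprecated" % func.__name__,
                      DeprecationWarning, stacklevel=2)
        return func(*args, **kwargs)
    return wrapper

@deprecated
def get_total(items):
    return sum(items)
```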
Given that the `@`-things in Python (decorators, that is) are significantly more powerful than their Java counterparts, it shouldn’t be a tough call to achieve this…
Surprisingly, though, it turns out to be very tricky and quite arcane. The problems lie mostly in the subtle issue of what exactly constitutes “usage” of a symbol in Python, and how to actually detect it. If you try to come up with a few solutions, you’ll soon realize that the one which eventually requires walking through the interpreter call stack turns out to be the least insane of them.
But hey, we didn’t go to the Moon because it was easy, right? ;) So let’s at least see how we can get started.