Many are the quirks of shell scripting. Most are related to confusing syntax, but some stem from the surprising semantics of Bash as a language, as well as from the way scripts are executed.
Consider, for example, that you’d like to list files that are within a certain size range. This is something you cannot do with ls alone. And while there’s certainly some awk incantation that makes it trivial, let’s assume you’re a rare kind of scripter who actually likes their hacks readable:
So you use an explicit while loop, obtain the file size using stat, and compare it to the given bounds with a straightforward if statement. Pretty simple code that shouldn’t cause any trouble later on… right?
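The original listing isn’t preserved here, but a minimal sketch of such a script could look like this (the ls_between name, the argument handling and the GNU-style stat -c %s flag are my assumptions, not the post’s actual code):

```bash
#!/bin/bash
# ls_between: list files in the current directory whose size falls
# within the given byte range (a hypothetical sketch, not the original script).
min=$1
max=$2

ls | while read -r filename; do
    size=$(stat -c %s "$filename")     # file size in bytes (GNU coreutils stat)
    if [ "$size" -ge "$min" ] && [ "$size" -le "$max" ]; then
        echo "$filename"
    fi
done
```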
But as your needs grow, you find that you also want to count how many files fall within your range, and how many do not. Given that you have an explicit if, it appears like a simple addition (in a quite literal sense):
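Something along these lines, for instance (again my own reconstruction rather than the original snippet):

```bash
matches=0
misses=0

ls | while read -r filename; do
    size=$(stat -c %s "$filename")
    if [ "$size" -ge "$min" ] && [ "$size" -le "$max" ]; then
        echo "$filename"
        matches=$((matches + 1))
    else
        misses=$((misses + 1))
    fi
done

echo "$matches file(s) matched, $misses did not"
```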
Why doesn’t it work, then? Because this is clearly not the output we’re looking for (ls_between is our script here):
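The exact output obviously depends on your files, but it would look something like this (file names made up for illustration):

```
$ ./ls_between 1000 100000
report.txt
data.csv
0 file(s) matched, 0 did not
```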
It seems that neither the matches nor the misses are counted properly, even though it’s clear from the printed list that everything is fine with our if statement and loop. Wherein lies the problem?
Last weekend I attended the PyGrunn conference, back in the good ol’ Netherlands. It was a very enjoyable and instructive event, featuring not only a few local speakers but also some prominent figures from the Python community – like Kenneth Reitz or Armin Ronacher. Overall, it was a weekend of some great pythonic fun.
…Except for one small detail. As I learned there, Python will apparently get enums in version 3.4 of the language. To say that this was baffling to me would be a severe understatement. Not only do I see very little need for such a feature, but I also have serious doubts whether it fits into the general spirit of the Python language.
Why so? It’s mostly because of the main purpose of enumeration types, derived from their usage in many languages that already have them as a feature. That purpose is to turn arbitrary data – mostly integers and strings – into well-known, reliable entities that can be safely manipulated inside our programs. Enums act as border guardians, filtering out unexpected data and converting expected data into its safe representation.
What’s safety in this context? It’s type safety, of course. Thanks to enumeration types, we can be certain that a particular value belongs to a specific and constrained set of elements. They clearly define all the variants that our code should handle, because everything else was already culled at the conversion stage.
Problem is, in Python there is nothing that would guarantee those safety promises are actually fulfilled. True, the basic property of enum types is retained: given an enum object, we know it must belong to a preordained set of entities. But there is nothing that ensures we are dealing with an enum object at all – short of us actually checking that ourselves:
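The missing snippet isn’t preserved here, but such a check might look roughly like this (the Color enum and the paint function are made up for illustration, using the Enum base proposed for 3.4):

```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

def paint(color):
    # Nothing stops a caller from passing in a plain string or int,
    # so we have to verify the type by hand.
    if not isinstance(color, Color):
        raise TypeError("expected a Color, got %r" % (color,))
    print("painting everything %s" % color.name)
```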
How is this different from a straightforward membership check:
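For example (again just a sketch, mirroring the hypothetical paint function above):

```python
ALLOWED_COLORS = ('red', 'green', 'blue')

def paint(color):
    if color not in ALLOWED_COLORS:
        raise ValueError("expected one of %s, got %r" % (ALLOWED_COLORS, color))
    print("painting everything %s" % color)
```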
except for the latter looking cleaner and more explicit?…
That said, I do not claim enums are completely out of place in Python. That would be untrue, if only because they are easily implemented through a rather simple metaclass. In fact, this is exactly how the proposed enum.Enum base is supposed to work.
At the same time, it is also possible to provide some of the aforementioned type safety. Just look into the various libraries that enhance Python with support for contracts: a form of type safety which is even more powerful than what you’ll find in many statically typed languages. You are free to use them, and you definitely should, if your project would benefit from such functionality.
Incidentally, they fit right in with the new and upcoming enumeration types from Python 3.4. It remains to be seen what this means exactly for the overall direction of the language’s design. But with enums in place, the style of “checked” typing suddenly becomes a lot more pythonic than it was before.
And I can’t say I like it very much.
Much can be said about the similarities between the two popular distributed version control systems: Git and Mercurial. For the most part, choosing between them can be a matter of taste. And just because Git seems to have considerably more proponents in the Cool Kids camp, it doesn’t necessarily follow that it’s a better option.
But I have found at least one specific and common scenario where Git clearly outshines Hg. Suppose you have coded yourself into a dead end: the feature you planned doesn’t pan out the way you wanted; or you have some compatibility issues you cannot easily resolve; or you just need to escape the refactoring bathtub.
In any case, you just want to step back a few commits and pretend nothing happened, for now. The mishap might be useful later, though, so it’d be nice to leave it marked for the future.
In Git, this is easily done. You would start by creating a new branch that points to your dead end:
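The original command isn’t shown here, but assuming you are currently on the my_feature branch, it would be something like:

```
$ git branch __my_feature__dead_end__
```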
Afterwards, both my_feature and __my_feature__dead_end__ refer to the same head commit. We would then move the former back a little, sticking it to one of the earlier hashes. Let’s find a suitable target:
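For instance, by skimming the recent history (this particular git log invocation is just one way to do it):

```
$ git log --oneline -10
```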
If it looks right, we can reset the my_feature branch so that it points to this specific commit:
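That would be something like the following, with <commit_hash> standing in for the hash chosen above (note that --hard also resets the working copy, so make sure it is clean):

```
$ git checkout my_feature
$ git reset --hard <commit_hash>
```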
Our final situation would then look like this:
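The post’s original diagram isn’t preserved, but the gist of it is roughly this:

```
...---o---o---A---o---o---o   <-- __my_feature__dead_end__
              ^
              my_feature
```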
which is exactly what we wanted. Note how any further commits starting from the referent of my_feature would fork at that point, diverging from the development line which has led us into the dead end.
Why is the same thing not so easily done in Mercurial?… In general, this is mostly because of one of its fundamental design decisions: every commit belongs to one branch, forever and always. The branch designation is actually part of the changeset’s metadata, just like the commit message or the diff. Moving things around – like we did above – is therefore equivalent to changing history, and requires tools that are capable of doing so, such as hg strip.
Many languages now include the concept of annotations that can be applied to definitions of functions, classes, or even variables and fields. The term ‘annotation’ comes from Java, while other languages use different names: attributes (C#), decorators (Python), tags (Go), etc.
Besides naming disparities, those features also tend to differ in terms of offered power and flexibility. The Python ones, for example, allow for almost arbitrary transformations, while Go tags are constrained to struct fields and can only consist of text labels. The archetypal annotations from Java lie somewhere in between: they are mostly for storing metadata, but they can also be (pre)processed during compilation or runtime.
Now, what about C++? We know the language has a long history of lacking several critical features (*cough* delegates *cough*), but the recent advent of C++11 fixed quite a few of them all at once.
And at first sight, the lack of annotation support seems to be among them. The new standard introduces something called attributes, which appear to fall into the same conceptual bucket:
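The original snippet isn’t included here, but the syntax looks more or less like this (using the standard [[noreturn]] attribute from C++11):

```cpp
#include <stdexcept>

// [[noreturn]] tells the compiler this function never returns normally.
[[noreturn]] void fail(const char* message) {
    throw std::runtime_error(message);
}
```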
That’s misleading, though. Attributes are nothing more than a unified syntax for compiler extensions. Until C++11, the job of attributes was done by custom keywords, such as __attribute__ (GCC) or __declspec (Visual C++). Now they are meant to be replaced by the new [[squareBracket]] syntax shown above.
So there is nothing really new about those attributes. At best, you could compare them to the W3C deciding on a common syntax for border-radius, forcing both -webkit-border-radius and -moz-border-radius to adapt. Most importantly, there is no standard way to define your own custom attributes and introspect them later.
That’s a shame. So I thought I could try to fix that, because the case didn’t look completely lost. In fact, there is a precedent: a mainstream language where some people implemented something kinda-almost-like annotations. That language is JavaScript, where annotations can be realized as functions arranged into a pipeline. Here’s an example from the Express framework:
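The original example is missing here, but a typical Express route with such middleware looks roughly like this (the route path and handler body are my own; loginRequired and cached are the user-defined middleware functions discussed below):

```javascript
app.get('/dashboard', loginRequired, cached(60), function(req, res) {
    res.render('dashboard', { user: req.user });
});
```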
Both loginRequired and cached(...) are functions, tied together by app.get with the actual request handler at the end. They “decorate” that handler, wrapping it inside code which offers additional functionality.
But that’s JavaScript, with its dynamic typing and callback bonanza. Can we even try to translate the above code into C++?…
Well yes, we actually can! Two new features from C++11 allow us to attempt this: lambda functions and initializer lists. With those two – and a healthy dose of functional programming – we may achieve at least something comparable.
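The post’s actual implementation isn’t reproduced here, but a rough sketch of the idea could look like this (the Handler and Decorator aliases and the handle helper are my own names, not anything from the original):

```cpp
#include <functional>
#include <initializer_list>
#include <iostream>
#include <string>
#include <vector>

// A request handler: takes a request (just a string here) and produces a response.
using Handler = std::function<std::string(const std::string&)>;

// A "decorator" wraps a handler in another one, adding behavior around it.
using Decorator = std::function<Handler(Handler)>;

// Tie a list of decorators and the final handler together, Express-style.
Handler handle(std::initializer_list<Decorator> decorators, Handler handler) {
    // Apply the decorators right to left, so the first one listed ends up outermost.
    std::vector<Decorator> list(decorators);
    for (auto it = list.rbegin(); it != list.rend(); ++it) {
        handler = (*it)(handler);
    }
    return handler;
}

// An example decorator: log every request before passing it along.
Decorator logged = [](Handler next) -> Handler {
    return [next](const std::string& request) -> std::string {
        std::cout << "request: " << request << std::endl;
        return next(request);
    };
};

int main() {
    Handler hello = handle({logged}, [](const std::string& request) -> std::string {
        return "Hello, " + request + "!";
    });
    std::cout << hello("world") << std::endl;
}
```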
Last weekend I had the pleasure to attend the game development conference IGK in Siedlce, Poland. Although gamedev is not something I normally do, it was a fun experience.
One thing I was especially looking forward to was the day-long game programming contest called Compo. Like last year and the one before, I formed a team with Adam “Reg” Sawicki and Krzysztof “Krzysiek K.” Kluczek, guest starring Kamil “Netrix” Szatkowski. That’s four coders in total, so we kinda had a problem with, say, the more artistic part of the endeavor :)
Nevertheless, the result was pretty alright: we scored 4th place out of about a dozen teams. The theme this time was an “artillery” game with a few mandatory features: two types of energy-like resources (HP & MP), achievements, and multiplayer gameplay.
We nailed the last one by implementing support for two mice at once. This allowed us to come up with an interesting idea inspired by the Atari game Rampart: two castles that expand and attack each other in a fast-paced duel full of frantic clicking. And since the attackers – or rather guardians – of each castle are dragons, the game got its title: Here Be Dragons.
[2013-04-07] Here Be Dragons (3.2 MiB, 1,725 downloads)
In retrospect, we probably should have narrowed the scope of our project more. Doing it in 3D was ambitious in itself, but packing it with all the features we planned the night before turned out to be slightly too much :) In the end we didn’t even have time to code any sound effects or music!… That’s certainly a lesson to learn here.
But all in all, it was great fun. It certainly inspired me to maybe try and brush up my gamedev skills a bit more :)
You probably know very well that internationalization is hard. The mere act of translating the UI texts is actually one of the easiest parts, even though it’s not a pushover either. As one example: if your messages include quantities, you need to have some logic in place to choose different forms of nouns to go with your numbers. Fortunately, most frameworks already have that, as it’s a standard i18n feature.
Not every string message in your code is something to localize, of course. Log messages that are not visible to the user can be left alone in English – they should be, in fact. Coincidentally, though, those messages are also very likely to contain many numbers, often used as numerical quantities: things to do, things done, error count, and so on:
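The original example isn’t shown here, but think of a log call along these lines (Python’s logging module is just my choice for illustration, and processed_items is a made-up variable):

```python
import logging

logging.info("Processed %d items", len(processed_items))
```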
What if that number is 1?…
Oh well. That’s hardly the end of the world, is it? Anyway, let’s just make the message slightly more universal:
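Presumably something like the classic “(s)” hack (my guess at what the original snippet showed):

```python
logging.info("Processed %d item(s)", len(processed_items))
```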
There, problem solved!
No worries, I haven’t gone insane. I know that no real-world software would put such gold plating on something as irrelevant as the grammar of its log messages. But it’s spring break, and we can be silly, so let’s have some fun with the idea.
Here I pose the question:
How hard would it be to construct a plural form of English noun from the singular one?
Consulting the largest repository of human knowledge (well, second largest) reveals that the rules of building English plurals are not exactly trivial – but not very complex either. There are exceptions to almost every rule, though, and a large body of exceptions in general. Still, you could expect to achieve at least some success by just disregarding them completely, and following the simple rules to the letter.
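As a baseline, such a rules-only approach could be as simple as this sketch (my own quick take, not the post’s code):

```python
def naive_plural(noun):
    """Build an English plural using only the basic rules,
    deliberately ignoring every exception (sheep, children, radii, ...)."""
    if noun.endswith(('s', 'x', 'z', 'ch', 'sh')):
        return noun + 'es'
    if len(noun) > 1 and noun.endswith('y') and noun[-2] not in 'aeiou':
        return noun[:-1] + 'ies'
    if noun.endswith('fe'):
        return noun[:-2] + 'ves'
    if noun.endswith('f'):
        return noun[:-1] + 'ves'
    return noun + 's'

print(naive_plural('error'))  # errors
print(naive_plural('box'))    # boxes
print(naive_plural('entry'))  # entries
```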
How high would that success ratio be, though?
Coding for Android typically means writing Java, for all the good and bad that entails. The language itself is known for its verbosity, and Android itself does not really encourage conciseness either.
To illustrate, look at this trivial activity that simply displays its application name and version, complete with a button that lets the user close the app:
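The original listing isn’t preserved, but a typical hand-written version would look more or less like this (layout and resource identifiers such as R.layout.main and R.id.btnExit are assumptions):

```java
import android.app.Activity;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.TextView;

public class MainActivity extends Activity {
    private TextView lblAppName;
    private TextView lblAppVersion;
    private Button btnExit;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        // Look up every view by hand and stash it in a field.
        lblAppName = (TextView) findViewById(R.id.lblAppName);
        lblAppVersion = (TextView) findViewById(R.id.lblAppVersion);
        btnExit = (Button) findViewById(R.id.btnExit);

        lblAppName.setText(getString(R.string.app_name));
        try {
            PackageInfo info = getPackageManager().getPackageInfo(getPackageName(), 0);
            lblAppVersion.setText(info.versionName);
        } catch (PackageManager.NameNotFoundException e) {
            lblAppVersion.setText("");
        }

        // Wire up the exit button with an anonymous listener.
        btnExit.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                finish();
            }
        });
    }
}
```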
Phew, that’s a lot of work! While the IDE will help you substantially in crafting this code, it won’t help that much when it comes to reading it. You can easily see how a lot of stuff here is repeated over and over, most notably the fields holding View objects. Should you need to change something about them or add a new one, you have to go through all these places.
Not to mention that it simply looks cluttered.
What to do about it, though?… As it turns out, there is a way to structure your Android code in a more succinct and readable way. Like a few other approaches to modern Java, it employs a palette of annotations. Namely, there is a project called Android Annotations that offers a few dozen of them and aims to speed up Android development, making the code simpler and more maintainable.
And it’s pretty damn good at that, I must say. Rewriting the previous snippet to use those annotations results in a class which looks roughly like this:
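Again, this is my reconstruction rather than the original listing; the annotations themselves (@EActivity, @ViewById, @AfterViews, @Click) come from AndroidAnnotations, though the import package differs between its versions:

```java
import org.androidannotations.annotations.AfterViews;
import org.androidannotations.annotations.Click;
import org.androidannotations.annotations.EActivity;
import org.androidannotations.annotations.ViewById;

import android.app.Activity;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.widget.TextView;

@EActivity(R.layout.main)
public class MainActivity extends Activity {
    // Views are injected by field name; no findViewById() calls needed.
    @ViewById TextView lblAppName;
    @ViewById TextView lblAppVersion;

    @AfterViews
    void showAppInfo() {
        lblAppName.setText(getString(R.string.app_name));
        try {
            PackageInfo info = getPackageManager().getPackageInfo(getPackageName(), 0);
            lblAppVersion.setText(info.versionName);
        } catch (PackageManager.NameNotFoundException e) {
            lblAppVersion.setText("");
        }
    }

    // Bound by name to the view with id btnExit; the field itself is no longer needed.
    @Click
    void btnExit() {
        finish();
    }
}
```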
Not only have we eliminated all the boilerplate, leaving only the actual logic, but we have also made the code more declarative and explicit. Some of the irrelevant entities have been removed completely, too, like the btnExit field which was only used to bind an event listener. Overall, it’s a much more elegant and understandable piece of code.
How about a bigger example? There is one on the project’s official page which looks pretty impressive. I can also weigh in with my own anecdotal evidence: I went through the Android game I wrote long ago and molded its code (a few KLOC) to work with AA. The result was rather impressive, partially thanks to the very light dependency injection facilities the project provides, which allowed me to replace many occurrences of the Game.get().getGfx().draw(...); silliness with just gfx.draw(...);.
So in closing, I can recommend AA to all Android devs out there. It will certainly make your lives easier!