Some time ago, on one of the forums, I got into a discussion about the merits and motivations of releasing projects and code as open source. It turns out that many people cannot quite wrap their heads around the concept of giving away your code for free. Even leaving aside the exact meaning of ‘free’ (both of them apply here), I believe we can observe a kind of cultural gap. Strangely enough, it’s not even a case of Nerds vs. the Rest of the World: the geek community is itself somewhat divided on this issue.
And that’s OK, in a way. I know it may not be obvious how value can be preserved if we just volunteer our time and skills for open source projects and don’t receive any direct compensation in return. Honestly, I’m still kinda amazed how it all works out, but I have my small theory. It’s somewhat tangential to the typical gift culture explanation, but could also shed some light on companies’ motivations for contributing to OSS.
Long story short, I think the dynamics between open source contributors and beneficiaries can be described in terms of the prisoner’s dilemma: a rather classic example from game theory. Before pursuing the analogy further, let’s have a brief look at this curious puzzle.
Like the name suggests, the prisoner’s dilemma can be formulated in terms of jail, prisoners, cooperation and defection. I prefer an alternate setting, though, as it seems to better illustrate the concept and may be easier to understand. You can check the original formulation on Wikipedia, among other sources.
There is a small game-theoretical difference between those two scenarios, but it’s largely irrelevant to our discussion.
Consider a two-player game with a potential reward of $100. The money can be taken by one of the players or split evenly among them. There is also a possibility of both players getting nothing. It all depends on how the players themselves decide what to do with the money.
They make independent decisions by choosing one of two options. They can decide to either split the money evenly between both of them, or steal (figuratively) the whole sum for themselves.
What happens next is the result of both decisions, revealed and applied at once:

- if both players decide to split, each of them walks away with $50;
- if one player steals while the other splits, the stealer takes the whole $100 and the splitter gets nothing;
- if both players decide to steal, neither of them gets anything.
What is the optimal strategy in a setting like this? If both opponents are known to be rational agents, they should arrive at the same conclusions. Because they cannot know each other’s thoughts, each of them may only speculate what p – the probability that the opponent steals – may be.
We know their decisions are independent, though, so p shall remain the same regardless of what the player chooses in the end. Hence the expected values of both decisions seem to clearly indicate which one is better:

E[split] = (1 − p) · $50 + p · $0 = (1 − p) · $50
E[steal] = (1 − p) · $100 + p · $0 = (1 − p) · $100
Looks like we should simply steal and be done with it.
But wait! Since both players are rational agents, they will both arrive at the same conclusion. Moreover, each will know the other thought the same. So they both know that p is actually 1. Unfortunately, in this case both expected values drop to zero:

E[steal] = (1 − 1) · $100 = $0

and they will both get nothing if they follow this logic.
Pity. Or maybe it’s better to split, then? If they both do just that, each will at least get $50 instead of nothing… Yes, this looks like a much better alternative, especially since we can count on both players to apply this reasoning; they are rational agents, after all. So by this logic, they will both split and everyone will get something in the end…
Well, except that now p is 0, so it’s actually better to steal instead. Oh, and both players know that, obviously, and are not happy about it… again. Hence they would rather split, which makes stealing a more attractive option, which in turn compels splitting – and so on.
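If you prefer code to algebra, here’s a quick Python sketch of those expected payoffs (the prize amounts are hard-coded, and p stands for the assumed probability that the opponent steals):

```python
def expected_payoff(my_move, p):
    """Expected prize for choosing `my_move` ('split' or 'steal'),
    given the probability `p` that the opponent steals."""
    if my_move == 'split':
        return (1 - p) * 50   # opponent splits -> $50 each; steals -> nothing
    else:  # 'steal'
        return (1 - p) * 100  # opponent splits -> we grab the whole $100

for p in (0.0, 0.5, 1.0):
    print(p, expected_payoff('split', p), expected_payoff('steal', p))
```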
This example of circular meta-reasoning is unresolvable in the two-player case, because a single choice can make or break the system. Fortunately, reality is much more complex, with multiple agents making countless decisions all the time.
Decisions such as, for example, whether to open source this new project, or maybe contribute to some existing one.
Just like in the situation described above, it looks like the optimal choice is to “steal”: to draw liberally from the vast expanse of existing open source software while contributing nothing in return. No single such behavior would cause the whole ecosystem to crumble, so the incentive for exploitation is very tangible. Heck, it’s not even obvious what the “steal” is here, because the value lost by the “splitters” is not easily identifiable.
Yet it’s obvious that not every party can apply this strategy, lest the result be very suboptimal for everyone. Somewhere between those two extremes (everyone “splits” vs. everyone “steals”) there might be a point of equilibrium, where “stealers” can derive maximum utility without irrevocably harming the game’s dynamics. We don’t really know where that point lies, though, so we just choose to play along and split.
When diving into a new language, or a radically different framework, it may be a good idea to have a bigger project where you can apply your newfound skills. In my experience, this is typically better than having a lot of smaller ones, because it minimizes the hassle of a project’s initial setup and therefore encourages you to experiment more.
To reap the largest benefits of this approach, the project of choice should exhibit two important properties:

- it should be easy to get started with, so that you have something running early on;
- it should be (almost) infinitely extensible, so that there’s always one more feature you could add.
What types of projects fit this characterization? I’d say quite a lot of them.
When I was honing my Python skills, I started programming an IRC bot so that I could cram a few ideas into it rather quickly. They were implemented mostly as commands that users could type in to have the bot perform some action, like searching Wikipedia for a given term.
A similar pattern (a collection of mostly independent commands) can be realized in many different scenarios. Aspiring web programmers could come up with something like a YubNub clone (bonus points if it allows users to add their own commands). Complete coding novices would probably have to resort to simple, menu-based programs in the terminal instead.
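To show what I mean by that pattern, here’s a minimal, made-up sketch in Python – a dispatch table mapping command names to functions, with all the IRC (or web) plumbing left out:

```python
import random

def cmd_roll(args):
    """Roll a die with the given number of sides (6 by default)."""
    sides = int(args[0]) if args else 6
    return "You rolled a %d" % random.randint(1, sides)

def cmd_echo(args):
    """Repeat the message back verbatim."""
    return " ".join(args)

COMMANDS = {'roll': cmd_roll, 'echo': cmd_echo}

def handle_message(text):
    """Parse '!command arg1 arg2 ...' and dispatch it to a handler."""
    if not text.startswith('!'):
        return None
    parts = text[1:].split()
    if not parts:
        return None
    name, args = parts[0], parts[1:]
    handler = COMMANDS.get(name)
    return handler(args) if handler else "Unknown command: %s" % name

print(handle_message("!roll 20"))
```

Adding a new command is then just a matter of writing one more small function and registering it – which is exactly the kind of extensibility we’re after.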
Another option is to attack a problem which is very broad and/or vague. Text editors, for example, fuel countless discussions (even wars) over what functionality they should contain and how it should be accessible in the UI. Chances are slim that your take on the problem sprouts a new Emacs or Vim, but a home-brewed editor is easy enough to start and obviously extensible, almost without limits. Additionally, editors can fit into pretty much any environment, from terminals to desktop UIs or HTML5 applications.
Some endeavors are a bit more specific, though. In web development, a CMS or blogging engine has become something of a timeless classic. Everyone has written one at some point, and there’s a lot of additional (though not always useful) functionality that can be added to it. Getting the basics right is also a challenge here, especially from a security standpoint.
For mobile app creators, the infamous To-Do list app is an idea exercised ad nauseam. But it’s actually a good playground for toying with various device capabilities (e.g. location-based reminders) or web services (like Google or iCloud calendar).
I’m pretty sure I’m far from exhausting the list of possibilities here. I cannot really speak for domains I have little-to-no experience with, for example embedded or hardware-oriented programming with equipment such as Arduino.
It should be possible to come up with infinitely extensible projects for almost every environment and platform, though. After all, every program always has one more feature to add ;)
It won’t be a stretch to bet that you’ve heard of XML. The infamous markup format was intended to be easily parseable by machines, in addition to being readable by humans. Needless to say, it failed to deliver on either of these promises. Markup elements tend to obscure the actual data, while parsing it – with all its namespaces, <![CDATA[s – is convoluted and not exactly efficient.
Other formats have thus risen to popularity, of which JSON is probably the most widely known and used. It also has excellent support on many platforms.
For the purpose of transporting data between Internet endpoints, or for various APIs offered by websites and services, it works pretty great. There are other applications, though, where it might be the obvious first choice – but not necessarily the best one.
What JSON does well is being easy to read and parse: the syntax can be outlined and defined in a few paragraphs. At the same time, however, it’s not that convenient to write: keys and string values always have to be quoted, commas are mandatory (yet trailing ones are forbidden), and there is no support for comments. And lastly, there is no good support for longer texts, due to the lack of multi-line string literals.
There is another, lesser known format which addresses these concerns very well. It’s called YAML, recursively from YAML Ain’t Markup Language. Here’s a short sample:
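```yaml
# a made-up sample document, just to show the syntax
name: Example Project
version: 1.2
active: yes
tags:
  - networking
  - utilities
author:
  name: John Doe
  email: john@example.com
```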
At first sight, it probably appears as a pythonic (or haskellish) counterpart to the C-like JSON. Indentation does indeed matter, at least in the most popular and powerful variant of YAML syntax.
However, giving this bit of significance to whitespace allows YAML to substantially reduce syntactic clutter. There are no curly or square brackets, and no commas. For the most part, there’s not much use for quotes either: strings are easily recognized as both keys and values, even if they contain spaces. Arrays are also supported in a straightforward way: by listing their elements with a leading dash (-), including cases where they are key-value objects themselves.
There’s even support for long strings that span multiple lines:
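```yaml
# again a made-up example; note the pipe character after the key
description: |
  Dear user,

  this text will keep its line breaks
  and most of its whitespace intact.
```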
Here, the pipe character (|) instructs parsers to preserve newlines and most of the whitespace, excluding the leading indent. Were it replaced with a greater-than sign (>), an HTML-like fold would be performed, converting chains of whitespace into a single space.
Simple YAML documents represent a tree of keys and values which is much the same as the one produced from JSON files. For some programming languages, this similarity extends to the parsing libraries, as they offer more or less the same interface:
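For instance, in Python – assuming the PyYAML package is installed alongside the standard json module – loading either format looks almost the same (the file names here are just placeholders):

```python
import json
import yaml  # PyYAML

with open('config.json') as f:
    config = json.load(f)       # dicts, lists, strings, numbers...

with open('config.yaml') as f:
    config = yaml.safe_load(f)  # ...and the very same kind of tree here
```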
Even if it’s not the case for your language, it’s quite likely there is a YAML parser readily available.
The funny thing about YAML is that its seemingly simple syntax hides a very powerful and flexible format.
For example, the values are actually typed. They can be strings or nulls, but also integers, floats or booleans. The syntactic rules do a very good job here of automatically detecting the types; for instance, the word Yes will be recognized as boolean truth if standing on its own, but a text starting with it will be correctly recognized as a string.
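A few made-up values to illustrate:

```yaml
# automatic type detection at work (the values are arbitrary)
debug: Yes             # boolean true
port: 8080             # integer
ratio: 0.75            # float
motto: Yes we can      # string, despite starting with "Yes"
nothing: null
```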
The other nifty feature is referencing. Parts of YAML document tree can be given labels, while other parts can later use those labels to point back to specific nodes. The whole structure can therefore morph into something more general than just a tree:
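```yaml
# a made-up example: one node labelled, then referenced twice
john: &john
  name: John Doe
  email: john@example.com

project:
  owner: *john        # points back to the node labelled above
  reviewers:
    - *john
```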
With types (including custom ones) and references, YAML can actually serve pretty well as a serialization format for persisting objects.
But besides that, what is YAML actually good for?
I have hinted at the beginning that it seems like a decent choice for structured text data which is meant to be hand-edited. Various configuration files fall into this category, as do some datasets around the size of a contact list. I used YAML as the config format for my IRC bot, and it worked very well for this purpose. I’m also using it to store initialization data for the database used by another side project of mine.
So, if you are not exercising YAML in any of your current endeavors, I encourage you to give it a try. It might not be the best thing since sliced bread, but it’s a very pleasant format to work with.
The upcoming release of Windows 8 is stirring up the tech world, mostly because of some controversial decisions Microsoft has made regarding the new version of their flagship OS.
Were it a few years ago, I would likely have participated in the various discussions about this springing up on the message boards I typically frequent. I would probably also have tested the development preview which was made available several months earlier. Most likely, I would have a clear opinion about the viability of the new Metro UI, the WinRT app platform, or the overall push in the general direction of HTML5.
I would… But I do not. Actually, I couldn’t care less about the whole thing.
It’s not only because I touch my Windows installation once in a blue moon, almost exclusively for gaming. No, it’s mostly because it’s 2012, and Windows still doesn’t support some crucial functionality you could expect from a modern operating system.
Unfortunately, most users don’t expect it because they never knew any better. Therefore I will try to highlight some features from alternative operating systems (typically Linux or OSX) that I think Windows can be rightfully scorned for lacking.
In Unix-like systems, files and directories whose names start with ‘.’ (a dot) are “hidden”: they don’t appear when listed by ls and are not shown by default in graphical file browsers. It’s not very clear why there is such a mechanism, especially when we have extensive chmod permissions and attributes which are not tied to the filename. Actually, that’s one of the distinguishing features of Unix/Linux: there are neither .exe nor .app files, just chmod +x.
But here it is: name-based visibility control for files and directories. Why was such a thing ever implemented in the first place? Well, it turns out it was purely by accident:
Long ago, as the design of the Unix file system was being worked out, the entries . and .. appeared, to make navigation easier. (…) When one typed ls, however, these entries appeared, so either Ken [Thompson] or Dennis [Ritchie] added a simple test to the program. It was in assembler then, but the code in question was equivalent to something like this:
    if (name[0] == '.') continue;
That test was meant to filter out . and .. only. Unintentionally, though, it ruled out a much bigger class of names: everything that starts with a dot, because that’s what it actually checked for. Back then it probably seemed like an innocuous detail. Fast-forward a couple of decades, and it’s a de facto standard for storing program-specific data inside the user’s home path. Such dotfiles can grow quite numerous over time, too: on my system there are over 100 of them, making up a vast majority of my home directory. It’s neither elegant nor efficient to have that much app-specific cruft inside the most important place in the filesystem. And even if GUI applications tend to collectively use a single ~/.config directory, the tradition of cluttering the root of $HOME is strong enough to persist for the foreseeable future.
Heed this as a warning. In the event your software becomes a basis for many derived solutions, future programmers will exploit every corner case of every piece of logic you have written. It doesn’t really matter what you wanted to put into your code, but only what you actually did.
Design patterns are often criticized, typically in the context of object-oriented programming. I buy into many such critiques, mostly because I value simplicity as one of the most important qualities of good code. Patterns – especially when overused – often stand in the way of achieving it.
Not all criticism aimed at design patterns is well founded and well targeted, though. More specifically, the example I’ve seen brought up quite often is the Singleton pattern, and I don’t think it’s a good one in this context. Actually, for making the case that design patterns are (sometimes) harmful, the singleton is probably one of the worst picks.
Realizing this is important, because whatever point you’re trying to convey will be significantly watered down if you use an inadequate example. It’s just too easy to come up with counterarguments or excuses that concentrate on specific flaws of your sloppy choice, rather than address the more general issues you wanted to shed light on. A bad example can simply be a red herring, drawing attention away from the topic it was meant to stand for.
What’s so bad about the Singleton pattern, though?
Especially in their classic incarnation, formulated in the famous work of the Gang of Four, design patterns are mostly about increasing the robustness and flexibility of software design by introducing additional layers of indirection between existing concepts. For instance, you can consider the Factory pattern as a proxy that separates the process of creating an object from the specific type (class) of that object.
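As a quick illustration – a hypothetical Python snippet, not taken from any real codebase – a factory function lets the caller ask for “a parser” without ever naming a concrete class:

```python
import json

class JsonParser:
    def parse(self, text):
        return json.loads(text)

class DummyParser:
    def parse(self, text):
        return text  # stand-in for any other format

def create_parser(fmt):
    """The factory: callers depend only on this function and on the
    parse() interface, not on the concrete parser classes."""
    parsers = {'json': JsonParser, 'dummy': DummyParser}
    return parsers[fmt]()

data = create_parser('json').parse('{"answer": 42}')
print(data)
```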
This goes along the same lines as the separation between interface and implementation, a fundamental concept behind the whole object-oriented paradigm. The purpose is to decrease coupling, i.e. the dependencies between different parts of the code, and it’s a noble goal in its own right.
Unfortunately, the Singleton pattern doesn’t really aid us in this pursuit. Quite the opposite: it talks about having at most one single instance of some class, which will easily make it a choke point for many otherwise independent parts of program logic. It happens especially often with top-level objects, representing whole subsystems; thanks to making them into singletons, they end up being used almost everywhere.
We also shouldn’t forget what singletons really are – that is, global variables. (You can have singletons with more limited scope, of course, but OO languages typically support those as a language feature that doesn’t require a dedicated design pattern.) The pattern attempts to abstract them away, but they tend to leak out rather eagerly, causing numerous problems.
Indeed, there are all sorts of nastiness related to global variables, with these two being – in my opinion – the most important ones:

- they are shared, mutable state that every part of the program can touch, which invites subtle coupling and thread-safety problems;
- they are hard to substitute, which makes code that depends on them difficult to test in isolation.
It is worth noting that these problems are somewhat language-specific. In several programming languages, you can relatively easily create “global” variables which are only apparent; in reality, they proxy to thread-local and/or mockable objects, addressing both concerns outlined above.
However, in such languages the Singleton pattern is often obsolete as an explicit technique, because they readily provide it as part of the language. For example, Python module objects are already singletons: their singularity is guaranteed by the interpreter itself.
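A quick way to see this in action (nothing here is specific to any particular codebase – any module will do):

```python
import sys

# repeated imports of the same module always yield the very same object,
# because the interpreter caches it in sys.modules after the first import
import json as first
import json as second

assert first is second
assert sys.modules['json'] is first
```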
So, if you are to discuss the merits of software design patterns – the pros and (especially) the cons – make sure you don’t base your whole argument on the example of the Singleton. Accuracy, integrity and honesty require choosing a target which is more representative and has no severe, unrelated issues.
Writing code is not everything there is to programming. But writing code consists of much more than just typing it in. There is compiling or otherwise building it; running the application to see whether it works or how it breaks; and of course debugging, to pinpoint the issue and fix it. These are inherent parts of the development process, and we shouldn’t expect to be skipping them anytime soon…
Well, except that right now, I virtually pass on all of them. Your mileage may vary, of course, but I wouldn’t be surprised if many more developers found themselves in this peculiar position. I actually think this might be a sign of times, and that we should expect more changes in developer’s workflow that head in this very direction.
So, how do you “develop without developing”? Let’s look at the aforementioned activities one by one.
Getting rid of the build step is not really inconceivable. There are plenty of languages that do not require additional processing prior to running their code. They are called interpreted languages, and they have been steadily gaining ground (and hype) in the programming world for quite some time now.
But even if we’re talking about traditional, de facto compiled languages (like Java or C++), there’s still something missing. It’s the fact that you rarely have to explicitly tell your IDE to compile & build your project, because it’s already doing that, all the time.
I feel there’s a tremendous productivity gain in shortening the feedback loop and having your editor/IDE work with you as you write the code. When you can spot and correct simple mistakes as you go, you end up having more time and cognitive power for more interesting problems. This background assistance is something I really like to have at all times, so I’ve set it up in my editor for Python as well.
The kind of programs I’m writing most often now – server-side code for web applications and backends – does not require another, seemingly necessary step all that often: running the app. As it stands, their software scaffolding is clever enough to detect changes at runtime and automatically reload the program’s code without explicit prompting.
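As an example – assuming a Flask application here, though most modern web frameworks ship an equivalent – the development server can watch the source files and reload the app whenever they change:

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "Hello, world!"

if __name__ == '__main__':
    # debug=True turns on the reloader: edits to this file take effect
    # without stopping and restarting the server by hand
    app.run(debug=True)
```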
Granted, this works largely because we’re talking about interpreted languages. For compiled ones, there are usually many more hurdles to overcome if we want to allow hot-swapping code into and out of a running program. Still, there are languages that allow for just that, but they are usually chosen because of the reliability requirements of some mission-critical systems.
In my opinion, there are also significant programming benefits if you can pull it off on your development machine. They are, again, related to making the cycle of writing code and testing it shorter, and therefore making the whole flow more interactive and “real-time”. Recently, we have seen some serious pushes in this very direction. Maybe we will see this approach hitting the mainstream soon enough.
“Oh, come on”, you might say, “how can you claim you’ve got rid of debugging? Is all your code always correct and magically bug-free?…”
I wish this were indeed true, but so far reality refuses to comply. What I’m referring to is proactive debugging: stepping through code to investigate the state of variables and objects. This is done to verify whether the actual control flow of a particular piece of code is the one we really intended. If we find a divergence, it might indicate a possible cause of the bug we’re trying to find and fix.
Unfortunately, this debugging ordeal is both ineffective and time consuming. It’s still necessary for investigating errors in some remote, test-forsaken parts of the code which are not (easily) traceable with other methods and tools. For the most part, however, it’s an obsolete, almost antiquated way of doing things. That’s mainly because:

- generous logging can record the relevant state of the program as it runs, so there’s rarely a need to inspect it by hand, step by step;
- with decent test coverage, most bugs manifest themselves as a failing test case (an assert that fails), while the relevant part of the code is even easier to localize. You might occasionally drop into the debugger to examine local variables of the test run, but you never really step through whole algorithms.
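A trivial, made-up example: when a test like the one below fails, the assertion message and the test name already point at the suspect code – no stepping through required.

```python
import unittest

def median(values):
    """Return the median; buggy for an even number of elements."""
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

class MedianTest(unittest.TestCase):
    def test_even_number_of_elements(self):
        # fails loudly, naming this test and showing that 3 != 2.5
        self.assertEqual(2.5, median([1, 2, 3, 4]))

if __name__ == '__main__':
    unittest.main()
```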
It’s not like you can throw away your Xdb completely. With generous logging, decent test coverage and a little cautiousness when adding new things, though, the usefulness of long debugging sessions is greatly diminished. It is no longer a mandatory, or even typical, part of the development workflow.
Whatever else it may be, I won’t hesitate calling it a progress.