Last week – while still on the other side of the pond – I attended a meet-up organized by the local Google Developers Group. The meeting included a presentation about Go, aimed mostly at newcomers, which covered the language from the ground up but at a very fast pace. This spurred a lot of survey questions, as people evidently wanted to assess the language’s viability in general and its fitness for particular application domains.
One of them was about the web frameworks available in Go. The answer mentioned a few simple existing ones, but also how people coming from other languages are working to (re)build their favorite ones in Go. The point was, of course, that even though the language does not have its own Django or Rails just yet, it’s bound to happen quite soon.
And that’s when it dawned on me.
See, I have wondered for a while now why people are eager to subject themselves to a huge productivity drop (among other hardships) when they switch from one technology they are proficient in to a different but curiously similar one.
Mind you, I’m not talking about exploratory ventures intended to evaluate language X or framework Y by doing one or two non-trivial projects; heck, I do that very often (and you should too). No, I’m talking about all-out switching to a new shiny toy, especially when it’s decided consciously rather than through a gradual drift, in a kind of “best tool for the job” fashion.
Whenever I looked for justification, I’d usually just find a straightforward litany of perks and benefits of $targetTechnology, often having a not-insignificant intersection with the analogous list for the old one. Add the other, necessary part of the risk–benefit calculation – the drawbacks – and it just doesn’t balance out. Not by a long shot.
So, I notice that I am confused. There must be some other factor at play, but I couldn’t come up with any candidates – until that day.
As I speculate now, there is actually a big incentive to jump ship whenever a new one appears. And it seems to be one of the dirty secrets of the hacker community, because it directly questions the esteemed notion of meritocracy that we are so eager to flaunt.
The incentive lies in the low-hanging fruit of fame that emerging technologies offer – or more precisely, that the communities surrounding them offer. The newer they are, the more opportunities for fundamental contributions are up for grabs. Those who reach out and seize them will almost certainly earn long-lived respect and recognition – at least within that particular community.
Thing is, those early problems are usually well-known, thoroughly explored wheels that just need to be reinvented yet another time. Depending on what kind of technology we’re talking about, the exact needs might differ a bit. However, I’m pretty sure that at some point it needs at least a few items from the following list:
It’s nowhere near trivial, obviously, to just jump in and implement something from the above list – doubly so when working in an unfamiliar setting. But as it stands now, there are dozens of precedents for each and every one of them. If you have worked with (or on) some of them in the past, it might not be a tall order to recreate the experience in a new language or platform.
Compare that to how hard it is to innovate within the bounds of an established ecosystem. It’s nigh impossible in Java or C++, and it’s hard in Python, Ruby, or even Objective-C. But when the environment lacks some of the very basics, it suddenly becomes that much easier to contribute significantly.
And if the current bandwagon drives off, there’s always the next one just around the corner…