There is a specific technology I have wanted to play around with for some time now: node.js. It also happens that I think the best way to get to know new stuff is to create something small but complete and functional. Note that by ‘functional’ I don’t really mean ‘practical’; that distinction is pretty important, given what I’m about to present here.
Basically, I wrote a package manager for jQuery. The idea was to have a straightforward way to install jQuery plugins – a way that somewhat mirrors the experience of dozens of other package managers, from pip to cabal. The end result looks pretty decent, in my opinion.
The funny part? It doesn’t use any central, remote registry of plugins. What it does is search GitHub and pull code directly from there – provided it can find something relevant that looks like a jQuery plugin. That seems to work well for quite a few popular ones, which is rather surprising given how silly and simplistic the underlying algorithm is. Certainly, there’s plenty of room for improvement, including support for jquery.json manifests – the future standard for the upcoming official plugin site.
As I said before, though, the main purpose of jqpm was an educational one. After toying with the underlying technologies for a couple of evenings, I definitely have a better perspective to evaluate their usefulness. While the topic might warrant a follow-up post in the future, I think I can briefly summarize my findings in a few bullet points:
- The language itself is still plain JavaScript: everything has to be spelled out explicitly with functions and loops, and without denser syntactic sugar such as list comprehensions.
- The asynchronous, callback-based style that permeates all of Node’s APIs takes some getting used to, and can become a genuine pain in more complex flows.
- Dependencies are pulled in with require() calls, which makes for an unusual module system that resembles classic C/C++ #includes – but improved, of course (see the sketch after this list). What stands out the most is the lack of virtualenv/rvm-style utilities; instead, an equivalent approach of local node_modules subfolders is used. (npm faq and npm help folders provide a more elaborate explanation of how exactly this works.)

The bottom line: node.js is definitely not a cancer and has many legitimate uses, mostly pertaining to rapid transfer of relatively small pieces of data over the Internet. API backends, single-page web applications and certain game servers all fall easily into this category.
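To illustrate the point about modules, here’s a minimal sketch of require() in action (file and function names are made up):

```javascript
// greeter.js – whatever gets attached to module.exports
// becomes the module's public interface
exports.greet = function(name) {
    return 'Hello, ' + name + '!';
};

// app.js – './greeter' is resolved relative to this file;
// a bare name like 'express' would be looked up in the
// nearest node_modules subfolder instead
var greeter = require('./greeter');
console.log(greeter.greet('node'));
```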
From a developer’s point of view, it’s also quite a fun platform to code in, despite the asynchronous PITA mentioned above (which is partially alleviated by libraries like async.js or frameworks providing futures/promises). On the overall abstraction ladder, I think it can be placed noticeably lower than Java and not much higher than plain C. That place is an interesting one, and it’s also not densely populated by similar technologies and languages (only Go and Objective-C come to mind). Occupying this mostly overlooked niche could very well be one of the reasons for Node’s recent popularity.
At work I’ve got a colleague who displays an unusual aptitude for coming up with amusing terminology for everyday (coding) things. Among those, the Cake Pattern is always certain to provoke a few laughs.
Recently, though, I heard him mention a great common-sense rule for any development decision, from the grand architecture down to a single line of code. It can be phrased like this:
One hack is fine. But if you need another one on top of the first, it’s probably high time to really consider what you are doing.
For good measure, I’ll call it the Principle of Two Hacks, and I’m pretty convinced that it’s a very beneficial rule to apply in programming – especially when creating non-trivial, non-throwaway programs.
At first it may sound rather vague, however. Its concept of a “hack” is not easily explicated or put into an indisputable definition. But that’s what makes it powerful: we don’t need to invoke elaborate (and often controversial) notions of design patterns or abstraction to be able to discuss hacks.
At best, the ideas of accidental complexity or technical debt might come somewhat close to what developers typically deem a hack. In practice, this is mostly an opaque intuition that stems from experience or skill, and it’s usually very hard to express in words. Yet it’s always apparent whenever we encounter it, even though the exact sensation may vary considerably: from a dim feeling that something is maybe a bit off, up to severe intellectual nausea caused by looking at really bad code.
I also like how extremely pragmatic this principle is. Quick-and-dirty fixes making it into production code are just a fact of life, and we are not prohibited from letting them slip. What we are strongly advised to do is maintain the integrity of the software we’re writing by trying not to stack one hack upon another.
But even that is not an absolute, non-negotiable gospel; there might still be valid reasons to loosen up the structure. The important part, however, is to notice when we’re doing something fishy and consciously decide whether or not it is a good idea. That is much better than just plunging forward with total disregard for the sanity of future maintainers.
Ultimately, this principle is just subtly telling us to think, and that is never bad advice.
It would be quite far-fetched to call JavaScript a functional language, for it lacks many of the more sophisticated features of the FP paradigm – like tail recursion or automatic currying. This puts it on par with many similar languages which incorporate just enough FP to make it useful, but not so much as to blur their fundamentally imperative nature (and confuse programmers in the process). C++, Python and Ruby are a few examples, and on the surface JavaScript seems to place itself in the same region as well.
Except that it doesn’t. The numerous different purposes for which JavaScript code uses functions make it very distinct, even though the functions themselves are of a very typical sort, found in almost all imperative languages. Learning to recognize those different roles, and the real meaning of the function keyword, is essential to becoming an effective JS coder.
So, let’s look into them one by one and see what the function might really mean.
If you’ve seen a few good JavaScript libraries, you have surely stumbled upon the following idiom:
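```javascript
(function() {
    // all of the script's code goes here...
    // (illustrative sketch of the idiom; the concrete contents don't matter)
})();
```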
Any and all code is enclosed within an anonymous function. It’s not even stored in a variable; it’s just called immediately, so its content is just executed, now.
This round-trip may easily be thought of as doing absolutely nothing, but there is an important reason for keeping it that way. The point is that JavaScript has just one global object (window in the case of web browsers), which is a fragile namespace, easily polluted by defining things directly at the script level.
We can prevent that by using the “bracketing” technique presented above and putting everything inside this big, anonymous function. It works because JavaScript has function scope, and it’s the only type of non-global scope available to the programmer.
So in the example above, the function is used to confine the script’s code and all the symbols it defines. But sometimes we obviously want to let some things through, while restricting access to others – a concept known as encapsulation and exposing an interface.
Perhaps unsurprisingly, in JavaScript this is also done with the help of a function:
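```javascript
var counter = (function() {
    var value = 0;  // private: visible only inside this closure

    // the returned object is the module's public interface
    return {
        increment: function() { value++; },
        getValue: function() { return value; }
    };
})();
```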
What we get here is a normal JS object, but it should be thought of more like a module. It offers a public interface in the form of the increment and getValue functions. But underneath, it also has some internal data stored within a closure: the value variable. If you know a few things about C or C++, you can easily see parallels with header files (.h, .hpp, …), which store declarations that are only implemented in the code files (.c, .cpp).
Or, alternatively, you may draw analogies to C# or Java with their public and private (or protected) members of a class. Incidentally, this leads us to another point…
Let’s assume that the counter object from the example above is practical enough to be useful in more than one place (a tall order, I know). The DRY principle of course prohibits blatant duplication of code such as this, so we’d like to make the piece more reusable.
Here’s how we typically tackle this problem – still using only vanilla functions:
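```javascript
function Counter() {
    var value = 0;  // still private, captured separately by each instance

    this.increment = function() { value++; };
    this.getValue = function() { return value; };
}

// the same function now produces any number of independent counters
var c1 = new Counter();
var c2 = new Counter();
c1.increment();
console.log(c1.getValue());  // 1
console.log(c2.getValue());  // 0
```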
Pretty straightforward, right? Instead of calling the function on the spot, we keep it around and use it to create multiple objects. Hence the function becomes a constructor for them, while the whole mechanism is nothing else but a foundation for object-oriented programming.
We have now covered most (if not all) roles that functions play when it comes to structuring JavaScript code. What remains is to recognize how they interplay with each other to control the execution path of a program. Given the highly asynchronous nature of JavaScript (on both client and server side), it’s totally expected that we will see a lot of functions in any typical JS code.
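As a trivial illustration (with made-up names), even a one-second delay already means passing one function into another:

```javascript
var fetchGreeting = function(callback) {
    // stand-in for a real asynchronous operation, e.g. an AJAX request
    setTimeout(function() {
        callback('Hello!');
    }, 1000);
};

fetchGreeting(function(greeting) {
    // invoked later, once the "operation" has completed
    console.log(greeting);
});
```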
If you haven’t heard about it, DreamPie is an awesome GUI application layered on top of the standard Python shell. I use it for elaborate prototyping, where its multi-line input box is a significant advance over the raw terminal UX of IPython.
However, up until recently I didn’t know how to make DreamPie cooperate with virtualenv. Because it’s a GUI program, I scoured its menu and all the preference windows, searching for any trace of an option that would allow me to set the Python executable. Having failed, I was convinced that the authors didn’t think of including it – which was rather surprising, though.
But hey, DreamPie is open source! So I went to look around its code to see whether I could easily enhance it with the ability to specify the Python binary. It wasn’t too long before I stumbled upon a vital fragment: as it turns out, dreampie already accepts the desired Python executable as a command-line argument.
The conclusions we could draw from this anecdote are as follows:
With this newfound knowledge about dreampie’s arguments, it wasn’t very hard to make it use the current virtualenv. Assuming the virtualenv is activated (so that which python resolves to its interpreter), the invocation boils down to:
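```bash
# pass the active virtualenv's interpreter to DreamPie
dreampie $(which python)
```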
And after doing some more research, I ended up adding a line like the following to my ~/.bash_aliases:
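```bash
# run DreamPie on the current virtualenv's Python, detached from the terminal
# (a plausible reconstruction of the alias)
alias dp='(dreampie $(which python) &> /dev/null &)'
```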
Now I can simply type dp to get a DreamPie instance operating within the current virtualenv but independent of the terminal session. Very useful!
…for fun and profit!
I’m still kind of amazed at how malleable the Python language is. It’s no small feat to allow for messing with classes before they are created, but that turns out to be pretty commonplace now. My latest frontier of Pythonic hackery is import hooks, and today I’d like to write something about them. I believe this will come in handy for at least a few Pythonistas, because the topic seems to be rather scarcely covered on the ’net.
As you can easily deduce, the name ‘import hook’ indicates something related to Python’s mechanism of imports. More specifically, import hooks are about injecting our custom logic directly into Python’s importing routines. Before delving into details, though, let’s review how imports are handled by default.
As far as we are concerned, the process seems to be pretty simple. When the Python interpreter encounters an import statement, it looks up the list of directories stored inside sys.path. This list is populated at startup and usually contains entries inserted by external libraries or the operating system, as well as some standard directories (e.g. dist-packages). These directories are searched in order and in a greedy fashion: if one of them contains the desired package/module, it’s picked immediately and the whole process stops right there.
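You can inspect this list at runtime, or even prepend your own location (the directory name here is made up):

```python
import sys

print(sys.path[:3])                    # the first few places Python will look
sys.path.insert(0, '/opt/my_modules')  # searched before anything else from now on
```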
Should we run out of places to look, an ImportError is raised. Because this is an exception we can catch, it’s possible to try multiple imports before giving up:
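```python
try:
    import json                  # standard library (Python 2.6 and later)
except ImportError:
    import simplejson as json    # external fallback for older interpreters
```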
While this is extremely ugly boilerplate, it serves to greatly increase the portability of our application or package. Fortunately, there is only a handful of worthwhile libraries that we may need to handle this way; json is the most prominent example.
__path__
What I presented above as Python’s import flow is a sufficient description for most purposes, but it is far from complete. It omits a few crucial places where we can tweak things to our needs.
First is the __path__ attribute, which can be defined in a package’s __init__.py file. You can think of it as a local extension to the sys.path list that works only for submodules of this particular package. In other words, it contains the directories that should be searched when a package’s submodule is being imported. By default it only holds the __init__.py’s directory, but it can be extended to contain different paths as well.
A typical use case here is splitting a single “logical” package between several “physical” packages, distributed separately – typically as different PyPI packets. For example, let’s say we have a foo package with foo.server and foo.client as subpackages. They are registered in PyPI as separate distributions (foo-server and foo-client, for instance), and the user can have either or both of them installed at the same time. For this setup to work correctly, we need to modify foo.__path__ so that it points to foo.server’s directory and/or foo.client’s directory, depending on which of them are present. While this task sounds exceedingly complex, it is actually very easy thanks to the standard pkgutil module. All we need to do is put the following two lines into the foo/__init__.py file:
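```python
from pkgutil import extend_path
__path__ = extend_path(__path__, __name__)
```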
There is much more to __path__ manipulation than this simple trick, of course. If you are interested, I recommend reading the issue of Python Module of the Week devoted solely to pkgutil.
sys.meta_path and sys.path_hooks
Moving on, let’s focus on the parts of the import process that let you do the truly amazing things. Here I’m talking about stuff like pulling modules directly from Zip files or remote repositories, or just creating them dynamically based on, say, WSDL descriptions of Web services, symbols exported by DLLs, REST APIs, command-line tools and their arguments… pretty much anything you can think of (and your imagination is likely better than mine). I’m also referring to “aggressive” interoperability between independent modules: when one package can adjust or expand its functionality upon detecting that another one has been imported. Finally, I’m also talking about security-enhanced Python sandboxes that intercept import requests and can deny access to certain modules or alter their functionality on the fly.
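To give a taste of the mechanism, here’s a minimal sketch of a sys.meta_path hook in the classic PEP 302 style (the hooked module name is, of course, made up):

```python
import imp
import sys

class VersionInfoImporter(object):
    """Creates the (made-up) 'version_info' module out of thin air."""

    def find_module(self, fullname, path=None):
        if fullname == 'version_info':
            return self   # we will handle this import ourselves
        return None       # anything else goes through the normal machinery

    def load_module(self, fullname):
        if fullname in sys.modules:          # imports must be idempotent
            return sys.modules[fullname]
        module = imp.new_module(fullname)
        module.__loader__ = self
        module.python = sys.version          # content generated on the fly
        sys.modules[fullname] = module
        return module

sys.meta_path.append(VersionInfoImporter())

import version_info                          # no such file anywhere on disk!
print(version_info.python)
```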
It was almost exactly one year ago, and I remember it quite vividly. I was attending an event organized by Google which was about the Chrome Web Store, as well as HTML5 and web applications in general. After the speaker finished pitching the awesomeness of this stuff (and how Chrome was the only browser that supported it all, of course), it was time for a round of questions and some discussion. I seized this opportunity and brought up the issue of user interface inconsistencies that plague contemporary web apps. Because the Web as a platform doesn’t really enforce any constraints on UI paradigms, we get to experience all sorts of “creative” approaches to user interface. In their pursuit of novelty and eye candy, web designers often sacrifice usability by not adhering to well-known interface metaphors and shying away from uniform UI elements.
At that time I didn’t really get a good answer that would address this issue. And it’s an important one, given the rate at which web applications are springing to life and replacing their equivalent desktop programs. Does it mean we’ll be stuck with this UI bonanza for the time being?…
Fortunately, there are some early signs that it might not necessarily be so.
A few months later, in August 2011, Twitter released the first version of the Bootstrap framework. Originally intended for internal use, this set of common HTML, CSS and JS patterns was made open source and released into the wild. The praise it subsequently gained is definitely well deserved, for it is a great set of tools for kickstarting the development of any web-related project. Its features include:
- a grid system for laying out pages
- sensible default styles for typography, forms, buttons and tables
- ready-made UI components, such as navigation bars, dropdowns, alerts and progress bars
- optional JavaScript (jQuery) plugins for modals, tooltips, carousels and the like
Along with universal acclaim came popularity: it is currently the most watched project on GitHub.
However, some want to argue that being so popular also has an unanticipated, negative side. It makes developers lazy, convinced they can get away without a “proper” design for their site or app. Even more: it allegedly shows disrespect for their users, as if the creator simply didn’t care how their product looks.
Does it compute? I don’t think so. Do you complain when you find that any particular desktop application uses the very same look for UI components, like buttons or list boxes?… Of course not. We have learned to value the consistency and predictability this entails, because it frees us from the burden of mentally adjusting to every single GUI app that we happen to use. Similarly, developers appreciate how this makes their work easier: they don’t have to code dropdown menus or modal dialogs, which in turn allows them to spend more time on actual, useful functionality.
Sure, it didn’t happen overnight when desktop OSes were emerging as software platforms. Also, there are still plenty of apps whose creators – willfully or unintentionally – chose not to adhere to the standards. Sometimes it’s even for the better, as it allows new, useful UI patterns to emerge and be adopted by the mainstream. The resulting process is still one of convergence, producing interfaces that are more consistent and easier to use.
The analogous process may just be happening to the Internet, considered as a “platform” for web applications. By steadily rising in popularity, Bootstrap has a chance of becoming the UI framework for the Web in general – an obvious first choice for any new project.
Of course, even if this happens, it would be terribly unlikely for it to reign supreme and make every website look almost exactly the same – i.e. to transform the Web into an equivalent of the desktop. What’s much more likely is following in the footsteps of mobile platforms. There, every app strives to be original and appealing, but only those that correctly balance usability with artsy design provide a really compelling user experience.
It will not be without differences, though. Mobile platforms are generally ruled with an iron (or clay) fist and have relevant UI guidelines spelled out explicitly. For the Web this is very much not the case, so any potential “standardization” is necessarily a bottom-up process whose benefits have to be indisputable and clearly visible. Despite some opposition, Bootstrap seems to have enough momentum to really (ahem) bootstrap this process. It is already winning the hearts and minds of many web developers, so it may be just a matter of time.
If it happens, I believe the Web will be in a better place.
Yesterday I came back from the IGK conference (Engineering of Computer Games) in Siedlce. Among a few interesting lectures and presentations, it also featured the traditional, 7-hour-long game development contest: Compo. Those unfamiliar with the concept should know that it’s a competition where every team (of at most 4 people) has to produce a playable game according to a particular topic, e.g. a theme or genre. When the work is done, the results are presented to the public while a committee of organizers votes for the winners.
As usual, I took part in the competition, along with Adam Sawicki “Reg” and Krzysztof Kluczek “Krzysiek K.”. The topic for this year revolved around the idea of an “escape” or “survival” kind of game, so we designed an old-school, pixel-artsy scroller where you play as a prisoner trying to escape detention by running and avoiding numerous obstacles. We coded intensely and passionately, and in the end it paid off really well because we actually managed to snatch first place! Woohoo!
A whopping 15 teams took part in this year’s Compo, so it might take some time before all the submissions are available online. Meanwhile, I’m mirroring our game here. Please note that it uses DirectX 9 (incl. some shaders), so for best results you should have a non-ancient GPU and the Windows OS. (It might work under Wine; we haven’t tested it.)
[2012-04-01] Prison Escape (5.8 MiB)