Monthly archive for February, 2012

Against Unit Tests

2012-02-26 21:23

When discussing the topic of unit testing and the methodologies built around it (mostly TDD, i.e. Test-Driven Development), I noticed a curious imbalance in the number and strength of arguments pro and contra. The latter are few and far between, up to the point of ridiculous scarcity: googling “arguments against TDD” is equally likely to yield stories from both sides of the fence. That’s pretty telling. Is it so that TDD in general and unit tests in particular are simply the best thing ever, and the industry-wide consensus merely reflects that?…

I wouldn’t be so sure. All this unequivocal acclaim looks suspiciously similar to many other trends and fashions that were (or still are) sweeping through the IT domain, receding only when an alternative approach gains enough traction.


O RLY?

Take OOP, for example. Back in the 90s and around 2000, you would hear all kinds of praise for the object-oriented methodology: how natural it is, how it helps to model problems in an intuitive way, how flexible and useful its abstractions are. A critics’ camp existed, of course, but it was small, scattered and not taken very seriously. Objects and classes reigned supreme.

Compare this to the present day, when OOP is taking blows from almost every direction. On one hand, it is rejected on performance grounds, as the unpredictable cost of a virtual method call is seen as a liability. On the other hand, its abstraction patterns are considered baroque, overblown and outdated, unfit for modern computing challenges – most notably concurrency and asynchrony.

Could it be that approaches emphasizing the utmost importance of unit tests are following the same route? Given the pretty much universal praise they are receiving, it’s not unimaginable. In this context, providing some reasonable counterarguments seems like a good thing: if we let some air out of this balloon, we may prevent it from popping later on.

Incidentally, this is a service for TDD/unit testing that I’m glad to provide ;-) So in the rest of this post, I’m going to discuss some of their potential drawbacks, hopefully helping to even out the playing field. Ultimately, this should always lead to better software engineering practices, and better software.


Importance of Using Virtual Environments

2012-02-22 19:59

One of the technological marvels behind modern languages is the ease of installing new libraries, packages and modules. Thanks to having a central repository (PyPI, RubyGems, Hackage, …) and a suitable installer (pip/easy_install, gem, cabal, …), any library is usually just one command away. For one, this makes it very easy to bootstrap development of a new project – or alternatively, to abandon the idea of doing so because there is already something that does what you need :)

But being generous with external libraries also means adding a lot of dependencies. After a short while, they become practically untraceable, unless we keep an up-to-date list. In Python, for example, it would be the contents of the requirements.txt file, or the value of the install_requires argument of setuptools.setup (requires, for plain distutils.setup) inside the setup.py module. Other languages have their own means of specifying dependencies, but the principles are generally the same.
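For instance, a minimal setup.py carrying such a list could look like the sketch below (the project name and the pinned packages are purely illustrative):

# setup.py -- a minimal sketch; package names and versions are examples only
from setuptools import setup

setup(
    name="ourproject",
    version="0.1",
    install_requires=[
        "requests>=0.10",
        "SQLAlchemy==0.7.5",
    ],
)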

How to ensure this list is correct, though?… The best way is to create a dedicated virtual environment specifically for our project. An environment is simply a sandboxed interpreter/compiler, along with all the packages that it can use for executing/compiling programs.

Rationale

Normally, there is just one, global environment for the system as a whole: all external libraries or packages for a particular language are installed there. This makes it easy to accidentally introduce extraneous dependencies into our project. More importantly, with this setup we are sharing our required libraries with other applications installed or developed on the system. This spells trouble if we’re relying on a particular version of a library: some other program could update it and suddenly break our application this way.

If we use a virtual environment instead, our program is isolated from the rest and uses its own, dedicated set of libraries and packages. Besides preventing conflicts, this also has the added benefit of keeping our dependency list up to date. If we use an API which isn’t present in our virtual environment, the program will simply blow up – hopefully with a helpful error :) Should this happen, we need to make proper amends to the list, and use it to update the environment by reinstalling our project into it. As a bonus – though in practice that’s the main treat – deploying our program to another machine is as trivial as repeating this last step, preferably also in a dedicated virtual environment created there.

Using it

So, how to use all this goodness? It heavily depends on what programming language we are actually using. The idea of virtual environments (or at least this very term) comes from Python, where it coalesced into the virtualenv package. For Ruby, there is a pretty much exact equivalent in the form of Ruby Version Manager (rvm). Haskell has the somewhat less developed cabal-dev utility, which should nevertheless suffice for most purposes.
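To give a taste, a typical virtualenv session could go roughly like this (the paths and the project name are just examples):

$ pip install virtualenv
$ virtualenv ~/envs/ourproject            # create a fresh environment
$ source ~/envs/ourproject/bin/activate   # activate it in this shell
(ourproject)$ pip install -r requirements.txt   # install dependencies into it
(ourproject)$ pip freeze                  # see what actually got installed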

More exotic languages might have their own tools for that. In that case, searching for “language virtualenv” is an almost certain way to find them.


Adventures in Haskelland

2012-02-14 21:31

In a post from the beginning of the year I mentioned that I was looking into the Haskell programming language, and promised to give some insight into how it fares compared to others. Well, I think it’s time to deliver on that promise, for the topic of this particular language – and functional programming in general – is indeed very insightful.

I do not claim to have achieved any kind of proficiency in Haskell, of course; I might very well be just scratching the surface. However, this is exactly the sort of perspective I wanted to employ when evaluating the language’s usefulness: a practical standpoint, held by a programmer who is looking to use it for actual tasks, without having mastered it in great detail – at least initially. This is, by the way, a pretty common setting when tackling anything new related to coding, be it frameworks, software platforms or languages.

Still, I knew it was all but guaranteed to be a rough ride. A language whose tutorials purposefully shy away from the classic Hello World example (typically inserting a factorial function instead) looks like something designed specifically to melt the brains of poor programmers who dare to venture forth and investigate it. Those few who make their way back are told about in folk tales, portrayed as mildly crazy types who profess this weird idea of “purity” and utter the word ‘monad’ with great contempt.

Okay, I may be exaggerating… slightly. But there is no denying that Haskell attained a certain kind of reputation, something like a quirky cousin in the big and diverse family of languages. Unlike Lisp, he isn’t picked on due to any (particular (and (rather) obvious)) property. There is just this general aura of weirdness and impracticality that he supposedly radiates, repelling all but the most adventurous of coders.

If it really exists, then it’s a pity: it certainly failed to repel me. As a result, I may have a story (or two) to share with those who want to learn what this functional business is really about, and why they should care.

Grab a chair and something to drink – it’s going to take a while. But in the end, you shouldn’t regret staying.

Generator Pitfalls

2012-02-08 19:34

I’m a big fan of Python generators. With seemingly little effort, they often allow us to greatly reduce memory overhead. They do so by completely eliminating intermediate results of list manipulations. Along with the functions defined in the itertools package, generators also introduce basic lazy computation capabilities into Python.
This combination of higher-order functions and generators is truly remarkable. Why? Because compared to ordinary loops, it gains us both speed and readability at the same time. That’s quite an achievement; typically, optimizations sacrifice one for the other.
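As a quick illustration, consider summing a million squares. A list comprehension builds the whole intermediate list only to discard it; the equivalent generator expression (note the lack of brackets) produces the squares one at a time:

# builds a million-element list first, then throws it away
total = sum([x * x for x in xrange(1000000)])

# yields one square at a time; no intermediate list is ever created
total = sum(x * x for x in xrange(1000000))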

All this goodness, however, comes with a few strings attached. There are some ways in which we can be bitten by improperly used generators, and I think it’s helpful to know about them. So today, I’ll try to outline some of those traps.

Generators are lazy

You don’t say, eh? That’s one of the main reasons we are so keen on using them, after all! But what I’m implying is the following: if generator-based code is to actually do anything, there must be a point where all this laziness “bottoms out”. Otherwise, you are just building expression thunks and not really evaluating anything.

How might such circumstances occur, though? One case involves using generators for their side effects – a rather questionable practice, mind you:

import itertools

def process_kvp(key, value=None):
    print key,
    if value: print "=", value

itertools.starmap(process_kvp, some_dict.iteritems())

The starmap call does not print anything here, because it just constructs a generator. Only when this generator is iterated over will dictionary elements be fed to the process_kvp function. Such iteration would of course require a loop (or the consume recipe from the itertools docs), so we might as well do away with the generator altogether and just stick with a plain old for:

for kv in some_dict.iteritems():
    process_kvp(*kv)
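(For completeness, the consume recipe mentioned above boils down to feeding the generator into a zero-length deque, which swallows it whole. A sketch, reusing the snippet from before:)

import collections
import itertools

# run process_kvp over the whole dict purely for its side effects
collections.deque(itertools.starmap(process_kvp, some_dict.iteritems()),
                  maxlen=0)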

In real code the judgement isn’t so obvious, of course. We’ll most likely receive the generator from some external function – one that probably refers to its result only as an iterable. As usual in such cases, we must not assume anything beyond what is explicitly stated. Specifically, we cannot expect that we have any kind of existing, tangible collection with actual elements. An iterable can very well be just a generator, whose items are yet to be evaluated. This fact can impact performance by moving some potentially heavy processing into unexpected places, resulting in e.g. executing database queries while rendering a website’s template.

Generators are truthy

The above point was concerned mostly with performance, but what about correctness? You may think it’s not hard to conform to the iterables’ contract, which should in turn guarantee that they don’t backfire on us with unexpected behavior. Yet how many times have you found yourself reading (or writing!) code such as this:

import logging

def do_stuff():
    items = get_stuff_to_do() # returns iterable
    if not items:
        logging.debug("No stuff to do!")
        return 0
    # ...do something with 'items'...

It’s small and may look harmless, but it still manages to make an ungrounded (thus potentially false) assumption about an iterable. The problem lies in the if condition, meant to check whether we have any items to do stuff with. It would work correctly if items were a list, dict, deque or any other actual collection. But for generators, it will not work that way – even though they are still undoubtedly iterable!

Why? Because generators are not collections; they are just suppliers. We can only ask them for their next value, but we cannot peek inside to see if that value is really there. As such, generators don’t respond to boolean coercion the way collections do: it’s not possible to check their “emptiness”. They behave like regular Python objects and are always truthy, even when it’s “obvious” they won’t yield any value when asked:

>>> g = (x for x in [])
>>> if g: print "Truthy!"
Truthy!

Going back to the previous sample, we can see that the if block won’t be executed when get_stuff_to_do returns an “empty” generator. The consequences of this fact may vary from barely noticeable to disastrous, depending on what the rest of the do_stuff function looks like. Nevertheless, that code will run with one of its invariants violated: a fertile ground for all kinds of unintended effects.
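If we genuinely need to act on emptiness, about the best we can do is try pulling the first element and “gluing” it back on. Here is a sketch of such a helper (the name peek is mine, not anything standard):

import itertools

def peek(iterable):
    """Return None if the iterable yields nothing; otherwise return
    an equivalent iterable with the first element glued back on."""
    it = iter(iterable)
    try:
        first = next(it)
    except StopIteration:
        return None  # genuinely empty
    return itertools.chain([first], it)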

Generators are once-off

An intuitive, informal understanding of the term ‘iterable’ is likely to include one more unjustified assumption: that it can be iterated over and over, i.e. multiple times. Again, this is very much true if we’re dealing with a collection, but generators simply don’t carry enough information to repeat the sequence they yield. In other words, they cannot be rewound: once we go through a generator, it’s stuck in its final state, not usable for anything more.

Just like with the previous caveats, failure to account for this can very well go unnoticed – at least until we spot some weird behavior it results in. Continuing our superficial example from the preceding paragraph, let’s pretend the rest of the do_stuff function requires going over items at least twice. It could be, for example, an iterable of objects in a game or physics simulation; objects that can potentially move really fast and thus require more sophisticated collision detection (based on e.g. intersection of segments):

new_positions = calculate_displacement(items, delta_time)
collision_pairs, rest = detect_collisions(items, new_positions)
collided = compute_collision_response(collision_pairs)
for item in itertools.chain(collided, rest):
    item.draw()

Even assuming the incredible coincidence of getting all the required math right (;-)), we wouldn’t see any action whatsoever if items is a generator. The reason is simple: when calculate_displacement goes through items once (vigorously applying Eulerian integration, most likely), it fully expends the generator. For any subsequent traversal (like the one in detect_collisions), the generator will appear empty. In this particular snippet, the most likely result is a blank screen, which hopefully is enough of a hint to figure out what’s going on :P
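The straightforward remedy is to materialize the iterable up front; alternatively, itertools.tee can split it into independent iterators (it buffers items internally, though, so the memory savings are mostly gone). A sketch:

# simplest fix: make a real list out of the iterable before the first pass
items = list(items)

# alternative: independent iterators for the separate traversals
items_a, items_b = itertools.tee(items, 2)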

Generators are not lists

An overarching conclusion from the above-mentioned pitfalls is rather evident, and seemingly contrary to the statement from the beginning. Indeed, generators may not be a drop-in replacement for lists (or other collections) if we rely too much on their “listy” behavior. And unless memory overhead proves troublesome, it’s also not worth injecting them into older code that already uses lists.

For new code, however, sticking with generators right off the bat has the numerous benefits I mentioned at the start. What it requires, though, is evicting some habits that might have emerged after we spent some time manipulating lists. I think I managed to pinpoint the most common ways in which those habits result in incorrect code. Incidentally, they all originate from accidental, unfounded expectations towards iterables in general. That’s no coincidence: generators simply happen to be the “purest” of iterables, supporting only the bare minimum of required operations.


Checking Whether IP is Within a Subnet

2012-02-04 22:11

Recently I had to solve a simple but very practical coding puzzle. The task was to check whether an IPv4 address (given in traditional dot notation) is within a specified network, described as a CIDR block.
You may notice that this is almost as ubiquitous as a programming problem can get. Implementations of its simplified version are executed literally millions (if not billions) of times per second, for every IP packet at every junction of this immensely vast Internet. Yet, when it comes to doing such a thing in Python, one is actually left without much help from the standard library and must simply Do It Yourself.

It’s not particularly hard, of course. But before jumping to a solution, let’s look at how we expect our function to behave:

>>> ip_in_network("10.0.0.0", "10.0.0.0/8")
True
>>> ip_in_network("127.0.56.34", "127.0.0.0/8")
True
>>> ip_in_network("192.168.0.0", "192.168.224.0/24")
False

Pretty obvious, isn’t it? Now, if you recall how packet routing works under the hood, you might remember that there are some bitwise operations involved. They are necessary to determine whether a specific IP address can be found within a given network. However low-level this may sound, there is really no escape from doing it this way. Yes, even in Python :P

It goes deeper, though. Most of the interface of Python’s socket module is actually carbon-copied from the POSIX socket.h and related headers, down to the exact names of particular functions. As a result, solving our task in C isn’t very different from doing it in Python. I’d even argue that the C version is clearer and easier to follow. This is how it could look:
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <arpa/inet.h>

bool ip_in_network(const char* addr, const char* net) {
    struct in_addr ip_addr;
    if (!inet_aton(addr, &ip_addr)) return false;

    /* make a mutable, null-terminated copy of the network spec */
    char network[32];
    strncpy(network, net, sizeof(network) - 1);
    network[sizeof(network) - 1] = '\0';

    char* slash = strstr(network, "/");
    if (!slash) return false;
    int mask_len = atoi(slash + 1);
    if (mask_len < 0 || mask_len > 32) return false;

    *slash = '\0';
    struct in_addr net_addr;
    if (!inet_aton(network, &net_addr)) return false;

    /* s_addr is in network byte order; convert to host order
       so the netmask can be built and applied portably */
    unsigned ip_bits = ntohl(ip_addr.s_addr);
    unsigned net_bits = ntohl(net_addr.s_addr);
    unsigned netmask = mask_len ? ~0u << (32 - mask_len) : 0;
    return (ip_bits & netmask) == (net_bits & netmask);
}

A similar thing done in Python obviously requires less scaffolding, but it also introduces its own intricacies via the struct module (for unpacking bytes). All in all, it seems like there is not much to be gained here from Python’s high level of abstraction.
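For reference, here is my own sketch of how the Python counterpart might go, with struct doing the byte unpacking mentioned above:

import socket
import struct

def ip_in_network(ip, net):
    """Check whether an IPv4 address (dot notation) lies within
    a network given as a CIDR block, e.g. "10.0.0.0/8"."""
    ip_addr, = struct.unpack("!I", socket.inet_aton(ip))
    net_str, mask_len = net.split("/")
    mask_len = int(mask_len)
    net_addr, = struct.unpack("!I", socket.inet_aton(net_str))
    netmask = ((1 << mask_len) - 1) << (32 - mask_len)
    return (ip_addr & netmask) == (net_addr & netmask)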

And that’s perfectly OK: no language is a silver bullet. Sometimes we need to do things the quirky way.
