The Leprechaun Trap

A few years ago, on occasion of St. Patrick’s Day, my daughter built a leprechaun trap. She wrapped a shoe box in shiny green paper and cut a slot through it, decorated it with a border of jingle bells, added a ladder made of Q-tips so the leprechaun would be able to climb, and then added a gold box with the legend “gold inside”. To trick the leprechaun, she hung a piece of
“gold” from a pole. Once he reached for the piece of gold, he would fall through the slot and be caught. Who could argue with such a piece of engineering? It was virtually impossible for the leprechaun to escape once tempted.

The night before, she asked me impatiently where the best place to set the trap would be.  I mentioned that next to the front door would be best, since the leprechaun would have to come in from somewhere.  She placed the trap and we all went to sleep.  The next morning, as I left for work–I’m an early riser, so it was still dark–and opened the door, I almost tripped over some object on the floor.  I uttered words that I cannot repeat here, and as I turned on the light I realized I had rattled the leprechaun trap and messed it up a little.

After realizing my mistake, I decided to leave the trap as it was and just left.  Sure enough, later in the day my daughter called me very excited: “Dad, I almost caught the leprechaun!”, and she described what had happened, and how she had found the box. When I came back, she described why the trap had failed and started thinking of improvements–she kept the box and used it the following year, again, with no success.

There are leprechaun traps in software. Very frequently we’re tempted by “gold,” whether it comes in the form of:

  1. The latest and greatest programming language X…it doesn’t matter if X has held only a few percentage points of global usage for several years. The problem is not the language itself; it’s mostly the long-term viability, tools and libraries around it.
  2. The latest technology…even though there is scarce support for it.  How often have you looked at an open source library that looks promising, only to find out that the community around it is a couple of developers and it has not been maintained for years, or that once you look under the hood the code quality leaves a lot to be desired?
  3. The latest and greatest development process…DevOps anyone, anyone, Bueller? The problem is not the development process–after all, DevOps responds to a set of needs–it is the edicts that move teams toward development processes that are misunderstood or inappropriate, often without training and simply as a trend that starts who knows where.

This does not mean there are no authentic opportunities for striking “gold,” or that one shouldn’t look elsewhere for satisfactory solutions to needs not currently met by existing technologies.  So why, as software professionals, do we often fall into a leprechaun trap? A coworker of mine long ago mentioned that we engineers are pleasers.  We’re problem solvers, and oftentimes overly optimistic.  It’s hard to say no to “gold,” to resist the impulse and think carefully before we offer our educated guesses and commit to unrealistic schedules–but there’s a shiny box that reads “gold inside”. We fall into the leprechaun trap, and once inside, we know it’s hard to climb out of late schedules, budget overruns and buggy software: whole teams have to work overtime to compensate for the late schedules and the budget overruns, and with buggy software you just build a backlog that you–or some unlucky soul–will have to work on for the next year because of software “gold”.

The secret to not falling into a leprechaun trap?  Don’t be a leprechaun.

Happy Saint Patrick’s Day!

Posted in software | Tagged | Leave a comment

Observations on Observer

My first encounter with the Observer design pattern was circa 1998.  The GoF book was still new–published in 1994–and the community was still assimilating design patterns. Java the Hot* even offered an Observer interface and an Observable class. What could be simpler?

However, I’ve used Observer exactly once.  The main reason is that Observer is a scaffolding pattern–it is good for understanding the concepts, but once you try to implement a robust version of it, it becomes complicated very quickly.  Furthermore, you often graduate to a better version of it, and oftentimes there are language-specific facilities or libraries that implement a variation on Observer. In essence, it’s not worth the squeeze.

Consider this naïve implementation in C++:

struct observer {
  virtual ~observer() {}
  virtual void update() = 0;
};

class subject {
  set<observer> observers;  // naive: stores observers by value
public:
  void notify() {
    for_each(begin(observers), end(observers), mem_fn(&observer::update));
  }
  void add(const observer& o) { observers.insert(o); }
  void remove(const observer& o) { observers.erase(o); }
};

class concrete_subject : public subject { /* ... */ };

The problems start very quickly.  They’ve been discussed ad nauseam over the years. Some of the most complete discussions I’ve read are by Alexandrescu [1], [2] and Sutter [3].

  1. What type of container is best? For example, can an observer register more than once? Can observers be notified in a specific order?
  2. Is the container thread-safe? For example, can observers be added or removed concurrently from different threads?
  3. What type does the container store?  Storing an observer by value requires that it be copy-constructible, so perhaps storing an observer* is best.  However, if the observer goes out of scope without being removed from the subject, one ends up with a dangling pointer.  How about using a shared_ptr<observer>?  If all references but the one stored in a subject are released, unwanted notifications could still come in.
  4. Can observers call remove from within the update method?  This could invalidate the iterator in subject::notify.
  5. What happens if an exception is thrown from a concrete observer’s update member function? You don’t pass Go, you don’t collect $200.

These questions are not specific to C++–the same problems need to be solved in Java or C#, for that matter, even when using delegates**, which behind the scenes implement a variation on Observer: a delegate provides a collection to store the targets, add/remove via the += and -= operators, and a notification mechanism that executes on the thread where the delegate is invoked.

Java’s Observer interface and Observable class are not perfect, either. If you read the fine print, you’ll notice that notifications can come in on different threads.  Ouch. This means you’d better make sure your event handler is thread-safe.

Notifications per Method Ratio (N:M)

A taxonomy for Observer that can help you choose an implementation is the ratio of notifications per method.  Assume N is the number of notifications from a specific Subject and M is the number of methods that handle those notifications.  When choosing, you can look at the N:M ratio and see if it makes sense to you.

In Observer, the N:M ratio is many:1, since all notifications are funneled through the update method.

In the Delegation Event Model (DEM) design pattern, all the notifications are grouped into one interface, with one method per notification. The N:M ratio is 1:1.

// Methods in Java interfaces are public.  Ditto in C#.
// Events are grouped in one interface.
public interface XxxListener {
  void event1(XxxEvent e);
  void event2(XxxEvent e);
}

public class MyXxxListener implements XxxListener {
  public void event1(XxxEvent e) { ... }
  public void event2(XxxEvent e) { ... }
}

public class MySubject {
  public void addXxxListener(XxxListener lis) { ... }
  public void removeXxxListener(XxxListener lis) { ... }
}
In C#, delegates are a language-specific version of Observer: each delegate, while it can address several targets, supports one notification per delegate, so the N:M ratio is also 1:1.

If you want notifications à la carte, in C++ you can make use of Boost.Signals2, a thread-safe signals-and-slots library where you get a notification per signal. You can consider signals a C++ alternative to delegates in C#. Added flexibility comes from being able to specify the order in which the slots of a signal are called. Just like in C#, there is one notification per signal (a notification per method), so the N:M ratio is also 1:1.

In essence, when shopping around for Observer, beware of the consequences and trade-offs of the implementation, whether it is a language-specific solution, a library or a roll-your-own implementation.


[1] Alexandrescu, Andrei, Generic<Programming>: Prying Eyes: A Policy-Based Observer (I), C/C++ Users Journal, April 2005.

[2] Alexandrescu, Andrei,  Generic<Programming>: Prying Eyes: A Policy-Based Observer (II), C/C++ Users Journal, June 2005.

[3] Sutter, Herb, Generalizing Observer, C/C++ Users Journal, September 2003.

* Could not resist paying homage to Star Wars character Jabba the Hutt.

** As an anecdote, when learning about delegates in the first-ever Guerrilla .NET at DevelopMentor before the .NET framework was released, I asked the instructor, Brent Rector, what happened if the object receiving the delegate call threw an exception.  He tried it, and the program crashed.

Posted in design patterns, software | Leave a comment

Singling out Singleton

Out of all the design patterns I’ve used in my professional career, Singleton is the one I’ve used the least. Don’t get me wrong. In my experience, as developers we are fascinated with the ability to refer to a single instance, and in our quest to master the pattern we whip up naive implementations that bring more problems than solutions. Most of the time one can solve the single-instance problem by creating, well, a single instance (in UML terms, a Singleton has both cardinality and multiplicity of 1).  If you and your team have control of the code, why would you want to create a Singleton when you can easily make sure only one instance of a given class is created?

The main reason I don’t use Singleton is because it is misunderstood: Singleton isn’t about a single instance, it is about a single state.  That’s right.  Singleton makes sure that the object state is consistent–meets its invariants–by providing a single point of entry. However, there is another design pattern called MonoState that addresses the problem more elegantly and does not have many of the issues of Singleton. It was originally published in C++ Report, a now-defunct magazine.  You can find a nice explanation of it here.

If you’re still intent on whipping up your own Singleton, there are two problems to solve: lifetime and thread-safety.

Regarding lifetime, if your Singleton’s lifetime is independent, a simple implementation that I used in the past makes use of the Nifty Counter technique, described in the C++ ARM [1] and by John Lakos [2]. It uses C++98 but can easily be converted to C++11, and it does not provide Double-Checked Locking, although experts will argue that DCL is broken anyway. In addition, several approaches have been discussed, particularly in Alexandrescu [3] and Vlissides [4], and one can find Singleton implementations in Boost. One difference is that the approach explored by Alexandrescu ultimately makes use of atexit(), while a Singleton using the nifty counter technique will be deallocated after all the calls to atexit() have completed. That may not seem like much, but if you’re looking for extra flexibility, this might be what you need.

Listing 1 (singleton.h):

class NiftySingletonDestroyer;

class Singleton {
  static Singleton* _instance;

  Singleton();
  ~Singleton();
  Singleton(const Singleton&);
  Singleton& operator=(const Singleton&);

  friend class NiftySingletonDestroyer;
public:
  static Singleton* Instance();
};

static class NiftySingletonDestroyer {
  static int counter;
public:
  NiftySingletonDestroyer();
  ~NiftySingletonDestroyer();
} aNiftySingletonDestroyer;

Listing 2 (singleton.cpp):

#include "singleton.h"
Singleton* Singleton::_instance = 0;
Singleton::Singleton() {}
Singleton::~Singleton() {}

Singleton* Singleton::Instance() {
  if (!_instance)
    _instance = new Singleton();
  return _instance;
}

int NiftySingletonDestroyer::counter;

NiftySingletonDestroyer::NiftySingletonDestroyer() { ++counter; }
NiftySingletonDestroyer::~NiftySingletonDestroyer() {
  if (--counter == 0 && Singleton::_instance)
    delete Singleton::_instance;
}

Note the declaration of the NiftySingletonDestroyer:

static class NiftySingletonDestroyer {}...

as being a static class.  This basically does the trick.  Every translation unit that includes singleton.h gets its own aNiftySingletonDestroyer, whose constructor increments the counter when the program loads at runtime.  When the program shuts down, each instance’s destructor runs, decrementing the counter; when it reaches zero, the Singleton is destroyed.


1. Ellis, Margaret A. and Bjarne Stroustrup, The Annotated C++ Reference Manual, Section 3.4, pp. 20-21, Addison-Wesley, Reading, MA, 1990.
2. Lakos, John, Large-Scale C++ Software Design, Chapter 7, Addison-Wesley, Reading, MA, 1996.
3. Alexandrescu, Andrei, Modern C++ Design, Addison-Wesley, Reading, MA, 2001.
4. Vlissides, John, Pattern Hatching: Design Patterns Applied, Addison-Wesley, Reading, MA, 1998.

Posted in design patterns | Tagged | Leave a comment

Troubleshooting Continuity in iOS 8 and OS X Yosemite…and addendum

Besides one of the Mac models needed to support it (2012 and later) and iOS 8.1 or later, make sure that Bluetooth is turned on on both your Mac and your iPhone or iPad, then try this:


Posted in Uncategorized | Leave a comment

A Time for Everything

When Plant Equipment (renamed to PlantCML and now Cassidian Communications) opened the door for me in 1996 to join as a software developer, little did I know that it would be one of the most wonderful professional rides of my life.

For almost eighteen years I had the privilege of touching the lives of millions of people through Cassidian Communications’ CTI products for the 9-1-1 Public Safety market, and in the process forged many friendships and had unforgettable experiences.

But, as Ecclesiastes 3 reads, there is a time for everything.  I accepted a position with VMware and I’m ready for a great experience.

Happy Thanksgiving!


Posted in Uncategorized | Leave a comment


This week, prompted by Scott Forstall’s exit from Apple, we’ve been bombarded with news about the impending UI changes that will supposedly make their way into iOS and OS X, going back to basics–or should I say, Bauhaus?–from skeuomorphic user interfaces.

The way I see it, it’s all about the design language. I think that both skeuomorphic and Bauhaus–I’ll use that term since I could not find a proper antonym–have a place in user interface design.  Both have extremes, too.  In iOS I only get into Game Center by accident, while in Windows 8, with its formerly-known-as-Metro design language, I get dizzy with live tiles or from swiping too much on endless grids with a uniform look.

Design languages are there to provide 80% of what a solution can be, whether those design languages are for user interfaces, application frameworks, design patterns or your favorite design category. However, design languages that are too rigid, while bringing consistency and uniformity, can also stifle creativity.

Perhaps hermit crabs are skeuomorphic. Or peacocks.


Posted in software | Leave a comment

The Importance of Being Agile: A Trivial Process for Software Developers

I could not resist paraphrasing Oscar Wilde regarding Agile software development.  I’ve been pondering for some time how to properly incorporate Agile in my “software development belief system,” and recently came across the following quote:

The morale effects are startling. Enthusiasm jumps when there is a running system, even a simple one. Efforts redouble when the first picture from a new graphics software system appears on the screen, even if it is only a rectangle. One always has, at every stage in the process, a working system. I find that teams can grow much more complex entities in four months than they can build.

Try to guess who the author is without peeking ahead…three, two, one. I’ll spill the beans in the next sentence, but suffice it to say that I was expecting a prominent Agile author, perhaps one of the authors of the Agile Manifesto, but no.  The author is Frederick P. Brooks, Jr., author of The Mythical Man-Month, a book first published in 1975!

The problem that I had with Agile is that for the longest time I considered it a work in progress. In the 1990s all the rage was OOP, and then OOD and OOA, followed by the different notations that culminated in the UML.  Back then, as a developer you were considered progressive if you knew objects.  Objects were fashionable, chic, cool.  Structured programming was old, boring, heavy. New systems were developed using RADs, Rational made millions selling UML tools, Java took center stage, and development processes that used OOA/OOD/OOP and UML with use cases were in high demand.

Then, in the new millennium, came the Agile Manifesto.  I became aware of the Manifesto early on, and in the following years, as developers came and went, I started hearing stories of success.  Every time I asked, however, I did not get a full picture.  Perhaps because I was asking the wrong questions, or perhaps because I considered that several of the authors of the Agile Manifesto were “jumping ship” after contributing heavily to UML…I was expecting that they would commit to a software development process that would favor UML, but that did not happen.

My “breakthrough,” if I can call it that, came late in 2009, when I was involved in a project long enough (ten months) to try new techniques, with the right team size, a known scope and domain expertise.  We applied only an Agile wall and concepts from the Theory of Constraints, but boy, did it make a difference.  One of the highlights of that project was “on time and on budget.”  There was very little overtime from the beginning, since I requested that no one work overtime for the duration of the project. The project made use of…use cases and UML, not epics or user stories.

After that, I got the “itch,” but did not get formally involved in a real Agile project until recently, and I’m coming to appreciate the results.  It’s still too early to see the real impact of using Agile, but the results are promising.  Suddenly, being Agile is fashionable, chic, cool, and using UML and use cases is old, boring, heavy. Sound familiar?

I believe that Agile is not a panacea and cannot solve all problems.  For one, the company’s organizational structure plays a significant role in the success of the project(s)–e.g., organizing with functional managers, a strong matrix or independent projects.  Another is the never-ending need for people; oftentimes teams are short on proper product owners or Scrum masters. Yet another is the commitment from upper management to Agile, and changing the way software is measured for success (e.g., throughput accounting). That said, there are multiple benefits to having focused, co-located teams working together as one entity toward a single goal.

I also believe that UML and use cases contribute greatly to a system design when properly managed.  A frequent problem with use cases is how to write them properly. After more than ten years of information on the subject, you still find UML artifacts and use cases that are poorly written.

Agile is still a work in progress, but progressing toward what?  Eli Goldratt, author of The Goal and Critical Chain, provides some insight.  Summarizing from Agile Management for Software Engineering: Applying the Theory of Constraints for Business Results, all sciences evolve from classification to correlation to effect-cause-effect.

Agile methods have agreed on internal nomenclature (e.g., epics, user stories, backlog), therefore they have reached the classification stage.

Agile methods, individually, have reached the correlation stage by pattern recognition: techniques such as XP and Scrum work.

Agile methods, however, have not reached the effect-cause-effect stage. I don’t think this stage will be reached soon–perhaps by the end of this decade–mainly because it involves measuring and agreeing on what needs to be measured, as well as the interdependencies among those measurements, in order to predict the effect a given decision will have on a project.

This does not take any merit away from Agile methods.  On the contrary, the positive effects of the results are a great motivator to reach the last stage and turn Agile methods into a science.

Now let’s go back and be Agile.

Posted in software | Tagged , , , , | Leave a comment

Installing Windows To Go on a USB 2.0/3.0 Flash Drive or Hard Drive from Windows 7

If you have a spare USB 2.0/3.0 flash drive or hard drive and want to re-purpose it for Windows To Go, you can follow the instructions at  The guide works as advertised as long as you’re installing from Windows 8.  However, if you’re installing Windows To Go from Windows 7, running the last step will not configure the boot record:

bcdboot.exe h:\windows /s h: /f ALL

The reason is that the version of bcdboot in Windows 7 does not support the same options (in particular the /f option) as the version in Windows 8.

Wait a second…don’t I already have Windows 8 loaded on the USB flash drive/hard drive?  Yes, and you can copy the right version of bcdboot.exe to your Windows 7 machine or execute it in place. The right version can be found at:


where h is your USB flash drive/hard drive letter.

Posted in windows 8, Windows To Go | Tagged , , , | 2 Comments

Installing Windows 8 Developer Preview on an Acer Aspire 1420p

The Acer Aspire 1420p is a nice candidate for the Windows 8 Developer Preview because it has a touch screen with two touch points. If you’re the lucky owner of one from the Microsoft PDC 2009, Windows 8 installs just fine.  I used the flash drive from the BUILD conference as the boot drive, which installs the Windows Developer Preview with Apps and Tools (x64 version), which should suffice for most installations. If you want to install a 32-bit version, create a bootable flash drive using the instructions at the hyperlink below.  Notice that the 32-bit version fits on a 4GB flash drive, unlike the 64-bit version.

The only trick to the 1420p is that it can boot from either of its two USB ports. The boot options for flash drives are listed as FDD at the end of the boot list when you enter the BIOS (by pressing F2 during the startup sequence).  I recommend moving both FDD options to the top of the list (by pressing F6).

After that, you can follow the instructions on

I shrank the Windows 7 Ultimate partition and gave Windows 8 64 GB of space.  Once the installation completes, the touch screen works fine, but there are physical limitations to using the gestures that are possible on the Samsung tablet provided at the BUILD conference.  Any of the gestures executed by swiping your finger from one of the edges of the screen into the screen (charms: swipe from the right; switch apps: swipe from the left; app bar: swipe from the bottom; bring up options for an app: swipe from the top) are very difficult due to the limited space between the edge of the touch screen and its frame (about 1/4 inch).  You can resort to using keyboard shortcuts or use the stylus included with the 1420p. Even with the stylus, the swipe-from-the-top and swipe-from-the-left gestures are somewhat troublesome.

The gesture to switch between applications (swipe-from-the-left) can also be performed with the mouse: let the mouse pointer rest on the left side of the screen for about one second and a little window appears that lets you select the next application.

Posted in windows 8 | Tagged , | 1 Comment

Hello world!

In September of 1988 I held a copy of K&R’s The C Programming Language, trying to understand the first C program, the original Hello World. C looked alien to me.

Fortran had been my first programming language in college. The school had a PDP-11 or some other DEC machine that had been donated by Digital Equipment, and we used it to compile our assignments.  We had to schedule time in the lab, and that’s where I had my first experience in “concurrent programming.”  The compiler (called the TKB, an acronym whose meaning I was never interested in learning) was not multi-user, so every time you wanted to compile your program, you’d literally have to stand on your chair and yell “I’m going to use the TKB.” That was the signal for other users to wait before submitting. Once the compiler was done you would “unlock” the TKB by yelling again, and life was good…except when someone wanted to mess up your work and would submit a job simultaneously. There was no “event log” that would say who the offender was, so you would just curse and move on.

Back to C. After college I enrolled in a class where, at the end, we had to deliver an implementation of Tic-Tac-Toe.  Having been an avid chess player in high school, I was excited to come up with the magic formula that would evaluate the position, so I perused some books that touched on game theory and stumbled on what looked like a good algorithm for the game. The instructor had given us loose instructions on the user interface, but he preferred a GUI.

Having no time and no money, only a copy of Turbo C, and not willing to invest time in a GUI library that came in one of Herb Schildt’s C books–I’d had to fix several bugs in it on another project–I decided to tackle the project using the algorithm. But alas, there was a problem position where the evaluation function would fail and the computer would lose.  So I injected a “special case” to make sure the function would play it safe should it encounter the losing position.

At the end of the project, the other two teams had beautiful user interfaces–and both teams used Pascal, not C–but my ugly baby with a command-line interface, writing O and X and asking for the coordinates of the next move, was the only one that would not lose.

I did not get the best grade, though.

Posted in concurrent programming, software | Leave a comment