Mary had a Little Lambda (Architecture)

 

Nursery rhyme aside, I've been looking avidly at Big Data Lambda Architectures. Nathan Marz introduced the term back in 2012, and it is reminiscent of λ-calculus; the first time you hear it, it brings back memories of higher-order functions in programming languages (functional or imperative, applications or systems). It is a layered architectural style, similar in nature (since it is layered) to Pipes and Filters, but for Big Data.

The “lambda” portion of the term refers to data immutability at the Batch Layer (similar to pure functions). Quoting Nathan, “The batch layer needs to be able to do two things to do its job: store an immutable, constantly growing master dataset, and compute arbitrary functions on that dataset.”

Lambda Architectures are not new, but I think that Nathan had the great idea of giving them a name. It reminds me of Plato's Allegory of the Cave: while in the cave, we professionals have most likely seen a lambda architecture in one way or another, participated in developing one, or even created products with similar characteristics, but could not describe it fully. Nathan departed from the cave, saw the general aspects of the architectural style, then returned to the cave and described the real thing, giving it a name that, whether you like it or not, is here to stay. Hey, it's catchy and intriguing!

Lately I've been thinking that Lambda Architectures come in two flavors: Big Lambda Architectures (Λ-Architecture), which is what Nathan describes, and Little Lambda Architectures (λ-Architecture). The differentiation has to do with how much the underlying technologies can scale, and one would choose such technologies on purpose instead of "scaling down" Big Data products. Otherwise, what would we call architectures that have the same three layers and the same functions, but don't scale as much? I can think of analytics products that have evolved from little lambda to big lambda, too, so little lambdas must exist.

Take it with a little grain of data salt 😉

Posted in big data, lambda architecture, software, software architecture

Customizing Priority Queues in C++

A nifty feature of priority queues in C++ is that you can implement a max heap or a min heap by simply changing the compare type. For most purposes, std::less and std::greater can do the job, provided that you use built-in types or custom types that provide a < or > operator and strict weak ordering.

A priority_queue can be implemented by a couple of different containers, with vector being the default.  In addition, you can provide a Compare type that must comply with strict weak ordering.

 

template<typename T, typename Container=std::vector<T>,
    typename Compare=std::less<typename Container::value_type>> 
        class priority_queue;

Here's a priority queue with a max heap, followed by a priority queue with a min heap. Yes, it's counterintuitive that the max heap uses std::less and the min heap uses std::greater: top() always returns the largest element according to the comparator, so the default std::less yields a max heap.

priority_queue<int, vector<int>, less<int>> max_pq;
priority_queue<int, vector<int>, greater<int>> min_pq;

max_pq.push(1); max_pq.push(2); max_pq.push(3);
min_pq.push(1); min_pq.push(2); min_pq.push(3);

// prints 3 2 1
while (!max_pq.empty()) {
  cout << max_pq.top() << ' ';
  max_pq.pop();
}
cout << '\n';

// prints 1 2 3
while (!min_pq.empty()) {
  cout << min_pq.top() << ' ';
  min_pq.pop();
}
cout << '\n';

Nothing earth-shattering here. However, if you have a user-defined type, you have several mechanisms at your disposal. Assume that we have the following type:

struct student {
  student(double gpa, int age) : gpa{gpa}, age{age} {}
  
  double gpa;
  int age;
};

A student has a GPA and a given age. We want to give priority to students with the highest GPA. Let's look at the options:

Provide comparison operators

Comparison operators can be defined as a member function, as a friend function if they access private member variables of the type, or as a free function if they use only publicly accessible parts of the type. Once you have comparison operators, the type can be safely used with std::less and std::greater (provided you supply both operators).

struct student {
  student(double gpa, int age) : gpa{gpa}, age{age} {}
  
  double gpa;
  int age;

  // Define it as a member function
  bool operator < (const student& rhs) const { return gpa < rhs.gpa; }

  // Define it as as a friend if it accesses private member variables,
  // or as a free function if it does not.
  // (in this case it does not, but it's for illustration).
  friend bool operator >(const student& lhs, const student& rhs);
};

bool operator >(const student& lhs, const student& rhs) {
  return lhs.gpa > rhs.gpa;
}

int main() {
  priority_queue<student, vector<student>, std::less<student>> max_gpa;
  priority_queue<student, vector<student>, std::greater<student>> min_gpa;

  student alice(3.98,15), bob(3.97, 18), charlie(3.95, 16);
  max_gpa.push(alice);
  max_gpa.push(bob);
  max_gpa.push(charlie);

  // prints 3.98 3.97 3.95
  while (!max_gpa.empty()) {
    cout << max_gpa.top().gpa << ' ' << endl;
    max_gpa.pop();
  }

  min_gpa.push(alice);
  min_gpa.push(bob);
  min_gpa.push(charlie);

  // prints 3.95, 3.97, 3.98
  while (!min_gpa.empty()) {
    cout << min_gpa.top().gpa << ' ' << endl;
    min_gpa.pop();
  }
}

Roll your own function object…

Simply define a type with a function call operator as shown below. The signature for the function call operator makes it a binary predicate.

struct student {
  student(double gpa, int age) : gpa{gpa}, age{age} {}
  
  double gpa;
  int age;
};

struct less_gpa {
  bool operator() (const student& lhs, const student& rhs) { 
    return lhs.gpa < rhs.gpa;
  }
};

struct greater_gpa {
  bool operator() (const student& lhs, const student& rhs) { 
    return lhs.gpa > rhs.gpa;
  }
};

int main() {
  priority_queue<student, vector<student>, less_gpa> max_gpa;
  priority_queue<student, vector<student>, greater_gpa> min_gpa;

  student alice(3.98,15), bob(3.97, 18), charlie(3.95, 16);
  max_gpa.push(alice);
  max_gpa.push(bob);
  max_gpa.push(charlie);

  // prints 3.98 3.97 3.95
  while (!max_gpa.empty()) {
    cout << max_gpa.top().gpa << ' ';
    max_gpa.pop();
  }
  cout << '\n';

  min_gpa.push(alice);
  min_gpa.push(bob);
  min_gpa.push(charlie);

  // prints 3.95, 3.97, 3.98
  while (!min_gpa.empty()) {
    cout << min_gpa.top().gpa << ' ';
    min_gpa.pop();
  }
  cout << '\n';
}

…Or use std::binary_function as the base class for your function object

The only benefit of using a binary function object is if you need the predefined typedefs for the argument and return types and use them in generic programming. You still must provide the function call operator and be able to access the necessary members. Note that std::binary_function was deprecated in C++11 and removed in C++17.

struct student {
  student(double gpa, int age) : gpa{gpa}, age{age} {}
  
  double gpa;
  int age;
};

struct less_gpa : public std::binary_function<student, student, bool> {
  bool operator() (const student& lhs, const student& rhs) { 
    return lhs.gpa < rhs.gpa;
  }
};

struct greater_gpa : public std::binary_function<student, student, bool> {
  bool operator() (const student& lhs, const student& rhs) { 
    return lhs.gpa > rhs.gpa;
  }
};
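
For illustration, here is a minimal sketch of how generic code might consume those inherited typedefs. The pick_max helper is hypothetical (not part of the original discussion), and it only compiles pre-C++17, since std::binary_function is gone after that.

// Hypothetical helper that relies on the typedefs inherited from
// std::binary_function (first_argument_type, second_argument_type, result_type).
template <typename Compare>
typename Compare::first_argument_type
pick_max(const typename Compare::first_argument_type& a,
         const typename Compare::second_argument_type& b,
         Compare comp) {
  // comp is a "less-than"-style predicate, so b wins when comp(a, b) is true.
  return comp(a, b) ? b : a;
}

// Usage (hypothetical): student top = pick_max(alice, bob, less_gpa{});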

Use a lambda expression

Technically, you will use a copy of the closure generated by the lambda expression and assigned to the f variable. Scott Meyers gives a great explanation here.

struct student {
  student(double gpa, int age) : gpa{gpa}, age{age} {}
  
  double gpa;
  int age;
};

int main() {
  // Binary predicate with lambda
  auto f = [](const student& lhs, const student& rhs) -> bool {
    return lhs.gpa < rhs.gpa;
  };

  // Notice the decltype and passing f in the constructor
  priority_queue<student, vector<student>, decltype(f)> max_gpa(f);

  student alice(3.98,15), bob(3.97, 18), charlie(3.95, 16);

  max_gpa.push(alice);
  max_gpa.push(bob);
  max_gpa.push(charlie);

  // prints 3.98 3.97 3.95
  while (!max_gpa.empty()) {
    cout << max_gpa.top().gpa << ' ';
    max_gpa.pop();
  }
  cout << '\n';
}

And if you need two or more criteria…

All of the options above will work when you have only one criterion for the priority queue, but what happens if you want to apply two or more criteria? Say that you want to give the highest priority to the student with the highest GPA, but in case of a tie, you want to give it to the younger student. In this case you can apply any of the methods above, but your comparison has to consider the additional criteria. In the code below, when the GPA is the same, the return value is true if the lhs student is older, so in a max heap priority queue the younger student will be given higher priority.

struct student {
  student(double gpa, int age) : gpa{gpa}, age{age} {}
  
  double gpa;
  int age;
};

int main() {
  // Binary predicate with lambda
  auto f = [](const student& lhs, const student& rhs) -> bool {
    if (lhs.gpa < rhs.gpa)
      return true;
    else if (lhs.gpa == rhs.gpa && 
             lhs.age > rhs.age)
      return true;
    else
      return false;
  };

  // Notice the decltype and passing f in the constructor
  priority_queue<student, vector<student>, decltype(f)> max_gpa(f);

  student alice(3.98,15), bob(3.97, 18), charlie(3.98, 16);

  max_gpa.push(alice);
  max_gpa.push(bob);
  max_gpa.push(charlie);

  // prints (3.98, 15) (3.98, 16) (3.97, 18)
  while (!max_gpa.empty()) {
    const student& s = max_gpa.top();
    cout << '(' << s.gpa << ", " << s.age << ')' << ' ';
    max_gpa.pop();
  }
  cout << '\n';
}

 

Posted in algorithms, data structures, software

The (White) Elephant in the Room


All successful software development projects are alike; each unsuccessful software development project is unsuccessful in its own way…and some become white elephants. Paraphrasing and extending the first sentence of Anna Karenina, a white elephant is not only a failed project, but a project that fails miserably: severely late to market (if delivered at all), with severe cost overruns, severe quality problems, or a combination of the above. As a software professional, chances are you've participated in a white elephant project.

Why do white elephants happen?

Nobody wants a white elephant. So how is it that, as software professionals, and very frequently among the most expensive resources in a company, we (as in the collective we) manage to produce white elephants?

There's a combination of factors, but projects that turn into white elephants are typically very large, either for the size of the company, or multi-million, multi-year projects that span multiple, possibly geographically dispersed teams. Large product rewrites are typical, but you can also find white elephants when investing in new technology.

While researching for this post, I came across this article, entitled The Most Common Reasons Why Software Projects Fail.  Unfortunately, not much has changed from previous years, and the reasons are similar to works published by other authors.  A couple of the points in that article that resonate with me have to do with schedules and process.

Mandated completion dates originate from the need to deliver products that are time-sensitive: products sensitive to time to market can become irrelevant if they're not delivered on time, and this is perfectly understood by the collective we, so why are these schedules still imposed and accepted? For a green team (and I'm not talking about an energy-efficient team) it is easier to impose a mandated completion date. Few dare to say no or negotiate a different outcome, and the team embarks on crazy hours. These teams usually have a high turnover rate, and replacing members is expensive to the project. This white elephant is easier to understand, but how about experienced, savvy teams that can negotiate a schedule?

Software development processes are necessary for any product that is going to be in front of customers…yet when we fail to follow them, chaos ensues and things become like the Running of the Bulls in Pamplona, where everyone runs for their lives. I remember a C++ seminar I went to in 2003, where the speaker, Herb Sutter, asked the audience how many of us had a software development process in place in our companies. I remember raising my hand…only to find myself alone. I don't think things have changed dramatically in the last (ouch!) 13 years. What used to be iterative processes, such as the Rational Unified Process (referred to as "waterfall" by people who joined the ranks after the Agile manifesto), turned into Agile, but implementations true to the spirit of Agile are scarce (some turn into Fragile), and now into DevOps.

Two other reasons I've seen for white elephants are gullibility and panacea. Gullibility is an over-promised and under-delivered completion date. It flows from the bottom layers to upper management, who buys the completion date with a complete disregard for data that shows the opposite. Panacea can be anything that promises to deliver your project, be it a technology that is a silver bullet (and the one you want to add to your resume) or compressing a schedule by bringing in another team but failing to manage it properly. For some reason it seems that throwing people at a problem is akin to throwing hardware at a problem: heck, add more RAM, and while you're at it add more disk space. Any additional people you throw at a problem will increase the amount of communication needed, and the number of "collisions" will increase, too. Nothing new under the sun.
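
To put a rough number on that communication cost: with n people there are n(n - 1)/2 potential communication channels, so growing a team from 5 to 10 people grows the channels from 10 to 45, a point Brooks made decades ago in The Mythical Man-Month.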

So…have you seen a white elephant lately? I didn't think so. They don't exist, right?

Posted in software

My Sister From Another Mister

No, this is not about family infidelities. Rather, it is about copyright laws. You see, a couple of months ago the debate team in my daughter's school had to prepare for a topic on copyright laws and how effective or ineffective they are.

I’ve always thought that copyright laws mean something, and given that my friend Catherine is always calling me “brother from another mother”, I only thought it was natural to start calling her sister from another mister and see if copyright really works.

The purpose of this post is to claim a copyright on LinkedIn (and now WordPress) over this simple phrase, an experiment of sorts. Who knows, perhaps it'll become something like the smiley, albeit I don't expect to make any money on it. I must also disclose that I did my due diligence and Googled the phrase, and it did not exist. So there you have it. You can use it in social gatherings to show your smart wits, on T-shirts, or whatever floats your boat, to your heart's content, and you don't have to pay any royalties or attributions, as long as you don't claim it as yours.

Sister From Another Mister, © Javier Estrada, 2016

Sista' From Another Mista', © Javier Estrada, 2016

Posted in software

The Leprechaun Trap

A few years ago, on the occasion of St. Patrick's Day, my daughter built a leprechaun trap. She wrapped a shoe box in shiny green paper and cut a slot through it, decorated it with a border with jingle bells, added a ladder made of Q-tips so the leprechaun would be able to climb, and then added a gold box with the legend "gold inside". To trick the leprechaun, she hung a piece of "gold" from a pole. Once he reached for the piece of gold, he would fall through the slot and be caught. Who could argue with such a piece of engineering? It was virtually impossible for the leprechaun to escape once tempted.

The night before, she asked me impatiently where the best place for the trap would be. I mentioned that next to the front door would be best, since the leprechaun would have to come in from somewhere. She placed the trap and we all went to sleep. The next morning, as I left for work (I'm an early riser, so it was still dark) and opened the door, I almost tripped over some object on the floor. I uttered words that I cannot repeat here, and as I turned on the light I realized I had rattled the leprechaun trap and messed it up a little.

After realizing my mistake, I decided to leave the trap as it was and just left. Sure enough, later in the day my daughter called me very excited: "Dad, I almost caught the leprechaun!", and she described what had happened and how she had found the box. When I came back, she explained why the trap had failed and started thinking of improvements; she kept the box and used it the following year, again with no success.

There are leprechaun traps in software. Very frequently we're tempted by "gold", whether it is in the form of:

  1. Latest and greatest programming language X…it doesn't matter if X has a few percentage points in global usage and has remained so for several years. The problem is not in the language itself; it's mostly in the long-term viability, tools, and libraries around it.
  2. Latest technology…even though there is scarce support for it. How often have you looked at an open source library that looks promising, only to find out that the community around it is a couple of developers and it has not been maintained for years, or that once you look under the hood the code quality leaves a lot to be desired?
  3. Latest and greatest development process…DevOps anyone, anyone, Bueller? The problem is not in the development process itself (after all, DevOps responds to a set of needs); it is in the edicts that move teams towards development processes that are misunderstood or inappropriate, often without training and simply as a trend that starts who knows where.

This does not mean that there are no authentic opportunities for striking "gold", or that one shouldn't look elsewhere for satisfactory solutions to needs not currently met by existing technologies. So why, as software professionals, do we so often fall into a leprechaun trap? A coworker of mine long ago mentioned that we engineers are pleasers. We're problem solvers, and oftentimes overly optimistic. It's hard to say no to "gold", to resist the impulse and think carefully before we offer our educated guesses and commit to schedules that are unrealistic, but there's a shiny box that reads "gold inside". We fall into the leprechaun trap, and once inside, we know it's hard to climb out of late schedules, budget overruns and buggy software: whole teams have to work overtime to compensate for the late schedules and the budget overruns, and with buggy software you just build a backlog that you, or some unlucky soul, will have to work on for the next year because of software "gold".

The secret not to fall into a leprechaun trap? Don't be a leprechaun.

Happy Saint Patrick’s Day!

Posted in software

Observations on Observer

My first encounter with the Observer design pattern was circa 1998. The GoF book was still new (published in 1994) and the community was still assimilating design patterns. Java the Hutt* even offered an Observer interface and an Observable class. What could be simpler?

However, I've used Observer exactly once. The main reason is that Observer is a scaffolding pattern: it is good for understanding the concepts, but once you try to implement a robust version of it, it becomes complicated very quickly. Furthermore, you often graduate to a better version of it, and oftentimes there are language-specific facilities or libraries that implement a variation on Observer. In essence, the juice is not worth the squeeze.

Consider this naïve implementation in C++:

struct observer {
  virtual void update() = 0;
};

class subject {
 set<observer> observers;
 public:
  void notify() { 
   for_each(begin(observers), end(observers), mem_fn(&observer::update));
  }

  void add(const observer& o) { observers.insert(o); }
  void remove(const observer& o) { observers.erase(o); }
};

class concrete_subject : public subject {
 ...
};

The problems start very quickly. They've been discussed ad nauseam over the years. The most complete discussions I've read are by Alexandrescu [1], [2] and Sutter [3].

  1. What type of container is best? For example, can an observer register more than once? Can observers be notified in a specific order?
  2. Is the container thread-safe? For example, can observers be added or removed concurrently from different threads?
  3. What is the type a container stores?  Storing an observer by value requires it to be copy-constructible, so perhaps storing an observer* is best.  However, if the observer goes out of scope without being removed from the subject, one ends up with a dangling pointer.  How about using a shared_ptr<observer>?  If all but the reference stored in the subject are released, unwanted notifications could keep coming in. (A sketch of one option follows this list.)
  4. Can observers call remove from within the update method?  This could invalidate the iterator in subject::notify.
  5. What happens if an exception is thrown from a concrete observer's update member function? You don't pass Go, you don't collect $200.
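
To make question 3 concrete, here is a minimal sketch (my own, not code from the referenced articles) of a reworked subject that stores weak_ptr, assuming the observers are owned elsewhere through shared_ptr; a destroyed observer is then simply skipped rather than dereferenced:

#include <algorithm>
#include <memory>
#include <vector>

struct observer {
  virtual ~observer() = default;
  virtual void update() = 0;
};

class subject {
  std::vector<std::weak_ptr<observer>> observers;
 public:
  void add(const std::shared_ptr<observer>& o) { observers.push_back(o); }

  void notify() {
    // Prune expired entries, then notify the survivors.
    observers.erase(
        std::remove_if(observers.begin(), observers.end(),
                       [](const std::weak_ptr<observer>& w) { return w.expired(); }),
        observers.end());
    for (auto& w : observers)
      if (auto o = w.lock()) o->update();
  }
};

This still leaves the ordering, locking, reentrant-removal and exception questions open, which is precisely why a library solution is usually the better deal.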

These questions are not specific to C++; the same problems need to be solved in Java or C#, for that matter, even when using delegates**, which behind the scenes implement a variation on Observer: a delegate provides a collection to store the targets, add/remove methods via the += and -= operators, and a notification mechanism that executes on the thread where the delegate is invoked.

Java's Observer interface and Observable class are not perfect, either. If you read the fine print, you'll notice that the notifications can come in on different threads. Ouch. This means that you had better make sure your event handler is thread-safe.

Notifications per Method Ratio (N:M)

A taxonomy for Observer that can let you choose what to use is the ratio of notifications per method.  Assume N is the number of notifications from a specific Subject and M is the number of methods to handle the notification.  When choosing, you can look at the N:M ratio and see if it makes sense to you.

In Observer, the N:M ratio is many:1, since all notifications are funneled through the update method.

In the Delegation Event Model (DEM) design pattern, all the notifications are grouped into one interface, with one method per notification. The N:M ratio is 1:1.

// Methods in Java interfaces are public.  Ditto in C#.
// Events are grouped in one listener interface, one method per event.
public interface XxxListener {
  void event1(XxxEvent e);
  void event2(XxxEvent e); 
}

public class MyXxxListener implements XxxListener {
  public void event1(XxxEvent e) { ... }
  public void event2(XxxEvent e) { ... }
}

public class MySubject {
  public void addXxxListener(XxxListener lis) { ... }
  public void removeXxxListener(XxxListener lis) { ... }
}

In C#, delegates are a language-specific version of Observer: while a delegate can address several targets, it supports one notification per delegate, so the N:M ratio is also 1:1.

If you want notifications à la carte, in C++ you can make use of Boost.Signals2, a thread-safe signals-and-slots library where you get one notification per signal. You can consider signals a C++ alternative to delegates in C#. As added flexibility, you can specify a priority (group) for each slot. Just like in C#, there will be one signal per notification (one notification per method). The N:M ratio is also 1:1.
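
As a small illustration (the signal name and slots below are mine, not from any particular codebase), a per-notification signal with grouped slots might look like this:

#include <boost/signals2.hpp>
#include <iostream>

int main() {
  // One signal per notification; each connected slot is one handler.
  boost::signals2::signal<void(int)> on_temperature_changed;

  // Slots in lower-numbered groups are invoked first.
  on_temperature_changed.connect(0, [](int t) { std::cout << "log: " << t << '\n'; });
  on_temperature_changed.connect(1, [](int t) { std::cout << "ui:  " << t << '\n'; });

  on_temperature_changed(42);  // notifies both slots, in group order
}

Slot groups are ordered by their key, so the logging slot above always runs before the UI slot, regardless of connection order.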

In essence, when shopping around for Observer, beware of the consequences and trade-offs of the implementation, whether it is a language-specific solution, a library, or a roll-your-own implementation.

References

[1] Alexandrescu, Andrei,  Generic<Programming>: Prying Eyes: A Policy-Based Observer (I), C/C++ Users Journal, April 2005.

[2] Alexandrescu, Andrei,  Generic<Programming>: Prying Eyes: A Policy-Based Observer (II), C/C++ Users Journal, June 2005.

[3] Sutter, Herb,  Generalizing Observer, C/C++ Users Journal, September 2003.

* Could not resist paying homage to Star Wars character Jabba the Hutt.

** As an anecdote, when learning about delegates in the first-ever Guerrilla .NET at DevelopMentor before the .NET framework was released, I asked the instructor, Brent Rector, what happened if the object receiving the delegate call threw an exception.  He tried it, and the program crashed.

Posted in design patterns, software

Singling out Singleton

Out of all the design patterns that I've used in my professional career, Singleton is the one I've used the least. Don't get me wrong. In my experience, as developers we are fascinated with the ability to refer to a single instance, and in our quest to master the pattern we whip up naive implementations that bring more problems than solutions. Most of the time one can solve the single instance problem by creating, well, a single instance (in UML terms, a Singleton has both cardinality and multiplicity of 1). If you and your team have control of the code, why would you want to create a Singleton when you can easily make sure only one instance of a given class is created?

The main reason I don't use Singleton is that it is misunderstood: Singleton isn't about a single instance, it is about a single state. That's right. Singleton makes sure that the object state is consistent (meets its invariants) by providing a single point of entry. However, there is another design pattern called MonoState that addresses the problem more elegantly and does not have many of the issues of Singleton. It was originally published in C++ Report, a now defunct magazine. You can find a nice explanation of it here.
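
For the curious, a bare-bones MonoState looks roughly like the sketch below; the app_config name and its single knob are made up for illustration. Instances are cheap to create and copy, but they all share the same state, so the invariants still live in one place.

class app_config {
  static int verbosity;              // the shared state lives in statics
 public:
  int  get_verbosity() const        { return verbosity; }
  void set_verbosity(int level)     { verbosity = level; }
};

int app_config::verbosity = 0;       // one definition, shared by every instance

// Usage: app_config a, b;  a.set_verbosity(3);  // b.get_verbosity() == 3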

If you're still intent on whipping up your own Singleton, there are two problems to solve: lifetime and thread-safety.

Regarding lifetime, if your Singleton's lifetime is independent, a simple implementation that I used in the past makes use of the Nifty Counter technique, described in the C++ ARM [1] and by John Lakos [2]. It uses C++98 but can easily be converted to C++11, and it does not provide Double-Checked Locking, albeit experts will argue that DCL is broken. In addition, several approaches have been discussed, particularly in Alexandrescu [3] and Vlissides [4], and one can find Singleton implementations in Boost. One of the differences is that the approach explored by Andrei ultimately makes use of atexit(), while a Singleton using the nifty counter technique will be deallocated after all the calls to atexit() have completed. That may not seem like much, but if you're looking for extra flexibility, this might be what you need.

Listing 1 (singleton.h):

class NiftySingletonDestroyer;

class Singleton {
 static Singleton* _instance;

 Singleton(const Singleton&);
 Singleton& operator=(const Singleton&);

 friend class NiftySingletonDestroyer;
 Singleton();
 ~Singleton();
public:
 static Singleton* Instance();
};

static class NiftySingletonDestroyer {
  static int counter;
 public:
  NiftySingletonDestroyer();
  ~NiftySingletonDestroyer();
} aNiftySingletonDestroyer;

Listing 2 (singleton.cpp):

#include "singleton.h"
Singleton* Singleton::_instance = 0;
Singleton::Singleton() {}
Singleton::~Singleton() {}

Singleton* Singleton::Instance() {
  if (!_instance) {
    _instance = new Singleton();
  }   
  return _instance; 
}

int NiftySingletonDestroyer::counter;

NiftySingletonDestroyer::NiftySingletonDestroyer() { ++counter; }
NiftySingletonDestroyer::~NiftySingletonDestroyer() {
  if ( --counter == 0 && Singleton::_instance ) 
    delete Singleton::_instance;
}

Note the declaration of the NiftySingletonDestroyer:

static class NiftySingletonDestroyer {}...

as a static class (more precisely, the aNiftySingletonDestroyer object declared with it has internal linkage). This is what does the trick: every translation unit that does #include "singleton.h" gets its own aNiftySingletonDestroyer, whose constructor increments the shared counter during static initialization. When the program shuts down, each of those objects is destroyed, decrementing the counter, and when the counter reaches zero the Singleton is deleted.
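
On the thread-safety front (an alternative to the nifty counter above, not part of it), C++11 guarantees that a function-local static is initialized exactly once even under concurrency, so a Meyers singleton covers lazy construction and destruction at program exit without any counter bookkeeping:

class Singleton {
  Singleton() {}
  Singleton(const Singleton&) = delete;
  Singleton& operator=(const Singleton&) = delete;
 public:
  static Singleton& Instance() {
    static Singleton instance;  // initialized once, thread-safely, on first call (C++11)
    return instance;
  }
};

The trade-off is that you give up the fine-grained control over destruction order that the nifty counter provides.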

References

1. Ellis, Margaret A. and Bjarne Stroustrup, The Annotated C++ Reference Manual, Section 3.4, pp. 20-21, Addison-Wesley, Reading, MA, 1990.
2. Lakos, John, Large-Scale C++ Software Design, Chapter 7, pp. 537-543, Addison-Wesley, Reading, MA, 1996.
3. Alexandrescu, Andrei, Modern C++ Design, Addison-Wesley, Reading, MA, 2001.
4. Vlissides, John, Pattern Hatching: Design Patterns Applied, Addison-Wesley, Reading, MA, 1998.

Posted in design patterns