This is my Lightning Talk for CPPCON 2020. I’m super grateful to Chris Ryan for posting it on LinkedIn.
Quality C++ documentation is abundant…as long as it is in English. If you’re a Spanish speaker, your options are limited. This talk is about my experience bringing es.cppreference.com, the Spanish companion site to cppreference.com, up to date to give Spanish-speaking C++ lovers and enthusiasts a quality experience when learning C++…en español.
Many years ago I read a follow-up to the seminal Design Patterns book by the GoF, called Pattern Hatching: Design Patterns Applied, by John Vlissides. He also wrote a monthly column of the same name in C++ Report, a now-defunct magazine. One of the things that caught my attention was his remark that keeping on with design patterns was a labor of love.
Before his passing, he was kind enough to review an article that I intended to publish on Pluggable Factory and gave me very valuable feedback, as well as feedback on using the nifty counter technique–originally in the C++ ARM and then in John Lakos’ book, Large-Scale C++ (first edition)–to implement a variation of Singleton.
To me, John was an inspiring figure to give to the community without expecting anything in return, and hopefully the effort of translating C++ documentation to Spanish will enable others to learn and love this programming language that has given me so much.
Three-way comparison (a.k.a. “the spaceship operator”, <=>) is a cool new feature in C++20. While the usage is simple, the documentation is not :-). Strong, weak, partial ordering…concepts, more concepts, type traits, and so on.
The documentation was a long time in the making, but I finally had a good couple of days to work on it. There are some examples missing–ditto in the English version–but it’s nonetheless much-needed documentation. You can find it here.
La comparación de tres vías (también conocida como “el operador nave espacial”, <=>) es una nueva y genial característica de C++20. Mientras que el uso es simple, la documentación no lo es. Ordenamiento fuerte, débil, parcial…conceptos, más conceptos, rasgos de tipo y así sucesivamente.
La documentación tomó bastante tiempo, pero finalmente tuve un buen par de días para trabajar en ella. Hay algunos ejemplos que faltan–lo mismo en la versión en inglés–pero no obstante, se necesita mucho la documentación. La puedes encontrar aquí.
I also own a 2019 MacBook Pro (Intel Core i9, six-core), and informally they’re on par in terms of performance, but Geekbench reports show the Mac mini M1 beating the MacBook Pro hands down.
I’ve been running versions of IntelliJ IDEA for Apple Silicon and CLion for Apple Silicon. Startup time is almost instantaneous from a cold boot. Kudos to Phil Nash, Anastasia Kazakova, and team for these great tools.
It took around 3 weeks to receive it since I ordered it, but it was worth the wait–and I wasn’t in a hurry. If you can get your hands on one of these, or are undecided, highly recommended.
This was my favorite session. I must confess that I’ve seen it three times, and had the time to go slowly over the topic discussed: performance.
Emery Berger needs no introduction. Just check his LinkedIn profile; but just in case, his credentials are listed at the beginning of the session, and the mere fact that the CPPCON organizers scheduled him for a plenary session is telling: he’s got the goods.
What made this session useful and memorable? For one thing, Emery tackles the topic of performance with a meticulous approach supported by sound math, statistics, data, you name it. At the end of the post I include the references mentioned in the session.
Chillax, have a mojito
Emery starts the session with a simple example: the performance of a given system in the ’70s, the cool ’80s, and today, but soon things get super interesting. In the past, you gained performance just by…upgrading your computer, right? Of course! Who doesn’t remember those Gateway 486SX computers with 4MB of RAM (and don’t make me go back to the 386SX, 80286, 8086…)?
Well, with the advent of multicore/manycore processors, things changed, and for the better.
Gains in performance cannot be established by saying “I ran it once and it ran better.” Given two systems, A and A’, if A’ performs better, what is the reason behind that claim? Our typical approach is to find opportunities for better code, but why? It shouldn’t be black magic, right?
So, without spilling the beans, this is what you will learn from the session:
Variance introduced by factors that can affect performance
Data Layout (from code to how it’s really laid out in memory)
Link order and its impact on function addresses
Environment variable size (moves program stack)
Running a program in a new directory and its impact on layout
Moreover, Emery introduces causal profiling and Coz, and how to approach a performance problem. What he exposed reminded me of Control Theory with a feedback loop. I asked Emery in a follow-up Q&A about the difference between BPF and Coz, and he mentioned that BPF is an observing tool, while Coz is a predicting tool. I would think they would be a great complement to each other. On verra.
In this section of the talk you’ll be introduced to:
Experimentation and what slowing down a component reveals
Latency and throughput
My favorite reference was Little’s Law, which I have used exactly twice in previous projects. A specific resource where it is mentioned–and used–is in Windows Server 2003 Resource Kit: Performance Guide.
Hopefully I’ve gotten you interested in this session. If you have an hour to chillax with a mojito, I encourage you to check it out. You’ll derive hours of satisfaction following up on the papers and putting into practice what you learn.
Word of the Conference
Many of the sessions, such as Emery’s plenary session and Herb Sutter’s capstone plenary session, make use of the word incantation. Presenters use it to present a non-apparent solution to a given problem. It seems that at every CPPCON conference I’ve attended there’s a catchy phrase. We’ll see what next year brings.
Arthur O’Dwyer’s session is a lap around the concurrency facilities present in C++, starting with C++11. He opens with a gentle introduction to concurrency and parallelism pre-C++11 and walks you toward the memory model present in Modern C++; he then proceeds to present threads, joining threads, mutexes, scoped locks, atomics, shared_mutex, condition variables, counting semaphores, latches, barriers, promises, futures. You get the idea. He goes further by introducing usage idioms for some of these mechanisms.
Ay Chihuahua, Singleton again!
I mentioned in a previous post that Singletons are possible in Modern C++ using std::call_once and std::once_flag. That is explained here, too.
A cursory view of std::memory_order (or if you prefer Spanish, std::memory_order) will give you more details on the different relationships and their formal definitions, synchronizes-with being, IMO, the easiest.
As a side note, if you want a holistic approach to concurrency in C++, be sure to check Anthony Williams’ book, C++ Concurrency in Action, 2nd Edition; it’ll be a great complement to Arthur’s talk.
I like the fact that CPPCON is devoting time and speakers to Back to Basics sessions. I think they’re much needed: they help bring novices, or engineers coming from other languages, into mainline C++ programming, help them navigate intricate–or sometimes historical–aspects of the language, save time, or simply point them in the right direction.
I also like the fact that Arthur does not make the session complicated. It is a nice introduction, and it leaves it up to you to dig deeper into each of the facilities.
The Blue/Green Pattern
I like the last couple of slides defining this pattern. When sliding in new versions of services/microservices/configurations, it’s important to do it right. In my particular case, this was the highlight of the session, and I wished there had been more on it, but alas, Arthur was pressed for time. The technique is what I’m interested in, particularly since he mentioned C++20.
In a nutshell, I liked the approach that Arthur had. Easy going, reaching the Goldilocks principle: not too deep, not too shallow, just right. Got an hour to spare? Jump in!
I was interested in this talk by Phil Nash to see his approach with C++. TDD is, as the name states, test-driven (and Phil clarifies that it is different from test-first). That said, there is an assumption that you know your OOA/OOD/OOP well, along with your SOLID principles, or template metaprogramming, and so on. So, you don’t abandon any of these well-known and well-learned principles; you rather apply them to converge fast on the things you need to pass a test. The assumption is that such a test is the most important thing to do: it may realize a use case, implement a user story, add a piece of functionality, and so on.
As Phil explains, TDD starts with a failing test. Perhaps you’ve written your base classes or interfaces or protocol classes or abstract classes, or primary templates, or…the name doesn’t matter. The thing is, you start with a test that will fail and write the minimal amount of code to make that test pass. That will bring you to a refactoring opportunity, then you try again until the test passes, and so on. This portion of the talk takes 10-15 minutes, but I say they’re well spent to get the best grasp of the talk.
At minute 17 Phil addresses a question on how to introduce TDD on legacy code, and he’s right: it’s very hard. The reason, which is easy to visualize, is that you must have designed your classes with testing in mind–say, X-injection (constructor, property, parameter), easy to stub (fake, etc.) by writing to an interface (or, if you use templates, perhaps a primary template with partial and explicit specializations). You get the idea. This process can be painful, depending on how well you broke your dependencies.
Enough said. Now let’s check the meat of the talk. As the author of Catch, Phil conveniently makes use of it in the talk–and conveniently, too, CLion has built-in integration with Catch. While the examples are with Catch, what matters are the principles: the approach on how to write the test, make sure it eventually passes, and how to go about working with your C++ code to make the TDD cycle happen.
One thing you’ll notice is that source code and test code are together. That’s a style that Phil uses, and he explains that you can move the code out–hey, whatever floats your boat; you may decide to first write a minimal portion of your class, class template, or other artifact, and then write the tests by including those minimal files, and see how that goes.
What I liked about this talk is that it shows you how to go about TDD in a simple way–I perused the Catch2 documentation, and the framework is simple to use and seems to have a short learning curve. The examples that Phil gives are designed to show how to go from test to working code in short iterations, little by little, poquito a poco. Hey, I had to use some Spanish.
At the end of the talk, Phil answers some interesting questions. A very common one has to do with design and testing: do you give up design when you use TDD? The answer is a resounding no. Any approach that uses tests–POUTs, or Plain Old Unit Tests (you see, I can also invent my own acronyms), or TDD–has an evolving design. You design little by little and adapt the design according to the results you’re getting from the tests. Approaches that I’ve used in the past tend to do a full design up front (you could call it an architectural baseline to make it sound cool), which takes longer. Neither is wrong, and one may make more sense given your particular circumstances. The point here is that TDD is a tool that does not forego design; it simply starts with a different assumption and builds from there.
The question remains, though, what will happen when Catch reaches version 22. Will it be called Catch-22?
The Singleton design pattern has been discussed ad nauseam over the years. Peter Muldoon starts with the motivation for this talk: an occurrence of Singleton in the workplace, oh my!
The premise of the talk is not whether to keep it, but what to replace it with.
Over the years, I’ve seen people captivated by Singleton, a sort of Highlander syndrome where “there can be only one.” In Singling Out Singleton, I have a short blog post stating that Singleton refers to single state, rather than a single instance, and I even provided an option–what I call the Nifty Singleton, inspired by John Lakos’ technique, which in itself, if I remember correctly, came about in the C++ ARM book of old. For a modern alternative, one can use std::call_once and std::once_flag, which I don’t consider a singleton, but rather a requirement of the particular problem at hand. You can also resort to libraries such as Boost.
In my particular case, I’ve lived very well without it, thank you very much. Perhaps Singleton should be relegated to the annals of Design Patterns lore and used only pedagogically?
Peter takes a more holistic approach, tackling the different aspects that arise with a replacement. You can “cut to the chase” and go to minute 59 of the talk if you want to, but then you’d have to backtrack to understand the reasons. Some of the problems mentioned go beyond Singleton, and we can all learn from them.