Friday 31 December 2021

Beauty in Software Design (and Programming)


Recently I have repeatedly heard the phrases "beautiful"/"not beautiful" used to describe and grade a software design, a bugfix, or a piece of existing code.

Personally, I didn't like it at all, as it seemed somehow simplistic, but I told myself: be humble, don't judge, think before you speak... But then, out of the blue, I realized what disturbed me about that phrase! Let me present my argument.


Why didn't I like the "beauty" argument? Because it's too easy, too general, too vague.

Software design, as an engineering effort, is an exercise in finding the right trade-off between competing forces: performance and code readability, code quality and release deadlines, programming effort and the importance of a feature, etc.

When we just fall back to the "beauty" criterion, we are ignoring the more complex reality behind our code. I'd even go as far as to say that it is the sign of an immature engineer.

Let me quote from a blog post*:
"- Mature engineers make their trade-offs explicit when making judgements and decisions."
Let me state it again: we are not looking for beauty when programming, we are trying to survive in a world full of trade-offs. And a mature engineer won't try to ignore that and lock himself up in an ivory tower. Let me quote that blog post again*:
"The tl;dr on trade-offs is that everyone cuts corners, in every project. Immature engineers discover them in hindsight, disgusted. Mature engineers spell them out at the onset of a project, accept them and recognize them as part of good engineering."
Mammoths?

Have you seen the mammoths? They weren't beautiful**, not by any means! But they were optimally adapted to thrive in their habitat!


Here is one - should we reject him in a code review because of his ugliness?

Mathematics?

What about beauty in mathematics***? Isn't programming "mathematics done with other means"?

Isn't there the old adage among mathematicians that "if it's not beautiful, then it's probably wrong"? And does it apply to programming as well, via that same "mathematics done with other means" argument?

Well, my response to that is: mathematics is pure, in the sense that it doesn't have to make any trade-offs. 

Besides, I can't see much beauty in modern advanced mathematics like, for example, the proof of Fermat's Last Theorem. Rather, it's just complicated and tedious. It looks like the low-hanging fruit in mathematics has already been picked...

Update

A recent tweet expresses much the same sentiment:
--

** as the lyrics of a classic Polish rock song put it (credit where credit is due!)

*** "there is some cold beauty in mathematics..." as Bertrand Russel said


My talk about the Emma architectural pattern


When working for one of my clients I learned about the Emma (or rather EMMA :-) pattern and it got me quite intrigued. As far as I know, it has never been published anywhere, and frankly, there aren't many publications about embedded systems architecture anyway.

In the meantime, Burkhard Stuber from Embedded Use gave a talk about the Hexagonal Architecture* pattern and argued in his newsletter that:
"In my talk at Meeting Embedded 2021, I argue that the Hexagonal Architecture should be the standard architecture for UI applications on embedded devices. As software and system architects we should never have to justify why we use the Hexagonal Architecture. In contrast, the people, who don't want to use Hexagonal Architecture, should justify their opinion." 
However, at the time of this talk these ideas hadn't yet been communicated, but even now I still think that EMMA could also be an option for UI-based applications on embedded devices!

The EMMA architecture was successfully used in embedded devices at my client's, it follows an interesting idea, and it had not been published outside of the company before. So I secured an OK from my client and presented it to the general public!

The Talk

As they gave me permission to blog and speak about it, I gave a presentation on EMMA at the emBO++ 2021** conference:
The general TOC of the talk looks like this:
  • General intro about architecture patterns 
    - and also about embedded software architectures

  • Motivations and trade-offs
    - the motivation for and the trade-offs taken in the EMMA pattern

  • Overview of EMMA 
     - its layers, the various conventions it imposes on developers, the standard project structure
     - the basic control flow
     - the startup phase

  • Use Case 1
    - Bootloader on a Device Control MCU

  • Use Case 2
    - Target Tests for a Device

  • Use Case 3
    - Qt GUI on Yocto Linux and iMX-6

EMMA Architecture

And here it is: a short, informal description of the EMMA architecture pattern.

The name of the pattern is an acronym of:
  • E – event-driven (asynchronous events for communication)
  • M – multi-layered (layering to organize the code)
  • M – multi-threaded (i.e. threading has to follow guidelines)
  • A – autonomous (i.e. minimal interfaces)
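
To make these four properties a bit more tangible, here is a minimal, purely illustrative sketch (hypothetical names and APIs of my own, not code from the original implementation): two layers, each owning a thread, that talk to each other only through asynchronous events posted to queues.

    // Illustrative sketch of the EMMA communication style: two layers,
    // each with its own thread, exchanging asynchronous events only.
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    // An event is the only thing that crosses a layer boundary
    // (autonomy, minimal interfaces).
    struct Event {
        std::string name;
        int payload{};
    };

    // A simple thread-safe queue; each layer owns exactly one.
    class EventQueue {
    public:
        void post(Event e) {
            { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(e)); }
            cv_.notify_one();
        }
        Event wait() {
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return !q_.empty(); });
            Event e = std::move(q_.front());
            q_.pop();
            return e;
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<Event> q_;
    };

    int main() {
        EventQueue toApplication;  // events flowing up to the application layer
        EventQueue toDriver;       // commands flowing down to the driver layer

        // Lower (driver) layer: reacts to commands, reports results upwards.
        std::thread driver([&] {
            for (;;) {
                Event cmd = toDriver.wait();
                if (cmd.name == "shutdown") break;
                // ... touch the hardware here ...
                toApplication.post({"measurement.ready", cmd.payload * 10});
            }
        });

        // Upper (application) layer: a plain message loop (in the UI variant
        // this role is played by e.g. the Qt event loop).
        toDriver.post({"measure.request", 4});
        Event result = toApplication.wait();
        std::cout << result.name << ": " << result.payload << "\n";

        toDriver.post({"shutdown", 0});
        driver.join();
    }

The real pattern of course defines concrete layers, conventions and a startup phase (see the talk's TOC above); the sketch only shows the communication style.
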
The motivation of the pattern is to avoid designs like this:


which, unfortunately, tend to spring to life in many programming projects!

The inspiration for the pattern comes from the ISO/OSI seven-layer networking model:


We can clearly see that there are analogies between the networking stack decomposition and the layers of an embedded architecture! This is what got me interested in the first place.

When to use EMMA? The original design argues that it is good for: 
  • Low-power 8-bit MCUs
  • Cortex M0-M4 class, 32-bit MCUs (e.g. STM32F4xx), RTOS, C
But in a recent project we also used it for:
  • Cortex A class, Linux, C++, Qt, UI
I'd say that it can be naturally extended to UI-driven embedded devices, where the A7 (Application) layer isn't a message loop but a Qt-based UI!

Thus it can be seen as an alternative to the Hexagonal Architecture*, offering the one benefit that it follows the more widely known Layered Architecture pattern!

---

  

** emBO++ 2021 | the embedded c++ conference in Bochum, 25.-27.03.2021

Thursday 30 December 2021

My two talks about polymorphic memory resources in C++ 17/20

 
Quite some time ago I was listening to Jason Turner's C++ Weekly Episode 250* about PMRs (i.e. C++17's Polymorphic Memory Resources) and he mentioned some advanced usage techniques that can be accomplished using them.

Although I dare to consider myself quite a knowledgeable C++ programmer, I just couldn't get it - I didn't understand how these techniques work and what they are good for! What a disaster! Me not understanding C++??? That cannot be true! Well, it was, but I wasn't willing to leave it at that.

So I started to look into the whole memory allocators story in C++. 

What have I found? Well:

  • the surprisingly lacking design of C++98 allocators
  • a fascinating story of library design and evolution
  • understanding of some techniques which aren't generally known

Talk 1

So I told myself that this could be material for a conference talk, and I just did it! I mean, I gave a presentation at the C++Italy 2021 conference**:

I titled the talk "PMRs for Performance in C++17-20" but, as I can see in retrospect, that was a little misleading, because performance wasn't its main theme. Instead, its TOC looked like this:

  • Memory Allocators for Performance
    - how allocators can increase performance (OK, at last some general performance knowledge was brought forth here!)

  • Allocators in C++98 and C++11
    - how (and why) allocators were broken in C++98, how Bloomberg's design tried to fix it, and what parts of it were incorporated in C++11

  • Allocators in C++17 and PMRs
    - how the remaining parts of Bloomberg's design were added in C++17 and which classes in the pmr namespace implement it (a short usage sketch follows right after this list)

  • Usage Examples
    - here more pmr:: classes, functions and mechanisms were introduced

  • Advanced PMR Techniques
    - at last, the two techniques that started all of this were explained here - Wink-Out and Localized Garbage Collection - and the EBO optimization technique for allocators was also mentioned

  • Allocators in C++20 (and Beyond)
    - here I discussed PMR extensions added in C++20, talked about C++23's test_resource proposal and hinted at the possibility of delegating all of the allocator mess (sic!) to the compiler
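
As a quick taste of the pmr classes mentioned above, here is a minimal usage sketch (my own example, not a slide from the talk): a std::pmr::vector drawing all its memory from a monotonic_buffer_resource sitting on top of a stack buffer.

    // Minimal C++17 PMR usage example: container + monotonic arena.
    #include <array>
    #include <cstddef>
    #include <iostream>
    #include <memory_resource>
    #include <string>
    #include <vector>

    int main() {
        std::array<std::byte, 4096> buffer;  // raw storage on the stack

        // All allocations are bump-allocated from `buffer`; deallocation is
        // a no-op until the resource itself is destroyed or released.
        std::pmr::monotonic_buffer_resource pool{buffer.data(), buffer.size()};

        // pmr containers take a memory_resource* via a polymorphic_allocator.
        std::pmr::vector<std::pmr::string> names{&pool};
        names.emplace_back("wink-out");
        names.emplace_back("localized GC");

        for (const auto& n : names)
            std::cout << n << '\n';
    }   // `pool` goes out of scope and the memory is reclaimed in one go

This is also where the performance gain comes from: the container performs no individual heap allocations here and nothing is freed element by element - everything is given back in one shot.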

So, I must admit it was more a journey through the evolution of allocators up to PMRs, plus some more advanced techniques, than a talk about PMRs' impact on performance. But all in all, I was rather happy with my talk - there was a lot of material covered!

However, the inevitable question promptly popped up - is the Wink-Out technique UB-safe***? Oops, I didn't expect that! I tried to conjecture something, but in the end I had to admit that I wasn't sure. 😕

Besides, I was only able to give the Wink-Out and Localized GC examples in C++20, as C++17 lacked some PMR features added in C++20. 😕 And I also wanted to look at the interplay of PMRs and smart pointers but didn't quite get round to it...

Talk 2

At that point it was clear that I had to do more digging! The result was a second talk, which I proposed for the 2021 Meeting C++ conference. Luckily, it got accepted (yay!):

This time I wanted to concentrate on the two techniques that I could only briefly discuss in the previous talk, so I tried to cut out the library evolution stuff and add a discussion of the UB problems, C++17 vs C++20 code, and smart pointer usage.

Finally the TOC of the presentation looked like this:
  • Intro: memory allocators for performance and more
    - again, how and why allocators can increase performance

  • STL Allocators vs PMRs
    - how PMRs improve on the traditional C++ allocator model (spoiler - by using a virtual base class! see the sketch right after this list)

  • Advanced techniques
    1. Wink Out (and/or arenas)
      - here I also cleared up the Wink-Out UB question. It had already been answered in some proposal or TR (as I noticed later!), but I found the C++ Standard's section 9.2.2 [basic.life], which says:

      "For an object of a class type with a non-trivial destructor, the program is not required to call the destructor explicitly before the storage which the object occupies is reused or released"

      So winking out is allowed within the C++ object lifetime framework!

      - real-life examples of the technique were also presented: Google's Protocol Buffers and the JSON data structure implementation in the C++ Actor Framework

    2. Localized GC (aka self-contained heaps)
      - along with localized GC we also discussed how smart pointers work with PMRs, where the two collide, and we even touched on C++20's destroying delete feature!

  • Some lessons
    - this time we also discussed the usage costs and the open problems of PMRs and concluded that there are still quite a few of them. Maybe pushing it all down to the compiler via a language mechanism could improve the situation?
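
And since the "virtual base class" remark above may sound cryptic, here is a small illustrative sketch of my own (not a talk slide): a custom resource simply derives from std::pmr::memory_resource and overrides its three virtual hooks, so containers pick their allocation behaviour at run time without changing their type.

    // A tracing resource: forwards to an upstream resource and logs each call.
    #include <cstddef>
    #include <iostream>
    #include <memory_resource>
    #include <vector>

    class TracingResource : public std::pmr::memory_resource {
    public:
        explicit TracingResource(std::pmr::memory_resource* upstream =
                                     std::pmr::get_default_resource())
            : upstream_(upstream) {}

    private:
        void* do_allocate(std::size_t bytes, std::size_t alignment) override {
            std::cout << "allocate " << bytes << " bytes\n";
            return upstream_->allocate(bytes, alignment);
        }
        void do_deallocate(void* p, std::size_t bytes, std::size_t alignment) override {
            std::cout << "deallocate " << bytes << " bytes\n";
            upstream_->deallocate(p, bytes, alignment);
        }
        bool do_is_equal(const std::pmr::memory_resource& other) const noexcept override {
            return this == &other;
        }

        std::pmr::memory_resource* upstream_;
    };

    int main() {
        TracingResource traced;
        std::pmr::vector<int> a{&traced};                          // traced allocations
        std::pmr::vector<int> b{std::pmr::new_delete_resource()};  // plain heap

        // Unlike std::vector<int, MyAllocator>, `a` and `b` have the *same*
        // type: the behaviour is selected at run time via the virtual calls.
        a.push_back(1);
        b.push_back(2);
    }

With classic STL allocators the allocator is baked into the container's type; with PMRs it is just a pointer to a polymorphic resource, which is exactly the improvement the talk builds on.
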
All in all, I thought I had cut out so much from my initial slide set, but I was still short on time! I wonder how I managed to squeeze so much material into the first talk!

The two techniques

The example code for both talks can be found here: PmrTests.cpp, so you can just check it out (or listen to my presentations 🙂), but for completeness' sake I'll briefly introduce those two advanced techniques in this post as well.

Winking out omits all calls to the destructors of the objects in a given container and reclaims the used memory when the allocator (and its PMR) goes out of scope. Of course, no side effects in destructors are allowed here!

The slide below shows the example code:
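
If you don't have the slides at hand, here is a minimal, self-contained sketch of the idea (illustrative names, not the slide code itself):

    // Wink-Out sketch: objects are placement-constructed in memory owned by
    // a monotonic arena, their destructors are never called, and the whole
    // arena is reclaimed in one shot when the resource is destroyed.
    #include <memory_resource>
    #include <new>
    #include <string>

    // Something with a non-trivial destructor but no observable side effects
    // in it - the precondition for winking it out.
    struct Record {
        std::pmr::string name;
        int value;
    };

    void winkOutExample() {
        std::pmr::monotonic_buffer_resource arena;

        // Allocate and construct the object inside the arena.
        void* raw = arena.allocate(sizeof(Record), alignof(Record));
        auto* rec = ::new (raw) Record{std::pmr::string{"sensor-1", &arena}, 42};

        // ... use *rec ...
        (void)rec;

        // No rec->~Record() and no deallocate(): per [basic.life] the storage
        // may be reused or released without an explicit destructor call.
    }   // the arena's destructor releases all the memory at once ("wink out")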


Localized GC uses the above technique to avoid unbounded recursion and stack exhaustion when deleting complicated graph structures whose nodes are connected by smart pointers. We just fall back to naked pointers and wink out the entire graph. Graph cycles are no problem now either!

The slide below shows the example code:
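
And again, for those without the slides, a minimal illustrative sketch (not the slide code itself):

    // Localized GC sketch: all graph nodes live in one arena and are linked
    // with raw pointers, so tearing the graph down means destroying the
    // arena - no recursive destructor chain, and cycles are harmless.
    #include <memory_resource>
    #include <new>
    #include <vector>

    struct Node {
        int id;
        std::pmr::vector<Node*> edges;  // raw links inside the arena

        Node(int i, std::pmr::memory_resource* mr) : id(i), edges(mr) {}
    };

    Node* makeNode(std::pmr::monotonic_buffer_resource& arena, int id) {
        void* raw = arena.allocate(sizeof(Node), alignof(Node));
        return ::new (raw) Node(id, &arena);
    }

    void localizedGcExample() {
        std::pmr::monotonic_buffer_resource arena;

        Node* a = makeNode(arena, 1);
        Node* b = makeNode(arena, 2);

        // A cycle - fatal for shared_ptr ownership, irrelevant here.
        a->edges.push_back(b);
        b->edges.push_back(a);
    }   // destroying `arena` winks out the whole graph at once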


Want to learn more? Listen to the presentations or read the slides. Or just ask your questions in the comments section.

Resources

Here I will add a list of the resources I used when preparing the 2 talks: mainly standard proposals and technical reports of the C++ standard committee. 

--> OPEN TODO: please be patient a little more, I promise it comes soon!

___

* C++ Weekly, Ep. 250: "Custom Allocation - How, Why, Where (Huge multi threaded gains and more!)", https://www.youtube.com/watch?v=5VrX_EXYIaM

** C++Italy 2021 - Italian C++ Conference 2021, 19.06.2021, online

*** UB: Undefined Behaviour, the bane of the C++ developer's existence