Creating a terminal application - 3 - Frontend take 1

Creating a terminal application - Frontend take 1

Intro

It has been a while since the last post, sorry for that. I was busy with daily business and the implementation of the terminal also got a bit stuck. Now I’m back on track and can feed you with the latest development news.

The backend is only halfway towards a complete terminal application. In fact it’s more like 1/5 of the way.
The only reason to use a terminal application other than the built-in one is its user interface - at least on normal operating systems where the built-in terminal works as expected.

So it’s all about user interface and functionality.

As already mentioned, I want to take the C# + QML route as it is a good way to dogfood Qml.Net and also gain some more QML / Qt insights that help me in my day job. Win-win :)

QML might not be the first choice when you need a UI that basically renders its main content all the time and also needs to be fairly fast in doing so. We don’t want to hold back the process we control by having it wait for our UI to render.

Options

During the backend development I used a very simple QML UI: basically a Label that got the whole text set whenever the terminal content updated. Performance was OK.
The problems start with formatting the text. Terminals can have colors, and text ranges can be bold or italic.
Every UI devkit I came across (OK, leaving out some of the embedded UI kits) had a way to define the text format for a subset of a string in a Label.
The MacTerminal that is part of the XtermSharp project also uses this feature of Cocoa.
QML also has something for this: HTML and CSS. Yes, you are still reading a blog post that describes the UI for a terminal application.
So QML uses its already built-in web capabilities to provide rich text formatting.

HTML and CSS

OK, so let’s just try that approach, right? I implemented the rendering functionality that translated the terminal content into a string with HTML markup (basically spans with styles).
The first test run revealed that performance dropped extremely hard. There is no way this approach is acceptable, even to me, and even less to a random user who might be interested in my take on a terminal application.
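
The core of that renderer was simple run-merging: walk the cells of a terminal line, keep extending a span as long as the formatting stays the same, and start a new span when it changes. Roughly like the following sketch (the cell and attribute types are made up for illustration, not the actual XtermSharp API):

```csharp
using System.Collections.Generic;
using System.Text;

// Hypothetical cell model - the real terminal buffer types look different.
public record Cell(char Character, string Foreground, string Background, bool Bold);

public static class HtmlRenderer
{
    // Translate one terminal line into a string of <span> elements,
    // merging neighbouring cells with identical formatting into one span.
    // (HTML escaping of special characters is omitted for brevity.)
    public static string RenderLine(IReadOnlyList<Cell> line)
    {
        var html = new StringBuilder();
        Cell? run = null;

        foreach (var cell in line)
        {
            bool sameFormat = run is not null
                && run.Foreground == cell.Foreground
                && run.Background == cell.Background
                && run.Bold == cell.Bold;

            if (!sameFormat)
            {
                if (run is not null) html.Append("</span>");
                html.Append("<span style=\"")
                    .Append("color:").Append(cell.Foreground).Append(';')
                    .Append("background-color:").Append(cell.Background).Append(';')
                    .Append("font-weight:").Append(cell.Bold ? "bold" : "normal")
                    .Append("\">");
                run = cell;
            }

            html.Append(cell.Character);
        }

        if (run is not null) html.Append("</span>");
        return html.ToString();
    }
}
```

The resulting string was then handed to the Label as rich text whenever the terminal content changed.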

So I spent a couple of hours (read: evenings) trying to optimize that approach:

  • create a Label for each line - this resulted in very complex logic on the QML side (we don’t have the low-level access C++ would have) written in Javascript. Performance was a bit better but still nowhere near where I wanted it to be
  • separate the background color from the foreground options - the rendering logic already tried to get as many characters as possible into one span by comparing the formatting attributes; when one changed, a new span got started. Separating those two groups allowed me to use nested spans. The background span changed way less often than the foreground span. Performance improvement: almost not noticeable
  • one Label per cell - the ultimate unnecessary-redraw prevention. Performance was not that bad (not good though), but the logic that had to kick in to create all those labels really went crazy when the window was resized.

Result

So the HTML approach was not going to make me happy. I decided to look for something different.
Qt Quick has a class called QQuickPaintedItem. It is meant to be subclassed and to take over the complete drawing (I think most UI devkits have something similar).

So you are basically alone with a blank canvas and can draw whatever you need.
Problem: Qml.Net does not have support for QQuickPaintedItem, so I decided to jump in and add that feature. More on that topic in the next post.

Software development, Projects

Creating a terminal application - 2 - Backend

Creating a terminal application - Backend

How Terminals work

Maybe not all of you know what goes on behind the scenes when you launch your Terminal.
The term “terminal” dates back to the 1970s (or even earlier, see Wikipedia: Computer terminal). At that time you had a real terminal device that was built so that the input/output part was separated from the computing part. Different protocols were used for those two parts to talk to each other.
A well-known terminal was the VT100, which introduced special escape codes to control e.g. the cursor or status lights.
The protocol used there is still what drives the Terminal of your operating system (at least the Unix-based ones).
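
To give you a small taste of that protocol: the escape sequences are just bytes in the output stream, so any program can emit them. A minimal C# sketch using standard ANSI/VT100 sequences:

```csharp
using System;

// "ESC [" (written here as \u001b[) starts a VT100/ANSI control sequence.
Console.Write("\u001b[31mthis is red\u001b[0m\n"); // 31 = red foreground, 0 = reset attributes
Console.Write("\u001b[2J");                        // clear the whole screen
Console.Write("\u001b[5;10H");                     // move the cursor to row 5, column 10
Console.Write("hello from row 5, column 10\n");
```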

Given this history, a terminal application today is still controlled by the process it talks to using that same protocol.
The terminal application “only” handles user input and output. It connects to the underlying process (most of the time the default shell of your OS) via a special pseudo terminal connection: it sends the user input to the shell process and receives the UI data from the shell process (using the protocol from the old days).

Hosting a shell process with DotNet

The first step in my endeavor to create a terminal application is to host the shell process. This has to be done in a special way to get a proper pseudo terminal connection.
It seems that DotNet has some problems on that end. There is something special about the dotnet process that keeps the default fork mechanism, which yields the pty (pseudoterminal) endpoints, from working.
In a C++ program you would simply call forkpty (see the man page), which does a Unix-style fork and returns the process id of the launched process and a file descriptor for the pty.
When you do that in a DotNet application it just doesn’t work.
This is why Miguel had to create a native library that executes the forkpty function to work around that problem. Source code
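
For illustration, this is roughly what pulling forkpty into C# via P/Invoke looks like (a minimal sketch - the library name and the simplified marshalling are assumptions, and as described above this direct route misbehaves inside a DotNet process):

```csharp
using System;
using System.Runtime.InteropServices;

static class Native
{
    // forkpty(3): returns 0 in the child and the child's pid in the parent;
    // the master side of the pty comes back through the first parameter.
    // The library name is an assumption (libutil on Linux, part of libSystem on macOS).
    [DllImport("util", SetLastError = true)]
    public static extern int forkpty(out int master, IntPtr name, IntPtr termios, IntPtr winsize);
}

// Hypothetical usage:
// int pid = Native.forkpty(out int masterFd, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero);
// if (pid == 0) { /* child: exec the shell */ } else { /* parent: talk to the shell via masterFd */ }
```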

Of course this is not really nice as it means I have to compile platform-specific code in order to make the process start work.
In one of the issues a maintainer of AvalonStudio (which also includes a terminal component) mentioned another approach to launching the shell process that doesn’t involve native code: Github issue.
This approach basically launches the running application again with a special argument that leads to an immediate fork (before anything else happens). They have to do some special magic with the pty file descriptors so that the terminal application can communicate with the process.
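
Stripped down to the core idea, the pattern looks something like this (a rough sketch under my own assumptions - the marker argument is made up, and the real implementation does a lot more work with the pty file descriptors):

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class Program
{
    [DllImport("libc", SetLastError = true)]
    private static extern int fork();

    static void Main(string[] args)
    {
        // Hypothetical marker argument: fork immediately, before the runtime
        // gets a chance to spin up additional threads or load anything else.
        if (args.Length > 0 && args[0] == "--pty-fork")
        {
            int pid = fork();
            if (pid == 0)
            {
                // child: wire up the pty file descriptors and exec the shell here
            }
            return;
        }

        // Normal startup: relaunch ourselves with the marker argument.
        string self = Process.GetCurrentProcess().MainModule.FileName;
        Process.Start(self, "--pty-fork");
    }
}
```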

So I had two options: use Miguel’s approach to get a classical fork but with native code, or use the rather hacky approach of AvalonStudio to avoid native code.

I decided to use the AvalonStudio approach.

I implemented the process launch and the setup of the pty connection and it worked. But when I tried to use zsh as the shell I noticed that CTRL+C didn’t stop the currently running process.
After digging around a bit I also noticed that bash complains about a missing control connection.
So the AvalonStudio approach did set up the basic communication channels (entering and receiving data works) but somehow screws up some bits that are needed for the shell to work properly.

So I decided to switch over to Miguel’s approach because the MacTerminal in his repository had a working control connection.
After implementing it I tried to launch bash from my DotNet application and it crashed with a message like “Qt was not able to release locked mutex”.
I dug around and tried to find the problem, but it seems to me that the way Qt is loaded into the DotNet process is not compatible with the way a Unix fork works.

So I had two non-working fork solutions. What to do?

I decided to stick with Miguel’s approach but pull the forking into a separate executable (ConsoleHost) and communicate with the ConsoleHost process via a named pipe connection.

Some evenings later the construction was ready and Miguel’s approach is able to fork the shell process. Performance is also good (I will cover the real performance bottleneck in one of the next posts).
There is still one quirk: I have to launch the ConsoleHost application via bash, otherwise I get the same missing control connection message. Not sure why that is.
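
The plumbing between the UI process and the ConsoleHost is plain .NET named pipes; roughly something like this (a minimal sketch - the pipe name and the message handling are assumptions):

```csharp
using System.IO.Pipes;

static class PipePlumbing
{
    // ConsoleHost process: owns the forked shell and exposes it via a named pipe.
    public static void RunConsoleHost()
    {
        using var server = new NamedPipeServerStream("terminal-console-host", PipeDirection.InOut);
        server.WaitForConnection();
        // ... shuttle bytes between the pipe and the pty master fd of the forked shell ...
    }

    // Terminal UI process: connects to the ConsoleHost, forwards key strokes
    // and receives the terminal output stream in return.
    public static void ConnectFromUi()
    {
        using var client = new NamedPipeClientStream(".", "terminal-console-host", PipeDirection.InOut);
        client.Connect();
        // ... write user input to the pipe, read terminal data from it ...
    }
}
```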

So the architecture of the terminal application looks like this:
[Diagram: Terminal app architecture]

Currently I have the backend working for Mac. Linux will probably work in a similar way, and for Windows I expect WinPty to handle all that hassle for me (at least AvalonStudio is using it and it seems to work there).

Next step: User interface (also focusing on Mac for now).
Supporting other platforms will be tackled when the basic terminal is working on the Mac.

Software development, Projects

Creating a terminal application - 1 - Intro

Creating a terminal application - Intro

History

From time to time I have to maintain servers, connect to Raspberries or access my Synology Diskstation. All this is done via SSH connections.
On Windows I loved to use an app called MobaXTerm for this. It has some really convenient features: from getting a glimpse of the system parameters of the remote machine (CPU load, RAM, storage, …) to having an X server running and getting the SSH connection configured so that X-forwarding works flawlessly.
But the best feature is a parallel SCP connection that can follow your current directory and lets you edit remote files using local applications by copying them locally and re-uploading them after they change, all under the hood.

MacOS situation

For MacOS I can’t find a terminal client that provides the same functionality. Of course there are plenty of terminal applications and there are also programs that allow SCP connections, but not everything in one package - not the all-in-one SSH toolbox.

Incident

I stumbled across a project from Miguel de Icaza (the Mono founder) called XtermSharp. It is basically the terminal business logic, which lets you plug a shell process in underneath and a UI on top.
So I thought: why not use Qml.Net to build a UI for XtermSharp and create the terminal application I need myself? I’m a software developer after all.

Project start

So I decided to build my own terminal application.
I will update you on the current state of the terminal application and all the things I encounter along the way. Not sure about the frequency (this depends on the progress I make with the terminal application).

Software development, Projects

Why even consider switching from C++ for embedded UI SW development?

Why even consider switching from C++ for embedded UI SW development?

History

In the past, when the hardware was much more limited than today, the software for that hardware was a monolithic binary containing everything: the (embedded) OS, the drivers and the application itself.
The embedded operating systems evolved and introduced things like multithreading or something in between threads and processes.
At that stage hardware resources were the biggest constraint, so switching from C to C++ was considered a crazy step. All those virtual tables that eat up RAM and ROM!
At that time the applications were rather simple. No complex UI flows or things like content that can be downloaded by the user.

Today

Fast forward to today.

We now have Linux as an operating system.
We have multiple processes running on that hardware.
We have downloadable content.
We have OTA updates.
We have much more complex UI software (features, UX, animations, states).

Of course we have much more resources than in the old days.

The switch to Linux is key here. It - at least in theory - allows us to use all kinds of technologies because Linux is a common target that is supported by a very large number of technology stacks.

The software has become much more complex over the years. The requirements for what UI software in a household appliance should look like and what features it has to support have risen dramatically.

The question we have to ask now is:

Do we still use the right tool for the problems we solve?

C++ was chosen basically because it was the only possible step away from C towards a more modern way of developing applications. There were no real alternatives to that language at the time, given the constraints.

But considering the new context, we have to question the status quo (which can be quite uncomfortable).

C++

C++ is a powerful language. No doubt. It has a couple of upsides:

Upsides (list is not complete)

1. Resource usage

You will have a hard time finding an alternative that uses fewer resources (besides C). C++ is fast, and even when we don’t optimize every copy operation out of our codebase the result is in a range where it doesn’t matter for us.
We don’t have to tweak at the language-usage level in order to get sufficient performance.

2. Status Quo

C++ is what we currently use, and it is also used in a whole bunch of field-proven products.

3. Knowledge

Our SW developers have been doing C++ for years now. There is plenty of knowledge available. If you have a problem, just ask your colleagues - there will probably be one who can help you.

But there are also a couple of downsides:

Downsides (list is not complete)

1. Native

This is also true for some of the alternatives.
C++ compiles to native code.
You have binary dependencies on the system you compile for. This means that if you have to support a number of target configurations, you most likely end up building a release for every single one (and doing all the build steps over and over).
The level of dependency management you have to do for a natively compiled artifact is much higher than for a platform-independent artifact.

2. Features

More modern programming languages and frameworks have a huge set of features that encapsulate much of the complexity. There are many examples, but for instance watching a directory / file to get notified about changes is a breeze in Python / .Net / JS / you name it, and is really hard using C++ and the standard library. Even if there are features available in the standard library or in Boost, the interface is often more complex and error-prone than in more modern languages.
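
To illustrate the point with .NET (a minimal sketch; the path and filter are made up):

```csharp
using System;
using System.IO;

// Watching a directory for changes is a couple of lines in .NET,
// while plain C++ needs platform APIs (inotify, FSEvents, ...) or extra libraries.
using var watcher = new FileSystemWatcher("/tmp/demo", "*.txt");
watcher.Changed += (_, e) => Console.WriteLine($"{e.ChangeType}: {e.FullPath}");
watcher.Created += (_, e) => Console.WriteLine($"Created: {e.FullPath}");
watcher.EnableRaisingEvents = true;

Console.ReadLine(); // keep the process alive while events come in
```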

3. Modernity

C++ is not the youngest language. It has some constructs that hint at its age.
Headers are one such thing. This idea is inherited from C:
you create a cpp file containing your code, and if other places in your code want to use it (which is the case in most situations) you have to declare your class and its members in a header file (redundant).
Or let’s take interfaces. An interface in C++ is a pure virtual class. This is because interfaces were slapped onto the already available concept of classes instead of making them a first-class citizen.
C++ is full of such weird things. Let’s take smart pointers. Those can be used (and you really should :)) to get reference-count-based memory management. Those smart pointers work great but are not first-class citizens. This is - for example - why you can’t use a smart pointer to this in a constructor: the smart pointer gets created after your object gets created, so inside the constructor it simply isn’t there yet.
I think these are all indications that C++ has gotten old. All the new standards and the improvements in the Boost library are awesome - I really appreciate the amount of work and thought that went into them - but in the end it is only another layer slapped onto C++.

4. Compilation time

Compiling C++ is very resource intensive. This leads to very long change - build - test - repeat cycles. If you factor in running unit tests, something like “live testing” is simply not possible. You thought your Android app takes long to compile because of the Kotlin-Java-DEX madness going on? That is nothing compared to a similarly sized C++ program.

One could think this is also an upside:

XKCD - Compiling - https://xkcd.com/303/

5. Complexity

C++ operates on an abstraction layer that is far below that of other languages or frameworks. This is a big “+” (haha) when you need control over behavior at that abstraction level, but a big “-” when you don’t. The number of ways a developer can get things wrong is very high in C++. Often those errors are of a very nasty kind that doesn’t surface near the root cause of the problem but much later in the control flow.

We don’t need that performance

The most resource-intensive thing our software does is rendering the UI. This is not controlled by us but by the rendering framework we use (Qt).
So in fact it wouldn’t matter if the business logic were a little bit slower than today, as it is not on the critical path. We don’t do complex calculations that would require optimizing the business logic code.
Performance might even get better if some of the cycle-based logic (there is still some) could be transformed more easily into an event-based approach using a modern language.
The only thing that is currently holding back a technology shift (technology-wise) is the ROM size of our lowest-end hardware. But this is something that will change.

We are heading into the right direction

We are currently heading in a direction where different parts of the software are more isolated. This will lead to more independent processes that can switch technology without affecting other parts of the system.
In such an environment it will be possible to be brave and test a new technology in a small project (one that can react to risks without losing much) without affecting anything else.
The groundwork for this will be evaluating different technologies to get a first idea of what the problems might be when using them in our context.
For example, I consider it crucial to have a working Qt/QML binding. The transition has to be as smooth as possible. The amount of resistance against such a change will be high enough as it is - no need to increase it further.

The social factor

The technological aspect is one thing. It is clear and easy to tackle. Even getting a clear yes/no answer is only a matter of time invested.

The much bigger problem (at least for someone like me) is the social aspect of this.

Of course: change is always something that makes you feel uncertain. The amount of uncertainty differs between people. I think this is mostly a personality and age thing. Your personality lays the groundwork, and the older you get the less open to change you tend to be. The effective openness to change is a combination of the two.

There are most likely other factors that also influence this:

  • Current status: Maybe the status you currently have in the company and / or the team is tied to the current technology stack (“hero C++ dev”) and you fear losing that?
  • History: Maybe you had to fight very hard some time ago to make the transition from C to C++ happen, and you see a change away from C++ as losing that battle in the long run?
  • Religion: What? We shall move to something that Microsoft / Google / whoever invented? Won’t happen!
  • Environment: Working in a big company vs. a small startup is also a factor. Getting change through in a big company involves many discussions, many decision makers that want to be asked and, last but sadly not least, politics. It might happen that an idea or a proposal is ignored or countered only because there are political conflicts between different areas of the company.

Communication

The key here will be to show people that change can also be something good. This is why the technical aspect is so important. Having something to show, and being able to demo advantages that affect people’s everyday work, will (hopefully) make them more open to that change.

Talking clearly about the risks and benefits is also crucial. Change always has risks tied to it, but it can also bring benefits. A clear plan for how to counter each risk and what to do if a risk materializes is needed to gain trust and to reduce the uncertainty for the people who have to decide.

Software development, Meta

Selecting the right MacBook

Selecting the right MacBook

As the decision to buy a MacBook was settled, the next step was to select which MacBook to buy.
Normally I like to have a notebook in the 13” range as it is portable, and when used at home I always connect an external monitor to it.
My previous device was a Microsoft Surface Pro 6, so I was quite used to such a screen size and didn’t need to change it.

The keyboard

There is one problem with the MacBooks that was very prominent in the media: the butterfly keyboard didn’t seem to be the best decision Apple ever made. So it was clear to me: I don’t want to get a MacBook with a butterfly keyboard.
This ruled out all 13” models before 2020 and all 15” models. So the available options were: 13” 2020 or 16” 2019.
The 2019 MacBook is roughly half a year old, so it can be bought at some discount. For an i7 (10th gen), 16GB RAM and 512GB storage configuration, the 16” cost about 100€ more than the 13”. I also noticed that reviewers mentioned the 16” has a redesigned cooling system that works much better than the old one, and that the 13” sadly doesn’t have this improvement. Another thing I’ve seen in the news is that the 13” has some USB 2.0 problems.
So I decided to go with the bigger device because it promised more power. Having a dedicated GPU can’t hurt, right?

First impressions

The piece of art arrived and I couldn’t wait to test it. I set everything up and plugged in the external monitor. The fans spun up. What was this? “Maybe some background task from the setup is still running?”
No, as it turned out: plugging in an external monitor activates the dedicated GPU, which immediately pulls 20W and needs the fans to run. Only slowly, but still.
I decided that I could live with that, knowing I have the raw power under my control.

The new Monitor

I decided to upgrade my monitor. I had a WQHD monitor from Asus. I found a rather cheap UWQHD monitor from LC-Power and pulled the trigger on that monster.
The monitor came and I plugged it into the MacBook. It works, looks good, curved monitors are great!
End of story? No.
Some days later I noticed that moving windows around felt strange. The reason was that the mouse halted every 3-5 seconds for a very brief moment.

I tried everything to get rid of that:

  • tested another mouse
  • plugged the monitor directly into the MacBook without using my docking station
  • plugged the mouse directly into the MacBook without using the docking station
  • used a Bluetooth mouse
  • reduced the refresh rate (the default for this monitor is 100Hz)

Nothing helped.

Then I searched the internet and found other people having the same problem. It seems to be a bug in the AMD kernel extension. Not sure if the special resolution of that monitor is the culprit or something else.

This is something I can’t live with. Even on a 500€ Windows notebook I would not accept this. Of course, this bug might be fixed one day. But until then I would have to watch that stuttering mouse the whole time. And now that I know about the problem I notice every stutter.

MacBook exchange

Luckily I was still inside my return window, so I decided to reset the 16” and send it back to Amazon. In the meantime Apple had fixed the USB 2.0 issue on the 13”, so I decided to go with the smaller model.

This puppy has now arrived and - of course - I immediately checked the performance on the external monitor. The result? No problems at all. No fans spinning up, no mouse stuttering, just super smooth as expected.

Conclusion

I’m not sure how this can happen. A customer who buys a 2300€ notebook should be able to expect more than that. If this model were new then OK - these things can happen and a fix is probably in the works.
But this thing is half a year old. We are at 10.15.6 now!
OK - I have to admit that I don’t know how long this problem has existed. Maybe it was introduced in one of the patches.
So if you don’t really need a dedicated GPU, the 13” might be the better option. AppleCare is also cheaper for the smaller one :)

Personal, Dev Setup

Hello Mac (again)

Hello Mac (again)

Having the perfect machine for development and also for the stuff I do besides development (web browsing, online banking, Netflix, …) is always something I strive for. As most of you probably do.
A special thing about the “dev machine” (and also about my mobile phone) is something that is very hard to explain, but I will try nevertheless.

The social factor

I always try to have a setup that is achievable by “normal” people. Being gifted with a hobby that I’m quite good at, that I can do as a full-time job and that is also paid very well, I could afford almost any setup (dev machine, mobile phone, infrastructure) that I want to have.
But I always try to find the best setup that doesn’t depend on having much money to spare, while still considering things that are important to me like privacy and robustness.

Mobile phone

My first contact with modern smartphones was the iPhone of a friend of mine. At that time the iPhone was ridiculously priced compared to the feature phones available then, but also compared to the Windows CE devices I had. Back then the main reason not to have an iPhone was simple: money. I simply could not afford one.
Then the Android devices popped up. My first smartphone was a Samsung Galaxy S. My wife and I both bought one by switching our mobile contracts.
From that moment on I started looking into app development for Android, as I really liked the idea of having creations of mine with me all the time (and I still do ;))
I kept searching for the best compromise between cost, privacy concerns and features by switching phones regularly - often 4 times a year.
Having an iPhone was not really an option for me. Its limitations and not being able to do anything you want were one aspect, but the much more important aspect for me has always been: this is something for the rich. I know plenty of people who simply couldn’t afford an iPhone, regardless of what they do.
So I stayed with Android over the years, always knowing that this is a compromise on many levels:

  • Support - Android devices get old very fast. Manufacturers lose interest even in top smartphones quite quickly. If you got 3 years of security updates you were lucky.
  • Privacy - Of course this is a very loaded topic. But even if it’s not perfect, at least Apple is trying to preserve as much privacy as it can. Of course this doesn’t prevent users from installing Facebook, WhatsApp and Instagram - that is another story. On Google’s side, privacy is only a topic as long as it affects their users’ relationships with other people. Google wants and needs all your data in order to exist.

The notebooks

A similar thing goes for the MacBooks out there. I had a MacBook for roughly a year because it was almost the same price as a comparable Windows notebook. Then I wanted to upgrade the specs (it had 8 GB of RAM and 256 GB of storage) and the price of those machines rose into areas that are no longer something the people I mentioned can afford. So I decided to switch back to Windows / Linux.
Over the years I often switched OS and devices. I gave Linux a try more than a couple of times, always finding out that something wasn’t working (battery life of notebooks is one thing, getting a beamer to work another, the list continues).
Then Microsoft made that big step and opened up. It became possible to run Linux applications in Windows. They integrated Android (at least a bit) and did (and still do) so much work for the community. So I switched back to Windows.

The turnaround

Then Apple released the iPhone SE: an iPhone 8 body with an iPhone 11 chip and an improved camera, at a price that everyone can afford.
My main argument against the iPhone was gone, while all the mentioned disadvantages of Android remained.
Another key moment for me was looking for a tablet with a pen to take notes during meetings. The cheapest and arguably best solution to this problem was an iPad. I never expected that outcome when I started this journey.
Those two things made me think about switching back to iPhones and also switching back to Macs.

The MacBooks

As I had decided to switch back to an iPhone with the iPhone 12, I also thought about developing apps for it (even if those apps are only for me).
MacBook prices are still way too high in my opinion and there is no budget option for getting a MacBook today. But to develop apps for an iPhone you need something running MacOS. So I decided to switch back to a MacBook, even if it is “something for the rich”.

Back at the Mac

Now I’m sitting here with my brand new 13” MacBook Pro. I know what you are thinking: he bought a new Intel MacBook only months before Apple drops their ARM MacBooks.
Yes, I know. Of course I thought about that. But as I expect a ~1 year transition phase where not everything works out of the box (x86-only Docker images, hardware compilation toolchains, …) I decided to stick with the proven system and wait until the dust of the transition has settled.
More on the selection process in the next post.

Personal, Dev Setup

Hello internet

Hello Internet (again)

This is my nth attempt to start something like a blog.
The previous attempts all ended with me not writing anything. This time I’ll try to force myself to write something every week.

Regarding topics, I think you can mainly expect software development related stuff: embedded SW development (if it can still be called that nowadays), .NET stuff, Android development and whatever I’m currently engaged with.

My name is Michael Lamers (also known as Devmil on the internet) and I work for B/S/H/, a Bosch company creating household appliances of different brands. The best-known ones are Bosch, Siemens, Neff, Thermador and Gaggenau. My job is defining the SW architecture of the UI software in those appliances. I also work in the UI framework team that provides all those UI projects with a basis to build their UI software on.
As a technology stack we use C++ and Qt, but even after 10 years of doing C++ I still don’t like that language. It is messy, error-prone and leads to very long development cycles (change - build - test - repeat takes really long).
So a hobby of mine is looking out for alternatives that we might be able to use at work.

We recently moved to Linux as the operating system, which opened a huge door for alternative technology stacks.
At our current stage we are still very limited in hardware resources, so right now most of the alternatives have no chance because of the size of the compiled binary plus the framework needed to run it.

But I’m very confident that there will be a time when we have enough resources so that a shift becomes technologically possible. The next step then will be to convince all the experienced C++ developers to switch to something they don’t know and where they have to start from the beginning.
Being open to such a change is a personality thing, but it also decreases as people get older. I think the task of convincing the devs that a new language might bring a bunch of advantages, and convincing the management that the risk of switching is worth taking, will be much more challenging than getting the bits to run on our hardware.

In the context of looking at alternatives I have already pushed a project called Qml.Net. The idea is: use .Net Core on our hardware and use Qt (as we currently do) as the UI layer. Fast business logic development combined with a performant UI layer. The problem is size: ~20 MB of framework gets added to the application. Too much for our lowest-level hardware.

Interesting languages I want to look at are:

  • Go
  • Python
  • Kotlin Native

So plenty of stuff for blog posts I would say :)

So long
Devmil

Personal