Author Archives: zo0ok

Code it yourself!

I have worked for 20 years creating business value and utility using software. A common discussion is: should we build this piece of software ourselves, or should we search for something that already solves our problem and download or buy that?

I will share my experiences. These are just anecdotes, of course, but they are also about learning from mistakes I have seen and experienced.

BizTalk 2002

My first job was about implementing BizTalk 2002 as a message broker between several ERP systems. In hindsight, the actual requirements were:

  • Moving XML files from one network drive to another
  • Simple XML file mapping / transformation (in the actual case, this could probably have been avoided altogether with a proper architecture in the first place)
  • Error handling for bad messages – being able to tell what was wrong with a particular message
  • Different queues, so that many messages that are not time critical do not delay an urgent message
  • Being able to flush queues in case large amounts of messages are received in error (this happened several times)

I think a competent programmer could have built this (in 2002) using Python, IIS (for GUI) and the filesystem on Windows in a few weeks.
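To make that claim concrete, here is a minimal sketch of such a file-based broker core in Python. The directory names and the priority scheme are hypothetical illustrations, not the actual system:

```python
# Sketch of a file-based message broker core: move well-formed XML
# messages along, quarantine bad ones with a readable error note,
# and process queues in priority order. Directory layout is made up.
import os
import shutil
import xml.etree.ElementTree as ET

def process_queue(inbox, outbox, errors):
    """Move well-formed XML files from inbox to outbox;
    quarantine bad ones together with what was wrong with them."""
    for name in sorted(os.listdir(inbox)):
        src = os.path.join(inbox, name)
        try:
            ET.parse(src)  # validate: is this well-formed XML?
        except ET.ParseError as exc:
            # Keep the bad message and record *why* it failed.
            shutil.move(src, os.path.join(errors, name))
            with open(os.path.join(errors, name + ".err"), "w") as f:
                f.write(str(exc))
            continue
        shutil.move(src, os.path.join(outbox, name))

def run_once(queues):
    # Separate queues, so urgent messages are never stuck behind bulk ones.
    for inbox, outbox, errors in queues:  # ordered by priority
        process_queue(inbox, outbox, errors)
```

Flushing a queue of bad messages is then `rm inbox/*` instead of hours in a GUI, and deployment is copying a script, not restoring a SQL-backed configuration.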

We were using BizTalk 2002 and MSMQ (Microsoft Message Queueing) instead of file shares and we ran into problems like:

  • GUI based BizTalk configuration stored in SQL database made automated configuration and deployment a nightmare.
  • MSMQ was limited to (somewhat less than) 2 GB of total message storage, and viewing and deleting messages in MSMQ was quite cumbersome when there were thousands or tens of thousands of them
  • BizTalk throughput was quite horrible, since messages passing through the platform were read and written to multiple SQL tables along the way
  • Errors were logged in the Windows Event Log (all in the same log), with a GUID identifying the actual message. All failed messages (regardless of type or integration) appeared in a single view, and only cross-searching the event log by GUID would give details about a particular message.
  • All messages were being processed in a single queue, with no way to give anything priority.
  • Deleting messages that had been received in error took literally several hours of manual work in the GUI

At the time, the license cost alone for a cluster with two BizTalk 2002-servers and two SQL Servers amounted to approximately 12-24 months of salary for a programmer.

The SQL Server cluster is its own story. We had dozens or hundreds of SQL Servers in our organisation, but this was the only application considered critical enough for a cluster. So nobody knew how the cluster really worked, and we had many problems with it. After a few years we just turned off one of the nodes, and the other node was doing fine until the plug was eventually pulled for the entire system.

BizTalk 2002 probably lived in production for 10-12 years; for half of that time neither BizTalk, Windows 2000 nor SQL Server 2000 was supported by Microsoft.

Upgrading to BizTalk 2004 was impossible, as it was a completely different product, and it was in many ways even worse than BizTalk 2002. We did evaluate it thoroughly.

webMethods

After the disaster with BizTalk 2002 we decided to get a new message broker. By now I had learnt the real requirements, and I could have built what we needed in Python on IIS (or any reasonable language and any reasonable web server). However, relying on our own code was too scary for management, so after a long evaluation we purchased webMethods (for a license cost of more than a yearly salary for a senior programmer). webMethods was not particularly suitable out of the box, but it was a decent Java-based development and server environment, with good support for working with XML. So I built the integration platform that I knew we needed using webMethods instead of Python.

This was a great success for many years. webMethods was eventually purchased by Software AG and the direction of the product changed, but the core features that we had limited ourselves to using were still in place (had we used many of the advanced features of webMethods, our architecture and investment would have broken down much quicker). Finally, a conflict over licensing, not technical issues, made my employer invest many man-hours in a 1-to-1 migration from webMethods to another (proprietary) platform, but by that time I was busy doing other things. If we, on the other hand, had built the message broker we needed in Python, it could still have been running 20 years later with little need for maintenance.

SQL Server Integration Services

Data was to be exported from an Oracle database to text files, and then loaded into a MS SQL Server database. This was about 1 GB of data every day, completely replacing the old data each time. However, the two databases had different purposes and different structures, so it was not entirely trivial.

Of course, some genius project manager decided that some genius SQL Server consultants were going to load the data into SQL Server using SSIS (a new version of it, that none of the consultants had any real proven experience with). After spending about a coffee break on modelling the SQL database and months of work with SSIS, the SSIS package was a catastrophe: no data validation, no error handling, unacceptable performance. The data exported from Oracle was not 100% good, that was obvious, but the SSIS package could not output any sensible errors that could be passed back to the Oracle people (remember, same story as with BizTalk: all happy path and no reasonable error handling is what you get when you pay Microsoft large amounts of money for advanced software).

At this point I said: I will help by writing a Python script that will validate the Oracle data. How to validate that the data can be properly loaded? I wrote a script that loaded the data into the SQL database. It took two weeks to program in Python, and we had fully working error handling, validation and good performance. Two consultants and many months of SSIS work left the project.
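The actual script and column layout are long gone, so as an illustration of the validate-before-load idea, here is a sketch with a hypothetical three-column record: check every exported row, and produce errors the Oracle side can actually act on:

```python
# Validate exported rows before loading; collect (line number, reason)
# for every bad row instead of failing silently or crashing.
# The three-column layout is a hypothetical example.
import csv

EXPECTED_COLS = 3  # e.g. article_id, location_id, quantity

def validate(lines):
    """Return (good_rows, errors); errors say which line failed and why."""
    good, errors = [], []
    for lineno, row in enumerate(csv.reader(lines, delimiter=";"), start=1):
        if len(row) != EXPECTED_COLS:
            errors.append((lineno, "expected %d fields, got %d"
                           % (EXPECTED_COLS, len(row))))
            continue
        art, loc, qty = row
        if not art.strip():
            errors.append((lineno, "empty article id"))
            continue
        try:
            qty = int(qty)
        except ValueError:
            errors.append((lineno, "quantity %r is not an integer" % qty))
            continue
        good.append((art, loc, qty))
    return good, errors
```

Only the clean rows go to the database; the error list goes back to the people who produced the export.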

SQL Server

The same data from above was now stored in a SQL Server that was the backend for a customer facing statistics application. The data was structured in hierarchies (locations, customers, articles) and writing SQL queries for arbitrarily deep recursive hierarchies is not that nice. It did not perform well either. In 2010 a typical report could take 10 seconds to generate on a very decent Windows server. This can be done properly with star schemas and denormalized data, but that was not how the consultants thought when they struggled to get any data imported with SSIS.

I was recovering from illness and was free to explore some concepts freely. The first Raspberry Pi was popular at the time (700MHz single core ARMv6, 512MB RAM). I took the data from the SQL database above and tried to squeeze 2 GB of SQL-exported data into a file that would fit in the RAM of the Raspberry Pi. It was easy. I basically just made an array of C-structs that contained denormalised records. All strings I put in a separate memory space, removing duplicates, and referenced them from the C-structs. It all fit in a 400 MB file on the Raspberry Pi SD card, which gave me 100 MB for Linux, the web server and the web application.

I wrote a CGI binary in C, and that binary mmap-ed the 400 MB file, did a “full table scan” and returned a few basic reports. That took less than a second, much faster than the Windows Server with SQL Server on vastly better hardware. This was just a proof of concept, but it was surprisingly simple and straightforward to write in C, and traversing recursive hierarchies is a pleasure in C compared to SQL.
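The original was C, but the idea translates directly. A sketch of the same pack-denormalised-records-and-scan approach in Python's standard library, with a hypothetical record layout:

```python
# Pack denormalised fixed-size records into one binary file, mmap it,
# and answer reports with a sequential "full table scan".
# The record layout (two ids + an amount) is a hypothetical illustration.
import mmap
import struct

REC = struct.Struct("<iiq")  # customer_id, article_id, amount: 16 bytes

def write_records(path, records):
    with open(path, "wb") as f:
        for rec in records:
            f.write(REC.pack(*rec))

def total_for_customer(path, customer_id):
    """Scan every record sequentially; the OS pages the mmap-ed file in,
    so repeated reports run from RAM."""
    total = 0
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        for off in range(0, len(mm), REC.size):
            cust, art, amount = REC.unpack_from(mm, off)
            if cust == customer_id:
                total += amount
        mm.close()
    return total
```

No indexes, no query planner: the scan is fast because the data is small, dense and contiguous.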

I am quite sure a properly designed SQL database would have performed just as well. But the thing is that SQL is used because it is supposed to help, when in fact it is not particularly suitable for full-table-scan style reports or hierarchical data. Even though competent people can make fantastic things with SQL, it is not easy, and in many cases not easier than just writing real code instead.

Optimization

In the company we had an old resource optimisation module (travelling salesman type problem), implemented directly in an ERP system. That system was to be replaced and it was obvious that the optimisation code could not be migrated in any way. The original developer did not want to make a new implementation in a new language. Many options were evaluated and no good solutions were found.

Finally I said: give me two weeks and I will write a proof-of-concept optimiser, and if you are happy with it we can add the extra features and make it production-worthy.

Inspired by my C-CGI project above, I wrote this thing in C, and it was faster and better than the old module, which had taken several minutes and occasionally even blocked parts of that ERP system. My C-CGI program typically ran in less than a second. It was so fast that most other operations, internal to the new ERP system, were slower than calling my external C-CGI code that solved an NP-hard problem.
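The optimiser itself is not described in detail in this post, so as a stand-in, here is a minimal nearest-neighbour heuristic for the travelling-salesman class of problem, just to show how little scaffolding such a core needs:

```python
# Greedy nearest-neighbour tour construction: a classic, simple heuristic
# for travelling-salesman style problems. Not the author's actual
# algorithm, only an illustration of the problem class.
import math

def nearest_neighbour_route(points, start=0):
    """Visit the closest unvisited point next, until all are visited.
    points is a list of (x, y) tuples; returns a list of indices."""
    unvisited = set(range(len(points))) - {start}
    route = [start]
    while unvisited:
        here = points[route[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, points[i]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route
```

A production optimiser would add constraints and a better search (2-opt, simulated annealing, ...), but the core remains a small, dependency-free function over plain data.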

Obviously people were sceptical of my choice of C, even more sceptical when they learnt that I used C89 ANSI C with zero dependencies (no libraries for CGI, HTTP, SOAP, etc). But I had little use for them.

Now, 10 years later, a new project was again replacing the ERP system. This time everybody was very happy to see that it took very little effort to put the C-CGI into a Docker container and run it in the cloud. It was also no problem adding a few features and improvements to the 10-year-old code. Imagine if this had been written with some 2013 .NET web technology and a lot of dependencies; it would have been much harder to move on.

My role in this

I understand if the reader at this point wonders why I let the projects fail miserably all the time when there was an easy solution. The truth is that as a junior developer it is hard to see problems in the big picture, and even harder to change people's minds. Even as a senior developer I had my role and responsibilities (in other projects or teams), and often the only thing I could do was share my mind and then watch the train crash. I also suffered burnout from trying to take responsibility for things I was not responsible for.

Management often wants big vendors that they can negotiate with and complain with, rather than putting their trust in their own developers.

By now I had enough experience, and there were enough failed projects, for my words to weigh a bit heavier with management.

Learning from this

I was privileged in the sense that I stayed for many years in the organisation, and I saw what became of the systems and code long after the projects were closed. Many developers are sent into projects, given limited background information and limited (often exaggerated) experience with the project tools, and then they leave the project about the time the system goes live.

What happens a month, a year, or five years after go-live, they never know. The entire industry of software consultants is missing out on long-term feedback. Instead it is obsessed with ever-new knowledge of new, unproven versions of relatively short-lived tools.

I stayed in the organisation because I wanted to learn.

I realised that I need to work with people who share my passion for delivering long term stable solutions, not jump to new projects and technology every 6-12 months.

A new way of doing things

I formulated some principles for new projects and a new platform:

  • Everything should be made as simple as possible, but not simpler (Einstein)
  • Text files over binary files, for data and programs, whenever possible
  • Use local files over any other forms of storage, whenever possible
  • Minimize work related to dependency lifecycles (upgrades, retirement of components)
  • Often it is better to rewrite something from scratch; it is therefore more important to build small, replaceable pieces of software that do one thing and do it well (the Unix principle) than to implement things perfectly or even document the internals
  • Performance matters: slow code itself causes instability, bugs, extra work and problems
  • As a learning developer, real learning is about methods, algorithms, concepts, analysis of requirements, design, testing – ideas that are valid decade after decade. Learning tools with short life cycles is a waste of time and effort
  • Most time should be spent learning about the (business/customer) problem (domain) and writing value-adding code.
  • Working software is often replaced not because of new requirements but because the underlying technology is obsolete or not supported. This is waste. Often, progress is not possible because we are only dealing with yesterday's problems. Thus, software should be based on standards and stable products. As a developer I am intrigued by solving new problems with new code, not by replacing old working systems, creating no real business value.
  • Linux is superior to Windows. People who only work with Windows or appreciate Windows, have slowly over the years been trained to bad taste when it comes to software design choices.

Apart from these, defensive programming (a focus on error handling) and the Agile Manifesto are good principles.

On Complexity

Another critical skill, when facing any software problem, is being able to ask yourself how long it would take to implement a solution in a suitable general-purpose language of choice (C, Python, JavaScript), using only the standard library. If you know your problem can be solved in 2 days using C, with 15 lines of JS code and the JS standard library, or in two weeks using Python, you may not need a dependency (which will cost you time to learn and configure, require that you keep it updated, may be upgraded or abandoned, and will add complexity and possibly bugs to your code). More often than not, the solution is already available as a standard command in your Linux shell, or in the standard library, if you just look for it and know what you have.
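As a hypothetical example of such a task, one that tempts people to reach for a dependency: counting distinct values in a log column. The standard library (or `cut | sort | uniq -c` in the shell) already solves it:

```python
# Count occurrences of the second whitespace-separated field of each
# log line. The log format here is a made-up example; the point is
# that this needs no third-party library at all.
from collections import Counter

def count_status_codes(log_lines):
    """Return a Counter mapping each status code to its count."""
    return Counter(line.split()[1] for line in log_lines if line.strip())
```

If a task fits in a handful of lines like this, the cost of a dependency (learning, configuring, updating, eventual abandonment) is almost certainly not worth it.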

I have found that many productivity tools (like XML mappers) simplify the easy cases and complicate the difficult cases. If you have a lot of simple cases and a lot of stupid programmers, perhaps that makes sense. But if your work is to solve problems, you are not helped by tools that assume your problem is easy. When you have a difficult problem you don't want to solve the problem while also working around the tool. This is somewhat similar to the happy path and error handling. Many tools and technologies (like Promises) seem to simplify the happy path, but when error handling is the priority (which it always should be) the tools often don't help much.

TypeScript is supposed to be superior to JavaScript because it checks parameter types. I learnt programming in Ada, so I am not impressed. If you are serious about type checking or validation, you need to check that numbers are within valid ranges, that strings represent valid things, that arguments are coherent with each other, and that objects make sense in the real world. TypeScript catches some stupid simple errors, but it does not help you define and validate real business objects in a stable way.
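What serious validation means here can be sketched in a few lines. The Order fields below are a hypothetical example, not from any system described in this post:

```python
# Business-object validation beyond what a type checker sees:
# value ranges, and coherence between fields. A type system accepts
# quantity=-5 happily; a validator like this does not.
from dataclasses import dataclass
from datetime import date

@dataclass
class Order:
    quantity: int
    unit_price: float
    delivery: date
    ordered: date

def validate_order(o):
    """Return a list of problems; an empty list means the order makes sense."""
    problems = []
    if not 1 <= o.quantity <= 10000:
        problems.append("quantity out of range")
    if o.unit_price <= 0:
        problems.append("unit price must be positive")
    if o.delivery < o.ordered:  # cross-field coherence
        problems.append("delivery before order date")
    return problems
```

Every field above has a valid *type* even when the order is nonsense; the checks a business actually needs live in code like this, in any language.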

How is it going?

For almost 10 years now I have been running a software platform with multiple applications based on Node.js and Vue.js and the principles mentioned above. Stability and performance are good, new code is deployed to production almost daily, and test coverage is decent. Even better, the business has confidence in what I do, so I have more or less complete freedom as developer/architect, and I spend no time on project plans, budgets, time reporting or bureaucratic processes.

I will describe a few relevant choices.

Angular.JS

In 2014 Angular.js (Angular v1) seemed to be a good choice and it was quite trendy. Since then the people behind Angular.js have abandoned it, Vue and React have emerged, and current versions of Angular are mostly not compatible at all with Angular.js.

I adopted Angular.js completely. The good thing is that only a small subset of all features of Angular.js were used and no Angular.js extensions/plugins (such as a router) were used, so it has been quite easy to simplify the old Angular controllers and migrate them to Vue2 and later Vue3. This migration is almost complete by now.

According to my own principles I should not have used Angular.js but relied on “vanilla” JS and direct DOM manipulation instead. In hindsight, I do not know if that would have been better. But I did not have, and still do not have, the knowledge of direct DOM manipulation needed to build big complex SPAs without using Angular or Vue.

But it is interesting to note that Angular.js was the only major dependency the platform relied on, it aged badly, and replacing it has been rather costly.

I can note that Web Components (the standard) was not a realistic option in 2015, and it was not a realistic option years later when I evaluated it either. So possibly Vue3 is a good choice today.

Argon2

Password “encryption” is done using Argon2. I wrote a separate post about it. I am obviously not implementing the encryption code myself, I would never do anything like that.

HTML Canvas / Graphs / Maps

There has been some need to generate graphs and maps. This has been done with standard HTML/JavaScript Canvas. None of the graphics generated are particularly standard-looking, and a simple graph/map library would not have met the requirements.

Other dependencies

There are other dependencies for things that make no sense to implement directly in JavaScript.

  • wkhtmltopdf – used to create PDF, comes with Debian
  • libxlsxwriter – used to create Excel files (via a small C program)
  • trumbowyg – used to give end users HTML editing capability
  • nginx – https reverse proxy
  • NPM packages (dev only)
    • mocha – a test runner – a rather unnecessary dependency, especially now that Node.js has its own test capability
    • eslint – to validate JS code
    • htmllint – to validate HTML
    • c8 – to get test coverage and other statistics

Conclusions

Working in software projects loaded with bureaucracy, unrealistic expectations and new, unproven tools that nobody masters is like driving a convertible on the highway with the roof off. It is tiring; you cannot go on for long.

Working in a maintenance situation where all you do is replacing working software because the dependencies are no longer supported is the same thing.

However, when you stop relying on tools and dependencies that run out of fashion and support, and when you focus on understanding and solving the real problem – not stupid invented technical problems – you start to deliver real value. That way you build trust and can get rid of the bureaucratic overhead. It is like getting the roof back on: just cruising comfortably, making fast progress.

Whisky tasting notes 2024

Macallan Sherry Oak Cask vs Macallan Quest: Similar color and aroma. There is something un-fresh about Sherry Oak Cask that is “missing” in Quest. Surprisingly much bourbon flavour in Sherry Oak Cask, not bad flavour. Quest has a more flat, bitter flavour. Sherry Oak Cask wins.

Bushmills Black Bush vs Macallan Quest: Bushmills is paler, with a more flowery and fruity aroma. Macallan surprisingly subtle in comparison. Bushmills tastes sweet, flowery and some caramel, rather thin but not bad. Macallan has more bourbon flavour, probably some sherry but that is not what I think of after Bushmills. Another taste of Bushmills and it still holds up, and back to Macallan a bit sharp and alcohol. Bushmills wins.

Johnny Walker White Walker vs Macallan Quest: JW is paler, smells a bit of caramel, Macallan is more subtle on the nose. JW has a sweet, somewhat chemical flavour, Macallan more bourbon and the real thing. Macallan wins.

7 dlight Ichiro Mizunara Reserve vs Macallan Quest: Macallan a bit darker. Ichiro has a sweet, somewhat raw aroma, Macallan softer and more subtle. Tasting both I find the same thing, Macallan is a bit more the safe classic choice, Ichiro is not as easy to drink but it has more to offer. Ichiro wins.

Canadian Club 100 Rye vs Hudson Four Grain: Both quite dark, Canadian Club not as dark and not as red as Hudson. Canadian Club is rather fruity on the nose, I kind of think of strawberries and cherries. Hudson smells like sawing in a piece of oak wood. Tasting Canadian Club is a bit underwhelming, it is not bad but not much happens, after a while I think I am drinking perhaps grappa or something. Hudson has a bit more complexity in the mouth, but no balance whatsoever. It tastes oak, bourbon and it is both sour and bitter. Of course there is more to talk about when it comes to Hudson, but more is not always better, and I think Canadian Club must win.

Miyagikyo Single Malt vs Yamazaki Distillers Reserve: Same color. Miyagikyo has nice sweet maltiness on the nose. Yamazaki is more dry, more spicy, even more malty – Miyagikyo is more fruity. I taste Miyagikyo and find it quite sweet, rich, soft, and something that resembles peat. Over to Yamazaki it is less rich, less complex, less balanced – yet good, some hints of fruitiness in the background, but next to Miyagikyo I think it fades.

Glenfiddich 12 vs Yamazaki Distillers Reserve: Similar color. Yamazaki has a more malty and rich aroma, Glenfiddich a little bit odd making me think of fusel oil. I taste Glenfiddich, it is quite light, but it has some complexity and develops nicely in the mouth. Yamazaki has a more integrated powerful and rich flavour. Both taste quite much standard malty whisky. Quite similar in flavour and quality, especially on the nose I think Yamazaki is a little bit ahead.

Dufftown Malt Masters Selection vs Glenfarclas 12: Very similar color. Dufftown has a bit fruitier nose, Glenfarclas drier more hay or mint or something. Quite similar. Dufftown has a very typical speyside flavour, not bad but nothing extra. Glenfarclas a bit more malty, bready, salty. I prefer Glenfarclas.

Dufftown Malt Masters Selection vs Glenrothes 12: Dufftown is paler. Glenrothes has a more sweet bourbon-like aroma. Dufftown has a rather clean dry speyside flavour, Glenrothes is sweeter, more flowery, kind of odd. I prefer the more classic Dufftown.

Dufftown Malt Masters Selection vs Edradour 10: Edradour is darker, with a more powerful aroma of what makes me think of sweet red fruits. Edradour has an odd flavour that makes me think of other things than whisky, and Dufftown being more dry and malty tastes like a classic Speyside. Dufftown wins.

Bulleit Rye vs Canadian Club 100 Rye: Bulleit slightly darker, but much more powerful on the nose, bourbon with some flowery hints (from the Rye typically). Canadian Club more caramel, a bit chemical. Tasting Canadian club it is smooth, caramel, a bit sweet. Bulleit Rye is much more powerful, raw, bourbon. Those who really like rye and bourbon probably prefer Bulleit, but I have to say I would rather drink Canadian Club.

Glenfarclas 12 vs Glenlivet Founders Reserve: Glenfarclas wins.

Glenlivet Founders Reserve vs Glenmorangie 12 Lasanta: Glenmorangie wins.

Highgrove Organic vs Yamazaki Distillers Reserve: Highgrove is paler in color, and has lighter more pure, caramel-malty aroma. Yamazaki is sweeter, deeper, a bit oily perhaps even with a hint of peat. Tasting Highgrove it tastes a lot of citrus and without water it is not particularly soft. Yamazaki is softer, richer, it fills the mouth more, and it has more complexity. Highgrove is a bit intense without complexity, and it can not match Yamazaki. Back to Yamazaki I would guess that its flavour is dominated by some wood not typical for scotch whisky, but in a balanced way.

Chivas Regal 18 vs Tamnavulin Double Cask: Chivas is darker, at first they are quite similar on the nose, classic, a bit sweet, with Chivas a little bit more mellow and Tamnavulin a bit more sour fruity and fresh. Tamnavulin tastes soft, a bit sweet more than malty and if it is sherry it is very well balanced, rather easy to drink. Now Chivas both tastes and smells a bit peated, a bit more oily, a bit more complex and a bit more challenging. Tamnavulin is good also after Chivas, it is a bit simple though. Chivas is both more complex and balanced, and wins.

Ballantines 17 vs Glen Ord 18: Ballantines is paler, with a fresh malty aroma. Glen Ord smells, liqourice? I am a little confused about the aroma here, Ballantines is lighter and less powerful, Glen Ord is a bit heavier. Both are rather balanced on the nose. I taste Ballantines and it is light, balanced, lingering nicely, quite excellent without being anything extra. Glen Ord is a bit extra, and not so little extra, a bit oily, some citrus. Ballantines is good, but more plain than Glen Ord. Glen Ord wins.

Famous Grouse vs Glen Garioch Founders Reserve: Famous Grouse is slightly paler, with a fresher aroma. Glen Garioch smells a bit oily in a bad way, not fresh. Tasting Famous Grouse, a bit thin, a bit chemical and a little bitter but drinkable. Glen Garioch is a bit sour, raw wood, young and unrefined. Back to Famous Grouse it tastes like a very safe choice. And Glen Garioch again, no I do not like it. Famous Grouse wins.

Glen Ord 18 vs Springbank 18: Glen Ord a bit more red. Big difference on the nose, Glen Ord malty and fresh, Springbank heavy and peated (but not that peated). Tasting Glen Ord, it is rich, malty, complex and tasty, very nice. Springbank, a bit sour and then a bit of sulphur. I do not like it. Glen Ord wins.

Glenlivet 16 Nadurra vs Macallan Whisky Makers: Glenlivet paler even if cask strength. Both have a sweet fruity sherry aroma, Glenlivet a bit more fresh and sweet, Macallan a bit more velvet mellow. Glenlivet first quite much bourbon in the mouth, then sweet-sour fruit, a bit unbalanced. Macallan more balanced, dry sherry. I don't particularly like Glenlivet, Macallan wins.

Macallan Fine Oak vs Macallan Whisky Makers: Fine oak is paler, with a slightly more sour and raw aroma than Whisky Makers. Very similar flavour, Whisky Makers being more soft and balanced, and wins.

Glenfiddich 15 Solera vs Macallan Whisky Makers: Glenfiddich is paler, with a more soft sweet caramel aroma, Macallan being more sherry. Glenfiddich is soft, rich, fruity, malty, lingering nicely. Macallan is more sherry. A sherry fan will probably go with Macallan, but I enjoy Glenfiddich better.

Macallan Whisky Makers vs Yamazaki Distillers Reserve: Yamazaki paler, and a bit more subtle on the nose, and Macallan is more sherry. Yamazaki tastes good, malty, classic, a bit light. Macallan tastes sherry, and I enjoy Yamazaki more.

Macallan Whisky Makers vs Redbreast 12: Redbreast a bit darker, with more bourbon aroma. Redbreast tastes a bit dry bourbon, with some fruity finish. Macallan tastes more sherry, a bit sour. I prefer the flavour of Redbreast.

Macallan Whisky Makers vs Mackmyra Reserve Elegant Ambassadör: Macallan slightly darker, more sherry aroma, Mackmyra a bit unusual wood aroma. Mackmyra also has a bit unusual woody flavour, a bit pepper, quite sweet and soft. Macallan more a classic sherry flavour. I like Mackmyra better.

Mackmyra Vit Ek vs Svensk Whisky för Ukraina: Mackmyra has a bit more dark brown color, also with a richer, more raw wood, aroma. Tasting Svensk Whisky för Ukraina, seems young, a bit of bourbon. Mackmyra has a more rich, balanced and elegant flavour. Mackmyra wins.

Bergslagen Gast vs Mackmyra Vit Ek: Mackmyra a bit darker, Bergslagen a bit more sour peated aroma. Mackmyra is a bit softer, a bit dessert-wine like. Bergslagen has a fresh flavour, both smoke and sweetness. Mackmyra a bit more balanced, softer, lingers with some fruitiness. I would say this comes down to whether you prefer a more peated salty whisky, or a softer more fruity whisky. To me, Mackmyra has more quality.

Bergslagen Two Hearts vs Mackmyra Vit Ek: Very similar quite dark color. Bergslagen has a salted caramel aroma, Mackmyra more like a lightly peated dessert wine, more fruit. Tasting Bergslagen, quite sweet, a bit raw, lingers but not very softly. Mackmyra is softer, it really makes me think of the color yellow (plums, oranges, I don't know). Back to Bergslagen, it is a bit harsh and bitter. Mackmyra wins, its light peat, softness, and lingering fruitiness is a winning concept.

Dufftown 18 vs Mackmyra Vit Ek: Mackmyra very slightly darker. Dufftown a bit more salty and fresh on the nose, Mackmyra a bit more yellow fruit and sweet, with some peat. Dufftown has a soft, fruity, malty flavour, both balanced and complex. Mackmyra tastes much younger, a bit raw, and sour. Dufftown is less powerful, not as strong, but I think it tastes better and has more quality.

Mackmyra Vit Ek vs Yamazaki Distillers Reserve: Mackmyra slightly darker, with a more powerful aroma, but Yamazaki has a more classic malt aroma. I taste Yamazaki and find it quite classic malt with some fruity sweetness to it, perhaps most orange. Mackmyra more powerful, more sour (hint of peat). I think Yamazaki has more quality, and tastes better.

Old Pulteney 18 vs Writers Tears Japanese Cask Finished: Writers Tears very slightly darker. Old Pulteney is more malty salty on the nose, something chemical oily that I do not like so much. Writers Tears is much more fruity, perhaps on the chemical side. Tasting both quite same impression, mixed feelings about both, but I think Old Pulteney is better.

Hibiki Harmony vs Miyagikyo Single Malt: Similar quite pale color. Hibiki mostly caramel on the nose, Miyagikyo much more fruity, almost like a wine. Hibiki, soft, nutty, caramel a bit salty easy to drink. Miyagikyo immediately and unexpectedly a bit peated after Hibiki. Back to Hibiki, a bit sweet punch like, a bit synthetic. Miyagikyo is both fruity and lightly peated in an unusual way, but it works. I prefer Miyagikyo.

Hazelburn 10 vs Yamazaki Distillers Reserve: Hazelburn is paler with a somewhat more raw, salty and dry aroma. Yamazaki is more balanced, full bodied, on the nose. Hazelburn is dry in the mouth, even a bit burnt, quite light. Yamazaki has a less malty, more alcohol-flavour. Back to Hazelburn, it has an open complexity with no flavour dominating and it lingers nicely. Yamazaki tastes more closed, less developed, the flavours hidden in each other (perhaps I should have added water), resulting in something somewhat chemical. Hazelburn wins.

Glen Scotia Campbeltown Harbour vs Mackmyra Vit Ek: Mackmyra slightly darker, with a more powerful, deep and oaky aroma. Glen Scotia very subtle in comparison, a bit fruity. Tasting Glen Scotia it is light, but it has some complexity with some peaty hints. Mackmyra is a bit stronger, a bit more spicy and in your face, and not as smooth and easy as Glen Scotia. Back to Glen Scotia it is thin, makes me think of a blend. Mackmyra wins.

Glen Scotia Campbeltown Harbour vs Jura Superstition: Jura very slightly darker, with a somewhat more oily aroma. Glen Scotia a bit more fruity and peat. Jura has a nice saltiness to it, but also a raw woodiness and something lightly Floki-like about it. Glen Scotia is lighter, more flawless, but a bit blend-alcohol-like. Jura tastes a bit nasty (like Floki), victory to Glen Scotia.

Glenfarclas 12 vs Glen Scotia Campbeltown Harbour: Similar color. Glenfarclas more dry, a bit herblike on the nose, Glen Scotia more fruity in a somewhat artificial way. Tasting Glenfarclas it is soft, salty, balanced, a typical speyside with some odd herbiness to it. Glen Scotia is more oily, slighly peated, a bit lighter, and it tastes alcohol like a blend. Glenfarclas wins.

Chivas Regal 12 vs Oban Distillers Edition: Oban darker in color, more powerful and sweet on the nose. Tasting Chivas I find it quite thin, with a body of nutty caramel, not bad. Oban, on the other hand, is sweet, raw and not so little sulphur or whatever it is that tastes old margarine. Not nice. Chivas wins.

7 dlight Three Ships vs Glen Garioch Founders Reserve: Glen Garioch a bit darker, with a slightly nasty fusel oil aroma. Three Ships has a light smokiness to it (resembles Mackmyra Vit Ek from the other day). Tasting Glen Garioch, a kind of spicy malty flavour, better than I expected. 7 dlight, rather sour with some peat. Back to Glen Garioch, it is more classic than 7 dlight, but a bit unpleasant. It's odd, after tasting Mackmyra Vit Ek I now appreciate 7 dlight for the same reason, I think. 7 dlight wins.

Longrow vs Mackmyra Vit Ek: Longrow is paler, with a more dry, more peated aroma. Mackmyra more fruity dessert wine. Longrow has a dry, straight, simple, quite balanced, fresh and good flavour. Mackmyra more young wood, not very raw but still woody, a bit bitter with the sourness. Longrow wins.

Highgrove Organic vs Highland Park 10 Viking Scars: Highgrove is much paler, with a lighter more fresh aroma. HP has a somewhat sour aroma that I find a bit challenging at first. Highgrove light in flavour too, very fresh, clean maltiness. HP a bit more rough, dirty and oily, some of that fusel-oil smell also in the flavour, but it is kind of part of the character in a nice way. Highgrove, somewhat bitter and uncharming. HP is richer and softer, I think that makes it an HP victory.

Dufftown 18 vs Yamazaki Distillers Reserve: Dufftown a bit darker, with a more dry, fresh, clean aroma. Yamazaki is a bit more powerful, spicy, nutty. Dufftown has a rather clean, neutral Speyside flavour, not bad but not much to write about. Yamazaki a bit more oily and more flavourful, also very balanced. Yamazaki wins.

Glen Ord 18 vs Yamazaki Distillers Reserve: Glen Ord a bit darker, and a bit more powerful and sweet aroma. Both have a classic balanced aroma though, flavours quite similar, Yamazaki somewhat more on the spot, winning narrowly.

Glen Ord 18 (2019 special release) vs Yamazaki Distillers Reserve: Very similar color and very similar aroma. Also, flavour not so different, but Glen Ord is more complex, lingers longer, and gives a richer experience. Glen Ord wins.

Arran Heavily Peated Sherry Cask vs Bowmore 18: Arran is more dark and red. Arran a bit raw wood, quite peated and sweet. Bowmore more classic malted, more dry, a bit peated but not as much as Arran, and more sherry rather than just sweet. Tasting Bowmore, a surprisingly peated flavour, salty/dry in an elegant way. Arran has surprisingly little peat flavour, a bit raw wood, artificially sweet, subtle balanced sherry. Bowmore wins, but two good peated sherry whiskies.

Bushmills 16 vs Writers Tears Japanese Cask Finished: Bushmills is darker with quite a bourbon aroma. Writers Tears a bit drier (yet a bit fruity), less powerful. Back to Bushmills, yes bourbon. Writers Tears, harder to describe the aroma. Tasting Writers Tears, after adding a bit of water, a bit bitter-sweet, balanced, not overwhelming. Bushmills is more powerful, like a soft sweet balanced bourbon. Now back to Writers Tears it smells more flowery fruity. I think this thing with Japanese finish on Mizunara wood is a quite delicate thing for enthusiasts and something that does not necessarily translate straight to a superior casual experience. I think Bushmills wins.

Agitator Select Cask Ex Islay vs Bunnahabhain Peat & Fruit Coopers Choice: Agitator is paler, with a kind of light, winelike, peated aroma. Bunnahabhain is more sweet with the peat in the background. Both are cask strength so I add water to both. Now Bunnahabhain is more peated, a sweet oily peat rather than smoke. Agitator changed less with water. I taste Agitator and find a dry, almost ashy peat, and a pure clean flavour. Bunnahabhain struggles more with what it wants to be, both sweet and smoky, with a bit of sulphur. Agitator wins.

Agitator Select Cask Ex Islay vs Bowmore 12: Agitator much paler, Bowmore amber. I am beginning to find orange in Bowmore and that, together with some peat, is what hits my nose first. Agitator is a bit lighter, both are peated, but when compared to each other they seem to be peated to a similar degree and neither is very peated. Bowmore is more oily, chemical, heavy, and Agitator is more wine and smoke. I taste Bowmore and find it surprisingly sweet, soft, I think I can say orange with a bit of smoke. Agitator is also soft, but more salt and burnt. I find Bowmore a bit odd, perhaps they want it to taste like this, but to me it is a bit chemical. Agitator is a more refreshing peated experience. Agitator wins.

Agitator Select Cask Ex Islay vs Bowmore 15: Bowmore is a dark whisky, Agitator nearly colorless. On the nose Bowmore is more sweet, orange, oily. Agitator is more light white wine and smoke. Tasting Bowmore it is surprisingly dry, a bit peated, and with a fruity finish. Agitator is a more simple experience, it is dry and a bit of smoke. Trying Bowmore again, there is much to discover in Bowmore, it lingers nicely. I don’t completely love Bowmore, but Agitator is too plain and simple to match the more complex and interesting Bowmore.

Glen Scotia Campbeltown Harbour vs Johnnie Walker White Walker: Glen Scotia somewhat more pink and clear in color, Johnnie Walker more yellow. Glen Scotia a bit sour (hint of peat), almost flowery on the nose. Johnnie Walker is more sweet, spicy, creamy, coffee? I taste Johnnie Walker and it has a thick feeling in the mouth, like it is actually sweetened, and it has a soft, somewhat chemical, easy-to-drink flavour. Glen Scotia is also soft, balanced, clean but with some complexity. Back to Johnnie Walker it is mostly sweet, probably vanilla. Campbeltown Harbour is nice in a classic way, victory to Glen Scotia.

Nikka Coffey Malt vs Tamnavulin Double Cask: Tamnavulin slightly darker, or at least more red. Surprisingly similar aroma, both have a quite classic malt whisky aroma on the sweet side. Nikka is more vanilla, cream and spice. Tamnavulin is more fruit. I taste Nikka and it has something spiced about it, like punch. I taste Tamnavulin, and it actually tastes quite similar, yet a bit fruity. Back to Nikka it makes me think of honey rather than fruit. I find the Nikka experience more convincing somehow, Nikka wins.

Bushmills Original vs Tullamore Dew: Bushmills slightly paler, with a slightly more sweet caramel and soft aroma. Tullamore Dew smells more of pure alcohol. Bushmills also tastes a bit softer and sweeter, Tullamore is more burnt, more bitter and tastes more of alcohol. Bushmills wins.

Excessive SYS CPU with NodeJS 20 on Linux

I am running a system, a collection of about 20 Node.js processes on a single machine. Those processes do some I/O to disk and they communicate with each other using HTTP. Much of the code is almost 10 years old and this system first ran on Node 0.12. I can run the system on many different machines and I have automated tests as well.

The problem demonstrated for idle system using top

I will now illustrate the problem of excessive SYS CPU load under Node 20.10.0 compared to Node 18 on an idle system, using top.

TEST (production identical cloud VPS, Debian 11.8)

Here the system running on Node 18 has been idling for a little while.

top - 12:44:46 up 3 days, 23:21,  4 users,  load average: 0.02, 0.44, 0.35
Tasks: 109 total,   1 running, 108 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.5 us,  0.7 sy,  0.0 ni, 96.4 id,  0.1 wa,  0.0 hi,  0.3 si,  0.1 st
MiB Mem :   3910.9 total,    948.2 free,   1484.8 used,   1478.0 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   2166.2 avail Mem

Upgrading to Node.js 20.10.0 and letting the system idle a while gives:

top - 12:54:20 up 3 days, 23:30,  2 users,  load average: 0.79, 1.74, 1.16
Tasks: 108 total,   3 running, 105 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.3 us, 20.4 sy,  0.0 ni, 76.0 id,  0.0 wa,  0.0 hi,  1.3 si,  0.0 st
MiB Mem :   3910.9 total,    809.8 free,   1316.8 used,   1784.3 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.   2347.7 avail Mem

As you can see, the SYS CPU load is massive under Node 20.

RPI v2, Raspbian 12.1

Here the system running on Node 18 has been idling on a RPi2 for more than 15 minutes.

top - 12:38:36 up 42 min,  2 users,  load average: 0.13, 0.11, 0.63
Tasks: 133 total,   2 running, 131 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.2 us,  1.2 sy,  0.0 ni, 95.6 id,  0.0 wa,  0.0 hi,  0.1 si,  0.0 st
MiB Mem :    971.9 total,    436.0 free,    324.3 used,    263.3 buff/cache    
MiB Swap:   8192.0 total,   8192.0 free,      0.0 used.    647.6 avail Mem

This is a very underpowered machine, but it is ok.

Upgrading to Node.js 20.10.0 and letting the machine idle gives:

top - 12:55:09 up 59 min,  2 users,  load average: 0.56, 1.38, 1.32
Tasks: 139 total,   1 running, 138 sleeping,   0 stopped,   0 zombie
%Cpu(s):  4.3 us, 12.6 sy,  0.0 ni, 82.7 id,  0.3 wa,  0.0 hi,  0.1 si,  0.0 st
MiB Mem :    971.9 total,    429.5 free,    327.9 used,    266.5 buff/cache    
MiB Swap:   8192.0 total,   8192.0 free,      0.0 used.    644.0 avail Mem

Again, a quite massive increase in SYS CPU load.

The problem demonstrated using integration tests and “time”

On the same TEST system as above, I run my integration tests on Node 18:

$ node --version
v18.13.0
$ time ./tools/local.sh integrationtest ALL -v | tail -n 1
Bad:0 Void:0 Skipped:8 Good:1543 (1551)

real 0m27.277s
user 0m17.751s
sys 0m4.251s

Changing to Node 20.10.0 instead gives:

$ node --version
v20.10.0
$ time ./tools/local.sh integrationtest ALL -v | tail -n 1
Bad:0 Void:0 Skipped:8 Good:1542 (1551)

real	0m56.958s
user	0m12.875s
sys	0m36.931s

As you can see, SYS CPU load increased dramatically.

Affected Node versions

There is never a problem with Node.js 18 or lower.

Current Node.js 20.10.0 shows the problem (on some hosts).

My tests (on one particular host) indicate that the excessive SYS CPU load was introduced with Node.js 20.3.1. The problem is still there with Node 21.

There is an interesting open Github issue.

Affected hosts

I can reproduce the problem on some computers with some configurations. Successful reproduction means that Node 18 runs fine and Node 20.10.0 runs with excessive SYS CPU load.

Hosts where problem is reproduced (Node 20 runs with excessive SYS CPU load)

  1. Raspberry Pi 2, Raspbian 12.1
  2. Intel NUC i5 4250U, Debian 12.1
  3. Cloud VPS, Glesys.com, System container VPS, x64, Debian 11.8

Host where problem is not reproduced (Node 20 runs just fine)

  1. Apple M1 Pro, macOS
  2. Dell XPS, 8th gen i7, Windows 11
  3. Raspberry Pi 2, Raspbian 11.8
  4. QNAP Container Station LXD, Celeron J1900, Debian 11.8
  5. QNAP Container Station LXD, Celeron J1900, Debian 12.4

Comments to this

On the RPi, upgrading from 11.8 to 12.1 activated the problem.
On QNAP LXD, neither 11.8 nor 12.4 shows the problem.

Thus we have Debian 11.8 hosts that exhibit both behaviours, and we have Debian 12 hosts that exhibit both behaviours.

Conclusion

This problem seems quite serious.

It affects recent versions of Debian in combination with Node 20+.

I have seen no problems on macOS or Windows.

I have tested no other Linux distributions than Debian (Raspbian).

Solution

It seems this is a kernel bug with io_uring, at least according to the Node.js/libuv people. That is consistent with my findings above about affected machines.

There is a workaround for Node.js:

UV_USE_IO_URING=0

It appears to be intentionally undocumented, which I interpret to mean it will be removed from Node.js once no common kernels are affected.
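In practice, the workaround is applied by setting the variable in the environment before starting Node. A minimal sketch (server.js is a hypothetical entry point, not from my system):

```shell
# Disable libuv's io_uring backend for any Node process started from this shell.
export UV_USE_IO_URING=0
# node server.js

# Or set it just for a single invocation:
# UV_USE_IO_URING=0 node server.js
echo "UV_USE_IO_URING=$UV_USE_IO_URING"
```

If your processes are started by systemd, the equivalent would be an Environment=UV_USE_IO_URING=0 line in the unit file.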

I will stay away from Node.js 20, at least in production, for a year and see how this develops.

Elemental Dice – Cerium Problem

After a few months in a box in a closet, it is obvious that Cerium has a problem.

I have received a replacement Cerium die, with cerium in resin.

Debian 12 on a 2-drive NUC

After my relative success with Debian 12 on my Hades Canyon I decided to install Debian 12 on an older NUC as well, the NUC D54250WYKH with an i5-4250U. The nice thing with this NUC is that it has both an mSATA slot and room for a good old 2.5-inch drive. So I have:

  • 1 TB 5400 rpm HDD
  • 240 GB SSD

The annoying thing is that the BIOS/UEFI only wants to boot from the SATA drive, and the SATA drive shows up first in Linux. The easy way for me to install this computer would be

  • 240 GB SSD: /, /boot, /swap, /home
  • 1000 GB HDD: /home/sync (for syncthing data)

I could do a simple guided-encrypted-lvm-all-drive on the 240 GB, and a single encrypted partition on 1TB. But Debian 12 installation fails when it comes to installing GRUB, and the installed system does not boot.

Using LVM to make a logical volume spanning a small fast SSD and a large slow HDD makes no sense.

Partitioning in Debian

There is a guided option and a manual option for partitioning in Debian. I feel neither is good for me.

  • Guided: fails to lay out things easily on the two drives in a way that works
  • Manual: honestly, too complicated, particularly:
LVM and encryption hide few details, require many steps, and are hard to undo halfway
I understood that LILO needed to go at the beginning of the drive the BIOS was set to boot from, and that LILO needed to see /boot (whether on its own partition or on root). However, with GRUB and UEFI, there are two separate extra partitions (/boot and some FAT partition, I think) and I am not allowed to control where the GRUB code goes (if anywhere). So I do not dare to set this up manually.

To make things worse (admittedly, I used the minimal.iso Debian installer, which pulls things over the network and makes everything slower), when restarting the computer/installer there are quite a few steps before my drives are detected and I can even erase partition tables and start over.

What I did

After two failed installation attempts, and several more restarts of the installer, I found a working solution.

I first erased all traces of partitions and boot code on both drives to be on the safe side. /dev/sda is the installation media.

  1. # dd if=/dev/zero of=/dev/sdb bs=1024 count=10240
  2. # dd if=/dev/zero of=/dev/sdc bs=1024 count=10240
Guided non-encrypted setup of the 1000 GB drive, with a separate /home
  4. I didn’t even install X/Gnome this time to save time

This gave me a working computer that makes no use of my SSD. As root on the console I did:

  1. Backup the home directory of my non-root-user (just in case) to /root
  2. Remove /home from fstab
  3. Restart
  4. install cryptsetup and cryptsetup-run
  5. encrypt /dev/sda4 using cryptsetup (900GB+ HDD partition)
  6. encrypt /dev/sdb1 using cryptsetup (240GB SSD only partition)
  7. add entries to /etc/crypttab:
    sda4enc /dev/sda4
    sdb1enc /dev/sdb1
  8. Restart
  9. Give master encryption password (just once since I used the same)
  10. mkfs.ext4 /dev/mapper/sda4enc
  11. mkfs.ext4 /dev/mapper/sdb1enc
  12. add entries to /etc/fstab
    /dev/sdb1enc /home +options
    /dev/sda4enc /home/sync +options
  13. Restart

The result is almost 100% good. A few comments:

  • swap ended up on slow 1TB HDD, which I am fine with since I have 16GB RAM
  • root filesystem (with /usr, /root, /var, /etc and more) is not encrypted now, but I can live with having only my data (/home, /home/sync) encrypted
using cryptsetup/luks directly on partitions, not bothering with LVM, is much simpler
  • with /etc/crypttab and cryptsetup-run, encryption is really simple and understandable

As long as I do not run into something strange with X/Wayland/Gnome and drivers for this old NUC, I think I am good now.

What I would have wanted

I hear people have been fearing the Debian installer, up to Debian 12. I have not feared it in the past, but now I kind of do (after having issues installing two different NUCs the same week).

This is the partitioning experience I would have liked. My input/selections as [ ].

You have three drives with multiple partitions. Select all you want to keep, use as is, or delete:

/dev/sda (Debian installation media)
[KEEP] /dev/sda1 ...
[KEEP] /dev/sda2 ...
[KEEP] /dev/sda3 ...

/dev/sdb (1000 GB HITACHI)
[DELETE] /dev/sdb1  200 GB NTFS
[DELETE] /dev/sdb2  750 GB ext4
[/mnt/backup] /dev/sdb3 50 GB (just an example of something to keep)

/dev/sdc (240 GB SAMSUNG)
[DELETE] /dev/sdc1  500MB FAT
[DELETE] /dev/sdc2  400MB ext2
[DELETE] /dev/sdc3  239GB ext4

With that out of the way, I would like Debian to ask me:

What device should contain 2 small partitions for boot purposes?
[X] /dev/sdb  -- 950 GB free
[ ] /dev/sdc  -- 240 GB free

Where do you want swap partitions, and what size?
[      ] /dev/sdb -- 950 GB free
[ 16GB ] /dev/sdc -- 240 GB free

Where do you want /, and what size?
[      ] /dev/sdb -- 950 GB free
[ 30GB ] /dev/sdc -- 224 GB free

Do you want a separate /home, and what size?
[       ] /dev/sdb -- 950 GB free
[ 194GB ] /dev/sdc -- 194 GB free

Do you want a separate /var, and what size?
[       ] /dev/sdb -- 950 GB free
[       ] /dev/sdc --   0 GB free

Do you want to set up extra non-standard mounts?
[ 950GB ] [ /home/sync ] /dev/sdb -- 950 GB free

Now it is time to choose encryption and format options:

UEFI-BOOT    500MB   [ FAT ]
/boot        500MB   [ ext2 ]
/             30GB   [ ext4 + encrypt ]
/home        194GB   [ ext4 + encrypt ]
/home/sync   950GB   [ ext4 + encrypt ]
/mnt/backup   50GB   [ KEEP ]

Finally, choose encryption password (the same, or separate).

This would have been a much better experience for me. I understand there can be more cases:

Computers with multiple disks may want to use LVM to make logical volumes spanning several physical volumes. That would probably be a question between (1) and (2) above.
  • Multiple filesystems could live on a common encrypted volume, with a common encryption key, making use of LVM. That could be a question in the end:
/usr and /var are on the same disk; do you want them to share an encryption key on a common volume?

Summary

I would guess that the use cases are:

  • 80% Simple 1-drive computers (Guided, automatic, defaults)
  • 10% Multi-disk servers with specific requirements (Manual, expert mode)
  • 10% 2-3 drive computers (not well supported today with Debian 12)

I am just making 80/10/10 up, of course. The unsupported 10% can be made up of:

  • Laptops or desktops that come with a small SSD and a large HDD (it happens)
  • Desktop computers with extra drives installed
  • Simple servers

Perhaps in Debian 13!

Debian 12 on Hades Canyon NUC

I have a Hades Canyon NUC (NUC8i7HVK) that I have been running Ubuntu and later Fedora on. Ubuntu has been fine for years but I didn’t want Snap (especially not for Firefox) so I tried out Fedora and that was also fine.

I realize that I did not leave Ubuntu just because I did not want Snap; I left it because I want 100% apt. So in the long run I feel a bit alienated by Fedora, and with Debian 12 out and getting good reviews I thought about giving it a try.

This desktop computer is a bit like your typical laptop when it comes to Linux, not sure everything works out of the box. I used to struggle a bit with Bluetooth and Audio, but I don’t do those things on this machine anymore. Ubuntu and Fedora are kind of already configured with proprietary non-free drivers for this NUC, but Debian is not.

TLDR

I am running Debian 12 now, installed from the “minimal.iso” Debian image, and with a number of extra packages installed. The InstallingDebianOn page for this machine is ok. All I actually did was add non-free and contrib to sources.list and install the extra packages recommended there.

I have done no extra configuration or tweaking on Debian 12, but I am not using Audio-IN, Bluetooth or Wifi so I have not tested.

Broken Live Image

I didn’t throw Fedora 38 out without doing some testing first, so I downloaded the Live image for Debian 12 and successfully tried it. Then I installed Debian 12 from the Live image (choosing install immediately at the Grub menu), which was 99% successful. But it left some Raspberry Pi packages and some stuff in /boot, with the result that apt could not finish rebuilding the ramdisk. The computer started, but the error remained. I searched on forums; it is a known problem with the Live image, and there are solutions, but when I tried them I just got more errors. So I ended up reinstalling Debian 12 from scratch.

minimal.iso

I downloaded the minimal.iso, convenient since I did not have to use a large USB key, and installed from it. What a nice text/curses based installation! Then I got a non-booting system!

I had to disable “Intel IGD” (I think that is what it was called) in “BIOS” (it is not BIOS anymore), because this machine has an Intel GPU that is not connected to any output, and with this rudimentary Debian install, somehow the system would not start.

When that was done and I started Debian and logged in, Gnome (and neofetch, I presume) reported GPU=Software. I could watch YouTube, but with high CPU load. That was when I installed the extra packages mentioned above, and since then I have been happy.

Conclusion

Debian 12 is fine on the Hades Canyon NUC8i7HVK. The InstallingDebianOn page linked above tells you more than you need. It was written for Debian 10.7.

Trying tmux

It seems screen is old and tmux is what I should use. Here are some findings and notes.

Cheat Sheet

I found a decent Cheat Sheet.

macOS backspace issue

There seems to be a problem with backspace in tmux on macOS. I installed tmux via pkgin, so if you use brew or something, perhaps the situation is different.

The simple fix I found here was to create a ~/.tmux.conf and add one line:

set -g default-terminal "screen"

or

set -g default-terminal "screen-256color"

Other solutions fixing tmux-256color with infocmp and tic failed for me. I probably just didn’t use the right versions of the commands in the right way.

macOS resizing panes

As I understand it, panes are resized with CTRL+B followed by CTRL+ArrowKey. But CTRL+ArrowKey does something else on macOS. I have not decided if I need to solve this yet.

Scrolling

Scrolling was always a hassle in screen. In tmux, this is a no-brainer for me (again ~/.tmux.conf):

set -g mouse on

On RHEL and downstream clones

I have been using Linux, being fascinated with Linux, since 1997. It makes me sad to see the current situation with RHEL, Alma and Rocky.

I have long been a user of Debian and different versions of Ubuntu. Recently I have switched to Fedora on my workstations because I don’t appreciate Snap in Ubuntu.

I think Linux, as it is delivered, has two advantages compared to Windows (apart from price):

  • Everyone can use the same version of Linux (I don’t have arbitrary limitations on my Home computer compared to my Professional computer, or my Server computer)
  • Anyone can make their own flavour (with KDE, for Gaming, for sound engineers, for servers, without systemd, for network routers and firewalls)

To me, this is about economy. Not purchase price, but about not doing the same work over and over again, on different computers, in different projects, or in different organisations. This is about maximising synergy, and minimising waste.

RHEL

RHEL is, from my perspective, about

  • Not everyone can use the same version of Linux (because RHEL is dominant but not for everyone)
Since a few weeks back, nobody should make their own flavours of RHEL

I understand it makes sense from a corporate perspective, but it makes less sense from a holistic Linux perspective. But this was kind of true for RHEL even before last week’s shutting off of patches downstream.

To me, RHEL is less free, for lack of a better word. I can have it for 0 USD, I can get the source under GPL, but it still comes with strings attached that I would rather not have.

Alma and Rocky

I have occasionally logged in to a RHEL computer but I have never done anything with Alma or Rocky. I understand that if you technically want RHEL but do not want a relationship with Red Hat, Alma or Rocky solves that. And perhaps RHEL (or Alma or Rocky) is more fit for purpose for you than any alternative (like Debian or Ubuntu).

I always refused to use pirated Windows because I argued that even if I pay Microsoft nothing, I am still supporting their entire ecosystem, not helping things to get better. To me, Alma and Rocky are not pirated versions of RHEL (of course not). But to me, they also do not contribute to making RHEL or any other Linux system better. And they do not make the real alternatives to RHEL any more viable, while still supporting the RHEL ecosystem. They are just a community effort to duplicate work, and from my perspective that effort could have been used for something better (like Debian, if you want free Linux).

Fedora -> CentOS Stream -> RHEL -> Clones

I kind of agree with the Red Hat position, that supporting Fedora and CentOS Stream, upstream, is their best way of serving the community. And that the clones themselves add nothing.

To me, Fedora and CentOS Stream make more sense and have more appeal than Alma and Rocky. But I don’t need to run enterprise applications, so perhaps I do not understand.

Red Hat business model

As I understand it (and I just run Debian on my servers, so I may not know), Canonical has free downloads available for all versions of Ubuntu (also enterprise server versions that compete with RHEL). But you can pay for support if you want.

If Red Hat did the same, Alma and Rocky would disappear. Or they would turn into niche variants/remixes of RHEL. I have seen other places in the open source world where you need to pay for extended support, which seems to be much of what RHEL and its cost are about.

I read that Red Hat realised that customers had 1 paid RHEL computer and 999 CentOS computers, and the support requests were always for the RHEL computer. That was why Red Hat moved CentOS upstream. Perhaps that was the wrong move to increase customer RHEL support loyalty, and perhaps this latest move of Red Hat is also the wrong move for the same old problem.

Conclusion

Alma and Rocky exist only because Red Hat and RHEL come with strings attached that many people in the Linux world do not want. However, there were still strings, and now Red Hat pulls them.

There are only two good solutions:

  1. Red Hat understands the real need for no strings attached
2. People move away from RHEL entirely, and truly support the real alternatives

I hope for any of these. Not for a RHEL-Alma-Rocky conflict situation.

Oracle Free Compute Instance: Incoming TCP

I learnt that Oracle is offering a few free virtual machines to individuals. There are few strings attached and the machines available are quite potent. Search for Oracle always free compute instance.

The very basics are:

  • 1 CPU AMD or 1-4 CPU ARM
  • 1 GB RAM (AMD) or up to 6 GB RAM (ARM)
  • 47 GB of Storage
  • 10 TB of network traffic per month
  • Choice of Linux distribution (Fedora, Alma, Ubuntu, Oracle, not Debian) with custom image options.

Setting up a virtual machine is quite straightforward (although there are many options). At one point you download ssh keys to connect. You save them in .ssh and connect like this (the username is different for non-Ubuntu distributions):

$ ls ./ssh
my-free-oracle.key my-free-oracle.key.pub

$ ssh -i ./ssh/my-free-oracle.key ubuntu@<IP ADDRESS>

That was all very good and easy, but then I wanted to open up for incoming traffic…

Incoming traffic is very easy!

The Oracle cloud platform is rather complex. There are many different things you can configure that are related to traffic. What you need to configure is:

  • Virtual Cloud Network -> Security List -> Add Ingress Rule
  • Configure linux firewall
    On ubuntu for proof of concept: $ sudo iptables -F INPUT

If you set up apache and add an ingress rule for port 80 as above, you should have a working web server.
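Flushing the whole INPUT chain is fine for a proof of concept, but a narrower rule is safer. A sketch (this assumes Oracle's stock Ubuntu image, which ships iptables rules ending in a REJECT rule, so the ACCEPT must come before it; -I without a position inserts at the top of the chain):

```shell
# Allow new inbound TCP connections on port 80 (HTTP) only,
# inserted ahead of the image's catch-all REJECT rule.
sudo iptables -I INPUT -p tcp --dport 80 -m state --state NEW -j ACCEPT
```

Note that such a rule is not persistent across reboots unless you also save it (for example with the iptables-persistent package).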

What I did

In my defence, when something does not work and you see a number of possible problems, it is hard to tell which problem you actually have. In the end, there could have been a checkbox in my Oracle profile, agreeing to some terms to allow incoming traffic, and all other configuration would have been in vain. That was how it felt. What, in the end, you do not need to create or configure is:

  • Load Balancer
  • Network Load Balancer
  • Custom route tables
  • Network security group
  • Service Gateways

The Oracle Cloud infrastructure GUI is both complex and slow, and at some point I started wondering if I should wait a few minutes for a setting to take effect (no – it is quite instant).

I made the mistake of starting with Oracle Linux, which I have never used before, so the number of possible faults in my head was even higher. I have not been playing with Linux firewalls for a few years; I started looking at UFW for Ubuntu, got all confused, and it wasn’t until I started looking into iptables directly that things worked.

I think my machine is in what Oracle calls a virtual network with only my own machines, and Oracle provides firewall rules (the Security List mentioned above), so I don’t quite see the need for restrictive iptables settings on the virtual machine itself.

My new basic .vimrc

I decided to improve my Vim situation a bit, going from disabling almost everything to a basic .vimrc I stole from someone online and modified slightly.

set nocompatible
syntax on
set modelines=0
set ruler
set encoding=utf-8
set wrap

set tabstop=2
set shiftwidth=2
set softtabstop=2
set autoindent
set copyindent
set expandtab
set noshiftround

set hlsearch
set incsearch
set showmatch
set smartcase

set hidden
set ttyfast
set laststatus=2

set showcmd
set background=dark

" from ThePrimeagen
nnoremap <C-d> <C-d>zz
nnoremap <C-u> <C-u>zz

set colorcolumn=80
set relativenumber

With that done, I had a few more questions.

Q: How do I stop search highlight when I am done searching?
A: :nohls

Q: How (outside vim) do I check number of columns of my terminal?
A: $ tput cols