Tag Archives: Simple solutions

Oracle Free Compute Instance: Incoming TCP

I learnt that Oracle is offering a few free virtual machines to individuals. There are few strings attached and the machines available are quite potent. Search for Oracle always free compute instance.

The very basics are:

  • 1 CPU AMD or 1-4 CPU ARM
  • 1 GB RAM (AMD) or up to 6 GB RAM (ARM)
  • 47 GB of Storage
  • 10 TB of network traffic per month
  • Choice of Linux distribution (Fedora, Alma, Ubuntu, Oracle, not Debian) with custom image options.

Setting up a virtual machine is quite straightforward (although there are many options). At one point you download SSH keys to connect. You save them in .ssh and connect like this (the username is different for non-Ubuntu distributions):

$ ls .ssh
my-free-oracle.key my-free-oracle.key.pub

$ ssh -i .ssh/my-free-oracle.key ubuntu@<IP ADDRESS>

That was all very good and easy, but then I wanted to open up for incoming traffic…

Incoming traffic is very easy!

The Oracle cloud platform is rather complex. There are many different things you can configure that are related to traffic. What you need to configure is:

  • Virtual Cloud Network -> Security List -> Add Ingress Rule
  • Configure linux firewall
    On Ubuntu, for proof of concept: $ sudo iptables -F INPUT

If you set up apache and add an ingress rule for port 80 as above, you shall have a working web server.
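Flushing the whole INPUT chain works as a proof of concept, but it also drops useful rules. A narrower sketch (my assumption of a minimal rule; run on the instance with root privileges) is to insert a single ACCEPT rule for TCP port 80 ahead of the image's default REJECT rule:

```
$ sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
```

Note that rules changed this way are not persistent across reboots; on the Ubuntu image they can be saved with netfilter-persistent, if installed.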

What I did

In my defence, when something does not work and you see a number of possible problems, it is hard to locate which problem you actually have. In the end, there could have been a checkbox in my Oracle profile agreeing to some terms that allow incoming traffic, and all other configuration would have been in vain. That was how it felt. What you, in the end, do not need to create or configure is:

  • Load Balancer
  • Network Load Balancer
  • Custom route tables
  • Network security group
  • Service Gateways

The Oracle Cloud Infrastructure GUI is both complex and slow, and at some point I started wondering if I should wait a few minutes for a setting to take effect (no – it is quite instant).

I made the mistake of starting with Oracle Linux, which I have never used before, so the number of possible faults in my head was even higher. I have not played with Linux firewalls for a few years. I started looking at UFW on Ubuntu, got all confused, and it was not until I looked into iptables directly that things worked.

I think my machine is in what Oracle calls a virtual network with only my own machines, and Oracle provides firewall rules (the Security List mentioned above), so I don't quite see the need for restrictive iptables settings on the virtual machine itself.

Auto-start user service in screen on Debian 11

I have a QNAP with Container Station. It allows me to essentially have a number of simple single-purpose Linux servers running on one small, nice, headless computer.

It is annoying to start everything up on each container whenever the QNAP is restarted. It is quite easy to start things automatically, but as usual, a few steps of configuration can take a while to get 100% correct before it works properly.

In my case I have:

  • Debian 11 container
  • A user named zo0ok
  • zo0ok shall run screen, and in screen run the service (in this case sonarqube)

This is what I needed to do (assuming screen and sonarqube are already in place):

Create /etc/rc.local

This is my /etc/rc.local file (it does not exist by default):

#!/bin/sh
sudo -u zo0ok screen -d -m /home/zo0ok/screen-startup.sh

The file must be executable (chmod +x /etc/rc.local). It will run the screen-startup.sh script as zo0ok (not root) when Debian starts.

Enable rc-local

Let's not complain about systemd and systemctl, but this has to be added to a new file, /etc/systemd/system/rc-local.service (the Description and ExecStart lines are the essential ones; the rest is the usual boilerplate for a compatibility unit):

[Unit]
Description=/etc/rc.local Compatibility
ConditionPathExists=/etc/rc.local

[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
And the service needs to be enabled:

# systemctl enable rc-local.service
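Before rebooting, the whole chain can be exercised manually (a sketch; the systemctl commands as root, after which the detached screen session should show up for the user):

```
# systemctl start rc-local.service
# systemctl status rc-local.service
# sudo -u zo0ok screen -ls
```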

Create screen-startup.sh

Finally, as your non-privileged user, create the file (with your own content, of course):

#!/bin/sh
cd /home/zo0ok/opt/sonarqube-
./sonar.sh console

The script also needs to be executable (chmod +x), since screen runs it directly.

Conclusion and final words

This is obviously more convenient than logging in and running screen manually. And obviously, if you need any kind of error handling or restart management, that is a different story.

An alternative to systemd/systemctl is to use cron.
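The cron variant would be a single @reboot line in the user's crontab (crontab -e as zo0ok; assuming a cron that supports @reboot, which Debian's does):

```
@reboot screen -d -m /home/zo0ok/screen-startup.sh
```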

It looks very easy, but I had minor errors in all the steps above that were a bit tricky to find before it all worked.

Qnap, SonarQube and Elastic Search

Update 2021-10-20: Solution at the end of the post

I use SonarQube (and SonarScanner) to analyze my JavaScript source code in a project that has almost 200k lines of code.

SonarQube is a non-trivial install and it should be on a server so different developers can access it. Thus, SonarQube is an application that would make sense to ship as a packaged container, easy to set up.

I have a QNAP, with Container Station installed. That allows me to run Docker images and Linux (LXC/LXD) images. To me, this sounds like a perfect match: just install a SonarQube Docker container on the QNAP and be happy!

Well, that was not how they intended it.

Last time I checked, the SonarQube Docker image did not come with a database. That would have been the entire point! Most of the work of setting up SonarQube is related to the database. Docker supports data volumes, so it would be easy to configure the container with a single data folder for the database and everything. No. You need two Docker images.

The next problem is that SonarQube comes bundled with ElasticSearch, which has some remarkable system requirements. Your operating system needs to be configured to support:

  • 65535 open file descriptors
  • vm.max_map_count = 262144

Now the first problem is that Docker on QNAP does not support this. However, it works with LXC.
The second problem is that QNAP is getting rid of LXC in favour of LXD, but you can't have 65535 open file descriptors with LXD (on QNAP – hopefully they fix it). So I am stuck with unsupported LXC.

But the real problem is – who the f**k at ElasticSearch thought these were reasonable requirements?

I understand that if you have hundreds of programmers working on tens of millions of lines of code you need a powerful server. And perhaps at some point the values above make sense. But that these are minimum requirements just to START ElasticSearch? How f***ing arrogant can you be to expect people to modify /etc/security and kernel control parameters to run an in-memory database as a privileged user?

The above numbers seem absolutely arbitrary (I see that 65535 is 2^16-1, of course). How can 65535 file descriptors be fine if 32000 are not? Or 1000? I understand if you need to scale enormously. But before you need to scale to enormous amounts of data, it would be absolutely wasteful, stupid and complicated to open 50k+ files at the same time. And if 32000 file descriptors are not enough for your clients' big data, how long will 65535 be fine? A few more weeks?

This is arrogant, rotten, low-quality engineering (and I will change my mind and apologize if anyone can provide a reasonable answer).

All the data of SonarQube goes to a regular database. ElasticSearch is just some kind of report-processing thing serving the frontend. I did a backup of mine today, a simple pg_dump that produces an INSERT line in a text file for every database entry. Not very optimized. My database was 36 MB. So if ElasticSearch used just 36000 file descriptors, each file would correspond to 1 kB of actual data.

I don’t know if I am more disappointed with the idiots at ElasticSearch, or the idiots at SonarQube who made their quite ordinary looking GUI dependent on this tyrannosaurus of a dependency.

Hopefully the QNAP people can raise the limits to ridiculous values, so nobody at ElasticSearch needs to write sensible code.

And if anyone knows a hack so you can make ElasticSearch start with lower values (at my own risk), please let me know!


QNAP support helped me with the perhaps obvious solution. Log in as admin with ssh to the QNAP and run:

[~] # sysctl -w vm.max_map_count=262144
[~] # lxc config set deb11-sonarqube limits.kernel.nofile 65535

The first command I already knew. You have to run it whenever the QNAP has restarted.

The second command is for setting the file limit in the particular container (deb11-sonarqube is the name of my container). I guess you just need to do it once (and then restart the container), and that the setting remains.
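On a stock Linux system the sysctl setting would normally be made persistent with a line in /etc/sysctl.conf; whether QNAP's firmware honours that file across restarts is something I have not verified:

```
vm.max_map_count=262144
```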

CSV file: convert list to table/matrix

Update 2014-05-04: Added -N1/-N2 flags for sorting rows/columns numerically, and uploaded a windows binary.

I found myself having data in CSV files with three columns; two dimensions and a value. It could look like this (sample data):

2014-01,Alice,3
2014-01,Bob,4
2014-02,Alice,5

Data typically looks like this because it is very easy to output transactions in this format. That is very nice if you want to load it into a database. But for other purposes (like plotting a graph using LibreOffice Calc or even Excel) a table/matrix layout would be much nicer:

,Alice,Bob
2014-01,3,4
2014-02,5,
I could not find a standard tool for this. I thought about different options, and finally decided it was quite easy to just write a little program. So I did. You use it like this:

$ ./csv-list2table -t < list.csv > table.csv

There are a few things to think about:

  • The switches -t or -T decides if column 1 or 2 will be rows
  • -N1 and -N2 can be used to treat/sort column 1 and 2 numerically
  • Rows and columns are output sorted
  • Holes/missing values are output as ,,
  • Comma is the only accepted delimiter
  • Input must have exactly 3 columns
  • Pre/post-process data with sed and cut

As the last item mentions, sed can fix a file with delimiters other than comma, and cut can pick the columns you need from a file with more data than you need.
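For the curious, the core of the transformation can be sketched in POSIX shell and awk – a hypothetical, simplified equivalent of csv-list2table -t, assuming column 1 becomes rows and column 2 becomes columns:

```shell
# Sample three-column input: row,column,value
printf 'a,x,1\na,y,2\nb,x,3\n' > list.csv

# Collect the sorted unique column names, then make one pass over the
# row-sorted input, printing a table row whenever column 1 changes.
cut -d, -f2 list.csv | sort -u > cols.txt
sort -t, -k1,1 list.csv | awk -F, '
BEGIN {
    while ((getline c < "cols.txt") > 0) order[++n] = c
    header = ""
    for (j = 1; j <= n; j++) header = header "," order[j]
    print header
}
function flushrow(    line, j) {
    line = cur
    for (j = 1; j <= n; j++) line = line "," val[order[j]]
    print line
    split("", val)     # clear the values of the finished row
}
$1 != cur { if (NR > 1) flushrow(); cur = $1 }
{ val[$2] = $3 }
END { if (cur != "") flushrow() }
' > table.csv

cat table.csv
```

Holes come out as empty fields and both axes are sorted, like in the real tool; the -T/-N switches and input validation are left to the C program.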

Finally, the code written in standard C: csv-list2table-1.1.c.
Old version: csv-list2table-1.0.c
Windows binary: csv-list2table-1.1.exe (should be no tricky dependencies)
Test file: csv-list2table-test.txt

Building should be trivial:

gcc -O2 -o csv-list2table csv-list2table.c

I don't think the code contains anything that would confuse any C compiler on any reasonable platform.

Remote Desktop & Windows 7 Home Premium

Windows 7 Home Premium seems to be the right version of Windows for home usage, right? I mean for media, gaming, communication and casual work. And it is the version MS ships with a PC you can buy in a store. Well, Microsoft believes that Remote Desktop (the server part) is not a Home Premium feature, but a Professional feature. I think there are plenty of home-related uses for Remote Desktop (supporting a relative remotely, sitting in the sofa with your tablet while controlling your PC, access your computer from work or when traveling).

The upgrade to Professional is 180 euros, and requires some work. This is simply Microsoft making their products suck for their paying customers! I'd say people who buy a PC in a store really don't have the choice to pick Professional or Ultimate. The warez people are probably smart enough not to bother with anything less than Ultimate.

Luckily, there is a simple hack for Home Premium, and this one works today (well, yesterday), despite SP1 and everything.

Quickly and easily transfer files over network

You want to copy a file between two computers or Symbian phones, and the usual methods don't feel that attractive? Network drives, scp and ftp require configuring a server (which later might be a security risk). USB cables are never available when needed. Dropbox and Bluetooth are too slow.

A while ago I described how to copy files with netcat, but that works best on *nix and is not so easy for people who do not like the command line. And, it does not work on mobile phones.

So, I wrote a little program that does what netcat does but is simple to use and has a GUI. And I wrote it in Qt, so it works on Mac OS X, Linux, Windows and Symbian. It is exactly the same on all platforms.

Do you want a simple way to copy files over the network? Download ParrotCopy and give it a try – instructions included:


The Symbian version probably only works on Symbian^3 and has just been tested on Nokia N8. Let me know if you need a binary for an older Symbian device or a Maemo or Harmattan device.

In server mode, the program tries to connect to www.google.com:80 to figure out its own IP. It is simply an ugly hack because I had problems with other methods. You may have problems if the internet is not accessible. I will not release 1.0 until this is fixed.

You have to manually name the file you receive, and you can transfer just one file at a time. These are not bugs, but future versions will probably do better. Netcat can be combined with tar, gzip etc. I hope to add at least tar in the future. For now, making a zipfile is a simple way to transfer many files.

Release 0.9.7
Now possible to copy the contents of the status field and file/folder fields (if you want to paste them into another application).

DOS flashing in 2011

BIOS updates and other firmware flash operations still mostly need to be done from DOS. Seriously, DOS, in 2011?

I found a decent strategy. When you install your computer, make a small (256 MB) partition with a FAT16 filesystem first on your hard drive (with some skill and luck you can also use it as the Linux /boot partition). Whenever you need to flash the BIOS or anything else, put the DOS flash utility on this partition.
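Creating the partition from a Linux live CD might look like this (a sketch with a hypothetical device name – double-check the device before running, as this is destructive):

```
# fdisk /dev/sda        (create a ~256 MB primary partition, type 6 = FAT16)
# mkfs.vfat -F 16 /dev/sda1
```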

Make yourself a bootable FreeDOS CD. When you boot into freedos you just do:

  a:\> c:
  c:\> dir
  (dir output - to find the name of the update program)
  c:\> BIOS1234.exe

When everything is done you can just reset the computer with the reset-button, eject the CD, and boot your normal OS.

So, what is clever about this? It is not so easy to make FreeDOS read USB-keys or access the network, and the bootable CD is naturally read-only. But a little FAT16 filesystem first, on your first hard drive, is very easy to access. And you can easily put any upgrade files there, from any OS you like to run normally.

On a brand new computer (or hard drive), the first thing you do is boot FreeDOS (or your Linux live CD of choice) and create the little FAT partition. Then you can install your other OSes. If you got the computer with Windows already installed, and you intend to keep it, you probably don't need to flash anything anyway.

Youtube performance on PowerBook G4

My Apple PowerBook G4@866 MHz cannot handle YouTube videos… when they are Flash-based. However, go to the HTML5 version of YouTube and performance is very reasonable.

Printing to Sharp AR-M450 in Ubuntu (using CUPS)

Printing to a Sharp AR-M450 using CUPS (the standard print system for most free *NIX and even Mac OS X) is possible, but it took me some time to find the right settings. First, you need the printer's IP address.

In CUPS you choose "Network Printer" and "lpd/lpr". Then you set the address to the printer's IP and queue="lpr".

As printer driver, I chose "Generic PCL 6/PCL XL Printer", using "CUPS+Gutenprint vX.X.X Simplified".

This should get you running!

Simple method for copying files over network

Copying files between computers is fundamental. However, sometimes it is not so easy. When struggling with NFS I have been thinking: how did Microsoft get this one more right than the others? But Windows file sharing is not so very simple either, and it is not exactly getting easier with new versions.

However, copying files between two computers (on the same network) IS easy, especially between *NIX-based systems. Here are two methods, and they may work on Windows too, with the right tools installed (Cygwin).

The following examples presume you want to copy files from a client computer to a Mac OS X machine (its IP address is written <IP ADDRESS> below).

sshfs
sshfs allows you to mount any folder on the server on any folder on the client. Normal permissions apply, and you need sshfs installed on the client, and sshd on the server (you have it if you can ssh to the server).

Lets assume I have a user zo0ok on the Mac OS X machine, and I want to access that users home directory. Then I do (on the client):

  > mkdir zo0ok-on-osx
  > sshfs zo0ok@<IP ADDRESS>: zo0ok-on-osx

I need to authenticate. Done! Now the contents of the remote home directory are available locally. Note: performance is not optimal, and things like random access to files might not work well. But for many purposes it works perfectly.

Netcat (nc)
netcat (or just nc) is an insanely powerful, and simple, tool. You can pipeline things, not just between programs, but over the network. More people should know about it! The following instructions assume you are logged in and have a shell on both machines.

First just see that everything works. On the mac (server), listen to port 9999:

  macosx$ nc -l 9999

Second, on the client, send message:

  client$ echo Hello World | nc <IP ADDRESS> 9999

If everything is fine, the message was sent to the Mac and displayed on its prompt. nc should quit when it reaches end of file.

Copy a file (hello.txt – it must exist on the client):

  macosx$ nc -l 9999 > hello.txt
  client$ nc <IP ADDRESS> 9999 < hello.txt

If everything is fine, you have a file on the Mac, identical to the one on the client (use md5sum if in doubt).

Copy a large file:

  macosx$ nc -l 9999 | gunzip > ubuntu.iso
  client$ cat ubuntu.iso | gzip | nc <IP ADDRESS> 9999

If your CPU is faster than your network, the file transfer will be faster when you compress the data.

Copy a folder:

  macosx$ nc -l 9999 | tar -x
  client$ tar -c mp3collection | nc <IP ADDRESS> 9999

Of course you can use the -z switch for tar to enable compression, but for your mp3 collection it is not a wise idea (the files are already compressed).
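The tar pipeline can be rehearsed locally without nc, with an ordinary pipe standing in for the network (hypothetical folder names; -f - makes the stdout/stdin archive explicit, which some tar builds require):

```shell
# Pack a folder on one side of the pipe, unpack it on the other –
# exactly what happens over the network, minus nc.
mkdir -p mp3collection out
echo "test data" > mp3collection/track1.txt
tar -cf - mp3collection | (cd out && tar -xf -)
ls out/mp3collection
```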

Only your imagination limits what you can do with nc!