Category Archives: Programming

Testing Business Central with Docker

There are many articles and sources on the internet about Business Central in Docker. Most of them focus on some specific detail. With this post I hope to share some ideas about why we run BC in Docker, and the challenges, from a top-down perspective.

When you set up any non-trivial system, automated testing is helpful. It can go something like:

  1. Write new tests
  2. Update system (code or configuration)
  3. Start system (initiate from scratch)
  4. Run tests
  5. Stop system (discard everything)
  6. Repeat

The key here is repeatability. You want to know that the system starts in an identical state every time, so the tests behave the same every time and you know exactly what you are testing.

This used to be very hard with complex systems like Business Central (NAV). It is still not very easy, but with Business Central being available as Docker images, automated tests are viable.

Assets

I think it is important to understand exactly what defines the running system. In my Business Central tests, those essential assets are:

  • A docker image (mcr.microsoft.com/dynamicsnav:10.0.19041.508-generic)
  • An artifact (https://bcartifacts.azureedge.net/sandbox/16.5.15897.16650/se)
  • Parameters for docker run (to create container from image)
  • A Business Central license file
  • A custom script (AdditionalSetup.ps1)
  • Several Business Central Extensions
  • A rapid start configuration package

Other non-BC assets could be

  • Version of Windows and Docker
  • Code for automating 3-5 (start-test-stop) above
  • Test code

By sharing those assets with my colleagues, we are able to set up identical Business Central systems and run the same tests with the same results. Any upgrade of an asset may break something, or everything, and that breakage can be reproduced. So can the fix.

Business Central in Docker

Business Central is a rather large and complex beast to run in Docker. It is not just start and stop. And you will run into complications. Primary resources are:

  • Freddy's blog (you will end up there when using Google anyway)
  • NAV Container Helper (a set of PS-scripts, even reading the source code has helped me)
  • Official Documentation: APIs, Automation APIs, Powershell Tools

This is still far from easy. You need to design how you automate everything. My entire start-to-stop-cycle looks something like:

  1. I download image
  2. I run image (with parameters) to create container
  3. I start container (happens automatically after 2)
  4. Artifact is being downloaded (unless cached from before)
  5. Initial container setup is being done (user+pass created)
  6. Business Central is starting up
  7. AdditionalSetup.ps1 is run (my opportunity to run custom PS code in container)
  8. I install extensions
  9. I add (Damage Inc) and delete (CRONUS) companies
  10. I install rapid start package
  11. I run read-only-tests
  12. I run read-write-tests
  13. I stop container
  14. I remove container

There are a few things to note.

  • 1 and 4 only happen if the image/artifact is not already downloaded (cached)
  • 4, 5, 6 and 7 happen automatically inside the BC container; all I can do is observe the result (like capturing user+pass)
  • It is possible to run only 3-13 (when using the same image and artifact, and as long as the container works and gives expected results)
  • It is possible to run only 8-12 (on an already running container)
  • It is possible to run only 11 (on an already running container)
  • 8/9 should probably switch order in the future

Tooling

In order to automate, and to automate tests, you need some tooling. It can be just a scripting language or something more sophisticated. You need to pick tools for:

  • Starting, testing and stopping the whole thing
  • Steps 8-10 can be done using PowerShell (invoked in step 7) or using the Microsoft Automation API (so you need a tool that can make HTTP requests)
  • Steps 11-12 are about testing the BC APIs with HTTP requests, so you need a tool that can handle that

I already have other systems that are being tested in a similar way, so for me Business Central is just one part of a bigger integration test process. I have been using Node.js and Mocha for a while, so I use them for almost everything above. However, some things need to be done in PowerShell (AdditionalSetup.ps1) as well; more on that later.
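For illustration, a minimal Mocha test against the BC API could look like the sketch below (the instance name BC, the tenant, the port and the credentials are assumptions; use the values from your own container):

/* a minimal Mocha test sketch against the BC API */
const assert = require('assert');
const http = require('http');

const get = (path) => new Promise((resolve, reject) => {
  http.get({
    host: 'localhost',
    port: 7048, // as exposed with -p when running the container
    path: path,
    headers: { Authorization:
      'Basic ' + Buffer.from('admin:S3cret!').toString('base64') }
  }, (res) => {
    let body = '';
    res.on('data', (d) => { body += d; });
    res.on('end', () => resolve({ status: res.statusCode,
                                  json: JSON.parse(body) }));
  }).on('error', reject);
});

describe('Business Central API', () => {
  it('lists companies', async () => {
    const res = await get('/BC/api/v1.0/companies?tenant=default');
    assert.strictEqual(res.status, 200);
    assert.ok(Array.isArray(res.json.value));
  });
});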

System Requirements

You need a reasonably good Windows 10 computer. 16GB of RAM is acceptable so far, but if you have other heavy things running, or perhaps later when you get more data into BC, you will find that 16GB is too little. I am doing quite fine with my 8th gen i7 CPU.

The number 19041.508 in the docker image name corresponds to my Windows version. You may not find images for some older versions of Windows 10.

You are probably fine with a recent Windows Server. I have not tried.

Basically, Windows Docker images can only run on Windows computers, so Linux and Mac OS will not just work (there may be ways with virtualization, Wine or something, I don't know).

Performance

Ideally, when you do automated testing you want to be able to iterate fast. I have found that two steps take a particularly long time (~10 min each).

  1. Downloading image and artifact
  2. Importing the Rapid Start Configuration Package (your company data)

Fortunately, #1 is only done the first time (or when you upgrade version).

Unfortunately, #2 is something I would like to do every time (so my tests can update data, but always run on the same data set).

Given the unfortunate #2, it does not make much sense to put effort into reusing the container (docker start container, instead of docker run image). I think eventually I will attempt to write read-write-tests that clean up after themselves, or perhaps divide the rapid start package into several packages so I only need to import a small final one on every test run. This is not optimal, but that is what optimization is about.

Nav Container Helper

Freddy (and friends) have written Nav Container Helper. You should probably use it. Since I am a bit backwards, Nav Container Helper is not part of my automated test run. But I use it to learn.

I can invoke Nav Container Helper with version and country arguments to learn what image and artifact to use.

Unfortunately, documentation of BC in Docker itself is quite thin. I have needed to read the source of Nav Container Helper, and run Nav Container Helper, to understand what options are available when creating a container.

Nav Container Helper annoys me. It kind of prefers to be installed and run as administrator. It can update the hosts file when it creates a container, but that is optional. However, when removing a container, checking the hosts file is not optional, so I need to remove containers as administrator. Admittedly, I am also not very used to PowerShell.

Nav Container Helper will eventually be replaced by the newer BC Container Helper.

Image and Artifact

The images are managed by docker. The artifacts are downloaded the first time you need them and stored in c:\bcartifacts.cache. You can change that folder to anything you like (see below). The image is capable of downloading the artifacts itself (to the cache folder you assign), so you don’t need NavContainerHelper for this.

To find the best generic image for your computer:

Get-BestGenericImageName

To find artifact URLs for BC, run in PowerShell (you need to install NavContainerHelper first):

Get-BCArtifactUrl -version 16.4

Docker options and environment

When you run a docker image, which creates and starts a container, you can give options and parameters. When you later start an already existing container, it will use the same options as when created.

Since I don't use NavContainerHelper to run the image, here are the options (arguments to docker run) I have found useful.

  -e accept_eula=Y
  -e accept_outdated=Y
  -e usessl=N
  -e enableApiServices=Y
  -e multitenant=Y
  -m 6G
  -e artifactUrl=https://bcartifacts.azureedge.net/sandbox/16.5.15897.16650/se
  -e licenseFile=c:\run\my\license.flf
  --volume myData:c:\Run\my
  --volume cacheData:c:\dl
  -p 8882:80
  -p 7048:7048

I will not get into too many details but:

  • You just need to accept the EULA
  • The image may be old (whatever that means), use it anyway
  • I don't care about SSL when testing things locally
  • You need to enable the API services to use them (port 7048)
  • Since 16.4, multitenant is required to be able to create or remove companies (you usually need to add ?tenant=default to all URLs)
  • 4GB is kind of recommended; I use 6GB now when importing rapid start packages of significant size
  • For doing anything real you will most likely need a valid license file. The path given is inside the container (not on your host)
  • I have a folder (replace myData with your absolute path) on the host computer with a license file, my AdditionalSetup.ps1, and possibly more data. --volume makes that folder available (rw) as c:\run\my inside the docker container.
  • I have a folder (replace cacheData with your absolute path) where artifacts are downloaded. This way they are saved for the next container.
  • The Business Central UI listens on HTTP port 80. I expose that on my host as 8882.
  • The Business Central API services listen on HTTP port 7048. I expose that on my host as 7048.

NavContainerHelper will do some of these things automatically and allow you to control other things with parameters. You can run docker inspect on a container to see how it was actually created.
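Putting it all together, a complete docker run invocation could look something like this sketch (backticks are PowerShell line continuations; the host paths and the container name are examples):

docker run `
  -e accept_eula=Y -e accept_outdated=Y -e usessl=N `
  -e enableApiServices=Y -e multitenant=Y -m 6G `
  -e artifactUrl=https://bcartifacts.azureedge.net/sandbox/16.5.15897.16650/se `
  -e licenseFile=c:\run\my\license.flf `
  --volume c:\docker\my:c:\Run\my `
  --volume c:\docker\dl:c:\dl `
  -p 8882:80 -p 7048:7048 `
  --name bctest `
  mcr.microsoft.com/dynamicsnav:10.0.19041.508-generic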

Username & Password

The first time you run a container (that is, when you create it using docker run) it will output username, password and other connection information to stdout. You may want to collect and save this information so you can connect to BC. There are ways to set a password too; I am fine with a generated one.
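If you miss that output, Docker keeps the container's stdout, so you can read it again later:

docker logs <containername>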

AdditionalSetup.ps1

If there is a file c:\run\my\AdditionalSetup.ps1 in the container, it will be run (last). You can do nothing, or a lot with this. It turned out that installing extensions via the API requires something to be installed first. So right now I have this in my AdditionalSetup.ps1:

Write-Host 'Starting AdditionalSetup.ps1'
if ( -not (Get-Module -ListAvailable -Name ALOps.ExternalDeployer) ) {
  Write-Host 'Starting ALOps installation'
  Write-Host 'ALOps 1/5: Set Provider'
  Install-PackageProvider -Name NuGet -Force
  Write-Host 'ALOps 2/5: Install from Internet'
  install-module ALOps.ExternalDeployer -Force
  Write-Host 'ALOps 3/5 import'
  import-module -Name ALOps.ExternalDeployer
  Write-Host 'ALOps 4/5 install'
  Install-ALOpsExternalDeployer
  Write-Host 'ALOps 5/5 create deployer'
  New-ALOpsExternalDeployer -ServerInstance BC
  Write-Host 'ALOps Complete'
} else {
  Write-Host 'ALOps Already installed'
}

This is horrible, because it downloads something from the internet every time I create a new container, and it occasionally fails. I tried to download this module in advance and just install/import it. That did not work (there is something about this NuGet provider that requires extra magic offline). The Microsoft ecosystem is still painfully immature.

To try things out in the container, you can get a powershell shell inside the container:

docker exec -ti <containername> powershell

Install Extensions

I usually install extensions with the cmdlets:

  1. Publish-NAVApp
  2. Sync-NAVApp
  3. Install-NAVApp

in AdditionalSetup.ps1 (before setting up companies, though the order seems to not matter much). You need to "import" those cmdlets before using them:

import-module 'c:\Program Files\Microsoft Dynamics NAV\160\Service\Microsoft.Dynamics.Nav.Apps.Management.psd1'

I can also use the Automation API, if I first install ALOps.ExternalDeployer as above (but that is a download, which I don't like).
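For reference, a minimal sketch of that part of AdditionalSetup.ps1 (the .app path, the extension name and the tenant are examples; check the parameters against your BC version):

import-module 'c:\Program Files\Microsoft Dynamics NAV\160\Service\Microsoft.Dynamics.Nav.Apps.Management.psd1'
Publish-NAVApp -ServerInstance BC -Path 'c:\run\my\MyExtension.app' -SkipVerification
Sync-NAVApp -ServerInstance BC -Name 'MyExtension' -Tenant default
Install-NAVApp -ServerInstance BC -Name 'MyExtension' -Tenant default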

Set Up Companies

Depending on your artifact you may get different companies from the beginning. It seems you always get “My Company”. And then there is a localized CRONUS company (except for the w1 artifact), that can be named “CRONUS USA” or “CRONUS International Inc” or something.

I work for Damage Inc, so that is the only company I want. However, it seems not to be possible to delete the last company. This is what I have automated:

  1. If “Company Zero” does not exist, create it
  2. Delete all companies, except “Company Zero”
  3. Create “Damage Inc”
  4. Delete “Company Zero” (optional – if it disturbs you)

This works the first time (regardless of CRONUS presence), when creating the container. It also works if I run it over and over again (for example when restarting an already created container, or just running some tests on an already started container): I get the same result, a fresh “Damage Inc” every time, just as the first time.
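In code, the sequence can be expressed roughly like this sketch (listCompanies, createCompany and deleteCompany are hypothetical helpers wrapping the Automation API or PowerShell):

/* hypothetical helpers: listCompanies(), createCompany(name), deleteCompany(name) */
const setupCompanies = async () => {
  let companies = await listCompanies();
  if ( !companies.includes('Company Zero') ) {
    await createCompany('Company Zero');
    companies = await listCompanies();
  }
  for ( const name of companies ) {
    if ( 'Company Zero' !== name ) await deleteCompany(name);
  }
  await createCompany('Damage Inc');
  await deleteCompany('Company Zero'); // optional - if it disturbs you
};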

Install Rapid Start Package

I install a rapid start package using the Automation API. It should be possible to do it from AdditionalSetup.ps1 as well. This takes a long time. I see some advantage in using the API because I can monitor and control the status/progress in my integration-test scripts (I could output things from AdditionalSetup.ps1 and monitor that, too); see the polling sketch after the list below.

Rapid start packages are tricky – by far the most difficult step of all:

  1. Exporting a correct rapid start package is not trivial
  2. Importing takes a long time
  3. The GUI inside Business Central (rapid start packages are called Configuration Packages there) gives more control than the API – and only in the GUI can you see the detailed errors.
  4. I have found that I get errors when importing using the API, but not in the GUI. In fact, just logging in to the GUI, doing nothing, and logging out again, before using the API, makes the API import successful. Perhaps there are triggers being run when the GUI is activated, setting up data?
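A sketch of such polling (getConfigurationPackage is a hypothetical helper doing an authenticated GET against the configurationPackages resource of the Automation API; the exact status values are assumptions, so check them against your BC version):

/* poll until the rapid start package import has finished */
const waitForImport = async (packageCode) => {
  for (;;) {
    const pkg = await getConfigurationPackage(packageCode); // hypothetical helper
    if ( 'Completed' === pkg.importStatus ) return;
    if ( 'Error' === pkg.importStatus ) throw new Error('Import failed: ' + packageCode);
    await new Promise((resolve) => setTimeout(resolve, 5000)); // wait 5s between polls
  }
};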

Run tests!

Finally, you can run your tests and profit!

Use the UI

You can log in to BC with the username and password that you collected above. I am not telling you to do manual testing, but the opportunities are endless.

Stopping

When done, you can stop the container. If run was invoked with --rm, the container will be automatically removed.

Depending on your architecture and strategy, you may be able to reuse this container later.

Webhooks & Subscriptions

Business Central has a feature called webhooks (in the API they are called subscriptions). It makes BC call you (your service) when something has been updated, so you don't need to poll regularly.

This is good, but beware, it is a bit tricky.

First, M$ has decided BC will only call an HTTPS service. When I run everything on localhost with BC in a container, I am actually fine with HTTP. Worse, even if I run HTTPS, BC does not accept my self-signed certificate. This sucks! Perhaps there is a way to allow BC to call an HTTP service; I couldn't find one, so now I let my BC container call a proxy on the internet. That is crap.

Also, note that the webhooks trigger after about 30s. That is probably fine for production. For automated testing it sucks. Perhaps there is a way to speed this up on a local docker container; please let me know.

Finally, the documentation for deleting a webhook is wrong. In short, what you need to (also) do is:

  1. add ' around the id, as in v1.0/subscriptions('asdfasdfasdfasdfasf')
  2. set the header if-match to * (or something more sophisticated)

I found it in this article.
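For reference, a delete request doing both things from Node.js could look like this sketch (host, credentials and subscription id are examples):

const http = require('http');
const req = http.request({
  host: 'localhost',
  port: 7048,
  method: 'DELETE',
  path: "/BC/api/v1.0/subscriptions('asdfasdfasdfasdfasf')?tenant=default",
  headers: {
    'If-Match': '*',
    Authorization: 'Basic ' + Buffer.from('admin:S3cret!').toString('base64')
  }
}, (res) => console.log('delete status:', res.statusCode));
req.end();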

Conclusion

This – running NAV/BC inside Docker and automating test cases – is somewhat new technology. There have been recent changes and sources of confusion:

  • NAV rebranding to Business Central
  • Replacing images with Artifacts
  • Multitenant needed from 16.4
  • The on-prem vs SaaS thing

To do this right requires effort and patience. But to me, not doing this at all (or doing it wrong) is not an option.

PHP validation of UTF-8 input

The last weeks I have done some PHP programming (the web hotel where I run WordPress supports PHP, and it is trickier to run Node.js on a simple web hotel). I like to do input validation:

function err($status,$msg) {
  http_response_code($status);
  echo $msg;
}

if ( 1 !== preg_match('/^[a-z_]+$/',$_REQUEST['configval']) ) {
  return err(400,'invalid param value: configval=' . $_REQUEST['configval']);
}

Well, that was good until I wanted a name of something (like Düsseldorf, which becomes D%C3%BCsseldorf when sent from the browser to PHP). It turns out such international characters, encoded as Unicode/UTF-8, cannot be matched/tested in a nice way with PHP regular expressions.

PHP strings do not natively support UTF-8. So ü in this case becomes two bytes, neither of which matches [A-Za-z] or [[:alpha:]]. However, PHP can process it as text, use it in array keys, and output valid JSON without corrupting it, so not all is lost. Just validation is hard.
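A quick illustration of the problem (assuming the script and the input are UTF-8):

$city = 'Düsseldorf';
echo strlen($city);                          // 11 - ü counts as two bytes
echo preg_match('/^[[:alpha:]]+$/', $city);  // 0 - no match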

I needed to come up with something good enough for my purposes.

  • I can consider ALL such unicode characters (first byte 128+) valid (even though there may be strange characters, like extra long spaces and stuff, I don’t expect them to cause me problems if anyone bothers to enter them)
  • I don’t need to consider case of Ü/ü and Å/å
  • I don’t need full regexp support
  • It is nice to be able to check length correctly, and international characters like ü and å count as two bytes in PHP.
  • I don't need to match specific characters in the ranges A-Z, a-z or 0-9, but when it comes to special characters: .,:,#"!@$, I want to be able to include them explicitly

So I wrote a simple (well) validation function in PHP that accepts arguments for

  • minimum length
  • maximum length
  • valid characters for first position (optional)
  • valid characters
  • valid characters for last position (optional)

When it comes to valid characters it is simply a string where characters mean:

  • u: any unicode character
  • 0: any digit 0-9
  • A: any capital A-Z
  • a: any a-z
  • anything else matches only itself

So to match all letters, & and space: "Aau &".

Some full examples:

utf8validate(2,10,'Aau','Aau 0','',$str)

This would match $str starting with any letter, containing letters, spaces and digits, and with a length of 2-10. It allows $str to end with a space. If you don't like that, you can do:

utf8validate(2,10,'Aau','Aau -&0','Aau0',$str)

Now the last character cannot be a space anymore, but we have also allowed - and & inside $str.

utf8validate_error

The utf8validate function returns true on success and false on failure. Sometimes you want to know why it failed to match. That is when utf8validate_error can be used instead, returning a string on error, and false on success.

Code

I am not an experienced PHP programmer, but here we go.

function utf8validate($minlen, $maxlen, $first, $middle, $last, $lbl) {
  return false === utf8validate_error($minlen, $maxlen,   
                                      $first, $middle, $last, $lbl);
}

function utf8validate_error($minlen, $maxlen, $first, $middle, $last, $lbl) {
  $lbl_array = unpack('C*', $lbl);
  return utf8validate_a(1, 0, $minlen, $maxlen,
                        $first, $middle, $last, $lbl_array);
}

function utf8validate_utfwidth($pos,$lbl) {
  $w = 0;
  $c = $lbl[$pos];
  if ( 240 <= $c ) $w++;
  if ( 224 <= $c ) $w++;
  if ( 192 <= $c ) $w++;
  if ( count($lbl) < $pos + $w ) return -1;
  for ( $i=1 ;$i<=$w ; $i++ ) {
    $c = $lbl[$pos+$i];
    if ( $c < 128 || 191 < $c ) return -2;
  }
  return $w;
}

function utf8validate_a($pos,$len,$minlen,$maxlen,$first,$middle,$last,$lbl) {
  $rem = 1 + count($lbl) - $pos;
  if ( $rem + $len < $minlen )
    return 'Too short';
  if ( $rem < 0 )
    return 'Rem negative - internal error';
  if ( $rem === 0 )
    return false;
  if ( $maxlen <= $len )
    return 'Too long';

  $type = NULL;
  $utfwidth = utf8validate_utfwidth($pos,$lbl);
  if ( $utfwidth < 0 ) {
    return 'UTF-8 error: ' . $utfwidth;
  } else if ( 0 < $utfwidth ) {
    $type = 'u';
  } else {
    $cv = $lbl[$pos];
    if ( 48 <= $cv && $cv <= 57 ) $type = '0';
    else if ( 65 <= $cv && $cv <= 90 ) $type = 'A';
    else if ( 97 <= $cv && $cv <= 122 ) $type = 'a';
    else $type = pack('C',$cv);
  }

// type is u=unicode, 0=number, a=small, A=capital, or another character

  $validstr = NULL;
  if ( 1 === $pos && '' !== $first ) {
    $validstr = $first;
  } else if ( '' === $last || $pos+$utfwidth < count($lbl) ) {
    $validstr = $middle;
  } else {
    $validstr = $last;
  }

  if ( false === strpos($validstr,$type) ) {
    return 'Pos ' . $pos . ' ('
         . ( 'u'===$type ? 'utf8-char' : pack('C',$lbl[$pos]) )
         . ') not found in [' . $validstr . ']';
  }
  return utf8validate_a(1+$pos+$utfwidth,1+$len,$minlen,$maxlen,
                        $first,$middle,$last,$lbl);
}

That is all.

Tests

I wrote some tests as well.

$err = false;
if (false!==($err=utf8validate_error(1,1,'','a','','g')))
  throw new Exception('g failed: ' . $err);
if (false===($err=utf8validate_error(1,1,'','a','','H'))) 
  throw new Exception('H should have failed');
if (false!==($err=utf8validate_error(3,20,'Aau','Aau -','Aau','Edmund')))
  throw new Exception('Edmund failed: ' . $err);
if (false!==($err=utf8validate_error(3,20,'Aau','Aau -','Aau','Kött')))
  throw new Exception('Kött failed: ' . $err);
if (false!==($err=utf8validate_error(3,20,'Aau','Aau -','Aau','Kött-Jan')))
  throw new Exception('Kött-Jan failed: ' . $err);
if (false!==($err=utf8validate_error(3,3,'A','a0','0','X10')))
  throw new Exception('X10 failed: ' . $err);
if (false!==($err=utf8validate_error(3,3,'A','a0','0','Yx1')))
  throw new Exception('Yx1 failed: ' . $err);
if (false===($err=utf8validate_error(3,3,'A','a0','0','a10')))
  throw new Exception('a10 should have failed');
if (false===($err=utf8validate_error(3,3,'A','a0','0','Aaa')))
  throw new Exception('Aaa should have failed');
if (false===($err=utf8validate_error(3,3,'A','a0','0','Ax10')))
  throw new Exception('Ax10 should have failed');
if (false===($err=utf8validate_error(3,3,'A','a0','0','B0')))
  throw new Exception('B0 should have failed');
if (false!==($err=utf8validate_error(3,3,'u','u','u','äää')))
  throw new Exception('äää failed: ' . $err);
if (false===($err=utf8validate_error(3,3,'','u','','abc'))) 
  throw new Exception('abc should have failed');
if (false!==($err=utf8validate_error(2,5,'Aau','u','Aau','XY')))
  throw new Exception('XY failed: ' . $err);
if (false===($err=utf8validate_error(2,5,'Aau','u','Aau','XxY')))
  throw new Exception('XxY should have failed');
if (false!==($err=utf8validate_error(0,5,'','0','',''))) 
  throw new Exception('"" failed: ' . $err);
if (false!==($err=utf8validate_error(0,5,'','0','','123'))) 
  throw new Exception('123 failed: ' . $err);
if (false===($err=utf8validate_error(0,5,'','0','','123456')))
  throw new Exception('123456 should have failed');
if (false===($err=utf8validate_error(2,3,'','0','','1'))) 
  throw new Exception('1 should have failed');
if (false===($err=utf8validate_error(2,3,'','0','','1234'))) 
  throw new Exception('1234 should have failed');

Conclusions

I think input validation should be taken seriously, also in PHP. And I think limiting input to ASCII is not quite enough in 2020.

There are obviously ways to work with regular expressions and UTF8 too, but I do not find it pretty.

My code/strategy above should obviously only be used for labels and names where international characters make sense and where the form of the input is relatively free. For other parameters, use a more accurate validation method.

Simple Password Hashing with Node & Argon2

When you build a service backend you should keep your users’ passwords safe. That is not so easy anymore. You should

  1. hash and salt (md5)
  2. but rather use strong hash (sha)
  3. but rather use a very expensive hash (pbkdf2, bcrypt)
  4. but rather use a hash that is very expensive on GPUs and cryptominers (argon2)

Argon2 seems to be the best choice (read elsewhere about it)!

node-argon2

Argon2 is very easy to use on Node.js. You basically just:

$ npm install argon2

Then your code is:

const argon2 = require('argon2');

/* To hash a password */
hash = await argon2.hash('password');

/* To test a password */
if ( await argon2.verify(hash,'password') )
  console.log('OK');
else
  console.log('Not OK');

Great! What is not to like about that?

$ du -sh node_modules/*
 20K  node_modules/abbrev
 20K  node_modules/ansi-regex
 20K  node_modules/aproba
 44K  node_modules/are-we-there-yet
348K  node_modules/argon2
 24K  node_modules/balanced-match
 24K  node_modules/brace-expansion
 24K  node_modules/chownr
 20K  node_modules/code-point-at
 40K  node_modules/concat-map
 32K  node_modules/console-control-strings
 44K  node_modules/core-util-is
120K  node_modules/debug
 36K  node_modules/deep-extend
 40K  node_modules/delegates
 44K  node_modules/detect-libc
 28K  node_modules/fs-minipass
 32K  node_modules/fs.realpath
104K  node_modules/gauge
 72K  node_modules/glob
 20K  node_modules/has-unicode
412K  node_modules/iconv-lite
 24K  node_modules/ignore-walk
 20K  node_modules/inflight
 24K  node_modules/inherits
 24K  node_modules/ini
 36K  node_modules/isarray
 20K  node_modules/is-fullwidth-code-point
 48K  node_modules/minimatch
108K  node_modules/minimist
 52K  node_modules/minipass
 32K  node_modules/minizlib
 32K  node_modules/mkdirp
 20K  node_modules/ms
332K  node_modules/needle
956K  node_modules/node-addon-api
240K  node_modules/node-pre-gyp
 48K  node_modules/nopt
 24K  node_modules/npm-bundled
 36K  node_modules/npmlog
172K  node_modules/npm-normalize-package-bin
 28K  node_modules/npm-packlist
 20K  node_modules/number-is-nan
 20K  node_modules/object-assign
 20K  node_modules/once
 20K  node_modules/osenv
 20K  node_modules/os-homedir
 20K  node_modules/os-tmpdir
 20K  node_modules/path-is-absolute
 32K  node_modules/@phc
 20K  node_modules/process-nextick-args
 64K  node_modules/rc
224K  node_modules/readable-stream
 32K  node_modules/rimraf
 48K  node_modules/safe-buffer
 64K  node_modules/safer-buffer
 72K  node_modules/sax
 88K  node_modules/semver
 24K  node_modules/set-blocking
 32K  node_modules/signal-exit
 88K  node_modules/string_decoder
 20K  node_modules/string-width
 20K  node_modules/strip-ansi
 20K  node_modules/strip-json-comments
196K  node_modules/tar
 28K  node_modules/util-deprecate
 20K  node_modules/wide-align
 20K  node_modules/wrappy
 36K  node_modules/yallist

That is 69 node modules totalling 5.1MB. If you think that is fine for your backend password hashing code (in order to provide two functions: hash and verify) you can stop reading here.

I am NOT fine with it, because:

  • it will cause me trouble, one day, when I run npm install, and something is not exactly as I expected, perhaps in production
  • how safe is this? it is the password encryption code we are talking about – what if any of these libraries are compromised?
  • it is outright ugly and wasteful

Well, argon2 has a reference implementation written in C (link). If you download it you can compile it, run the tests and try it like:

$ make
$ make test
$ ./argon2 -h
Usage: ./argon2-linux-x64 [-h] salt [-i|-d|-id] [-t iterations] [-m log2(memory in KiB) | -k memory in KiB] [-p parallelism] [-l hash length] [-e|-r] [-v (10|13)]
Password is read from stdin
Parameters:
 salt The salt to use, at least 8 characters
 -i   Use Argon2i (this is the default)
 -d   Use Argon2d instead of Argon2i
 -id  Use Argon2id instead of Argon2i
 -t N Sets the number of iterations to N (default = 3)
 -m N Sets the memory usage of 2^N KiB (default 12)
 -k N Sets the memory usage of N KiB (default 4096)
 -p N Sets parallelism to N threads (default 1)
 -l N Sets hash output length to N bytes (default 32)
 -e   Output only encoded hash
 -r   Output only the raw bytes of the hash
 -v (10|13) Argon2 version (defaults to the most recent version, currently 13)
 -h   Print ./argon2-linux-x64 usage

It builds to a single binary (mine is 280kb on linux-x64). It does most everything you need. How many lines of code do you think you need to write for Node.js to use that binary instead of the 69 npm packages? The answer is less than 69. Here come some notes and all the code (implementing argon2.hash and argon2.verify as used above):

  1. you can make binaries for different platforms and name them accordingly (argon2-linux-x64, argon2-darwin-x64 and so on), so you can move your code (and binaries) between different computers with no hassle (as JavaScript should be)
  2. if you want to change the argon2 parameters you can do it here, and if you want to pass an option-object to the hash function that is an easy fix
  3. options are parsed from the hash (just as the node-argon2 package does) when verifying, so you don't need to "remember" what parameters you used when hashing to be able to verify
/* argon2-wrapper.js */

const nodeCrypto = require('crypto');
const nodeOs    = require('os');
const nodeSpawn = require('child_process').spawn;
/* NOTE 1 */
const binary    = './argon2';
// const binary = './argon2-' + nodeOs.platform() + '-' + nodeOs.arch();

const run = (args,pass,callback) => {
  const proc = nodeSpawn(binary,args);
  let hash = '';
  let err = '';
  let inerr = false;
  proc.stdout.on('data', (data) => { hash += data; });
  proc.stderr.on('data', (data) => { err += data; });
  proc.stdin.on('error', () => { inerr = true; });
  proc.on('exit', (code) => {
    if ( err ) callback(err);
    else if ( inerr ) callback('I/O error');
    else if ( 0 !== code ) callback('Nonzero exit code ' + code);
    else if ( !hash ) callback('No hash');
    else callback(null,hash.trim());
  });
  proc.stdin.end(pass);
};

exports.hash = (pass) => {
  return new Promise((resolve,reject) => {
    nodeCrypto.randomBytes(12,(e,b) => {
      if ( e ) return reject(e);
      const salt = b.toString('base64');
      const args = [salt,'-id','-e'];
/* NOTE 2 */
//    const args = [salt,'-d','-v','13','-m','12','-t','3','-p','1','-e'];
      run(args,pass,(e,h) => {
        if ( e ) reject(e);
        else resolve(h);
      });
    });
  });
};

exports.verify = (hash,pass) => {
  return new Promise((resolve,reject) => {
    const hashfields = hash.split('$');
    const perffields = hashfields[3].split(',');
/* NOTE 3 */
    const args = [
        Buffer.from(hashfields[4],'base64').toString()
      , '-' + hashfields[1].substring(6) // -i, -d, -id
      , '-v', (+hashfields[2].split('=')[1]).toString(16)
      , '-k', perffields[0].split('=')[1]
      , '-t', perffields[1].split('=')[1]
      , '-p', perffields[2].split('=')[1]
      , '-e'
    ];
    run(args,pass,(e,h) => {
      if ( e ) reject(e);
      else resolve(h===hash);
    });
  });
};

And for those of you who want to test it, here is a little test program that you can run. It requires:

  • npm install argon2
  • argon2 reference implementation binary
const argon2package = require('argon2');
const argon2wrapper = require('./argon2-wrapper.js');

const bench = async (n,argon2) => {
  const passwords = [];
  const hashes = [];
  const start = Date.now();
  let errors = 0;

  for ( let i=0 ; i<n ; i++ ) {
    let pw = 'password-' + i;
    passwords.push(pw);
    hashes.push(await argon2.hash(pw));
  }
  const half = Date.now();
  console.log('Hashed ' + n + ' passwords in ' + (half-start) + ' ms');

  for ( let i=0 ; i<n ; i++ ) {
    // first try wrong password
    if ( await argon2.verify(hashes[i],'password-ill-typed') ) {
      console.log('ERROR: wrong password was verified as correct');
      errors++;
    }
    if ( !(await argon2.verify(hashes[i],passwords[i]) ) ) {
      console.log('ERROR: correct password failed to verify');
      errors++;
    }
  }
  const end = Date.now();
  console.log('Verified 2x' + n + ' passwords in ' + (end-half) + ' ms');
  console.log('Error count: ' + errors);
  console.log('Hash example:\n' + hashes[0]);
};

const main = async (n) => {
  console.log('Testing with package');
  await bench(n,argon2package);
  console.log('\n\n');

  console.log('Testing with binary wrapper');
  await bench(n,argon2wrapper);
  console.log('\n\n');
}
main(100);

Give it a try!

Performance

I find that in Linux x64, wrapping the binary is slightly faster than using the node-package. That is weird. But perhaps those 69 dependencies don’t come for free after all.

Problems?

I see one problem. The node-argon2 package generates binary random salts and sends them to the hash algorithm. Those binary salts come out base64-encoded in the hash. However, a binary value (a byte array using 0-255) is not very easy to pass on the command line to the reference implementation (as the first parameter). My wrapper implementation also generates a random salt, but it base64-encodes it before passing it to argon2 as the salt (and argon2 then base64-encodes it again in the hash string).

So if you already use the node package, the reference C implementation is not immediately compatible with the hashes you have already produced. The other way around is fine: "my" hashes are easily consumed by the node package.

If this is a real problem for you that you want to solve I see two solutions:

  1. make a minor modification to the C program so it expects the salt in hex format (it will be twice as long on the command line)
  2. start supplying your own compatible salts (and thereby compatible hashes) using the option-object now, and don't switch to the wrapper+C until the passwords have been updated

Conclusion

There are bindings between languages and node-packages for stuff. But unix already comes with an API for programs written in different languages to use: process forking and pipes.

In Linux it is extremely cheap. It is quite easy to use and test, since you easily have access to the command line. And the spawn in node is easy to use.

Argon2 is nice and easy to use! Use it! Forget about bcrypt.

The best thing you can do without any dependencies is pbkdf2, which comes with Node.js and is accessible in its crypto module. It is standardized/certified; that is why it is included.
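For completeness, a minimal pbkdf2 sketch with the built-in crypto module (the iteration count and other parameters are examples; choose them deliberately):

const { pbkdf2Sync, randomBytes, timingSafeEqual } = require('crypto');

/* hash: store salt, iterations and digest together with the hash */
const salt = randomBytes(16);
const hash = pbkdf2Sync('password', salt, 100000, 32, 'sha256');

/* verify: recompute with the stored salt and compare in constant time */
const candidate = pbkdf2Sync('password', salt, 100000, 32, 'sha256');
console.log(timingSafeEqual(hash, candidate)); // true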

SonarQube – I disagree

For someone like me working in a (very) small development team SonarQube is good. It reviews my code and it teaches me about both old and new things that I did not know about.

There are some things I don’t agree about though (all JavaScript below).

“switch” statements should have at least 3 “case” clauses

I understand the point. But I have my reasons. Let us say I have orders with different statuses (DRAFT, OPEN, CLOSED). Then I often have switch statements like:

switch (order.status) {
case 'DRAFT':
  ...
  break;
case 'OPEN':
  ...
  break;
case 'CLOSED':
  ...
  break;
default:
  throw new Error(...);
}

This switch over order.status happens in many places. Sometimes it is just:

switch (order.status) {
case 'CLOSED':
  ...
  break;
}

I don't want to refactor the code (rewrite an if into a switch) if/when I need to do something for other states. And I like a consistent form.

Another case can be that I allow the user to upload files in different formats, but for now I just allow PDF.

switch ( upload.format ) {
case 'pdf':
  process_pdf();
  break;
default:
  throw new Error('Unsupported upload.format: ' + upload.format);
}

A switch is much more clear than an if-statement. However, if the above code was in a function called uploadPdf() then the code would have been:

if ( 'pdf' !== upload.format ) throw new Error(...);
...
...

But in conclusion, I think switch statements with few (even 1) case clause are often fine.

This branch’s code block is the same as the block for the branch on line …

This happens to me often. A good rule is to never trust user input. So I may have code that looks like this (simplified):

if ( !(user = validateToken(request.params.token)) ) {
  error = 401;
} else if ( !authorizeAction(user,'upload') ) {
  error = 401;
} else if ( !request.params.orderid ) {
  error = 400;
  errormsg = 'orderid required';
} else if ( user !== (order = getOrder(request.params.orderid)).owner ) {
  error = 401;
} else if ( !validateOrder(order) ) {
  error = 400;
  errormsg = 'order failed to validate';
} else ...

The point is that the important part of the code is not the code blocks, but rather the checks. The contents of the code blocks may, or may not, be identical. That hardly counts as duplication of code. The important thing is that the conditions are executed one by one, in order, and that they all pass.

I could obviously write the first two as one (with a more complex condition), since both give 401. But code is debugged, extended and refactored, and for those purposes my code is much better than:

if ( !(user = validateToken(request.params.token)) ||
     !authorizeAction(user,'upload') ) {
  error = 401;
} else ...

My code is easier to read, reorganize, debug (add temporary breakpoints or printouts) or improve (perhaps all 401 are not equal one day).

Remove this …… label

The reason I get this warning is mostly that I use what I learnt in my own post Faking a good goto in JavaScript.

SonarQube also complained when I wanted to break out of a nested loop this way:

break_block: {
  for ( i=0 ; i<10 ; i++ ) {
    for ( j=0 ; j<10 ; j++ ) {
      if ( (obj=lookForObjInMatrix(matrix,i,j) ) ) break break_block;
    }
  }
  // obj not found, not set
}

It is a stupid simplified example, but it happens.

Web Components 2020

Web Components is standard technology (as opposed to libraries, frameworks and tools that come and go) for web development.

It allows you to write custom HTML elements in JavaScript. Such components become framework-independent and can be shared and reused. They are simple to use. In your HTML file, simply:

<html>
  ...
  <script src="wc-blink.js"></script>
  ...
  <wc-blink>This should blink</wc-blink>
  ...
</html>

The code of wc-blink is quite simple (view), I stole it (here) and modified it slightly.
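The essence of such a component is small. A minimal sketch of a blink-like element (not the actual wc-blink code linked above) could look like this:

class WcBlink extends HTMLElement {
  connectedCallback() {
    /* toggle visibility twice per second */
    this.interval = setInterval(() => {
      this.style.visibility =
        'hidden' === this.style.visibility ? 'visible' : 'hidden';
    }, 500);
  }
  disconnectedCallback() {
    clearInterval(this.interval);
  }
}
customElements.define('wc-blink', WcBlink);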

Another advantage of Web Components is the Shadow DOM, allowing private CSS: the styles won't escape, override or be overridden.

This technology has been around for a while… but the bad news… it still does not work with Microsoft browsers (IE11, Edge).

I wrote a simple demo page (link).

With a polyfill you can use it with Edge. IE11 seems to be out of luck, because the ES6 class keyword must work (see the code for wc-blink above) and that is not simply a matter of polyfilling. There is technology to compile/transpile stuff to ES5, but in that case I might as well keep building my components in Vue.js.

Conclusion

It is actually sad that work that goes into web browser standards (to make it possible to build applications and components for the web in a good way) gets delayed by MS simply not supporting it.

I don't think web development should be more complicated than including scripts and writing HTML. Web Components allow just that.

If you need to support IE11, simply ignore Web Components until you don’t need to support IE11 anymore.

If you are fine supporting just Edge, there are ways not to need to include the 117kb polyfill for everyone.

I can not afford to break IE11 at this point, and neither am I willing to transpile stuff. I stick to Vue.js components.

JavaScript: Fast Numeric String Testing

Sometimes I have strings that (should) contain numbers (like '31415') but I want/need to test them before I use them. If this happens in a loop I could start asking myself questions about performance. And if it is a long loop on a Node.js server, the performance may actually matter.

For the purpose of this post I have worked with positives (1, 2, 3, …), and I have written code that finds the largest valid positive in an array. Let's say there are a few obvious options:

// Parse it and test it
const nv = +nc;
pos = Number.isInteger(nv) && 0 < nv;

// A regular expression
pos = /^[1-9][0-9]*$/.test(nc);

// A custom function
const strIsPositive = (x) => {
   if ( 'string' !== typeof x || '' === x ) return false;
   const min = 48; // 0
   const max = 57; // 9
   let   cc  = x.charCodeAt(0);
   if ( cc <= min || max < cc ) return false;
   for ( let i=1 ; i<x.length ; i++ ) {
     cc = x.charCodeAt(i);
     if ( cc < min || max < cc ) return false;
   }
   return true;
 }
pos = strIsPositive(nc);

Well, I wrote some benchmark code and ran it in Node.js, and there are some quite predictable findings.
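The benchmark itself is nothing fancy. A sketch of the idea (the data mix is made up, and strIsPositive is the custom function above):

/* generate test data: a mix of invalid strings and valid positives */
const data = [];
for ( let i=0 ; i<10000 ; i++ )
  data.push(0 === i % 3 ? 'x' + i : String(1 + i));

const time = (label, isPositive) => {
  const start = process.hrtime.bigint();
  let count = 0;
  for ( const nc of data ) if ( isPositive(nc) ) count++;
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(label, ms.toFixed(2), 'ms,', count, 'valid');
};

time('parse', (nc) => Number.isInteger(+nc) && 0 < +nc);
time('regex', (nc) => /^[1-9][0-9]*$/.test(nc));
time('loop ', strIsPositive);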

There is no huge difference between the alternatives above, but there are differences (about 1ms per 10000 validations, on a 4th generation i5).

There is no silver bullet; the optimal solution depends on your data and what you do with it.

If all you want is validation, it is wasteful to convert (+nc). A regular expression is faster, but you can easily beat a regular expression with a simple loop.

If most numbers are valid, converting to number (+nc) makes more sense. It is expensive to parse invalid values to NaN.

If you are going to use the number, converting to number (+nc) makes sense (if you convert only once).

The fastest solution, both for valid and invalid numbers, is to never convert to number (but use the custom function above to validate) and find the max using string compare.

if ( strIsPositive(nc) &&
     ( max.length < nc.length || ( max.length === nc.length && max < nc ) ) )
  max = nc;

This is obviously not a generally good advice.

Other numeric formats

My above findings are for strings containing positives. I have tested both code that only validates, and code that use the value by comparing it.

You may not have positives but:

  • Naturals, including 0, which creates a nastier regular expression but an easier loop.
  • Integers, including negative values, which creates even nastier regular expressions.
  • Ranged integers, like [-256,255], which probably means you want to parse (+nc) right away.
  • Decimal values
  • Non standard formats (with , instead of . for decimal point, or with delimiters like spaces to improve readability)
  • Hex, scientific formats, whatever

In the end readability is usually more important than performance.

Force Vue Update ($forceUpdate)

Occasionally you want to force some part of your Vue application to update. One situation is that I have a “big” web application not written in Vue, and somewhere in it I add a Vue component. Something in the world around it changes, but the component is not aware of it, so I want to force it to update.

It seems the vm.$forceUpdate method just updates the component itself, and children with slot content. I didn't use slots, so $forceUpdate was useless.

So, if you do myVue.$forceUpdate() without success, try:

    myVue.$children.forEach(c => c.$forceUpdate());

It might do what you want. It did for me.

Xcode findings

As I start experimenting with Xcode I realise that it is a tricky beast.

Xcode 10.2.1

I realised Xcode 10.2.1 used 100%+ CPU. I fixed that by reinstalling it completely.

Reinstalling Xcode, I had managed to mess up the simulators.
Error: Unable to boot device because it cannot be located on disk
Solution: Run in Terminal: xcrun simctl erase all

Xcode 7.3.1

Xcode 7.3.1 fails to start on macOS 10.14.5.

A first iOS app with Xcode 10.2.1

Ten years too late I decided to look into iOS development. It is too late, because the Klondike era of becoming a millionaire on simple apps is probably over. On the other hand, Swift has arrived and reached version 5, so it should be a good time to get started.

What I have is

  • Mac OS 10.14.5
  • Xcode 10.2.1
  • iPhone 6s, iOS 12.2 to deploy to
  • iPad 3, iOS 9.3.5 (obsolete by Apple standard)
  • 20 years of programming experience
  • Very limited experience with Swift 5
  • No experience with Xcode, Objective-C or macOS development

I am mostly a backend programmer who has to do HTML/CSS/JavaScript as well. Xcode is creepy. I have thought about a few approaches:

  1. Buying a book (but a challenge to find a book with relevant complexity, mix of tutorial/reference, for Xcode 10 / Swift 5)
  2. Apple's obsolete tutorial (but I was put off by the fact that it is written for Swift 3)
  3. Just playing around with Xcode (just kidding – that is too scary)
  4. Some online course, like Udemy (but it is not my way)
  5. A simple trumpet tutorial

I went for (5). It was good, because in a few hours it took me all the way from starting Xcode to running something on my iPhone.

Building for the simulator and running works. And I managed to deploy to my iPhone (it is actually quite self explanatory: connect the iPhone, select it as destination in Xcode, and later in the iPhone under settings -> general -> device management you allow the app to run).

The short version is that it all went well! But…

Obsolete iPad 3

I failed to build for my obsolete iPad 3. What happens is that all is fine until I come to the code signing prompt. I type my password, and immediately building/signing “Failed with exit code 1”. I can imagine a few explanations right away:

  1. I need a real developer license (not Personal Team) to do this
  2. I need an older version of Xcode to build for 9.3
    (and in that case I might need to use older project format, and perhaps not even Swift 5, I don’t know)
  3. I got some indication that with a Personal (free) developer license I can only deploy to a single test device, that would perhaps not include old devices

It actually only builds for Deployment target 12.2, no older versions in the list.

Update: Page 60 of the free Apple Book “App Development With Swift” tells clearly that a free account only supports a single device. So it is clearly a waste of time to ignore that restriction and try to deploy to my iPad.

Xcode

I have spent a few hours with this now. I wrote 4 lines of code. I have ctrl-clicked on things, dragged-and-dropped things, added properties to things, added resources, opened panels and used shortcuts. If you are used to things like Visual Studio it will probably feel somewhat familiar. But for me, who mostly uses Vim, it is very scary.

Update: Xcode turned out to use 100%+ CPU constantly. I completely removed it and reinstalled it, and it seemed to help.

Computer Requirements / Performance

I did these experiments on a MacBook Pro 6,2 (that officially does not support macOS 10.14). It has an SSD drive and 8GB of RAM. Building takes almost 10 seconds, but starting the simulator and loading the app takes almost a minute. The computer clearly gets warm. Neither Xcode nor the simulator consumes much memory (Activity Monitor says about 200MB each). Obviously, if you run the simulator much in your daily work, a faster CPU is worth it.

I think my 1440×900 display may be the biggest problem if I want to do anything real, though.

Conclusion

I have mixed feelings, it could be worse and better. I clearly need to find a way to be quickly guided through building different types of apps. I think I need a few days being guided through Xcode until both Xcode and the different project artifacts feel somewhat natural.

I have a simple app I want to build for myself, but right now it feels much too intimidating.

I found that Apple has released a free online book (available in their Books application) called App Development with Swift. That seems to be a good option.

Simple Mobile First Design

If you build a web site today you need to think about the experience on mobiles, tablets and desktops with different screen sizes. This is not very easy. In this article I have applications (SPAs) in mind rather than sites/pages.

If you are a real, ambitious, skilled designer with a significant budget, there is nothing stopping you from doing it right. Responsive design is dead, because most often you have no choice, so it is just design.

However, you may not have that budget, skill, time and ambition, but you still need to think about vastly different screen sizes. Or perhaps you just need to build a simple native-app-like website.

Two separate implementations

In many cases I would argue that it makes sense to simply make a separate site for mobile and desktop. There are many arguments but I will give one: use cases are often very different. A desktop app is often opened, kept open for a long time, and much data may be presented and analysed on screen, in memory. A mobile app is often opened shortly, to accomplish a single task, and then closed. This means that you probably want to manage state, data and workflow very differently as well.

Bootstrap (or similar)

There are frameworks (like Bootstrap) and technologies like Flexbox to allow you to build a responsive app. Before using those, I think you should ask yourself a question.

How do you want to take advantage of more screen space?

Think of regular desktop applications (Word, Photoshop, Visual Studio) or your operating system: when you have more screen available you can have more stuff next to each other. You can have more windows and more panels at the same time. Mostly. Also, but less so, small things get larger (when they benefit from it). It helps to be able to see an entire A4 page when you work with Word. But when you have an Excel sheet with 4 used columns, those don’t use your entire screen just because they can.

Bootstrap tends to create larger space between elements, and larger elements where it is not needed (dropdown <select>, input fields). I say tends to, because if you are good and very careful, you can probably do a better job than I can. But it is not automatic, and it is not trivial, to make it good.

What I mean is that if my calendar/table looks gorgeous when it is 400px wide, what good does it make to make it larger if the screen gets larger? So I think a better approach to responsiveness is to say that my calendar/table takes 400px. If I have more space available, I can show something else as well.

Mobile Screen Sizes

To complicate things further, mobile phones have different screen sizes, different screen resolutions, and then there are hi-resolution screens that have different virtual and physical resolutions.

So you have your table that looks good on a “standard” mobile with 320px width. What do you want to do if the user has a better/larger screen?

  1. make it look exactly the same (just better/larger)?
  2. reactively change the way your app looks and works?

If you are opting for (2), I need to wonder why, really?

I argue that if you pick (1) you can make development, testing, documentation and support easier. And your users will have a more consistent experience. At the expense that those with a large mobile may not get the most out of it when using your app.

I propose a simple Mobile First Responsive design

What I propose is not for everyone and everywhere. It may suck for your product and project. That is fine, there are different needs.

I propose a Mobile First (Semi-)Responsive design:

  1. Pick a width (320px is fine).
  2. Design all parts, all pages, all controllers of your app for that width.
  3. On mobile, set the viewport to your width for consistent behaviour on all mobiles.
  4. Optionally, on desktop (and possibly tablets), allow pages to open next to each other rather than on top of (and hiding) each other to make some use of more screen when available.

Seems crazy? Please check out my Proof of Concept and decide for yourself! It is only a PoC. It is not a framework, not a working app, not demonstrating Vue best practices, and it is not very pretty. Under Settings (click ?) you can check/change between Desktop, Tablet and Mobile mode (there is a crude auto-discover mechanism in place but it is not perfect). You can obviously try it with “Responsive Design Mode” in your browser and that should work quite fine (except some elements don’t render correctly).

Implementation Details

First, I set (despite the fact that this is not normally recommended):

<meta id="viewport" name="viewport" content="width=320">

Later I use JavaScript to change this to 640 on a tablet, to allow two columns; see the sketch below. Desktops should ignore it.
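A sketch of that adjustment (the user-agent check is a deliberately crude stand-in for real device detection):

/* widen the viewport to two columns on tablets */
if ( /iPad|Tablet/i.test(navigator.userAgent) ) {
  document.getElementById('viewport')
          .setAttribute('content', 'width=640');
}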

Second, I use a header div fixed at the top, a footer div fixed at the bottom, and the rest of the page has corresponding margins (top/bottom).

.app_headers {
   position: fixed;
   top: 0;
   left: 0;
 }
 .app_header {
   float: left;
   height: 30px;
   width: 320px;
 }
 .app_footers {
   position: fixed;
   bottom: 0;
   left: 0;
 }
 .app_footer {
   float: left;
   height: 14px;
   width: 320px;
 }
 .app_pages {
   clear: both;
 }
 .app_page {
   margin-top: 30px;
   margin-bottom: 12px;
   width: 320px;
   float: left;
 }

In mobile mode I just add one app_header, app_footer and app_page (div with class). But for Tablets and Desktops I can add more of them (equally many) as the user navigates deeper into the app. It is basically:

<div class="app_headers">
  <div class="app_header">
    Content of first header (to the left)
  </div>
  <div class="app_header">
    Content of second header (to the right)
  </div>
</div>
<div class="app_pages">
  <div class="app_page">
    Content of first page (to the left)
  </div>
  <div class="app_page">
    Content of second page (to the right)
  </div>
</div>
<div class="app_footers">
  <div class="app_footer">
    Content of first footer (to the left)
  </div>
  <div class="app_footer">
    Content of second footer (to the right)
  </div>
</div>

I use a little JavaScript to avoid adding too many pages side by side, should the display/window not be large enough; see the sketch below.
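A sketch of that check (the numbers match the CSS above):

/* how many 320px pages fit side by side in this window? */
const maxPages = Math.max(1, Math.floor(window.innerWidth / 320));
/* when navigating deeper than maxPages, hide the leftmost page */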

It is a good idea to reset margins, paddings and borders to 0 on common items.

I also found that you need a font size of 16px on the iPhone, otherwise the Apple mobile Safari browser will immediately zoom when the user edits <input> and <select>.

Most of the effort when I wrote my Proof of Concept went into:

  1. Getting the HTML/CSS right and as simple as possible (I am simply not good enough with HTML/CSS to just get it right)
  2. Implementing a “router” that supports this behaviour

Being able to scroll the different pages separately would be possible, a bit more complicated, and perhaps not so desirable.

Conclusions

By exploiting the viewport you can build a web app that works fine on different mobiles, and where the issue of different screen sizes and screen resolutions is quite much out of your way.

The site will truly be mobile-first, but with the side-by-side-strategy presented, your users can take advantage of larger screens on non-mobiles as well.

This way, you can build a responsive app, with quite little need for testing on different devices as the app grows. You just need to keep 320px in mind, and have a clear idea about navigating your site.