There are many articles and sources on the internet about Business Central in Docker. Most of them focus on some specific detail. With this post I hope to share some ideas about why we run BC in Docker, and the challenges, from a top-down perspective.
When you set up any non-trivial system, automated testing is helpful. It can go something like:
- Write new tests
- Update system (code or configuration)
- Start system (initiate from scratch)
- Run tests
- Stop system (discard everything)
- Repeat
The key here is repeatability: you want the system to start in an identical state every time, so the tests behave the same every time and you know exactly what you are testing.
This used to be very hard with a complex system like Business Central (NAV). It is still not very easy, but with Business Central available as Docker images, automated tests are viable.
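The cycle above can be sketched as a plain Node.js script. This is a minimal sketch: the step functions are placeholders for your own docker/test commands, not a real Business Central API.

```javascript
// Run the start-test-stop cycle as an ordered list of steps.
function runCycle(steps) {
  const log = [];
  for (const step of steps) {
    log.push(step.name); // record each step so a failed run shows where it died
    step();
  }
  return log;
}

// Placeholder steps (assumptions); in a real setup these shell out to docker
// and to your test runner.
function startSystem() { /* docker run ... (initiate from scratch) */ }
function runTests()    { /* mocha / http checks against the running system */ }
function stopSystem()  { /* docker rm -f ... (discard everything) */ }

const cycleLog = runCycle([startSystem, runTests, stopSystem]);
console.log('cycle: ' + cycleLog.join(' -> '));
```

The point of the explicit log is repeatability: every run either completes the whole cycle or tells you exactly which step broke.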
Assets
I think it is important to understand exactly what defines the running system. In my Business Central tests, those essential assets are:
- A docker image (mcr.microsoft.com/dynamicsnav:10.0.19041.508-generic)
- An artifact (https://bcartifacts.azureedge.net/sandbox/16.5.15897.16650/se)
- Parameters for docker run (to create container from image)
- A Business Central license file
- A custom script (AdditionalSetup.ps1)
- Several Business Central Extensions
- A rapid start configuration package
Other non-BC assets could be
- Version of Windows and Docker
- Code for automating 3-5 (start-test-stop) above
- Test code
By sharing those assets with my colleagues, we can set up identical Business Central systems and run the same tests with the same results. Any upgrade of an asset may break something (or everything), and that breakage can be reproduced. So can the fix.
Business Central in Docker
Business Central is a rather large and complex beast to run in Docker. It is not just start and stop, and you will run into complications. The primary resources are:
- Freddy's blog (you will end up there when using Google anyway)
- NAV Container Helper (a set of PS-scripts, even reading the source code has helped me)
- Official Documentation: APIs, Automation APIs, Powershell Tools
This is still far from easy. You need to design how you automate everything. My entire start-to-stop cycle looks something like:
1. I download the image
2. I run the image (with parameters) to create a container
3. I start the container (happens automatically after 2)
4. The artifact is downloaded (unless cached from before)
5. Initial container setup is done (user+pass created)
6. Business Central starts up
7. AdditionalSetup.ps1 is run (my opportunity to run custom PS code in the container)
8. I install extensions
9. I add (Damage Inc) and delete (CRONUS) companies
10. I install the rapid start package
11. I run read-only tests
12. I run read-write tests
13. I stop the container
14. I remove the container
There are a few things to note.
- Steps 1 and 4 (the downloads) happen only if the image/artifact is not already cached
- Steps 4-7 happen automatically inside the BC docker container; all I can do is observe the result (like keeping the user+pass)
- It is possible to run only steps 3-13 (when using the same image and artifact, and as long as the container works and gives expected results)
- It is possible to run steps 8-12 on an already running container
- It is possible to run step 11 only, on an already running container
- Steps 8 and 9 should probably switch order in the future
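The docker parts of the cycle can be driven from a script by assembling the commands first. A sketch, where the image name is the one mentioned earlier and the container name is just an example:

```javascript
// Build the docker commands for the outer steps of the cycle.
// In a real run these strings would be passed to child_process.execSync,
// while watching container logs for the automatic steps (artifact download,
// initial setup, BC startup, AdditionalSetup.ps1).
const image = 'mcr.microsoft.com/dynamicsnav:10.0.19041.508-generic';
const name = 'bctest'; // hypothetical container name

const commands = {
  pull:   `docker pull ${image}`,                  // step 1 (only if not cached)
  run:    `docker run --name ${name} -d ${image}`, // steps 2-3 (create and start)
  stop:   `docker stop ${name}`,                   // step 13
  remove: `docker rm ${name}`,                     // step 14
};

console.log(commands.run);
```

The real `docker run` line needs many more options; those are covered in the "Docker options and environment" section.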
Tooling
In order to automate everything, including the tests, you need tooling. It can be just a scripting language or something more sophisticated. You need to pick tools for:
- Starting, testing, stopping the whole thing
- Steps 8-10 can be done using PowerShell (invoked in step 7) or using the Microsoft Automation API (so you need a tool to make HTTP requests)
- Steps 11-12 are about testing the BC APIs using HTTP requests, so you need a tool that can handle that
I already have other systems that are tested in a similar way, so for me Business Central is just one part of a bigger integration test process. I was already using Node.js and Mocha, so I use them for almost everything above. However, some things need to be done in PowerShell (AdditionalSetup.ps1) as well; more on that later.
System Requirements
You need a reasonably good Windows 10 computer. 16GB of RAM is acceptable so far, but if you have other heavy things running, or perhaps later when you get more data into BC, you will find that 16GB is too little. I am doing quite fine with my 8th gen i7 CPU.
The number 19041.508 in the docker image name corresponds to my Windows version. You may not find images for some older versions of Windows 10.
You are probably fine with a recent Windows Server. I have not tried.
Basically, Windows docker images can only run on Windows computers, so Linux and macOS will not just work (there may be ways with virtualization, Wine or something, I don't know).
Performance
Ideally, when you do automated testing, you want to be able to iterate fast. I have found that two steps take a particularly long time (~10 min each):
- Downloading image and artifact
- Importing the Rapid Start Configuration Package (your company data)
Fortunately, #1 is only done the first time (or when you upgrade version).
Unfortunately, #2 is something I would like to do every time (so my tests can update data, but always run on the same data set).
Given the unfortunate #2, it does not make much sense to put effort into reusing the container (docker start container, instead of docker run image). I think eventually I will attempt to write read-write tests that clean up after themselves, or perhaps divide the rapid start package into several packages so I only need to import a small final one on every test run. This is not optimal, but it is a matter of optimization.
Nav Container Helper
Freddy (and friends) have written Nav Container Helper. You should probably use it. Since I am a bit backwards, Nav Container Helper is not part of my automated test run. But I use it to learn.
I can invoke Nav Container Helper with version and country arguments to learn what image and artifact to use.
Unfortunately, the documentation of BC in docker itself is quite thin. I have needed to read the source of Nav Container Helper, and run it, to understand what options are available when creating a container.
Nav Container Helper annoys me. It kind of prefers to be installed and run as administrator. It can update the hosts file when it creates a container, but that is optional. However, when removing a container, checking the hosts file is not optional, so I need to remove containers as administrator. I am also not very used to PowerShell, admittedly.
Nav Container Helper will eventually be replaced by the newer BC Container Helper.
Image and Artifact
The images are managed by docker. The artifacts are downloaded the first time you need them and stored in c:\bcartifacts.cache. You can change that folder to anything you like (see below). The image is capable of downloading the artifacts itself (to the cache folder you assign), so you don’t need NavContainerHelper for this.
To find the best generic image for your computer:
Get-BestGenericImageName
To find artifact URLs for BC, run in PowerShell (you need to install NavContainerHelper first):
Get-BCArtifactUrl -version 16.4
Docker options and environment
When you run a docker image, which creates and starts a container, you can give options and parameters. When you later start an already existing container, it will use the same options as when created.
Since I don't use NavContainerHelper to run the image, here are the options (arguments to docker run) that I have found useful:
-e accept_eula=Y -e accept_outdated=Y -e usessl=N -e enableApiServices=Y -e multitenant=Y -m 6G -e artifactUrl=https://bcartifacts.azureedge.net/sandbox/16.5.15897.16650/se -e licenseFile=c:\run\my\license.flf --volume myData:c:\Run\my --volume cacheData:c:\dl -p 8882:80 -p 7048:7048
I will not get into too many details but:
- You just need to accept the EULA
- The image may be old (whatever that means), use it anyway
- I don't care about SSL when testing things locally
- You need to enable the API services to use them (port 7048)
- Since 16.4, multitenant is required to be able to create or remove companies (you usually need to add ?tenant=default to all URLs)
- 4GB is kind of recommended; I use 6GB now when importing rapid start packages of significant size
- For doing anything real you will most likely need a valid license file. The path given is inside the container (not on your host)
- I have a folder (replace myData with your absolute path) on the host computer with a license file, my AdditionalSetup.ps1, and possibly more data. --volume makes that folder available (rw) as c:\run\my inside the docker container.
- I have a folder (replace cacheData with your absolute path) where artifacts are downloaded. This way they are saved for the next container.
- The Business Central UI listens on http:80. I expose that on my host as 8882.
- The Business Central API services are available on http:7048. I expose that on my host as 7048.
NavContainerHelper will do some of these things automatically and allow you to control other things with parameters. You can run docker inspect on a container to see how it was actually created.
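The long docker run line above is easier to maintain if a script assembles it from an options object. A sketch using the same values as above (paths and ports are mine, adjust to your setup):

```javascript
// Assemble the docker run arguments from a structured description.
const env = {
  accept_eula: 'Y',
  accept_outdated: 'Y',
  usessl: 'N',
  enableApiServices: 'Y',
  multitenant: 'Y',
  artifactUrl: 'https://bcartifacts.azureedge.net/sandbox/16.5.15897.16650/se',
  licenseFile: 'c:\\run\\my\\license.flf', // path inside the container
};
const volumes = ['myData:c:\\Run\\my', 'cacheData:c:\\dl'];
const ports = ['8882:80', '7048:7048'];

const args = [
  ...Object.entries(env).map(([k, v]) => `-e ${k}=${v}`),
  '-m 6G',
  ...volumes.map((v) => `--volume ${v}`),
  ...ports.map((p) => `-p ${p}`),
];

console.log('docker run ' + args.join(' '));
```

This also makes it trivial to diff two runs: if a container behaves differently, compare the generated argument lists first.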
Username & Password
The first time you run a container (that is, when you create it using docker run) it will output the username, password and other connection information to stdout. You may want to collect and save this information so you can connect to BC. There are ways to set a password too; I am fine with a generated one.
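Collecting the credentials can be automated by scraping the container output (docker logs). The exact log format is an assumption here; check what your container actually prints and adjust the patterns:

```javascript
// Extract the generated username and password from container stdout.
// The "Username:"/"Password:" labels are an assumed log format.
function parseCredentials(stdout) {
  const user = /Username\s*:?\s*(\S+)/i.exec(stdout);
  const pass = /Password\s*:?\s*(\S+)/i.exec(stdout);
  return user && pass ? { user: user[1], pass: pass[1] } : null;
}

// Example input shaped like container startup output (made-up values):
const sample = 'Container IP Address: 172.0.0.2\nUsername: admin\nPassword: Xy7!pQ';
console.log(parseCredentials(sample));
```

Save the result somewhere your test runner can read it, so steps 8-12 can authenticate against the APIs.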
AdditionalSetup.ps1
If there is a file c:\run\my\AdditionalSetup.ps1 in the container, it will be run (last). You can do nothing, or a lot, with this. It turned out that installing extensions via the API requires something to be installed first. So right now I have this in my AdditionalSetup.ps1:
Write-Host 'Starting AdditionalSetup.ps1'
if ( -not (Get-Module -ListAvailable -Name ALOps.ExternalDeployer) ) {
    Write-Host 'Starting ALOps installation'
    Write-Host 'ALOps 1/5: Set Provider'
    Install-PackageProvider -Name NuGet -Force
    Write-Host 'ALOps 2/5: Install from Internet'
    Install-Module ALOps.ExternalDeployer -Force
    Write-Host 'ALOps 3/5: Import'
    Import-Module -Name ALOps.ExternalDeployer
    Write-Host 'ALOps 4/5: Install'
    Install-ALOpsExternalDeployer
    Write-Host 'ALOps 5/5: Create deployer'
    New-ALOpsExternalDeployer -ServerInstance BC
    Write-Host 'ALOps Complete'
} else {
    Write-Host 'ALOps Already installed'
}
This is horrible, because it downloads something from the internet every time I create a new container, and it occasionally fails. I tried to download this module in advance and just install/import it. That did not work (there is something about this NuGet provider that requires extra magic offline). The Microsoft ecosystem is still painfully immature.
To try things out in the container, you can get a powershell shell inside the container:
docker exec -ti <containername> powershell
Install Extensions
I usually install extensions with the cmdlets:
- Publish-NAVApp
- Sync-NAVApp
- Install-NAVApp
in AdditionalSetup.ps1 (before setting up companies, though that seems not to matter much). You need to import those cmdlets before using them:
import-module 'c:\Program Files\Microsoft Dynamics NAV\160\Service\Microsoft.Dynamics.Nav.Apps.Management.psd1'
I can also use the Automation API, if I first install ALOps.ExternalDeployer as above (but that is a download, which I don't like).
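For the API route, extension upload in the Automation API is (as I understand the official docs) a PATCH of the .app file to an extensionUpload content endpoint. Treat the exact path shape as an assumption and verify against the documentation for your BC version; this sketch only builds the request descriptor:

```javascript
// Build a request descriptor for uploading an extension via the Automation
// API. Path shape is an assumption from the v1.0 Automation API docs.
function extensionUploadRequest(companyId, appFileBuffer) {
  return {
    method: 'PATCH',
    path: `/BC/api/microsoft/automation/v1.0/companies(${companyId})` +
          `/extensionUpload(0)/content?tenant=default`,
    headers: { 'Content-Type': 'application/octet-stream', 'If-Match': '*' },
    body: appFileBuffer, // the .app file; the upload triggers publish+install
  };
}

// hypothetical company id, empty buffer just for illustration
const req = extensionUploadRequest('a-company-guid', Buffer.alloc(0));
console.log(req.method, req.path);
```

The advantage over the cmdlets is the same as for rapid start packages: the whole thing can be observed from the integration test scripts.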
Set Up Companies
Depending on your artifact you may get different companies from the beginning. It seems you always get “My Company”. And then there is a localized CRONUS company (except for the w1 artifact), that can be named “CRONUS USA” or “CRONUS International Inc” or something.
I work for Damage Inc, so that is the only company I want. However, it seems not to be possible to delete the last company. This is what I have automated:
- If “Company Zero” does not exist, create it
- Delete all companies, except “Company Zero”
- Create “Damage Inc”
- Delete "Company Zero" (optional, if it disturbs you)
This works the first time (regardless of CRONUS presence), when creating the container. This also works if I run it over and over again (for example when restarting an already created container, or just running some tests on an already started container): I get the same result, just a new “Damage Inc” every time, just as the first time.
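The company dance above runs over the Automation API. The automationCompanies endpoint name is from Microsoft's Automation API documentation; whether deletes are accepted the same way should be verified against your BC version, so take the shapes below as a sketch:

```javascript
// Request descriptors for the company setup steps, against the Automation API.
const base = '/BC/api/microsoft/automation/v1.0';

// Create a company (used for "Company Zero" and "Damage Inc").
function createCompanyRequest(companyId, name) {
  return {
    method: 'POST',
    path: `${base}/companies(${companyId})/automationCompanies?tenant=default`,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name, displayName: name }),
  };
}

// Delete a company by id (used for everything except "Company Zero").
function deleteCompanyRequest(companyId, targetId) {
  return {
    method: 'DELETE',
    path: `${base}/companies(${companyId})/automationCompanies(${targetId})?tenant=default`,
    headers: { 'If-Match': '*' },
  };
}

const create = createCompanyRequest('current-guid', 'Damage Inc');
console.log(create.method, create.path);
```

Because the sequence is idempotent in effect (always ending with a fresh "Damage Inc"), it can safely run on both new and restarted containers, as described above.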
Install Rapid Start Package
I install the rapid start package using the Automation API. It should be possible to do it from AdditionalSetup.ps1 as well. This takes a long time. I see some advantage in using the API because I can monitor and control the status/progress in my integration test scripts (I could output things from AdditionalSetup.ps1 and monitor that, too).
Rapid start packages are tricky, by far the most difficult step of all:
- Exporting a correct rapid start package is not trivial
- Importing takes a long time
- The GUI inside Business Central (rapid start packages are called Configuration Packages there) gives more control than the API, and you can see the detailed errors in the GUI (only)
- I have found that I get errors when importing via the API, but not in the GUI. In fact, just logging in to the GUI, doing nothing, and logging out again before using the API makes the API import succeed. Perhaps there are triggers run when the GUI is activated, setting up data?
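The API import is a sequence of calls against the configurationPackages endpoints. The endpoint names follow the official Automation API docs as I read them; the package code is a made-up example, and you poll the package resource's importStatus field until the long-running import finishes:

```javascript
// Request descriptors for importing a rapid start package via the
// Automation API: create package record, upload file, trigger import, poll.
const base = '/BC/api/microsoft/automation/v1.0';

function rapidStartImport(companyId, code, packageBuffer) {
  const pkg = `${base}/companies(${companyId})/configurationPackages('${code}')`;
  return [
    { method: 'POST',  path: `${base}/companies(${companyId})/configurationPackages`,
      body: JSON.stringify({ code }) },
    { method: 'PATCH', path: `${pkg}/file('${code}')/content`,
      headers: { 'If-Match': '*' }, body: packageBuffer },
    { method: 'POST',  path: `${pkg}/Microsoft.NAV.import` }, // long-running!
    { method: 'GET',   path: pkg },                           // poll importStatus
  ];
}

// hypothetical ids; empty buffer just for illustration
const steps = rapidStartImport('a-company-guid', 'DAMAGE', Buffer.alloc(0));
console.log(steps.map((s) => s.method).join(' '));
```

Being able to poll from the test scripts is exactly the monitoring advantage mentioned above; remember that the detailed errors still only show up in the GUI.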
Run tests!
Finally, you can run your tests and profit!
Use the UI
You can log in to BC with the username and password that you collected above. I am not telling you to do manual testing, but the opportunities are endless.
Stopping
When done, you can stop the container. If run was invoked with --rm, the container will be removed automatically.
Depending on your architecture and strategy, you may be able to (re)use this container for later use.
Webhooks & Subscriptions
Business Central has a feature called webhooks (in the API they are called subscriptions). It makes BC call you (your service) when something has been updated, so you don't need to poll regularly.
This is good, but beware, it is a bit tricky.
First, M$ has decided that BC will only call an HTTPS service. When I run everything on localhost with BC in a container, I am actually fine with HTTP. Worse, even if I run HTTPS, BC does not accept my self-signed certificate. This sucks! Perhaps there is a way to allow BC to call an HTTP service; I couldn't find one, so now I let my BC container call a proxy on the internet. That is crap.
Also, note that webhooks trigger after about 30s. That is probably fine for production. For automated testing it sucks. Perhaps there is a way to speed this up on a local docker container; please let me know.
Finally, the documentation for deleting a webhook is wrong. In short, what you need to (also) do is:
- add ' around the id, as in v1.0/subscriptions('asdfasdfasdfasdfasf')
- set the header If-Match to * (or something more sophisticated)
I found it in this article.
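Putting those two fixes together, the working DELETE looks like this as a request descriptor (host, API version and tenant are my local values, the id is an example):

```javascript
// Build the DELETE request for a webhook subscription: the id must be
// wrapped in single quotes and an If-Match header must be present.
function deleteSubscriptionRequest(subscriptionId) {
  return {
    method: 'DELETE',
    path: `/BC/api/v1.0/subscriptions('${subscriptionId}')?tenant=default`,
    headers: { 'If-Match': '*' },
  };
}

const del = deleteSubscriptionRequest('asdf1234');
console.log(del.method, del.path);
```

Without the quotes you get a routing error, and without If-Match the delete is rejected, which is what the (wrong) official documentation does not tell you.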
Docker Issues
Occasionally, and unfortunately not so rarely, when I try to start a container I get something like:
Error response from daemon: hcsshim::CreateComputeSystem 3239b7231b2e3d1b5aa46aa484e526e454fdd8ca230b324a34cfa91f5625583b: The requested resource is in use.
It is hard to predict and hard to solve. Sometimes a restart of the computer works. Sometimes reinstalling Docker (or doing some reset) works. Often the problem cannot be solved at the time.
Sometimes changing isolation from hyperv to process helps.
It seems the problem is that the Windows version you are currently on (every cumulative update matters) does not work with the BC image. But it is not as if there is always a more recent BC image that works. And on rare occasions one BC image works and another does not.
When this happens, I simply accept that Business Central in Docker does not work on this computer, and I try again another day when I have applied a new cumulative update. So if you really NEED Business Central in Docker to work, you need two working computers, and you should update one at a time. If one breaks, do not update the other.
To be completely clear, this is the kind of bullshit that makes Microsoft technology not mature and stable enough for real applications. If what you are doing is important – don’t run it using Microsoft technology.
Conclusion
This (running NAV/BC inside Docker and automating test cases) is somewhat new technology. There have been recent changes and sources of confusion:
- NAV rebranding to Business Central
- Replacing images with Artifacts
- Multitenant needed from 16.4
- The on-prem vs SaaS thing
To do this right requires effort and patience. But to me, not doing this at all (or doing it wrong) is not an option.