Tag Archives: JavaScript

Great JavaScript Stuff 2020

I never write about great new JavaScript features on this blog. That is because there is really nothing new features let you do that you could not do without them, and I prefer writing compatible code (code that runs in Internet Explorer).

However, some code is backend-only, and if Node.js supports a feature I can use it.

So, here are two great new things in JavaScript that I will start using in Node.js.

Optional Chaining

Code like this is very common in JavaScript:

price = inventory.articles.apple.price;

If inventory, articles or apple is null or undefined this will throw an error. One common way to guard against that is:

price = inventory &&
        inventory.articles &&
        inventory.articles.apple &&
        inventory.articles.apple.price;

That is obviously not optimal. I myself have implemented a little function, so I do:

price = safeget(inventory,'articles','apple','price');
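
The post uses safeget without showing it; a minimal sketch of what such a helper could look like (the name and call style are taken from the line above, the implementation is my assumption) is:

const safeget = (obj, ...keys) => {
  // Walk the property path, stopping at the first null/undefined value
  let v = obj;
  for (const key of keys) {
    if (v === null || v === undefined) return undefined;
    v = v[key];
  }
  return v;
};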

The elegant 2020 solution is:

price = inventory?.articles?.apple?.price;

Who wouldn’t want to do that? Well, you need to be aware that it will “fail silently” if something is missing, so you should probably not use it where you don’t expect anything to be null or undefined, and not without handling the “error” properly. If you replace . with ?. everywhere in your code, you will just detect your errors later, when your code gives the wrong result rather than crashing.
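
For example, one way to keep the error visible (using the names from the example above) is to check the result explicitly:

const price = inventory?.articles?.apple?.price;
if ( price === undefined ) {
  // the chain "failed silently" - decide here what proper error handling is
  throw new Error('no price found for apple');
}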

Nullish Coalescing

It is very common to write code like:

connection.port = port || 80;
connection.host = server || 'www.bing.com';
array = new Array(size || default_size);

The problem is that sometimes a “falsy value” (0, false, empty string) is a valid value, but it will be replaced by the default value. The port above can not be set to 0, and the host can not be set to '' (the empty string).

I often do things like:

connection.port = 'number' === typeof port ? port : 80;
connection.host = null == server ? 'www.bing.com' : server;

However, there is now a better way in JavaScript:

connection.port = port ?? 80;
connection.host = server ?? 'www.bing.com';
array = new Array(size ?? default_size);

This will only fall back to the default value if port/server/size is null/undefined. Much of the time, this is what you want.
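
A quick illustration of the difference, with a falsy but valid value:

const port = 0;           // 0 can be a perfectly valid value
console.log(port || 80);  // 80 - the falsy 0 is thrown away
console.log(port ?? 80);  // 0  - only null/undefined trigger the default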

However, you still may need to do proper validation, so if you used to do:

connection.port = validatePort(port) ? port : 80;

you should probably keep doing it.

Conclusion

If your target environment supports optional chaining and nullish coalescing, take advantage of them. Node.js 14 supports both.

Functional Programming is Slow – revisited

I have written before about Functional Programming with a rather negative standpoint (it sucks, it is slow). Those posts still get some readers, but they are a few years old, and I wanted to do some new benchmarks.

Please note:

  • This is written with JavaScript (and Node.js) in mind. I don’t know if these findings apply to other programming languages.
  • Performance is important, but it is far from everything. Apart from performance, there are both good and bad aspects of Functional Programming.

Basic Chaining

One of the most common ways to use functional programming (style) in JavaScript is chaining. It can look like this:

v = a.map(map_func).filter(filter_func).reduce(reduce_func)

In this case a is an array, and the three functions are applied in sequence to each element (except that reduce is not called on the elements that filter gets rid of). The return value of reduce (typically a single value) is stored in v.

  • What is the cost of this?
  • What are the alternatives?

I decided to calculate the value of pi by

  1. evenly distribute points in the rectangle with corners [0,0] and [1,1].
  2. for each point calculate the (squared) distance to the origin (a simple map)
  3. get rid of each point beyond distance 1.0 (a simple filter)
  4. count the number of remaining points (a simple reduce – although in this simple case it would be enough to check the length of the array)

The map, filter and reduce functions look like:

const pi_map_f = (xy) => {
  return xy.x * xy.x + xy.y * xy.y;
};
const pi_filter_f = (xxyy) => {
  return xxyy <= 1.0;
};
const pi_reduce_f = (acc /* ,xxyy */) => {
  return 1 + acc;
};

In chained functional code this looks like:

const pi_higherorder = (pts) => {
  return 4.0
       * pts.map(pi_map_f)
            .filter(pi_filter_f)
            .reduce(pi_reduce_f,0)
       / pts.length;
};

I could use the same three functions in a regular loop:

const pi_funcs = (pts) => {
  let i,v;
  let inside = 0;
  for ( i=0 ; i<pts.length ; i++ ) {
    v = pi_map_f(pts[i]);
    if ( pi_filter_f(v) ) inside = pi_reduce_f(inside,v);
  }
  return 4.0 * inside / pts.length;
};

I could also write everything in a single loop and function:

const pi_iterate = (pts) => {
  let i,p;
  let inside = 0;
  for ( i=0 ; i<pts.length ; i++ ) {
    p = pts[i];
    if ( p.x * p.x + p.y*p.y <= 1.0 ) inside++;
  }
  return 4.0 * inside / pts.length;
};

What about performance? Here are some results from a Celeron J1900 CPU and Node.js 14.15.0:

Points   Iterate (ms)   Funcs (ms)   Higher Order (ms)   Pi
10k                 8            8                  10   3.1428
40k                 3            4                  19   3.1419
160k                3            3                  47   3.141575
360k                6            6                 196   3.141711
640k               11           11                 404   3.141625
1000k              17           17                 559   3.141676
1440k              25           25                1160   3.14160278

There are some obvious observations to make:

  • Adding more points does not necessarily give a better result (160k seems to be best, so far)
  • All these are run in a single program, waiting 250ms between each test (to let GC and the optimizer run). Obviously it took until after 10k for the Node.js optimizer to get things quite optimal (40k is faster than 10k).
  • The cost of writing and calling named functions is zero. Iterate and Funcs are practically identical.
  • The cost of chaining (making arrays to use once only) is significant.

Obviously, whether this has any practical significance depends on how large the arrays you loop over are, and how often you do it. But let’s assume 100k is a practical size for your program (that is, for example, 100 events per day for three years). We are then talking about wasting 20-30ms every time we do a common map-filter-reduce-style loop. Is that much?

  • If it happens server side or client side, in a way that it affects user latency or UI refresh time, it is significant (especially since this loop is perhaps not the only thing you do)
  • If it happens server side, and often, this chaining choice will start eating up a significant part of your server side CPU time

You may have a faster CPU or a smaller problem. But the key point here is that you waste a significant amount of CPU cycles simply by choosing to write pi_higherorder rather than pi_funcs.

Different Node Versions

Here is the same thing (1000k points), executed with different versions of Node.js.

Node version   Iterate (ms)   Funcs (ms)   Higher Order (ms)
8.17.0                   11           11                 635
10.23.0                  11           11                 612
12.19.0                  11           11                 805
14.15.0                  18           19                 583
15.1.0                   17           19                 556

A few findings and comments on this:

  • Different Node versions show rather different performance
  • Although these results are stable on my machine, what you see here may not be valid for a different CPU or a different problem size (for 1440k points, Node version 8 is the fastest).
  • I have noted before that functional code gets faster, and iterative code slower, with newer versions of Node.

Conclusion

My conclusions are quite consistent with what I have found before.

  • Writing small, NAMED, testable, reusable, pure functions is good programming, and good functional programming. As you can see above, the overhead of using a function in Node.js is practically zero.
  • Chaining – or other functional programming practices, that are heavy on the memory/garbage collection – is expensive
  • Higher order functions (map, filter, reduce, and so on) are great when
    1. you have a named, testable, reusable function
    2. you actually need the resulting array, not just to use it once and throw it away
  • Anonymous functions fed directly into higher order functions have no advantages whatsoever (read here)
  • The code using higher order functions is often harder to
    1. debug, because you can’t just put debug outputs in the middle of it
    2. refactor, because you can’t just insert code in the middle
    3. use for more complex algorithms, because you are stuck with the primitive higher order functions, and sometimes they don’t easily allow you to do what you need

Feature Wish

JavaScript is hardly an optimal programming language for functional programming. One thing I miss is truly pure functions (functions with no side effects – especially no mutation of input data).

I have often seen people change input data in a map.

I believe (not being a JavaScript engine expert) that if Node.js knew that the functions passed to map, filter and reduce above were truly pure, it would allow for crazy optimizations, and the Higher Order scenario could be made as fast as the other ones. However, as it is now, Node.js can not get rid of the temporary arrays (created by map and filter), because of possible side effects (not present in my code).

I tried to write what Node.js could make of the code, if it knew it was pure:

const pi_allinone_f = (acc,xy) => {
  return acc + ( ( xy.x * xy.x + xy.y * xy.y <= 1.0 ) ? 1 : 0);
};

const pi_allinone = (pts) => {
  return 4.0
       * pts.reduce(pi_allinone_f,0)
       / pts.length;
};

However, this code is still 4-5 times slower than the regular loop.

All the code

Here is all the code, if you want to run it yourself.

const points = (n) => {
  const ret = [];
  const start = 0.5 / n;
  const step = 1.0 / n;
  let x, y;
  for ( x=start ; x<1.0 ; x+=step ) {
    for ( y=start ; y<1.0 ; y+=step ) {
      ret.push({ x:x, y:y });
    }
  }
  return ret;
};

const pi_map_f = (xy) => {
  return xy.x * xy.x + xy.y * xy.y;
};
const pi_filter_f = (xxyy) => {
  return xxyy <= 1.0;
};
const pi_reduce_f = (acc /* ,xxyy */) => {
  return 1 + acc;
};
const pi_allinone_f = (acc,xy) => {
  return acc + ( ( xy.x * xy.x + xy.y * xy.y <= 1.0 ) ? 1 : 0);
};

const pi_iterate = (pts) => {
  let i,p;
  let inside = 0;
  for ( i=0 ; i<pts.length ; i++ ) {
    p = pts[i];
    if ( p.x * p.x + p.y*p.y <= 1.0 ) inside++;
  }
  return 4.0 * inside / pts.length;
};

const pi_funcs = (pts) => {
  let i,v;
  let inside = 0;
  for ( i=0 ; i<pts.length ; i++ ) {
    v = pi_map_f(pts[i]);
    if ( pi_filter_f(v) ) inside = pi_reduce_f(inside,v);
  }
  return 4.0 * inside / pts.length;
};

const pi_allinone = (pts) => {
  return 4.0
       * pts.reduce(pi_allinone_f,0)
       / pts.length;
};

const pi_higherorder = (pts) => {
  return 4.0
       * pts.map(pi_map_f).filter(pi_filter_f).reduce(pi_reduce_f,0)
       / pts.length;
};

const pad = (s) => {
  let r = '' + s;
  while ( r.length < 14 ) r = ' ' + r;
  return r;
}

const funcs = {
  higherorder : pi_higherorder,
  allinone : pi_allinone,
  functions : pi_funcs,
  iterate : pi_iterate
};

const test = (pts,func) => {
  const start = Date.now();
  const pi = funcs[func](pts);
  const ms = Date.now() - start;
  console.log(pad(func) + pad(pts.length) + pad(ms) + 'ms ' + pi);
};

const test_r = (pts,fs,done) => {
  if ( 0 === fs.length ) return done();
  setTimeout(() => {
    test(pts,fs.shift());
    test_r(pts,fs,done);
  }, 1000);
};

const tests = (ns,done) => {
  if ( 0 === ns.length ) return done();
  const fs = Object.keys(funcs);
  const pts = points(ns.shift());
  test_r(pts,fs,() => {
    tests(ns,done);
  });
};

const main = (args) => {
  tests(args,() => {
    console.log('done');
  });
};

main([10,100,200,400,600,800,1000,1200]);

JavaScript: Fast Numeric String Testing

Sometimes I have strings that (should) contain numbers (like ‘31415’) but I want/need to test them before I use them. If this happens in a loop I could start asking myself questions about performance. And if it is a long loop on a Node.js server the performance may actually matter.

For the purpose of this post I have worked with positives (1,2,3,…), and I have written code that finds the largest valid positive in an array. Let’s say there are a few obvious options:

// Parse it and test it
const nv = +nc;
pos = Number.isInteger(nv) && 0 < nv;

// A regular expression
pos = /^[1-9][0-9]*$/.test(nc);

// A custom function
const strIsPositive = (x) => {
   if ( 'string' !== typeof x || '' === x ) return false;
   const min = 48; // 0
   const max = 57; // 9
   let   cc  = x.charCodeAt(0);
   if ( cc <= min || max < cc ) return false;
   for ( let i=1 ; i<x.length ; i++ ) {
     cc = x.charCodeAt(i);
     if ( cc < min || max < cc ) return false;
   }
   return true;
 }
pos = strIsPositive(nc);

Well, I wrote some benchmark code and ran it in Node.js, and there are some quite predictable findings.

There is no huge difference between the alternatives above, but there are differences (1ms for 10000 validations, on a 4th generation i5).

There is no silver bullet; the optimal solution depends on the situation:

If all you want is validation, it is wasteful to convert (+nc). A regular expression is faster, but you can easily beat a regular expression with a simple loop.

If most numbers are valid, converting to number (+nc) makes more sense. It is expensive to parse invalid values to NaN.

If you are going to use the number, converting to number (+nc) makes sense (if you convert only once).

The fastest solution, both for valid and invalid numbers, is to never convert to number (but use the custom function above to validate) and find the max using string compare.

if ( strIsPositive(nc) &&
     ( ( max.length < nc.length ) || ( max.length === nc.length && max < nc ) )
   )
  max = nc;

This is obviously not generally good advice.

Other numeric formats

My above findings are for strings containing positives. I have tested both code that only validates, and code that use the value by comparing it.

You may not have positives but:

  • Naturals, including 0, which creates a nastier regular expression but an easier loop (see the sketch after this list).
  • Integers, including negative values, which creates even nastier regular expressions.
  • Ranged integers, like [-256,255], which probably means you want to parse (+nc) right away.
  • Decimal values
  • Non standard formats (with , instead of . for decimal point, or with delimiters like spaces to improve readability)
  • Hex, scientific formats, whatever
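
As an illustration (these exact patterns are my assumption, not taken from the benchmark code), the regular expressions for the first cases could look like this:

// Positives: 1, 2, 3, ... (no leading zero)
const isPositive = (s) => /^[1-9][0-9]*$/.test(s);

// Naturals: 0, 1, 2, ... (a lone 0 is allowed, leading zeros are not)
const isNatural = (s) => /^(0|[1-9][0-9]*)$/.test(s);

// Integers: ..., -2, -1, 0, 1, 2, ...
const isInteger = (s) => /^(0|-?[1-9][0-9]*)$/.test(s);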

In the end readability is usually more important than performance.

Performance, Node.js & Sorting

In this post I will present two findings that I find strange:

  1. The performance of Node.js (V8?) has clearly gotten consistently worse with newer Node.js versions.
  2. The standard library sort (Array.prototype.sort()) is surprisingly slow, often slower than a simple textbook mergesort.

My findings in this article are based on running a simple program mergesort.js on different computers and different node versions.

You may also want to read this article about sorting in Node.js. It applies to V8 version 7.0, which should be used in Node.js V11.

The sorting algorithms

There are three sorting algorithms compared.

  1. Array.prototype.sort()
  2. mergesort(), a textbook mergesort
  3. mergesort_opt(), a mergesort that I put some effort into making faster

Note that mergesort is stable but generally considered not as fast as quicksort. As far as I understand from the above article, Node.js used to use quicksort (up to V10), and from V11 uses something better called Timsort.

My mergesort implementations (2) (3) are plain standard JavaScript. Nothing fancy whatsoever (I will post benchmarks using Node.js v0.12 below).
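
The exact code is in mergesort.js and is not reproduced here, but a plain textbook top-down mergesort of the kind meant by (2) could look roughly like this (my sketch, not the benchmarked code):

const mergesort = (arr, cmp) => {
  if ( arr.length < 2 ) return arr;
  const mid   = arr.length >> 1;
  const left  = mergesort(arr.slice(0, mid), cmp);
  const right = mergesort(arr.slice(mid), cmp);
  const out   = [];
  let i = 0, j = 0;
  // merge the two sorted halves, taking from left on ties to keep the sort stable
  while ( i < left.length && j < right.length ) {
    out.push(cmp(left[i], right[j]) <= 0 ? left[i++] : right[j++]);
  }
  while ( i < left.length )  out.push(left[i++]);
  while ( j < right.length ) out.push(right[j++]);
  return out;
};

// mergesort([3,1,2], (a,b) => a-b) gives [1,2,3]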

The data to be sorted

There are three types of data to be sorted.

  1. Numbers (Math.random()), compared with a-b;
  2. Strings (random numbers converted to strings), compared with the default compare function for sort(), and for my mergesort with simple a<b / a>b comparisons giving -1, 1 or 0
  3. Objects, containing two random numbers a=[0-9], b=[0-999999], compared with (a.a-b.a) || (a.b-b.b). In about one case in ten the value of b will matter; otherwise looking at the value of a is enough.

Unless otherwise written the sorted set is 100 000 elements.

On Benchmarks

Well, just a standard benchmark disclaimer: I do my best to measure and report objectively. There may be other platforms, CPUs, configurations, use cases, datatypes, or array sizes that give different results. The code is available for you to run.

I have run all tests several times and reported the best value. If anything, that should benefit the standard library (quick)sort, which can suffer from bad luck.

Comparing algorithms

Let’s start with the algorithms. This is Node V10 on different machines.

(ms)        ===== Numbers =====    ===== Strings =====    ===== Objects =====
            sort()  merge  m-opt    sort()  merge  m-opt    sort()  merge  m-opt
NUC i7          82     82     61       110     81     54        95     66     50
NUC i5         113    105    100       191    130     89       149     97     72
NUC Clrn       296    209    190       335    250    196       287    189    157
RPi v3        1886   1463   1205      2218   1711   1096      1802   1370    903
RPi v2         968   1330   1073      1781   1379    904      1218   1154    703

The RPi-v2-sort()-Numbers value stands out. It’s not a typo. But apart from that I think the pattern is quite clear: regardless of datatype and on different processors the standard sort() simply cannot match a textbook mergesort implemented in JavaScript.

Comparing Node Versions

Let’s compare different Node versions. This is on a NUC with an Intel i5 CPU (4th gen), running a 64-bit version of Ubuntu.

(ms)        ===== Numbers =====    ===== Strings =====    ===== Objects =====
            sort()  merge  m-opt    sort()  merge  m-opt    sort()  merge  m-opt
v11.13.0        84    107     96       143    117     90       140     97     71
v10.15.3       109    106     99       181    132     89       147     97     71
v8.9.1          85    103     96       160    133     86       122     99     70
v6.12.0         68     76     88       126     92     82        68     83     63
v4.8.6          51     66     89       133     93     83        45     77     62
v0.12.9         58     65     78       114     92     87        55     71     60

Not only is sort() getting slower; running “any” JavaScript is also slower. I have noticed this before. Can someone explain why this makes sense?

Comparing different array sizes

With the same NUC, Node V10, I try a few different array sizes:

(ms)        ===== Numbers =====    ===== Strings =====    ===== Objects =====
            sort()  merge  m-opt    sort()  merge  m-opt    sort()  merge  m-opt
10 000          10      9     11         8     12      6         4      7      4
15 000           8     15      7        13     14     11         6     22      7
25 000          15     35     12        40     27     15        11     25     18
50 000          35     56     34        66     57     37        51     52     30
100 000        115    107     97       192    138     88       164    101     72
500 000        601    714    658      1015    712    670       698    589    558

Admittedly, the smaller arrays show less difference, but it is also hard to measure small values with precision. So this is from the RPi v3 and smaller arrays:

(ms)        ===== Numbers =====    ===== Strings =====    ===== Objects =====
            sort()  merge  m-opt    sort()  merge  m-opt    sort()  merge  m-opt
5 000           34     57     30        46     59     33        29     52     26
10 000          75    129     64       100    130     74        63    104     58
20 000         162    318    151       401    290    166       142    241    132
40 000         378    579    337       863    623    391       344    538    316

Again, I think this looks quite consistently and remarkably bad for the standard library sort.

Testing throughput (Version 2)

I decided to measure throughput rather than time to sort (mergesort2.js). I thought perhaps the figures above are misleading when it comes to the cost of garbage collecting. So the new question is, how many shorter arrays (n=5000) can be sorted in 10s?

(count)     ===== Numbers =====    ===== Strings =====    ===== Objects =====
            sort()  merge  m-opt    sort()  merge  m-opt    sort()  merge  m-opt
v11.13.0      3192   2538   4744      1996   1473   2167      3791   2566   4822
v10.15.3      4733   2225   4835      1914   1524   2235      4911   2571   4811
RPi v3         282    176    300       144    126    187       309    186    330

What do we make of this? Well the collapse in performance for the new V8 Torque implementation in Node v11 is remarkable. Otherwise I notice that for Objects and Node v10, my optimized algorithm has no advantage.

I think my algorithms are heavier on the garbage collector (than the standard library sort()), and this is why they perform relatively worse when running for 10s in a row.

If it is so, I’d still prefer to pay that price. When my code waits for sort() to finish there is a user waiting for a GUI update, or for an API reply. I’d rather see a faster sort, and when the update/reply is complete there is usually plenty of idle time when the garbage collector can run.

Optimizing Mergesort?

I had some ideas for optimizing mergesort that I tried out.

Special handling of short arrays: clearly if you want to sort 2 elements, the entire mergesort function is heavier than a simple function that sorts two elements. The article about V8 sort indicated that they use insertion sort for arrays up to length 10 (I find this very strange). So I implemented special functions for 2-3 elements. This gave nothing. Same performance as calling the entire mergesort.

Less stress on the garbage collector: since my mergesort creates memory nodes that are discarded when sorting is complete, I thought I could keep those nodes for the next sort, to ease the load on the garbage collector. Very bad idea, performance dropped significantly.

Performance of cmp-function vs sort

The relevant sort functions are all O(K·n·log(n)) with different constants K. It is the K that I am measuring and discussing here. The differences are, after all, quite marginal. There is clearly another constant cost: the cost of the compare function. That seems to matter more than anything else. And in all cases above “string” is just a single string of 10 characters. If you have a more expensive compare function, the choice of sort implementation will matter even less.

Nevertheless, V8 is a single threaded environment and ultimately cycles wasted in sort() will result in overall worse performance. Milliseconds count.

Conclusions

Array.prototype.sort() is a critical component of the standard library. In many applications sorting may be the most expensive thing that takes place. I find it strange that it does not perform better than a simple mergesort implementation. I do not suggest you use my code, or start looking for better sort() implementations out there right away. But I think this is something for JavaScript programmers to keep in mind. However, the compare function probably matters more in most cases.

I find it strange that Node v11, with Timsort and V8 Torque, is not more of an improvement (admittedly, I didn’t test that one very much).

And finally I find it strange that Node.js performance seems to deteriorate with every major release.

Am I doing anything seriously wrong?

JavaScript Double Linked List

JavaScript has two very powerful and flexible built-in data structures: [] and {}. You can program rather advanced JavaScript for years without needing anything else.

Nevertheless, I had a conversation about the possible advantages of using a linked list (instead of an array). Linked lists are not very popular; Stroustrup himself has suggested they should be avoided. But what if you mostly do push(), pop(), shift() and unshift() and never access an item by its index? Higher order functions such as map(), reduce(), filter() and sort(), as well as iterators, should be just fine.

I decided to implement a Double Linked List in JavaScript, making it (mostly) compatible with Array, and do some tests on it. The code, both of the DoubleLinkedList itself and of the unit tests/benchmarks, is available.
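
The real implementation is in the linked code; a minimal sketch of the idea behind it (just push() and shift(), with names and details assumed) looks like this:

// Each node points to its neighbours, so adding/removing at either end is O(1)
class DoubleLinkedList {
  constructor() { this.head = null; this.tail = null; this.length = 0; }

  push(value) {              // append at the tail, like Array.prototype.push
    const node = { value, prev: this.tail, next: null };
    if ( this.tail ) this.tail.next = node; else this.head = node;
    this.tail = node;
    return ++this.length;
  }

  shift() {                  // remove from the head, like Array.prototype.shift
    if ( !this.head ) return undefined;
    const node = this.head;
    this.head = node.next;
    if ( this.head ) this.head.prev = null; else this.tail = null;
    this.length--;
    return node.value;
  }
}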

Disclaimer

This is a purely theoretical, academic and nerdy experiment. The DoubleLinkedList offers no advantages over the built-in Array, except for possible performance advantages in edge cases. The disadvantages compared to Array are:

  • Lower performance in some cases
  • Less features / limited API
  • Less tested and proven
  • An extra dependency, and possibly longer application loading time

So, my official recommendation is that you read this post and perhaps look at the code for learning purposes. But I really doubt you will use my code in production (although you are welcome to).

Benchmarks

Benchmarks are tricky. In this case there are three kinds of benchmarks:

  1. Benchmarks using array[i] to get the item at an index. This is horrible for the linked list. I wrote no such benchmarks.
  2. Benchmarks testing map(), reduce(), filter(), which I wrote but which consistently show no relevant or interesting differences between the built-in Array and my DoubleLinkedList (my code is essentially as fast as the standard library array code, which on one hand is impressive, and on the other hand is a reason not to use it).
  3. Benchmarks where my DoubleLinkedList does fine, mostly ones that depend heavily on push(), pop(), shift() and unshift().

The only thing I present below is (3). I have nothing for (1), and (2) shows nothing interesting.

The machines are, in order: a Hades Canyon i7-NUC, an old i5-NUC, a newer Celeron-NUC, an Acer Chromebook R13 (with an ARMv8 CPU), a Raspberry Pi v3, and a Raspberry Pi v2. The Chromebook runs ChromeOS, the i7 runs Windows, the rest run Linux.

My benchmarks use Math.random() to create test data. That was not very smart of me because the variation between test runs is significant. The below numbers (milliseconds) are the median value of running each test 41 times. You can see for yourself that the values are quite consistent.

The tested algorithms

The push(), pop(), shift(), unshift() tests use the array/list as a queue and push 250k “messages” through it, keeping the queue at roughly 10k messages.
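
A rough sketch of that queue-style test (the shape is my assumption, not the actual benchmark code; it only relies on the Array-like push/shift/length that the list exposes):

const pumpQueue = (queue, total, depth) => {
  for ( let i=0 ; i<total ; i++ ) {
    queue.push({ id: i });                      // producer end
    if ( queue.length > depth ) queue.shift();  // consumer end keeps ~depth messages queued
  }
  while ( queue.length > 0 ) queue.shift();     // drain what is left
};

pumpQueue([], 250000, 10000);  // Array version; pass a DoubleLinkedList for the other case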

The mergesort() test is a mergesort built on top of the data structures using push()/shift().

The sort() test is the standard Array.sort(), versus a mergesort implementation for DoubleLinkedList (it has less overhead than mergesort(), since it does not create new objects for every push()).

Benchmark result

                    Node 8   ================= Node 10 =================
(ms)                 NUCi7    NUCi7   NUCi5   NUC-C     R13   RPiV3   RPiV2
unshift/pop 250k
  Array                679      649    1420    1890    5216   11121    8582
  D.L.L.                 8       13      10      20      40     128     165
push/shift 250k
  Array                 37       31      31      49     143     388     317
  D.L.L.                10       12      10      19      44     115     179
mergesort 50k
  Array                247      190     300     466    1122    3509    3509
  D.L.L.                81       88     121     244     526    1195    1054
sort 50k
  Array                 53       55      59     143     416    1093     916
  D.L.L.                35       32      42      84     209     543     463

What do we make of this?

  • For array, push/shift is clearly faster than unshift/pop!
  • It is possible to implement a faster sort() than Array.sort() of the standard library. However, this may have little to do with my linked list (I may get an even better result if I base my implementation on Array).
  • I have seen this before with other Node.js code but not published it: the RPiV2 (ARMv7 @900MHz) is faster than the RPiV3 (ARMv8 @1200MHz).
  • I would have expected my 8th generation i7 NUC (NUC8i7HVK) to outperform my older 4th generation i5 NUC (D54250WYK), but there is not much of a difference.

More performance findings

One thing I thought could give good performance was a case like this:

x2 = x1.map(...).filter(...).reduce(...)

where every function creates a new Array just to be destroyed very soon. I implemented mapMutate and filterMutate for my DoubleLinkedList, which reuse existing list nodes. However, this gave very little. The cost of the temporary Arrays above seems to be practically insignificant.

However, for my Double Linked List:

dll_1 = DoubleLinkedList.from( some 10000 elements )
dll_1.sort()
dll_2 = DoubleLinkedList.from( some 10000 elements )

Now
dll_1.map(...).filter(...).reduce(...) // slower
dll_2.map(...).filter(...).reduce(...) // faster

So, reusing the list nodes, which I thought would be a cost saver, seems to produce cache misses instead.

Using the Library

If you feel like using the code you are most welcome. The tests run with Node.js, first the unit tests (quickly) and then the benchmarks (slower). As I wrote earlier, there is some Math.random() in the tests, and on rare occasions statistically unlikely events occur, making the tests fail (I will not make this mistake again).

The code itself is just for Node.js. There are no dependencies and it will require minimal work to adapt it to any browser environment of your choice.

The code starts with a long comment specifying what is implemented. Basically, you use it just as Array, with the exceptions/limitations listed. There are many limitations, but most reasonable uses should be fairly well covered.

Conclusion

It seems to make no sense to replace Array with a linked list in JavaScript (Stroustrup was right). If you are using Array as a queue, be aware that push/shift is much faster than unshift/pop. It would surprise me if push/pop is not much faster than unshift/shift for a stack.

Nevertheless, if you have a (large) queue/list/stack and all you do is push, pop, shift, unshift, map, filter, reduce and sort, go ahead.

There is also a concatMutate in my DoubleLinkedList. That one is very cheap, and if you for some reason do array.concat(array1, array2, array3) very often perhaps a linked list is your choice.

It should come as no surprise, but I was surprised that sort(), mergesort in my case, was so easy to implement on a linked list.

On RPiV2 vs RPiV3

I have written before, on several occasions, about how the 900MHz ARMv7 of the RPiV2 completely outperforms the 700MHz ARMv6 of the RPiV1. It is about 15 times faster, and it is not completely clear why the difference is so big (it is that big for JavaScript, not for typical C code).

The RPiV3 is not only 300MHz faster than the RPiV2, it also has a 64-bit ARMv8 CPU compared to the 32-bit ARMv7 CPU of the RPiV2. Yet the V3 delivers worse performance than the V2.

One reason could be that the RPi does not have that much RAM, and not that fast RAM either, and that the price of 64-bit is simply not worth it. For now, I have no other idea.

References

An article about sorting in V8: https://v8.dev/blog/array-sort. Very interesting read. But I tried Node version 11 that comes with V8 version 7, and the difference was… marginal at best.

Where to ‘use strict’ with Object.freeze()?

I have coded JavaScript for a short enough time to consider ‘use strict’ a mandatory and obvious feature of the language. I always use it, unless I forget to.

A while ago I became aware of Object.freeze(). I have been thinking about different ways to exploit this (strict) feature for a while, and I now have a very good use case: freeze input data in unit tests to ensure my tested functions don’t accidentally change it (pure functions are good, pure functions don’t change their input, and it is hard to really guarantee a function in JavaScript is pure).

Imagine I am writing a function that calculates the average and I have a test for it.

const averageOfArray1 = (a) => {
  let s = 0;
  for ( let i=0 ; i<a.length ; i++ ) s += a[i];
  return s/a.length;
};

describe('test avg', () => {
  it('should give the average value of 2', () => {
    const a = [1,2,3];
    assert.equal(2, averageOfArray1(a) );
  });
});

If averageOfArray mutates its input, it would be a serious bug, and the above test would not detect it. Let’s look at a different implementation:

const averageOfArray2 = (a) => {
  for ( let i=1 ; i<a.length ; i++ ) a[0] += a[i];
  return a[0]/a.length;
};

describe('test avg', () => {
  it('should give the average value of 2', () => {
    const a = [1,2,3];
    assert.equal(2, averageOfArray2(a) );
  });
});

Some genius “optimized” the function by eliminating an unnecessary variable (s), and the test still passes! However, if the test were written:

describe('test avg', () => {
  it('should give the average value of 2', () => {
    const a = Object.freeze([1,2,3]);
    assert.equal(2, averageOfArray2(a) );
  });
});

the tests would fail! Much better. How do the tests fail? This is what I get:

1) test avg
should give the average value of 2:

AssertionError [ERR_ASSERTION]: 2 == 0.3333333333333333
+ expected - actual
-2
+0.3333333333333333

So it appears that the first element [0] of the array was never changed, thus the return value of 0.3333. But no exception was thrown. If I instead ‘use strict’ for the entire code:

'use strict';

const assert = require('assert');

const averageOfArray2 = (a) => {
  for ( let i=1 ; i<a.length ; i++ ) a[0] += a[i];
  return a[0]/a.length;
};

describe('test avg', () => {
  it('should give the average value of 2', () => {
    const a = Object.freeze([1,2,3]);
    assert.equal(2, averageOfArray2(a));
  });
});

then I get:

1) test avg
should give the average value of 2:
TypeError: Cannot assign to read only property '0' of object '[object Array]'
at averageOfArray2 (avg.js:12:45)
at Context.it (avg.js:20:25)

which is what I really wanted.

So it APPEARS to me that without ‘use strict’ the frozen object is not changed, but changing it just fails silently. With ‘use strict’ I get an exception right away, which leads me to the question of where I can put ‘use strict’. This is what I found:

// 'use strict';  // GOOD

const assert = require('assert');

// 'use strict'; // BAD

const averageOfArray2 = (a) => {
  // 'use strict'; // GOOD
  let i;
  // 'use strict'; // BAD
  for ( i=1 ; i<a.length ; i++ ) a[0] += a[i];
  return a[0]/a.length;
};

describe('test avg', () => {
  // 'use strict'; // BAD
  it('should give the average value of 2', () => {
    const a = Object.freeze([1,2,3]);
    assert.equal(2, averageOfArray2(a));
  });
});

That is, ‘use strict’ should be in place where the violation actually takes place. And ‘use strict’ must be placed first in whatever function it is placed in, otherwise it is silently ignored! This is probably well known to everyone else, but it was not to me.

Conclusion

Object.freeze() is very useful for improved unit tests. However, you should use it together with a properly placed ‘use strict’, and that is in the function being tested (not only in the unit test).

And note, if you have done Object.freeze in a unit test, and someone refactors the tested function in a way that it both:

  1. Mutates the frozen object
  2. Removes or moves ‘use strict’ to an invalid place

your unit tests may still pass, even though the function is now very dangerous.

Best way to write compare-functions

The workhorse of many (JavaScript) programs is sort(). When you want to sort objects (or numbers, actually) you need to supply a compare-function. Those are nice functions because they are very testable and reusable, but sorting is also a bit expensive (perhaps the most expensive thing your program does) so you want them fast.

For the rest of this article I will assume we are sorting some Order objects based on status, date and time (all strings).

The naive way to write this is:

function compareOrders1(a,b) {
  if ( a.status < b.status ) return -1;
  if ( a.status > b.status ) return 1;
  if ( a.date < b.date ) return -1;
  if ( a.date > b.date ) return 1;
  if ( a.time < b.time ) return -1;
  if ( a.time > b.time ) return 1;
  return 0;
}

There are some things about this that are just not appealing: it is too verbose, there is a risk of a typo, and it is not too easy to read.

Another option follows:

function cmpStrings(a,b) {
  if ( a < b ) return -1;
  if ( a > b ) return 1;
  return 0;
}

function compareOrders2(a,b) {
  return cmpStrings(a.status,b.status)
      || cmpStrings(a.date  ,b.date  )
      || cmpStrings(a.time  ,b.time  );
}

Note that the first function (cmpStrings) is highly reusable, so this is shorter code. However, there is still some repetition, so I tried:

function cmpProps(a,b,p) {
  return cmpStrings(a[p], b[p]);
}

function compareOrders3(a,b) {
  return cmpProps(a,b,'status')
      || cmpProps(a,b,'date')
      || cmpProps(a,b,'time');
}

There is something nice about not repeating status, date and time, but there is something not so appealing about quoting them as strings. If you want to go more functional you can do:

function compareOrders4(a,b) {
  function c(p) {
    return cmpStrings(a[p],b[p]);
  }
  return c('status') || c('date') || c('time');
}

To my taste, that is a bit too functional and obscure. Finally, since it comes to mind and some people may suggest it, you can concatenate strings, like:

function compareOrders5(a,b) {
  return cmpStrings(
    a.status + a.date + a.time,
    b.status + b.date + b.time
  );
}

Note that in case fields “overlap” and/or have different length, this could give unexpected results.
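
A made-up example of this: the field-by-field and the concatenating compare functions disagree about the order.

const a = { status: 'b',  date: 'x', time: '' };
const b = { status: 'ba', date: 'a', time: '' };

compareOrders2(a, b);  // -1: a sorts before b, since 'b' < 'ba'
compareOrders5(a, b);  //  1: b sorts before a, since 'bx' > 'baa'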

Benchmarks

I tried the five different compare-functions on two different machines, with slightly different parameters, and got this kind of result (i5 N=100000, ARM N=25000).

In these tests I used few unique values of status and date to often hit the entire compare function.

(ms)     i5     i5    ARM
#1      293    354    507
#2      314    351    594
#3      447    506   1240
#4      509    541   1448
#5      866    958   2492

This is quite easy to understand. #2 does exactly what #1 does and the function overhead is eliminated by the JIT. #3 is trickier for the JIT since a string is used to read a property. That is true also for #4, which also requires a function to be generated. #5 needlessly builds two concatenated strings, when often comparing just the first fields would have been enough.

Conclusion & Recommendation

My conclusion is that #3 may be the best choice, despite being slightly slower. I find #2 clearly preferable to #1, and I think #4 and #5 should be avoided.

Lambda Functions considered Harmful

Decades ago engineers wrote computer programs in ways that modern programmers scorn. We learn that functions were long, global variables were used frequently and changed everywhere, variable naming was poor and gotos jumped across the program in ways that were impossible to understand. It was all harmful.

Elsewhere mathematicians were improving on Lisp and functional programming was developed: pure, stateless, provable code focusing on what to do rather than how to do it. Functions became first class citizens and they could even be anonymous lambda functions.

Despite the apparent conflict between object oriented, functional and imperative programming there are some universally good things:

  • Functions that are not too long
  • Functions that do one thing well
  • Functions that have no side effects
  • Functions that can be tested, and that also are tested
  • Functions that can be reused, perhaps even being general
  • Functions and variables that are clearly named

So, how are we doing?

Comparing different styles
I read code and I talk to people who have different opinions about what is good and bad code. I decided to implement the same thing following different principles and discuss the different options. I particularly want to explore different ways to do functional programming.

My language of choice is JavaScript because it allows different styles, it requires quite little code to be written, and many people should be able to read it.

My artificial problem is that I have two arrays of N numbers. One number from each array can be added in NxN different ways. How many of these sums are prime? That is, for N=2, if I have [10,15] and [2,5] I get [12,15,17,20] of which one number (17) is prime. In all code below I decide if a number is prime in the same simple way.

Old imperative style (imperative)
The old imperative style would use variables and loops. If I had goto in JavaScript I would use goto instead of setting a variable (p) before I break out of the inner loop. This code allows for nothing to be tested nor reused, although the function itself is testable, reusable and pure (for practical purposes and correct input, just as all the other examples).

  const primecount = (a1,a2) => {
    let i, j;
    let d, n, p;
    let retval = 0;


    for ( i=0 ; i<a1.length ; i++ ) {
      for ( j=0 ; j<a2.length ; j++ ) {
        n = a1[i] + a2[j];
        p = 1;
        for ( d=2 ; d*d<=n ; d++ ) {
          if ( 0 === n % d ) {
            p = 0;
            break;
          }
        }
        retval += p;
      }
    }
    return retval;
  }

Functional style with lambda-functions (lambda)
The functional programming equivalent would look like the below code. I have focused on avoiding declaring variables (which would lead to a mutable state) and rather using the higher order function reduce to iterate over the two lists. This code also allows for no parts to be tested or reused. In a few lines of code there are three unnamed functions, none of them trivial.

  const primecount = (a1,a2) => {
    return a1.reduce((sum1,a1val) => {
      return sum1 + a2.reduce((sum2,a2val) => {
        return sum2 + ((n) => {
          for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
          return 1;
        })(a1val+a2val);
      }, 0);
    }, 0);
  };

Imperative style with separate test function (imperative_alt)
The imperative code can be improved by breaking out the prime test function. The advantage is clearly that the prime function can be modified in a more clean way, and it can be tested and reused. Also note that the usefulness of goto disappeared because return fulfills the same task.

  const is_prime = (n) => {
    for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
    return 1;
  };

  const primecount = (a1,a2) => {
    let retval = 0;
    for ( let i=0 ; i<a1.length ; i++ )
      for ( let j=0 ; j<a2.length ; j++ )
        retval += is_prime(a1[i] + a2[j]);
    return retval;
  };

  const test = () => {
    if ( 1 !== is_prime(19) ) throw new Error('is_prime(19) failed');
  };

Functional style with lambda and separate test function (lambda_alt)
In the same way, the reduce+lambda-code can be improved by breaking out the prime test function. That function, but nothing else, is now testable and reusable.

  const is_prime = (n) => {
    for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
    return 1;
  };

  const primecount = (a1,a2) => {
    return a1.reduce((sum1,a1val) => {
      return sum1 + a2.reduce((sum2,a2val) => {
        return sum2 + is_prime(a1val+a2val);
      }, 0);
    }, 0);
  };

  const test = () => {
    if ( 1 !== is_prime(19) ) throw new Error('is_prime(19) failed');
  };

I think I can do better than any of the four above examples.

Functional style with reduce and named functions (reducer)
I don’t need to feed anonymous functions to reduce: I can give it named, testable and reusable functions instead. Now a challenge with reduce is that it is not very intuitive. filter can be used with any has* or is* function that you may already have. map can be used with any x_to_y function or some get_x_from_y getter or reader function that are also often useful. sort requires a cmpAB function. But reduce? I decided to name the below functions that are used with reduce reducer_*. It works quite nicely. The first one, reducer_count_primes, simply counts primes in a list. That is (re)usable and testable all in itself. The next function, reducer_count_primes_for_offset, is less likely to be generally reused (with offset=1 it considers 12+1 to be prime, but 17+1 is not), but it makes sense and it can be tested. Doing the same trick one more time with reducer_count_primes_for_offset_array and we are done. These functions may never be reused. But they can be tested and that is often a great advantage during development. You can build up your program part by part and every step is a little more potent but still completely pure and testable (I remember this from my Haskell course long ago). This is how to solve hard problems using test driven development and to have all tests in place when you are done.

  const is_prime = (n) => {
    for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
    return 1;
  };

  const reducer_count_primes = (s,n) => {
    return s + is_prime(n);
  };

  const reducer_count_primes_for_offset = (o) => {
    return (s,n) => { return reducer_count_primes(s,o+n); };
  };

  const reducer_count_primes_for_offset_array = (a) => {
    return (s,b) => { return s + a.reduce(reducer_count_primes_for_offset(b), 0); };
  };

  const primecount = (a1,a2) => {
    return a1.reduce(reducer_count_primes_for_offset_array(a2), 0);
  };

  const test = () => {
    if ( 1 !== [12,13,14].reduce(reducer_count_primes, 0) )
      throw new Error('reducer_count_primes failed');
    if ( 1 !== [9,10,11].reduce(reducer_count_primes_for_offset(3), 0) )
      throw new Error('reducer_count_primes_for_offset failed');
    if ( 2 !== [2,5].reduce(reducer_count_primes_for_offset_array([8,15]),0) )
      throw new Error('reducer_count_primes_for_offset_array failed');
  };

Using recursion (recursive)
Personally I like recursion. I think it is easier to use than reduce, and it is great for async code. The bad thing with recursion is that your stack will eventually get full (if you don’t know what I mean, try my code – available below) for recursion depths that are far from unrealistic. My problem can be solved in the same step-by-step test-driven way using recursion.

  const is_prime = (n) => {
    for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
    return 1;
  };

  const primes_for_offset = (a,o,i=0) => {
    if ( i === a.length )
      return 0;
    else
      return is_prime(a[i]+o) + primes_for_offset(a,o,i+1);
  }

  const primes_for_offsets = (a,oa,i=0) => {
    if ( i === oa.length )
      return 0;
    else
      return primes_for_offset(a,oa[i]) + primes_for_offsets(a,oa,i+1);
  }

  const primecount = (a1,a2) => {
    return primes_for_offsets(a1,a2);
  };

  const test = () => {
    if ( 2 !== primes_for_offset([15,16,17],2) )
      throw new Error('primes_with_offset failed');
  };

Custom Higher Order Function (custom_higher_order)
Clearly reduce is not a perfect fit for my problem since I need to nest it. What if I had a reduce-like function that produced the sum of all NxN possible pairs from two arrays, given a custom value function? Well, that would be quite great and it is not particularly hard either. In my opinion this is a very functional approach (despite being implemented with for-loops). All the functions written are independently reusable in a way not seen in the other examples. The problem with higher order functions is that they are pretty abstract, so they are hard to name, and they need to be general enough to ever be reused for practical purposes. Nevertheless, if I see one right away, I can write it. But I don’t spend time inventing generic stuff instead of solving the actual problem at hand.

  const is_prime = (n) => {
    for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
    return 1;
  };

  const combination_is_prime = (a,b) => {
    return is_prime(a+b);
  };

  const sum_of_combinations = (a1,a2,f) => {
    let retval = 0;
    for ( let i=0 ; i<a1.length ; i++ )
      for ( let j=0 ; j<a2.length ; j++ )
        retval += f(a1[i],a2[j]);
    return retval;
  };

  const primecount = (a1,a2) => {
    return sum_of_combinations(a1,a2,combination_is_prime);
  };

  const test = () => {
    if ( 1 !== is_prime(19) )
      throw new Error('is_prime(19) failed');
    if ( 0 !== combination_is_prime(5,7) )
       throw new Error('combination_is_prime(5,7) failed');
    if ( 1 !== sum_of_combinations([5,7],[7,9],(a,b)=> { return a===b; }) )
       throw new Error('sum_of_combinations failed');
  };

Lambda Functions considered harmful?
Just as there are many bad and some good applications for goto, there are both good and bad uses for lambdas.

I actually don’t know if you – the reader – agree with me that the second example (lambda) offers no real improvement over the first example (imperative). On the contrary, it is arguably a more complex thing conceptually to nest anonymous functions than to nest for loops. I may have done the lambda-example wrong, but there is much code out there written in that style.

I think the beauty of functional programming is the testable and reusable aspects, among other things. Long, or even nested, lambda functions offer no improvement over old spaghetti code there.

All the code and performance
You can download my code and run it using any recent version of Node.js:

$ node functional-styles-1.js 1000

The argument (1000) is N, and if you double N execution time should quadruple. I did some benchmarks and your results may vary depending on plenty of things. The below figures are just one run for N=3000, but nesting reduce clearly comes at a cost. As always, if what you do inside reduce is quite expensive the overhead is negligible. But using reduce (or any of the built-in higher order functions) for the innermost and tightest loop is wasteful.

 834 ms : imperative
 874 ms : custom_higher_order
 890 ms : recursive
 896 ms : imperative_alt
1015 ms : reducer
1018 ms : lambda_alt
1109 ms : lambda

Other findings on this topic
Functional Programming Sucks


Vue components in Angular

I have an application written in AngularJS (v1) that I keep adding things to. Nowadays I prefer to write new code for Vue.js rather than AngularJS but rewriting the entire AngularJS application is out of the question.

However, when the need shows up for a new Page (controller in AngularJS) it is quite simple to write a Vue-component instead.

The AngularJS-html looks like this:

<div ng-if="page.showVue" id="{{ page.getVueId() }}"></div>

You may not have exactly “page” but if you have an AngularJS-application you know how to do this.

Your parent Angular controller needs to initiate Vue.

page.showVue = true;
var vue      = null;
var vueid    = null;

page.getVueId = function() {
    if ( !vueid ) {
        vueid = 'my_vue_component_id';
        var vueload = {
            el: '#' + vueid,
            template : '<my_vue_component />',
            data : {}
        };
        $timeout(function() {
            vue = new Vue(vueload);
        });
    }
    return vueid;
};

At some point you may navigate away from this vue page and then you can run the code:

vue.$destroy();
page.showVue = false;
vue          = null;
vueid        = null;

The way everything works is that when Angular wants to “show Vue” it sets page.showVue=true. This in turn activates the div, which needs an ID. The call to page.getVueId() will generate a Vue component (once), but initiate it only after Angular has shown the parent div with the correct id (thanks to $timeout).

You may use a router or have several different Vue-pages in your Angular-application and you obviously need to adjust my code above for your purposes (so every id is unique, and every component is initiated once).

I suppose (but I have not tried) that it is perfectly fine to have several different Vue-components mounted on different places in your Angular application. But I think you are looking for trouble if you want Vue to use (be a parent for) Angular controllers or directives (as children).

Vue.js is small enough that this will come at a quite acceptable cost for your current Angular application and it allows you to write new pages or parts in Vue in an existing AngularJS application.

Webpack: the shortest tutorial

So, you have some JavaScript that requires other JavaScript using require, and you want to pack all the files into one. Install webpack:

$ npm install webpack webpack-cli

These are my files (a main file with two dependencies):

$ cat main.js 

var libAdd = require('./libAdd.js');
var libMult = require('./libMult.js');

console.log('1+2x2=' + libAdd.calc(1, libMult.calc(2,2)));


$ cat libAdd.js 

exports.calc = (a,b) => { return a + b; };


$ cat libMult.js 

exports.calc = (a,b) => { return a * b; };

To pack this:

$ ./node_modules/webpack-cli/bin/cli.js --mode=none main.js
Hash: 639616969f77db2f336a
Version: webpack 4.26.0
Time: 180ms
Built at: 11/21/2018 7:22:44 PM
  Asset      Size  Chunks             Chunk Names
main.js  3.93 KiB       0  [emitted]  main
Entrypoint main = main.js
[0] ./main.js 141 bytes {0} [built]
[1] ./libAdd.js 45 bytes {0} [built]
[2] ./libMult.js 45 bytes {0} [built]

and I have my bundle in dist/main.js. This bundle works just like the original main.js:

$ node main.js 
1+2x2=5
$ node dist/main.js 
1+2x2=5

That is all I need to know about Webpack!

Background
I like the old way of building web applications: including every script with a src-tag. However, occasionally I want to use code I don’t write myself, and more and more often it comes in a format that I cannot easily include with a src-tag. Webpack is a/the way to make it “just” a JavaScript file that I can do what I want with.