I never write about great new JavaScript features on this blog. That is because there is nothing you can not do without them, and I prefer writing compatible code (code that runs in Internet Explorer).
However some code is backend-only, and if Node.js supports a feature I can use it.
So, here are two great new things in JavaScript that I will start using in Node.js.
Optional Chaining
It is very common to see JavaScript code like:
price = inventory.articles.apple.price;
If inventory, articles or apple is null or undefined, this will throw an error. One common way to guard against that is something like:
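price = inventory && inventory.articles && inventory.articles.apple
      ? inventory.articles.apple.price
      : undefined;
With optional chaining you can instead write:
price = inventory?.articles?.apple?.price;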
Who wouldn’t want to do that? Well, you need to be aware that it will “fail silently” if something is missing, so you should probably not use it when you don’t expect anything to be null or undefined, and not without handling the “error” properly. If you replace . with ?. everywhere in your code, you will just detect your errors later, when your code gives the wrong result rather than crashing.
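What I mean by handling it properly is something like:
const price = inventory?.articles?.apple?.price;
if ( undefined === price ) {
  // deal with the missing article here, instead of letting the wrong
  // value propagate through the rest of the program
}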
Nullish Coalescing
It is very common to write code like:
connection.port = port || 80;
connection.host = server || 'www.bing.com';
array = new Array(size || default_size);
The problem is that sometimes a “falsy” value (0, false, the empty string) is a valid value, but it will be replaced by the default value. The port above can not be set to 0, and the host can not be set to '' (the empty string).
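For example:
connection.port = 0 || 80;               // 0 is falsy, so the port becomes 80
connection.host = '' || 'www.bing.com';  // the empty string is falsy, so the default wins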
I often do things like:
connection.port = 'number' === typeof port ? port : 80;
connection.host = null == server ? 'www.bing.com' : server;
However, there is now a better way in JavaScript:
connection.port = port ?? 80;
connection.host = server ?? 'www.bing.com';
array = new Array(size ?? default_size);
This will only fall back to the default value if port/server/size is null/undefined. Much of the time, this is what you want.
However, you still may need to do proper validation, so if you used to do:
connection.port = validatePort(port) ? port : 80;
you should probably keep doing it.
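Such a validatePort does not have to be complicated; a minimal sketch could be:
// a sketch – adapt to what counts as a valid port in your context
const validatePort = (port) =>
  Number.isInteger(port) && 0 < port && port <= 65535;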
Conclusion
If your target environment supports optional chaining and nullish coalescing, take advantage of them. Node.js 14 supports both.
I have written before about Functional Programming, from a rather negative standpoint (it sucks, it is slow). Those posts still have some readers, but they are a few years old, and I wanted to do some new benchmarks.
Please note:
This is written with JavaScript (and Node.js) in mind. I don’t know if these findings apply to other programming languages.
Performance is important, but it is far from everything. Apart from performance, there are both good and bad aspects of Functional Programming.
Basic Chaining
One of the most common ways to use functional programming (style) in JavaScript is chaining. It can look like this:
v = a.map(map_func).filter(filter_func).reduce(reduce_func)
In this case a is an array, and the three functions are called sequentially on each element (except that reduce is not called on the elements that filter gets rid of). The return value of reduce (typically a single value) is stored in v.
What is the cost of this?
What are the alternatives?
I decided to calculate the value of pi by
evenly distribute points in the rectangle with corners (0,0) and (1,1)
for each point calculate the (squared) distance to the origin (a simple map)
get rid of each point beyond distance 1.0 (a simple filter)
count the number of remaining points (a simple reduce – although in this simple case it would be enough to check the length of the array)
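The three small functions, and the chained (higher order) version, can be sketched like this:
const pi_map_f    = (p) => p.x * p.x + p.y * p.y;   // squared distance to the origin
const pi_filter_f = (d) => d <= 1.0;                 // keep points inside the circle
const pi_reduce_f = (inside, d) => inside + 1;       // count the points that are left

const pi_higherorder = (pts) => {
  const inside = pts.map(pi_map_f).filter(pi_filter_f).reduce(pi_reduce_f, 0);
  return 4.0 * inside / pts.length;
};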
I could use the same three functions in a regular loop:
const pi_funcs = (pts) => {
let i,v;
let inside = 0;
for ( i=0 ; i<pts.length ; i++ ) {
v = pi_map_f(pts[i]);
if ( pi_filter_f(v) ) inside = pi_reduce_f(inside,v);
}
return 4.0 * inside / pts.length;
};
I could also write everything in a single loop and function:
const pi_iterate = (pts) => {
let i,p;
let inside = 0;
for ( i=0 ; i<pts.length ; i++ ) {
p = pts[i];
if ( p.x * p.x + p.y*p.y <= 1.0 ) inside++;
}
return 4.0 * inside / pts.length;
};
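The points are just evenly spaced over the square; generating them, and timing one run, can look something like this (just a sketch):
const make_points = (side) => {
  const pts = [];
  for ( let x = 0 ; x < side ; x++ ) {
    for ( let y = 0 ; y < side ; y++ ) {
      pts.push({ x: (x + 0.5) / side, y: (y + 0.5) / side });
    }
  }
  return pts;
};

const pts = make_points(400);            // 400 x 400 = 160k points
const start = Date.now();
console.log(pi_higherorder(pts), (Date.now() - start) + 'ms');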
What about performance? Here are some results from a Celeron J1900 CPU and Node.js 14.15.0:
Points     Iterate (ms)   Funcs (ms)   Higher Order (ms)   Pi
10k        8              8            10                  3.1428
40k        3              4            19                  3.1419
160k       3              3            47                  3.141575
360k       6              6            196                 3.141711
640k       11             11           404                 3.141625
1000k      17             17           559                 3.141676
1440k      25             25           1160                3.14160278
There are some obvious observations to make:
Adding more points does not necessarily give a better result (160k seems to be best, so far)
All tests are run in a single program, waiting 250ms between each test (to let the GC and the optimizer run). Obviously it took until after the 10k run for the Node.js optimizer to get things quite optimal (40k is faster than 10k).
The cost of writing and calling named functions is zero. Iterate and Funcs are practically identical.
The cost of chaining (making arrays to use once only) is significant.
Whether this has any practical significance obviously depends on how large the arrays you loop over are, and how often you do it. But let’s assume 100k is a practical size for your program (that is, for example, 100 events per day for three years). We are then talking about wasting 20-30ms every time we do a common map-filter-reduce-style loop. Is that much?
If it happens server side or client side, in a way that it affects user latency or UI refresh time, it is significant (especially since this loop is perhaps not the only thing you do)
If it happens server side, and often, this chaining choice will start eating up a significant part of your server side CPU time
You may have a faster CPU or a smaller problem. But the key point here is that you waste a significant amount of CPU cycles simply by choosing to write pi_higherorder rather than pi_funcs.
Different Node Versions
Here is the same thing, executed with different versions of node.
Node version (1000k points)   Iterate (ms)   Funcs (ms)   Higher Order (ms)
8.17.0                        11             11           635
10.23.0                       11             11           612
12.19.0                       11             11           805
14.15.0                       18             19           583
15.1.0                        17             19           556
A few findings and comments on this:
Different Node.js versions show rather different performance
Although these results are stable on my machine, what you see here may not be valid for a different CPU or a different problem size (for 1440k points, node version 8 is the fastest).
I have noted before that functional code gets faster, and iterative code slower, with newer versions of Node.js.
Conclusion
My conclusions are quite consistent with what I have found before.
Writing small, NAMED, testable, reusable, pure functions is good programming, and good functional programming. As you can see above, the overhead of using a function in Node.js is practically zero.
Chaining – or other functional programming practices that are heavy on memory/garbage collection – is expensive
Higher order functions (map, filter, reduce, and so on) are great when
you have a named, testable, reusable function
you actually need the result, not just for using it once and throwing it away
Anonymous functions fed directly into higher order functions have no advantages whatsoever (read here); see the example after this list
The code using higher order functions is often harder to
debug, because you can’t just put debug outputs in the middle of it
refactor, because you can’t just insert code in the middle
use for more complex algorithms, because you are stuck with the primitive higher order functions, and sometimes they don’t easily allow you to do what you need
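To illustrate the point about anonymous functions, compare (a sketch, reusing the pi example from above):
// anonymous functions fed directly into the chain – nothing here can be
// unit tested or reused, and it is no shorter
const inside = pts
  .map((p) => p.x * p.x + p.y * p.y)
  .filter((d) => d <= 1.0)
  .reduce((count) => count + 1, 0);

// the same chain with named functions – each piece is testable and reusable
const inside2 = pts.map(pi_map_f).filter(pi_filter_f).reduce(pi_reduce_f, 0);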
Feature Wish
JavaScript is hardly an optimal programming language for functional programming. One thing I miss is truly pure functions (functions with no side effects – especially no mutation of input data).
I have often seen people change input data in a map.
I believe (not being a JavaScript engine expert) that if Node.js knew that the functions passed to map, filter and reduce above were truly pure, it would allow for crazy optimizations, and the Higher Order scenario could be made as fast as the other ones. However, as it is now, Node.js can not get rid of the temporary arrays (created by map and filter), because of possible side effects (not present in my code).
I tried to write what Node.js could make of the code, if it knew it was pure:
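// a sketch: roughly the funcs loop again, just without the temporary
// arrays that map and filter create, since the engine would know that
// pi_map_f, pi_filter_f and pi_reduce_f have no side effects
const pi_higherorder_fused = (pts) => {
  let inside = 0;
  for ( let i = 0 ; i < pts.length ; i++ ) {
    const v = pi_map_f(pts[i]);
    if ( pi_filter_f(v) ) inside = pi_reduce_f(inside, v);
  }
  return 4.0 * inside / pts.length;
};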