Tag Archives: Performance

JavaScript: Fast Numeric String Testing

Sometimes I have strings that (should) contain numbers (like ‘31415’) but I want/need to test them before I use them. If this happens in a loop I could start asking myself questions about performance. And if it is a long loop on a Node.js server the performance may actually matter.

For the purpose of this post I have worked with positives (1,2,3,…), and I have written code that finds the largest valid positive in an array. Let's say there are a few obvious options:

// Parse it and test it
const nv = +nc;
pos = Number.isInteger(nv) && 0 < nv;

// A regular expression
pos = /^[1-9][0-9]*$/.test(nc);

// A custom function
const strIsPositive = (x) => {
   if ( 'string' !== typeof x || '' === x ) return false;
   const min = 48; // char code of '0'
   const max = 57; // char code of '9'
   let   cc  = x.charCodeAt(0);
   // the first character must be 1-9 (a positive has no leading zero)
   if ( cc <= min || max < cc ) return false;
   // the remaining characters must be 0-9
   for ( let i=1 ; i<x.length ; i++ ) {
     cc = x.charCodeAt(i);
     if ( cc < min || max < cc ) return false;
   }
   return true;
 }
pos = strIsPositive(nc);

Well, I wrote some benchmark code and ran it in Node.js, and there are some quite predictable findings.

None of the alternatives above differ hugely, but there are differences (about 1ms for 10 000 validations, on a 4th generation i5).
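
For reference, a benchmark loop along these lines might look roughly like this (a sketch, not my exact benchmark code; the timing method, array contents and function names are assumptions):

// Sketch: time how long it takes to find the largest valid positive
// in an array of numeric strings, using one of the validations above.
function benchmark(strIsValid, strings) {
  const start = process.hrtime.bigint();   // requires a reasonably new Node.js
  let max = 0;
  for (const nc of strings) {
    if (strIsValid(nc)) {
      const nv = +nc;                      // convert only after validation
      if (max < nv) max = nv;
    }
  }
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`max=${max} elapsed=${elapsedMs}ms`);
}

// Example usage with the regular expression option:
// benchmark((nc) => /^[1-9][0-9]*$/.test(nc), arrayOfStrings);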

There is no silver bullet; the optimal solution depends on your situation.

If all you want is validation, it is wasteful to convert (+nc). A regular expression is faster, but you can easily beat a regular expression with a simple loop.

If most numbers are valid, converting to number (+nc) makes more sense. It is expensive to parse invalid values to NaN.

If you are going to use the number, converting to number (+nc) makes sense (if you convert only once).

The fastest solution, both for valid and invalid numbers, is to never convert to number (but use the custom function above to validate) and find the max using string compare.

if ( strIsPositive(nc) &&
     ( ( max.length < nc.length ) || ( max.length === nc.length && max < nc ) )
   )
  max = nc;

This is obviously not generally good advice.

Other numeric formats

My above findings are for strings containing positives. I have tested both code that only validates, and code that use the value by comparing it.

You may not have positives but:

  • Naturals, including 0, which creates a nastier regular expression but an easier loop.
  • Integers, including negative values, which create even nastier regular expressions (see the sketch after this list).
  • Ranged integers, like [-256,255], which probably means you want to parse (+nc) right away.
  • Decimal values
  • Non standard formats (with , instead of . for decimal point, or with delimiters like spaces to improve readability)
  • Hex, scientific formats, whatever
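
For the naturals and integers cases, the regular expressions might look something like this (a sketch; exactly which formats to accept, for example leading zeros or a minus on zero, is my assumption):

// Positives: 1, 2, 3, ... (no leading zero)
const rePositive = /^[1-9][0-9]*$/;

// Naturals: 0, 1, 2, ... (a single 0, or a positive with no leading zero)
const reNatural  = /^(0|[1-9][0-9]*)$/;

// Integers: ..., -2, -1, 0, 1, 2, ... (optional minus, but not -0)
const reInteger  = /^(0|-?[1-9][0-9]*)$/;

console.log(reNatural.test('0'));    // true
console.log(reInteger.test('-31'));  // true
console.log(reInteger.test('-0'));   // false, by choice in this sketch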

In the end readability is usually more important than performance.

Stress testing with Raspberry Pi

I have a system – a micro service architecture platform – built on Node.js. It can run on a single computer or distributed. It is a quite small system but quite critical that it works correctly.

Under what circumstances would the system fail to work correctly? How much load can it handle? How does it behave under too heavy load?

Stress testing is difficult, and expensive. Ideally you have plenty of test clients simulating realistic usage. It can be done, but often not easily. A simple and cheap option is to run the system on less resources.

My system used to run perfectly on a Raspberry Pi. The tests work fine. I have also kept the integration tests working (although there have been timing issues). However, the other day I tried to restore production data to the Raspberry Pi, and it failed to run properly. The problems were:

  • High latency and timeouts
  • Heavy swapping
  • Escalating retries, making the situation worse

The last point is particularly interesting. Error handling is designed for stability and recovery, but it risks increasing the total load, making the system even more unstable.

I did make the system work on an RPi again, and in doing so I learnt about real problems and fixed them. It is an interesting exercise to find problems in a system that does not work properly, and it is a practical way to “measure first, optimize second”.

Does your system work, with a reasonable amount of production data, on a Raspberry Pi?

Performance, Node.js & Sorting

I will present two findings that I find strange in this post:

  1. The performance of Node.js (V8?) has clearly gotten consistently worse with newer Node.js versions.
  2. The standard library sort (Array.prototype.sort()) is surprisingly slow, often slower than a simple textbook mergesort.

My findings in this article are based on running a simple program mergesort.js on different computers and different node versions.

You may also want to read this article about sorting in Node.js. It applies to V8 version 7.0, which is the version used in Node.js V11.

The sorting algorithms

There are three sorting algorithms compared.

  1. Array.prototype.sort()
  2. mergesort(), a textbook mergesort
  3. mergesort_opt(), a mergesort that I put some effort into making faster

Note that mergesort is considered stable and not as fast as quicksort. As far as I understand from the above article, Node.js used to use quicksort (up to V10), and from V11 uses something better called Timsort.

My mergesort implementations (2) (3) are plain standard JavaScript. Nothing fancy whatsoever (I will post benchmarks using Node.js v0.12 below).
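
For illustration, a textbook mergesort in plain JavaScript might look roughly like this (a sketch in the same spirit; it is not the exact code from mergesort.js, and the optimized variant is not shown):

// Sketch of a plain top-down mergesort with a compare function.
// Unlike Array.prototype.sort() it returns a new array.
function mergesort(arr, cmp) {
  if (arr.length <= 1) return arr;
  const mid   = Math.floor(arr.length / 2);
  const left  = mergesort(arr.slice(0, mid), cmp);
  const right = mergesort(arr.slice(mid), cmp);
  const out   = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    // <= keeps the sort stable
    if (cmp(left[i], right[j]) <= 0) out.push(left[i++]);
    else                             out.push(right[j++]);
  }
  while (i < left.length)  out.push(left[i++]);
  while (j < right.length) out.push(right[j++]);
  return out;
}

// Example: mergesort([3, 1, 2], (a, b) => a - b) gives [1, 2, 3]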

The data to be sorted

There are three types of data to be sorted.

  1. Numbers (Math.random()), compared with a-b
  2. Strings (random numbers converted to strings), compared with the default compare function for sort(), and for my mergesort with simple a<b / a>b compares returning -1, 1 or 0
  3. Objects, containing two random numbers a=[0-9] and b=[0-999999], compared with (a.a-b.a) || (a.b-b.b). In one case out of 10 the value of b will matter; otherwise looking at the value of a is enough. (These compare functions are sketched below.)
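
The three compare functions might be written along these lines (a sketch; the exact functions in mergesort.js may differ in detail):

// 1. Numbers
const cmpNumbers = (a, b) => a - b;

// 2. Strings: simple < / > compares returning -1, 1 or 0 (used for my mergesort)
const cmpStrings = (a, b) => (a < b ? -1 : (a > b ? 1 : 0));

// 3. Objects {a, b}: compare on a first, and on b only when a is equal
const cmpObjects = (x, y) => (x.a - y.a) || (x.b - y.b);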

Unless otherwise stated, the sorted array has 100 000 elements.

On Benchmarks

Well, just a standard benchmark disclaimer: I do my best to measure and report objectively. There may be other platforms, CPUs, configurations, use cases, datatypes, or array sizes that give different results. The code is available for you to run.

I have run all tests several times and reported the best value. If anything, that should benefit the standard library (quick)sort, which can suffer from bad luck.

Comparing algorithms

Let's start with the algorithms. This is Node V10 on different machines.

(ms)      ====== Numbers ======   ====== Strings ======   ====== Objects ======
           sort()  merge  m-opt    sort()  merge  m-opt    sort()  merge  m-opt
NUC i7         82     82     61       110     81     54        95     66     50
NUC i5        113    105    100       191    130     89       149     97     72
NUC Clrn      296    209    190       335    250    196       287    189    157
RPi v3       1886   1463   1205      2218   1711   1096      1802   1370    903
RPi v2        968   1330   1073      1781   1379    904      1218   1154    703

The RPi-v2-sort()-Numbers value stands out. It's not a typo. But apart from that I think the pattern is quite clear: regardless of datatype and on different processors, the standard sort() simply cannot match a textbook mergesort implemented in JavaScript.

Comparing Node Versions

Let's compare different Node versions. This is on a NUC with an Intel i5 CPU (4th gen), running a 64-bit version of Ubuntu.

(ms)      ====== Numbers ======   ====== Strings ======   ====== Objects ======
           sort()  merge  m-opt    sort()  merge  m-opt    sort()  merge  m-opt
v11.13.0       84    107     96       143    117     90       140     97     71
v10.15.3      109    106     99       181    132     89       147     97     71
v8.9.1         85    103     96       160    133     86       122     99     70
v6.12.0        68     76     88       126     92     82        68     83     63
v4.8.6         51     66     89       133     93     83        45     77     62
v0.12.9        58     65     78       114     92     87        55     71     60

Not only is sort() getting slower, running “any” JavaScript is also getting slower with newer Node versions. I have noticed this before. Can someone explain why this makes sense?

Comparing different array sizes

With the same NUC, Node V10, I try a few different array sizes:

(ms)      ====== Numbers ======   ====== Strings ======   ====== Objects ======
           sort()  merge  m-opt    sort()  merge  m-opt    sort()  merge  m-opt
10 000         10      9     11         8     12      6         4      7      4
15 000          8     15      7        13     14     11         6     22      7
25 000         15     35     12        40     27     15        11     25     18
50 000         35     56     34        66     57     37        51     52     30
100 000       115    107     97       192    138     88       164    101     72
500 000       601    714    658      1015    712    670       698    589    558

Admittedly, the smaller arrays show less difference, but it is also hard to measure small values with precision. So this is from the RPi v3 and smaller arrays:

(ms)      ====== Numbers ======   ====== Strings ======   ====== Objects ======
           sort()  merge  m-opt    sort()  merge  m-opt    sort()  merge  m-opt
5 000          34     57     30        46     59     33        29     52     26
10 000         75    129     64       100    130     74        63    104     58
20 000        162    318    151       401    290    166       142    241    132
40 000        378    579    337       863    623    391       344    538    316

Again, I think this quite consistently looks remarkably bad for the standard library sort.

Testing throughput (Version 2)

I decided to measure throughput rather than time to sort (mergesort2.js). I thought perhaps the figures above are misleading when it comes to the cost of garbage collecting. So the new question is, how many shorter arrays (n=5000) can be sorted in 10s?

(count)   ====== Numbers ======   ====== Strings ======   ====== Objects ======
           sort()  merge  m-opt    sort()  merge  m-opt    sort()  merge  m-opt
v11.13.0     3192   2538   4744      1996   1473   2167      3791   2566   4822
v10.15.3     4733   2225   4835      1914   1524   2235      4911   2571   4811
RPi v3        282    176    300       144    126    187       309    186    330

What do we make of this? Well, the collapse in performance for the new V8 Torque implementation in Node v11 is remarkable. Otherwise I notice that for Objects and Node v10, my optimized algorithm has no advantage.

I think my algorithms are heavier on the garbage collector (than the standard library sort()), and this is why they perform relatively worse when running for 10s in a row.

If that is so, I'd still prefer to pay that price. When my code waits for sort() to finish there is a user waiting for a GUI update, or for an API reply. I would rather see a faster sort, and when the update/reply is complete there is usually plenty of idle time when the garbage collector can run.

Optimizing Mergesort?

I had some ideas for optimizing mergesort that I tried out.

Special handling of short arrays: clearly, if you want to sort 2 elements, the entire mergesort function is heavier than a simple function that sorts two elements. The article about V8 sort indicates that they use insertion sort for arrays up to length 10 (I find this very strange). So I implemented special functions for 2-3 elements. This gave nothing: same performance as calling the entire mergesort.
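
Such a special case might look like this (a sketch of the idea, not my actual code; it falls back on the mergesort sketch above):

// Sketch: handle very short arrays directly instead of going through
// the full mergesort machinery.
function mergesortWithShortcut(arr, cmp) {
  if (arr.length <= 1) return arr;
  if (arr.length === 2) {
    return cmp(arr[0], arr[1]) <= 0 ? [arr[0], arr[1]] : [arr[1], arr[0]];
  }
  return mergesort(arr, cmp);  // fall back to the full mergesort
}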

Less stress on the garbage collector: since my mergesort creates memory nodes that are discarded when sorting is complete, I thought I could keep those nodes for the next sort, to ease the load on the garbage collector. Very bad idea, performance dropped significantly.

Performance of cmp-function vs sort

The relevant sort functions are all K·(n log n), with a different constant K each. It is the K that I am measuring and discussing here. The differences are, after all, quite marginal. There is clearly another constant cost: the cost of the compare function. That seems to matter more than anything else. And in all cases above the “string” is just a single string of 10 characters. If you have a more expensive compare function, the choice of sort() will matter even less.

Nevertheless, V8 is a single threaded environment and ultimately cycles wasted in sort() will result in overall worse performance. Milliseconds count.

Conclusions

Array.prototype.sort() is a critical component of the standard library. In many applications sorting may be the most expensive thing that takes place. I find it strange that it does not perform better than a simple mergesort implementation. I do not suggest you use my code, or start looking for better sort() implementations out there right away. But I think this is something for JavaScript programmers to keep in mind. However, the compare function probably matters more in most cases.

I find it strange that Node v11, with Timsort and V8 Torque, is not more of an improvement (admittedly, I didn't test that one very much).

And finally I find it strange that Node.js performance seems to deteriorate with every major release.

Am I doing anything seriously wrong?

JavaScript Double Linked List

JavaScript has two very powerful and flexible built-in data structures: [] and {}. You can program rather advanced JavaScript for years without needing anything else.

Nevertheless I had a conversation about possible advantages of using a linked list (instead of an array). Linked lists are not very popular; Stroustrup himself has suggested they should be avoided. But what if you mostly do push(), pop(), shift() and unshift() and never access an item by its index? Higher order functions such as map(), reduce(), filter() and sort(), as well as iterators, should be just fine.

I decided to implement a Double Linked List in JavaScript, making it (mostly) compatible with Array, and do some tests on it. The code, both of the DoubleLinkedList itself and of the unit tests/benchmarks, is available.
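
To give an idea of the data structure, the core of a double linked list might look roughly like this (a minimal sketch, not the actual library code; the real DoubleLinkedList covers much more of the Array API):

// Minimal sketch of a double linked list supporting the queue/stack operations.
class DLL {
  constructor() { this.head = null; this.tail = null; this.length = 0; }

  push(value) {                       // add at the end
    const node = { value, prev: this.tail, next: null };
    if (this.tail) this.tail.next = node; else this.head = node;
    this.tail = node;
    this.length++;
    return this.length;
  }

  pop() {                             // remove from the end
    if (!this.tail) return undefined;
    const node = this.tail;
    this.tail = node.prev;
    if (this.tail) this.tail.next = null; else this.head = null;
    this.length--;
    return node.value;
  }

  unshift(value) {                    // add at the front
    const node = { value, prev: null, next: this.head };
    if (this.head) this.head.prev = node; else this.tail = node;
    this.head = node;
    this.length++;
    return this.length;
  }

  shift() {                           // remove from the front
    if (!this.head) return undefined;
    const node = this.head;
    this.head = node.next;
    if (this.head) this.head.prev = null; else this.tail = null;
    this.length--;
    return node.value;
  }
}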

Disclaimer

This is a purely theoretical, academic and nerdy experiment. The DoubleLinkedList offers no advantages over the built-in Array, except for possible performance advantages in edge cases. The disadvantages compared to Array are:

  • Lower performance in some cases
  • Less features / limited API
  • Less tested and proven
  • An extra dependency, and possibly longer application loading time

So, my official recommendation is that you read this post and perhaps look at the code for learning purposes. But I really doubt you will use my code in production (although you are welcome to).

Benchmarks

Benchmarks are tricky. In this case there are three kinds of benchmarks:

  1. Benchmarks using array[i] to get the item at an index. This is horrible for the linked list. I wrote no such benchmarks.
  2. Benchmarks testing map(), reduce() and filter(), which I wrote but which consistently show no relevant or interesting differences between the built-in Array and my DoubleLinkedList (my code is essentially as fast as the standard library array code, which on one hand is impressive, and on the other hand is a reason not to use it).
  3. Benchmarks where my DoubleLinkedList does fine, mostly those that depend heavily on push(), pop(), shift() and unshift().

The only thing I present below is (3). I have nothing for (1), and (2) shows nothing interesting.

The machines are, in order: a Hades Canyon i7 NUC, an old i5 NUC, a newer Celeron NUC, an Acer Chromebook R13 (with an ARMv8 CPU), a Raspberry Pi V3, and a Raspberry Pi V2. The Chromebook runs ChromeOS, the i7 runs Windows, and the rest run Linux.

My benchmarks use Math.random() to create test data. That was not very smart of me because the variation between test runs is significant. The below numbers (milliseconds) are the median value of running each test 41 times. You can see for yourself that the values are quite consistent.

The tested algorithms

The push(), pop(), shift(), unshift() tests use the array/list as a queue and push 250k “messages” through it, keeping the queue at roughly 10k messages.
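
The queue benchmark works along these lines (a sketch of the idea; the exact constants and message payloads in my benchmark code may differ):

// Sketch: pump 250k messages through a queue (Array or DoubleLinkedList),
// keeping roughly 10k messages in it at any time.
function queueBenchmark(queue) {
  const TOTAL = 250000;
  const DEPTH = 10000;
  let produced = 0;
  let consumed = 0;
  while (consumed < TOTAL) {
    while (produced < TOTAL && produced - consumed < DEPTH) {
      queue.push({ id: produced++ });      // producer side
    }
    consumed++;
    queue.shift();                         // consumer side (or unshift/pop for the other variant)
  }
}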

The mergesort() test is a mergesort built on top of the datastructures using push()/shift().

The sort() test is the standard Array.sort(), versus a mergesort implementation for DoubleLinkedList (it has less overhead than mergesort(), since it does not create new objects for every push()).

Benchmark result

                    Node8   =============== Node 10 ===============
(ms)                NUCi7   NUCi7   NUCi5   NUC-C     R13   RPiV3   RPiV2
unshift/pop 250k
  Array               679     649    1420    1890    5216   11121    8582
  D.L.L.                8      13      10      20      40     128     165
push/shift 250k
  Array                37      31      31      49     143     388     317
  D.L.L.               10      12      10      19      44     115     179
mergesort 50k
  Array               247     190     300     466    1122    3509    3509
  D.L.L.               81      88     121     244     526    1195    1054
sort 50k
  Array                53      55      59     143     416    1093     916
  D.L.L.               35      32      42      84     209     543     463

What do we make of this?

  • For array, push/shift is clearly faster than unshift/pop!
  • It is possible to implement a faster sort() than Array.sort() of the standard library. However, this may have little to do with my linked list (I may get an even better result if I base my implementation on Array).
  • I have seen this before with other Node.js code but not published it: the RPiV2 (ARMv7 @900MHz) is faster than the RPiV3 (ARMv8 @1200MHz).
  • I would have expected my 8th generation i7 NUC (NUC8i7HVK) to clearly outperform my older 4th generation i5 NUC (D54250WYK), but there is not that much difference.

More performance findings

One thing I thought could give good performance was a case like this:

x2 = x1.map(...).filter(...).reduce(...)

where every function creates a new Array just to be destroyed very soon. I implemented mapMutate and filterMutate for my DoubleLinkedList, that reuse existing List-nodes. However, this gave very little. The cost of the temporary Arrays above seems to be practically insignificant.

However for my Double linked list:

dll_1 = DoubleLinkedList.from( some 10000 elements )
dll_1.sort()
dll_2 = DoubleLinkedList.from( some 10000 elements )

Now
dll_1.map(...).filter(...).reduce(...) // slower
dll_2.map(...).filter(...).reduce(...) // faster

So while I thought reusing the list nodes would be a cost saver, it seems to produce cache misses instead: once dll_1 has been sorted (by relinking the existing nodes), the nodes are no longer visited in allocation order, which makes iteration less cache friendly.

Using the Library

If you feel like using the code you are most welcome. The tests run with Node.js and first run unit tests (quickly) and then benchmarks (slower). As I wrote earlier, there is some Math.random() in the tests, and on rare occasions statistically unlikely events occur, making the tests fail (I will not make this mistake again).

The code itself is just for Node.js. There are no dependencies and it will require minimal work to adapt it to any browser environment of your choice.

The code starts with a long comment specifying what is implemented. Basically, you use it just as Array, with the exceptions/limitations listed. There are many limitations, but most reasonable uses should be fairly well covered.

Conclusion

It seems to make no sense to replace Array with a linked list in JavaScript (Stroustrup was right). If you are using an Array as a queue, be aware that push/shift is much faster than unshift/pop. It would surprise me a lot if push/pop is not much faster than unshift/shift for a stack.

Nevertheless, if you have a (large) queue/list/stack and all you do is push, pop, shift, unshift, map, filter, reduce and sort, go ahead.

There is also a concatMutate in my DoubleLinkedList. That one is very cheap, and if you for some reason do array.concat(array1, array2, array3) very often perhaps a linked list is your choice.

It should come as no surprise, but I was surprised that sort(), mergesort in my case, was so easy to implement on a linked list.

On RPiV2 vs RPiV3

I have written on several occasions before that the 900MHz ARMv7 of the RPiV2 completely outperforms the 700MHz ARMv6 of the RPiV1. It is about 15 times faster, and it is not completely clear why the difference is so big (it is that big for JavaScript, not for typical C code).

The RPiV3 is not only 300MHz faster than the RPiV2, it is also a 64-bit ARMv8 cpu compared to the 32-bit ARMv7 cpu of RPiV2. But V3 delivers worse performance than V2.

One reason could be that the RPi does not have that much RAM, and not that fast RAM either, and that the price of 64-bit is simply not worth it. For now, I have no other idea.

References

An article about sorting in V8: https://v8.dev/blog/array-sort. Very interesting read. But I tried Node version 11 that comes with V8 version 7, and the difference was… marginal at best.

Best way to write compare-functions

The workhorse of many (JavaScript) programs is sort(). When you want to sort objects (or numbers, actually) you need to supply a compare-function. Those are nice functions because they are very testable and reusable, but sorting is also a bit expensive (perhaps the most expensive thing your program does) so you want them fast.

For the rest of this article I will assume we are sorting some Order objects based on status, date and time (all strings).

The naive way to write this is:

function compareOrders1(a,b) {
  if ( a.status < b.status ) return -1;
  if ( a.status > b.status ) return 1;
  if ( a.date   < b.date   ) return -1;
  if ( a.date   > b.date   ) return 1;
  if ( a.time   < b.time   ) return -1;
  if ( a.time   > b.time   ) return 1;
  return 0;
}

There are some things about this that are just not appealing: it is too verbose, there is a risk of typos, and it is not too easy to read.

Another option follows:

function cmpStrings(a,b) {
  if ( a < b ) return -1;
  if ( a > b ) return 1;
  return 0;
}

function compareOrders2(a,b) {
  return cmpStrings(a.status,b.status)
      || cmpStrings(a.date  ,b.date  )
      || cmpStrings(a.time  ,b.time  );
}

Note that the first function (cmpStrings) is highly reusable, so this is shorter code. However, there is still some repetition, so I tried:

function cmpProps(a,b,p) {
  return cmpStrings(a[p], b[p]);
}

function compareOrders3(a,b) {
  return cmpProps(a,b,'status')
      || cmpProps(a,b,'date')
      || cmpProps(a,b,'time');
}

There is something nice about not repeating status, date and time, but there is something not so appealing about quoting them as strings. If you want to go more functional you can do:

function compareOrders4(a,b) {
  function c(p) {
    return cmpStrings(a[p],b[p]);
  }
  return c('status') || c('date') || c('time');
}

To my taste, that is a bit too functional and obscure. Finally, since it comes to mind and some people may suggest it, you can concatenate strings, like:

function compareOrders5(a,b) {
  return cmpStrings(
    a.status + a.date + a.time,
    b.status + b.date + b.time
  );
}

Note that in case fields “overlap” and/or have different lengths, this could give unexpected results.
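
A small example of the overlap problem (artificial values chosen to show the effect of different field lengths): two different orders can produce the same concatenated key, so compareOrders5 considers them equal even though compareOrders1 does not.

const o1 = { status: 'A',  date: 'BB', time: 'C' };
const o2 = { status: 'AB', date: 'B',  time: 'C' };

// Concatenation hides the field boundaries:
console.log(o1.status + o1.date + o1.time);  // 'ABBC'
console.log(o2.status + o2.date + o2.time);  // 'ABBC'

console.log(compareOrders5(o1, o2));         // 0, considered equal
console.log(compareOrders1(o1, o2));         // -1, since 'A' < 'AB'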

Benchmarks

I tried the five different compare-functions on two different machines and got these kinds of results (i5 N=100000, ARM N=25000), with slightly different parameters.

In these tests I used few unique values of status and date, so that the entire compare function is often exercised.

(ms)     i5     i5    ARM
#1      293    354    507
#2      314    351    594
#3      447    506   1240
#4      509    541   1448
#5      866    958   2492

This is quite easy to understand. #2 does exactly what #1 does and the function call overhead is eliminated by the JIT. #3 is trickier for the JIT since a string is used to read a property. That is also true for #4, which in addition requires a function to be generated. #5 needlessly builds two concatenated strings, when often only the first fields are needed to decide the comparison anyway.

Conclusion & Recommendation

My conclusion is that #3 may be the best choice, despite being slightly slower. I find #2 clearly preferable to #1, and I think #4 and #5 should be avoided.

JavaScript: Sets, Objects and Arrays

JavaScript has a new (well, newish) fancy Set datastructure (that does not come with functions for union, intersection and the like, but whatever). A little while ago I tested Binary Search (also not in the standard library) and I was quite impressed with the performance.

When I code JavaScript I often hesitate about using an Array or an Object. And I have not started using Set much.

I decided to make some tests. Let's say we have pseudo-random natural numbers (like 10000 of them). We then want to check whether a number is among the 10000 numbers or not (if it is a member of the set). A JavaScript Set does exactly that. A JavaScript Object just requires you to do set[314] = true and you are basically done (the key gets converted to a string, though). For an Array you just push(314), sort the array, and then use binary search to see if the value is there.
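
The three membership tests might look like this (a sketch; lodash.sortedIndex is what the benchmark code below uses for binary search):

const lodash = require('lodash');

// Set
const s = new Set();
s.add(314);
console.log(s.has(314));                             // true

// Object (keys are strings)
const o = {};
o[314] = true;
console.log(!!o[314]);                               // true

// Sorted Array + binary search
const a = [];
a.push(314);
a.sort();
console.log(a[lodash.sortedIndex(a, 314)] === 314);  // true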

Obviously, if you often add or remove values, (re)sorting the Array will be annoying and costly. But quite often this is not the case.

The test
My test consists of generating N=10000 random unique numbers (with a distance of 1 or 2 between them). I then insert them (in a kind of pseudo-random order) into an Array (and sort it), into an Object, and into a Set. I measure this time as an initiation time (for each data structure).

I repeat. So now I have 2xArrays, 2xObjects, 2xSets.

This way I can test both iterating and searching with all combinations of data structures (and check that the results are the same and thus correct).

Output of a single run: 100 iterations, N=10000, on a Linux Intel i5 and Node.js 8.9.1 looks like this:

                         ====== Search Structure ======
(ms)                        Array     Object      Set
     Initiate                1338        192      282
===== Iterate =====    
        Array                 800         39       93
       Object                 853        122      170
          Set                1147         82      131

By comparing columns you can compare the cost of searching (and initiating the structure before searching it). By comparing rows you can compare the cost of iterating over the different data structures (for example, iterating over Set while searching Array took 1147ms).

These results are quite consistent on this machine.

Findings
Some findings are very clear (I guess they are quite consistent across systems):

  • Putting values in an Array, sorting it, and then searching it, is much slower and makes little sense compared to using an Object (or a Set)
  • Iterating an Array is a bit faster than iterating an Object or Set, so if you are never going to search, an Array is faster
  • The newer and more specialized Set offers little advantage over good old Objects

What is more unclear is why iterating over Objects is faster when searching Arrays, but iterating over Sets is faster when searching Objects or Sets. What I find is:

  • Sets seem to perform comparably to Objects on Raspberry Pi, ARMv7.
  • Sets seem to underperform more on Mac OS X

Obviously, all this is rather unclear and can vary depending on CPU cache, Node version, OS and other factors.

Smaller and Larger sets
These findings hold quite well for smaller N=100 and larger N=1000000. The Array, despite being O(n log n), does not get much worse for N=1000000 than it already was for N=10000.

Conclusions and Recommendation
I think the conservative choice is to use Arrays when order is important or you know you will not look for a member based on its unique id. If members have unique IDs and are not ordered, use Object. I see no reason to use Set, especially if you target browsers (support in IE is still limited in early 2018).

The Code
Here follows the source code. Output is not quite as pretty as the table above.

var lodash = require('lodash');

function randomarray(size) {
  var a = new Array(size);
  var x = 0;
  var i, r;
  var j = 0;
  var prime = 3;

  if ( 50   < size ) prime = 31;
  if ( 500  < size ) prime = 313;
  if ( 5000 < size ) prime = 3109;

  for ( i=0 ; i<size ; i++ ) {
    r = 1 + Math.floor(2 * Math.random());
    x += r;
    a[j] = '' + x;
    j += prime;
    if ( size <= j ) j-=size;
  }
  return a;
}

var times = {
  arr : {
    make : 0,
    arr  : 0,
    obj  : 0,
    set  : 0
  },
  obj : {
    make : 0,
    arr  : 0,
    obj  : 0,
    set  : 0
  },
  set : {
    make : 0,
    arr  : 0,
    obj  : 0,
    set  : 0
  }
}

function make_array(a) {
  times.arr.make -= Date.now();
  var i;
  var r = new Array(a.length);
  for ( i=a.length-1 ; 0<=i ; i-- ) {
    r[i] = a[i];
  }
  r.sort();
  times.arr.make += Date.now();
  return r;
}

function make_object(a) {
  times.obj.make -= Date.now();
  var i;
  var r = {};
  for ( i=a.length-1 ; 0<=i ; i-- ) {
    r[a[i]] = true;
  }
  times.obj.make += Date.now();
  return r;
}

function make_set(a) {
  times.set.make -= Date.now();
  var i;
  var r = new Set();
  for ( i=a.length-1 ; 0<=i ; i-- ) {
    r.add(a[i]);
  }
  times.set.make += Date.now();
  return r;
}

function make_triplet(n) {
  var r = randomarray(n);
  return {
    arr : make_array(r),
    obj : make_object(r),
    set : make_set(r)
  };
}

function match_triplets(t1,t2) {
  var i;
  var m = [];
  m.push(match_array_array(t1.arr , t2.arr));
  m.push(match_array_object(t1.arr , t2.obj));
  m.push(match_array_set(t1.arr , t2.set));
  m.push(match_object_array(t1.obj , t2.arr));
  m.push(match_object_object(t1.obj , t2.obj));
  m.push(match_object_set(t1.obj , t2.set));
  m.push(match_set_array(t1.set , t2.arr));
  m.push(match_set_object(t1.set , t2.obj));
  m.push(match_set_set(t1.set , t2.set));
  for ( i=1 ; i<m.length ; i++ ) {
    if ( m[0] !== m[i] ) {
      console.log('m[0]=' + m[0] + ' != m[' + i + ']=' + m[i]);
    }
  }
}

function match_array_array(a1,a2) {
  times.arr.arr -= Date.now();
  var r = 0;
  var i, v;
  for ( i=a1.length-1 ; 0<=i ; i-- ) {
    v = a1[i];
    if ( v === a2[lodash.sortedIndex(a2,v)] ) r++;
  }
  times.arr.arr += Date.now();
  return r;
}

function match_array_object(a1,o2) {
  times.arr.obj -= Date.now();
  var r = 0;
  var i;
  for ( i=a1.length-1 ; 0<=i ; i-- ) {
    if ( o2[a1[i]] ) r++;
  }
  times.arr.obj += Date.now();
  return r;
}

function match_array_set(a1,s2) {
  times.arr.set -= Date.now();
  var r = 0;
  var i;
  for ( i=a1.length-1 ; 0<=i ; i-- ) {
    if ( s2.has(a1[i]) ) r++;
  }
  times.arr.set += Date.now();
  return r;
}

function match_object_array(o1,a2) {
  times.obj.arr -= Date.now();
  var r = 0;
  var v;
  for ( v in o1 ) {
    if ( v === a2[lodash.sortedIndex(a2,v)] ) r++;
  }
  times.obj.arr += Date.now();
  return r;
}

function match_object_object(o1,o2) {
  times.obj.obj -= Date.now();
  var r = 0;
  var v;
  for ( v in o1 ) {
    if ( o2[v] ) r++;
  }
  times.obj.obj += Date.now();
  return r;
}

function match_object_set(o1,s2) {
  times.obj.set -= Date.now();
  var r = 0;
  var v;
  for ( v in o1 ) {
    if ( s2.has(v) ) r++;
  }
  times.obj.set += Date.now();
  return r;
}

function match_set_array(s1,a2) {
  times.set.arr -= Date.now();
  var r = 0;
  var v;
  var iter = s1[Symbol.iterator]();
  while ( ( v = iter.next().value ) ) {
    if ( v === a2[lodash.sortedIndex(a2,v)] ) r++;
  }
  times.set.arr += Date.now();
  return r;
}

function match_set_object(s1,o2) {
  times.set.obj -= Date.now();
  var r = 0;
  var v;
  var iter = s1[Symbol.iterator]();
  while ( ( v = iter.next().value ) ) {
    if ( o2[v] ) r++;
  }
  times.set.obj += Date.now();
  return r;
}

function match_set_set(s1,s2) {
  times.set.set -= Date.now();
  var r = 0;
  var v;
  var iter = s1[Symbol.iterator]();
  while ( ( v = iter.next().value ) ) {
    if ( s2.has(v) ) r++;
  }
  times.set.set += Date.now();
  return r;
}

function main() {
  var i;
  var t1;
  var t2;

  for ( i=0 ; i<100 ; i++ ) {
    t1 = make_triplet(10000);
    t2 = make_triplet(10000);
    match_triplets(t1,t2);
    match_triplets(t2,t1);
  }

  console.log('TIME=' + JSON.stringify(times,null,4));
}

main();

When to (not) use Web Workers?

Web Workers are a mature, simple, standardised and compatible technology for allowing multithreaded JavaScript applications in the web browser.

I am not going to write about how to use Web Worker (check the excellent MDN article). I am going to write a little about when and why to (not) use Web Worker.

First, Web Workers are about performance. And performance is typically not the best thing to think about first when you code something.

Second, when you have performance problems and you throw more cores at the problem, your best possible speedup is x2, x4 or xN. In 2018, 4 cores are quite common, which means in the optimal case you can make your program 4 times faster by using Web Workers. Unfortunately, if it was not fast enough from the beginning, chances are a 4x speedup is not going to help much. And the cost of a 4x speedup is that 4 times more heat is produced, the battery drains faster, and perhaps other applications suffer. A more efficient algorithm can often produce a 10-100 times speedup without making the maintainability of the program suffer too much (and there are very many ways to make a non-optimised program faster).

Let us say we have a web application. The user clicks “Show report”, the GUI locks/blocks for 10s and then the report displays. The user might accept that the GUI locks, if just for 1-2 seconds. Or the user might accept that the report takes 10s to compute, if it shows up little by little and the program does not appear hung. The way we could deal with this in JavaScript (which is single threaded and asynchronous) is to break the 10s report calculation into small pieces (say 100 pieces each taking 100ms) and, after calculating each piece, call window.setTimeout, which allows the UI to update (among other things) before calculating the next piece of the report. Perhaps a more common and practical approach is to divide the 10s job into logical parts: fetch data, make calculations, make report; but this would not improve the locked GUI situation much, since some (or all) parts still take significant (blocking) time.
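
As a sketch of the chunking approach (the piece size and function names are made up for illustration):

// Sketch: compute a report in ~100ms pieces, letting the UI update in between.
function computeReportInPieces(pieces, computePiece, onDone) {
  const results = [];
  let i = 0;
  function nextPiece() {
    results.push(computePiece(pieces[i++]));   // ~100ms of work
    if (i < pieces.length) {
      setTimeout(nextPiece, 0);                // yield to the event loop / UI
    } else {
      onDone(results);
    }
  }
  nextPiece();
}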

If we could send the entire 10s job to a Web Worker our program GUI would be completely responsive while the report is generated. Now the key limitation of a web worker (which is also what allows it to be simple and safe):

Data is copied to the Worker before it starts, and copied from the Worker when it has completed (rather than being passed by reference).

This means that if you already have a lot of data, it might be quite expensive to copy that data to the web worker, and it might actually be cheaper to just do the job where the data already is. In the same way, since there is some overhead in calling the Web Worker, you can’t send too many too small pieces of work to it, because you will occupy yourself with sending and receiving messages rather than just doing the job right away.
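
For reference, the messaging pattern looks like this (a minimal sketch; 'sort-worker.js' is a hypothetical file name):

// main.js - the data passed to postMessage is copied to the worker,
// and the result posted back is copied again.
const worker = new Worker('sort-worker.js');
worker.onmessage = (e) => {
  console.log('sorted:', e.data);
};
worker.postMessage([3, 1, 2]);

// sort-worker.js
onmessage = (e) => {
  const sorted = e.data.slice().sort((a, b) => a - b);
  postMessage(sorted);
};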

This leaves us with obvious candidates for web workers (you can use Google):

  • Expensive searches (like chess moves or travelling salesman solutions)
  • Encryption (but chances are you should not do it in JavaScript in the first place, for security reasons)
  • Spell and grammar checker (I don’t know much about this).
  • Background network jobs

This is not too useful in most cases. What would be useful would be to send packages of work (arrays), like streams in a functional programming way: map(), reduce(), sort(), filter().

I decided to write some Web Worker tests based on sort(). Since I can not (easily, and there are probably good reasons) write JavaScript in WordPress I wrote a separate page with the application. Check it out now:

So, for 5 seconds I try to do the following job as many times I can, while I keep track of how much the GUI is suffering:

  1. create an array of 10001 random numbers: O(n)
  2. sort it: O(n log n)
  3. get the median (array[5000]): O(1)

The expensive part is step 2, the sort (well, I actually have not measured 1 vs 2). If the ratio of amount of work done per byte being sent is high enough then it can be worth it to send the job to a Web Worker.

If you run the tests yourself I think you shall see that the first Web Worker tests that outsource all of 1-2-3 are quite ok. But this basically means giving the web worker no data at all and when it has done a significant amount of job, receiving just a few numbers. This is more Web Worker friendly than Chess where at least the board would need to be sent.

If you then run the tests that outsource just sort() you see significantly lower throughput. How suitable is sort()? Well, sorting 10k ≈ 2^13 elements should require each element to be compared (accessed) about 13 times. And there is no data sent that is not needed by the Web Worker. Just as a counter example: if you send an order to get back the sum of its lines, most of the order data is ignored by the Web Worker, and it just needs to access each line value once; much, much less suitable than sort().

Findings from tests
I find that sort(), being O(n log n), on an array of numbers is far too fast to be worth outsourcing to a Web Worker. You need to find a much more “dense” problem to benefit from a Web Worker.

Islands of data
If you can design your application in such a way that one Web Worker maintains its own full state and just shares small selected parts occasionally, that could work. The good thing is that this would also be clean encapsulation of data and separation of responsibilities. The bad thing is that you probably need to design with the Web Worker in mind quite early, and this kind of premature optimization is often a bad idea.

This could be letting a Web Worker do all your I/O. But if most data that you receive is needed in your application, and most data you send comes straight from your application, the benefit is very questionable. And if most data you receive is not needed in your application, perhaps you should not receive so much data in the first place. Even if you process your incoming data quite heavily (validating, integrating with current state, precalculating) I would not expect it to come very close to the computational intensity of my sort().

Conclusions
Unfortunately, the simplicity and safety of Web Workers is also their biggest limitation. The primary reason for using a Web Worker should be performance, and even for artificial problems it is hard to get any benefit.

Minification of real web Application

I have built and I maintain a reasonably large (AngularJS) web application and here follow a few notes on the effect of minification.

I start with the findings:

                            Uncompressed         GZIP     Minified    Min+GZIP

App 1:  Size        (kb)            1130         1130          843         841
        Transferred (kb)            1150          375          861         308
        Load time    (s)             2.8          1.6          2.7         1.7

App 2:  Size        (kb)             708          708          659         659
        Transferred (kb)             721          359          672         347
        Load time    (s)             4.0          3.5          3.1         3.5

Conclusions
You should always enable gzip on the server. It is faster to compress and send less data than to send the uncompressed data. The benefits of gzip are huge and there are no negative side effects.

Minification saves some bandwidth (and, if unlike me you do it ahead of time, some loading time). But unless your code contains mostly comments the effects are marginal (although that might be a big saving if you use very much bandwidth or are looking for the fastest possible load times).

Also, gzip tends to be good at what minification can easily do, and while the effect of minification alone is quite significant, the effect of minification together with gzip is smaller.

Behind the figures
The figures above come from Firefox load times over the internet.

  • App1: About 100 files are served, mostly .js (a few .html and .css)
  • App2: About 80 files are served, mostly .js (a few .html and .css)
  • App1: Angular is always pre-minified 165kb, gzipped to 67kb.
  • App2: Angular+modules is always pre-minified 298kb, gzipped to 127kb
  • App2 contains a few fonts which are neither minified nor gzipped (142kb)
  • Files served by Node.js
  • Files minified by custom Node.js code in real time
  • Files gzipped by nginx in real time
  • Not everything is initiated when Load is complete (more html-files are loaded dynamically as user navigates, and data is loaded from APIs on demand)

Implications of minification
Minification (and possibly packaging of code) has more implications than gzip. Possible negative side effects are:

  • A build process is not strictly needed for web development, but minification is often done as part of a build process, increasing complexity of development, testing and deployment.
  • Testing and development is made harder when debugging minified code (although there are tools to mitigate this).
  • More aggressive minification can have unexpected results

The minification code I run in Node.js, when I serve a file, basically just:

  • Removes all white space in the beginning and end of lines
  • Removes all comments

The nice thing about this simple minification strategy is that everything that is obviously just waste is removed at a low cost, but the code is for all practical purposes completely unchanged (even line numbers are preserved so as not to complicate debugging). Also, developers should feel free to write as many comments as they like in the code, yet comments should never be served in a public facing application. More powerful minification comes at a higher cost, and the effects are probably mostly lost after gzip.
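
A naive sketch of such a strategy might look like this (my assumption of how it could be done, not the actual minification code; a real implementation also has to handle block comments and trailing comments, and must not strip comment-like sequences inside string or regex literals):

// Naive sketch: trim each line and blank out whole-line comments,
// keeping the line count (and thus line numbers) unchanged.
function naiveMinify(source) {
  return source
    .split('\n')
    .map((line) => line.trim())
    .map((line) => (line.startsWith('//') ? '' : line))
    .join('\n');
}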

I guess every project and system have a sweet spot when it comes to minification and I think my simple minification strategy makes sense for my needs.

Syncthing v0.14.40, Raspberry Pi, 100% CPU

I think Syncthing is an amazing piece of software, but I ran into problems last week.

I have a library of 10 different folders, 120000 files, 42000 directories and 428GB of data.

I thought that was a little bit too much for my RPi V1 (Syncthing 0.14.40, Arch Linux), because it constantly ran at 100%. I raised the Rescan Interval to several hours (so it would finish before starting over).

After startup it took about 10-15 min to get the web GUI up, and about an hour to scan all folders for the first time. Well, that is ok, but after that it still constantly used 100% CPU despite all folders being “up to date”.

It turned out it crashed and started over. I found panic logs in .config/syncthing and error messages in .config/syncthing/index-v0.14.0.db/LOG.

Some errors indicated Bad Magic Number and Checksum Corruption. The usual reason for this seems to be a hardware problem (!?!).

I upgraded my RPi V1 to an RPi V2, with little success. Then I found that I had similar problems on another RPi V2. So after shutting down Syncthing I tried the quite scary:

  $ syncthing -reset-database      ( does not start syncthing )      
  $ syncthing                      ( start syncthing )

After several hours of scanning everything seems to work perfectly!
Let us see how long that lasts.

Peculiar Compiler Optimizations

My teacher in a High Performance Computing class once told me not to confuse the compiler. This was in the late 90s, and SGI C and Fortran compilers were supposed to replace entire blocks of code with highly optimised implementations. As long as the compiler understood your intentions.

I have never discovered this, but yesterday perhaps! Read on.

I have been playing around with LISP, solving Project Euler challenges on Hackerrank. For problems 44 and 45 I decided to do Binary Search (which afterwards turned out not to be so smart, but that is another story) and took the implementation from Rosetta Code (the iterative one).

Binary search is about finding an element in a sorted array by starting in the middle, and jumping left or right, cutting the remaining array in half each time.

In my case I decided just a part of the array was worth searching, so instead of searching the entire array [0..length] I wanted to search up to the Nth element [0..N]. Searching fewer elements should be faster, so I improved the binary search function to take an additional argument: hi. For SBCL (Steel Bank Common Lisp), this surprisingly had a horrible effect on performance.

Benchmark Results
The results for different algorithms, different machines and different LISP implementations follow. The RPi V1 runs Arch Linux; Clisp comes with Arch, and SBCL is downloaded from the SBCL webpage. The RPi V2 runs Raspbian and the SBCL that comes with the distribution. The Celeron runs Ubuntu, which comes with SBCL. The MacBook Air runs OS X and SBCL is downloaded separately.

 (times in seconds)             Array Size Standard Optimized Recursive    C -O2
================================================================================
RPi V1  ARMv6 700MHz  Clisp 2.49      5000    640       633       720
                      SBCL  1.3.12    5000     15.6      27        34       0.95
RPi V2  ARMv7 900MHz  SBCL  1.2.4     5000      6.2      16        17       0.31
                                     20000    110       293       300       5.8
NUC Celeron J3455     Clisp 2.49     20000    765       762       720   
                      SBCL  1.3.3    20000      8.3      16.7      18.0     1.0
MacBook Air i5        SBCL  1.2.11   20000      4.0      11.5      12.3     0.75

A very slight “optimization” turns out to have a very negative impact on performance for the quite fast (compiled) SBCL. I can’t imagine any other explanation than that SBCL replaces the standard binary search with optimized code rather than executing my program as written. For Clisp the optimization actually works quite as would be expected, and the recursive code is actually the fastest. On the Celeron, Clisp and SBCL behave in completely opposite ways.

Comparing to C
The other week I had the feeling (SBCL) LISP was fast and decided to compare LISP to C. This time I had the feeling that LISP was rather slow, so I ported my test program to C (basically line by line). Well, I found that SBCL is actually pretty fast (especially on x86/x64), and C was faster only thanks to -O2 on some systems. -O2 actually made the C program more than 5 times faster: perhaps the C compiler also replaces the entire binary search?

The Test Program
The code follows. The only difference between Standard and Optimized is the single line that is commented out (with ; in LISP) selecting which binary search to run (the length of the function name does not explain the performance difference).

The program creates an array of length N and populates it with values by the formula n(n+1)/2. This takes little time. It then checks for values 10,20,30… if the values are found in the array (using binary search). In this program the entire array is always searched, not taking advantage of the extra parameter (although the optimized version does not need to find the length of the array every time called).

(defun binary-search (value array)                       ; Standard 2 lines
    (let ((low 0) (high (1- (length array))))            ;
        (do () ((< high low) nil)
            (let ((middle (floor (+ low high) 2)))
                (cond ((> (aref array middle) value)
                       (setf high (1- middle)))
                      ((< (aref array middle) value)
                       (setf low (1+ middle)))
                      (t (return middle)))))))

(defun binary-search-optimized (value array hi)          ; Optimized 2 lines
    (let ((low 0) (high hi))                             ;
        (do () ((< high low) nil)
            (let ((middle (floor (+ low high) 2)))
                (cond ((> (aref array middle) value)
                       (setf high (1- middle)))
                      ((< (aref array middle) value)
                       (setf low (1+ middle)))
                      (t (return middle)))))))

(defun binary-search-r (value
                        array
                        &optional (low 0)
                        (high (1- (length array))))
  (if (< high low)
      nil
      (let ((middle (floor (+ low high) 2)))
        (cond ((> (aref array middle) value)
               (binary-search-r value array low (1- middle)))
              ((< (aref array middle) value)
               (binary-search-r value array (1+ middle) high))
              (t middle)))))

(defun formula (n)
    (/ (* n (+ n 1)) 2))

(defun init-array (n)
    (let ((arr (make-array n)))
        (loop for i from 0 to (1- n) do
            (setf (aref arr i) (formula (1- i))))
        arr))

(defun solve (arr n max)
    (let ((ret 0))
        (loop for i from 10 to max by 10 do
            (if (binary-search i arr)                     ; Toggle code used
;           (if (binary-search-r i arr)                   ;
;           (if (binary-search-optimized i arr n)         ;
                (incf ret)
                Nil))
        ret))
            
(defun main ()
    (let ((n (read)))
        (let ((arr (init-array n)))
            (format T "~D~%" (solve arr (1- n) (aref arr (1- n)))))))

(main)

Since I am a very novice LISP programmer I appreciate any feedback. The code above does not solve Project Euler 44 or 45, it is much simplified to test binary search. Initially I wrote code that relied on recursion rather than loops but I exceeded the stack size and ended up with loops (according to what I read, loops rather than recursion is the preferred style of Common Lisp).

Conclusion
Well... optimization is hard, and don't make any assumptions. As I have found many times before, what makes code faster on some platforms can make it slower on others. When it comes to optimizing SBCL and compiled LISP much experience is required, and don't forget to measure!