Author Archives: zo0ok

Performance, Node.js & Sorting

I will present two findings that I find strange in this post:

  1. The performance of Node.js (V8?) has clearly gotten consistently worse with newer Node.js versions.
  2. The standard library sort (Array.prototype.sort()) is surprisingly slow, often slower than a simple textbook mergesort.

My findings in this article are based on running a simple program mergesort.js on different computers and different node versions.

You may also want to read this article about sorting in Node.js. It applies to V8 version 7.0, which is used in Node.js v11.

The sorting algorithms

There are three sorting algorithms compared.

  1. Array.prototype.sort()
  2. mergesort(), a textbook mergesort
  3. mergesort_opt(), a mergesort that I put some effort into making faster

Note that mergesort is considered stable and not as fast as quicksort. As far as I understand from the above article, Node.js used to use quicksort (up to V10), and from V11 uses something better called Timsort.

My mergesort implementations (2) (3) are plain standard JavaScript. Nothing fancy whatsoever (I will post benchmarks using Node.js v0.12 below).
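For reference, a textbook top-down mergesort in plain JavaScript looks roughly like this (a sketch of the approach, not the exact code of mergesort.js):

```javascript
// A textbook top-down mergesort: split in halves, sort recursively,
// then merge the two sorted halves.
function mergesort(arr, cmp) {
  if (arr.length <= 1) return arr;
  const mid = arr.length >> 1;
  const left = mergesort(arr.slice(0, mid), cmp);
  const right = mergesort(arr.slice(mid), cmp);
  const out = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    out.push(cmp(left[i], right[j]) <= 0 ? left[i++] : right[j++]);
  }
  while (i < left.length) out.push(left[i++]);
  while (j < right.length) out.push(right[j++]);
  return out;
}

console.log(mergesort([5, 3, 8, 1], (a, b) => a - b)); // [ 1, 3, 5, 8 ]
```

Because the merge takes from the left half on ties (cmp(…) <= 0), this mergesort is stable.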

The data to be sorted

There are three types of data to be sorted.

  1. Numbers (Math.random()), compared with a-b
  2. Strings (random numbers converted to strings), compared with the default compare function of sort(), and for my mergesort with simple a<b / a>b comparisons returning -1, 1 or 0
  3. Objects, containing two random numbers a=[0-9], b=[0-999999], compared with (a.a-b.a) || (a.b-b.b). In about one case in 10 the value of b matters; otherwise looking at the value of a is enough.

Unless otherwise stated, the sorted set is 100 000 elements.
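The test data and compare functions can be sketched like this (variable names are mine, not the exact code of mergesort.js):

```javascript
// Three datasets of N random elements, with their compare functions.
const N = 100000;

// 1. Numbers, compared with a - b
const numbers = Array.from({ length: N }, () => Math.random());
const cmpNumber = (a, b) => a - b;

// 2. Strings (random numbers converted to strings),
//    compared with simple <, > comparisons giving -1, 1 or 0
const strings = Array.from({ length: N }, () => String(Math.random()));
const cmpString = (a, b) => (a < b ? -1 : a > b ? 1 : 0);

// 3. Objects with a = [0-9], b = [0-999999]; b only decides the
//    order when the a values are equal
const objects = Array.from({ length: N }, () => ({
  a: Math.floor(Math.random() * 10),
  b: Math.floor(Math.random() * 1000000),
}));
const cmpObject = (x, y) => (x.a - y.a) || (x.b - y.b);
```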

On Benchmarks

Well, just a standard benchmark disclaimer: I do my best to measure and report objectively. There may be other platforms, CPUs, configurations, use cases, datatypes, or array sizes that give different results. The code is available for you to run.

I have run all tests several times and reported the best value. If anything, that should benefit the standard library (quick)sort, which can suffer from bad luck.

Comparing algorithms

Let’s start with the algorithms. This is Node v10 on different machines.

(ms)       ===== Numbers =====   ===== Strings =====   ===== Objects =====
           sort() merge m-opt    sort() merge m-opt    sort() merge m-opt
NUC i7         82    82    61       110    81    54        95    66    50
NUC i5        113   105   100       191   130    89       149    97    72
NUC Clrn      296   209   190       335   250   196       287   189   157
RPi v3       1886  1463  1205      2218  1711  1096      1802  1370   903
RPi v2        968  1330  1073      1781  1379   904      1218  1154   703

The RPi-v2/sort()/Numbers value stands out. It’s not a typo. But apart from that, I think the pattern is quite clear: regardless of datatype, and on different processors, the standard sort() simply cannot match a textbook mergesort implemented in JavaScript.

Comparing Node Versions

Let’s compare different Node versions. This is on a NUC with an Intel i5 CPU (4th gen), running a 64-bit version of Ubuntu.

(ms)       ===== Numbers =====   ===== Strings =====   ===== Objects =====
           sort() merge m-opt    sort() merge m-opt    sort() merge m-opt
v11.13.0       84   107    96       143   117    90       140    97    71
v10.15.3      109   106    99       181   132    89       147    97    71
v8.9.1         85   103    96       160   133    86       122    99    70
v6.12.0        68    76    88       126    92    82        68    83    63
v4.8.6         51    66    89       133    93    83        45    77    62
v0.12.9        58    65    78       114    92    87        55    71    60

Not only is sort() getting slower; running “any” JavaScript is also slower. I have noticed this before. Can someone explain why this makes sense?

Comparing different array sizes

With the same NUC, Node V10, I try a few different array sizes:

(ms)       ===== Numbers =====   ===== Strings =====   ===== Objects =====
           sort() merge m-opt    sort() merge m-opt    sort() merge m-opt
10 000         10     9    11         8    12     6         4     7     4
15 000          8    15     7        13    14    11         6    22     7
25 000         15    35    12        40    27    15        11    25    18
50 000         35    56    34        66    57    37        51    52    30
100 000       115   107    97       192   138    88       164   101    72
500 000       601   714   658      1015   712   670       698   589   558

Admittedly, the smaller arrays show less difference, but it is also hard to measure small values with precision. So this is from the RPi v3 and smaller arrays:

(ms)       ===== Numbers =====   ===== Strings =====   ===== Objects =====
           sort() merge m-opt    sort() merge m-opt    sort() merge m-opt
5 000          34    57    30        46    59    33        29    52    26
10 000         75   129    64       100   130    74        63   104    58
20 000        162   318   151       401   290   166       142   241   132
40 000        378   579   337       863   623   391       344   538   316

Again, quite consistently, this looks remarkably bad for the standard library sort.

Testing throughput (Version 2)

I decided to measure throughput rather than time to sort (mergesort2.js). I thought perhaps the figures above are misleading when it comes to the cost of garbage collecting. So the new question is, how many shorter arrays (n=5000) can be sorted in 10s?

(count)    ===== Numbers =====   ===== Strings =====   ===== Objects =====
           sort() merge m-opt    sort() merge m-opt    sort() merge m-opt
v11.13.0     3192  2538  4744      1996  1473  2167      3791  2566  4822
v10.15.3     4733  2225  4835      1914  1524  2235      4911  2571  4811
RPi v3        282   176   300       144   126   187       309   186   330

What do we make of this? Well, the collapse in performance for the new V8 Torque implementation in Node v11 is remarkable. Otherwise, I notice that for Objects on Node v10, my optimized algorithm has no advantage.

I think my algorithms are heavier on the garbage collector (than the standard library sort()), and this is why they perform relatively worse when running for 10 s in a row.

If that is so, I’d still prefer to pay that price. When my code waits for sort() to finish, there is a user waiting for a GUI update, or for an API reply. I’d rather see a faster sort; when the update/reply is complete there is usually plenty of idle time when the garbage collector can run.

Optimizing Mergesort?

I had some ideas for optimizing mergesort that I tried out.

Special handling of short arrays: clearly, if you want to sort 2 elements, the entire mergesort function is heavier than a simple function that sorts two elements. The article about V8 sort indicates that V8 uses insertion sort for arrays up to length 10 (I find this very strange). So I implemented special functions for 2-3 elements. This gave nothing; performance was the same as calling the entire mergesort.
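The idea of the special case can be sketched like this (a hypothetical helper, not my exact code; in the real mergesort it would be called for short enough subarrays):

```javascript
// Sort arrays of 2 or 3 elements directly, without recursion or merging.
function sortSmall(arr, cmp) {
  if (arr.length < 2) return arr;
  if (arr.length === 2) {
    return cmp(arr[0], arr[1]) > 0 ? [arr[1], arr[0]] : arr;
  }
  let [a, b, c] = arr; // length 3: at most three compares
  if (cmp(a, b) > 0) [a, b] = [b, a];
  if (cmp(b, c) > 0) [b, c] = [c, b];
  if (cmp(a, b) > 0) [a, b] = [b, a];
  return [a, b, c];
}
```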

Less stress on the garbage collector: since my mergesort creates memory nodes that are discarded when sorting is complete, I thought I could keep those nodes for the next sort, to ease the load on the garbage collector. Very bad idea, performance dropped significantly.

Performance of cmp-function vs sort

The relevant sort functions are all O(n log n), with different constant factors K. It is the K that I am measuring and discussing here, and the differences are, after all, quite marginal. There is clearly another constant cost: the cost of the compare function. That seems to matter more than anything else. And in all cases above a “string” is just a single string of 10 characters. If you have a more expensive compare function, the choice of sort implementation will matter even less.
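The effect of the compare function can be illustrated with a micro-benchmark along these lines (a sketch; timeSort and the artificially expensive compare are mine, not part of mergesort.js):

```javascript
// Sort a copy of arr with the given compare function and report the time.
function timeSort(label, arr, cmp) {
  const copy = arr.slice();
  const t0 = Date.now();
  copy.sort(cmp);
  console.log(label, Date.now() - t0, 'ms');
  return copy;
}

const data = Array.from({ length: 100000 }, () => Math.random());
timeSort('cheap compare:    ', data, (a, b) => a - b);
// An artificially expensive compare: stringify both values on every call
timeSort('expensive compare:', data, (a, b) => {
  const x = a.toFixed(10), y = b.toFixed(10);
  return x < y ? -1 : x > y ? 1 : 0;
});
```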

Nevertheless, V8 is a single threaded environment and ultimately cycles wasted in sort() will result in overall worse performance. Milliseconds count.

Conclusions

Array.prototype.sort() is a critical component of the standard library. In many applications sorting may be the most expensive thing that takes place. I find it strange that it does not perform better than a simple mergesort implementation. I do not suggest you use my code, or start looking for better sort() implementations out there right away. But I think this is something for JavaScript programmers to keep in mind. However, the compare function probably matters more in most cases.

I find it strange that Node v11, with Timsort and V8 Torque, is not more of an improvement (admittedly, I didn’t test that one very much).

And finally I find it strange that Node.js performance seems to deteriorate with every major release.

Am I doing anything seriously wrong?

JavaScript Double Linked List

JavaScript has two very powerful and flexible built-in data structures: [] and {}. You can program rather advanced JavaScript for years without needing anything else.

Nevertheless, I had a conversation about the possible advantages of using a linked list (instead of an array). Linked lists are not very popular; Stroustrup himself has suggested they should be avoided. But what if you mostly do push(), pop(), shift() and unshift() and never access an item by its index? Higher-order functions such as map(), reduce(), filter() and sort(), as well as iterators, should be just fine.

I decided to implement a Double Linked List in JavaScript, making it (mostly) compatible with Array, and to do some tests on it. The code, both of the DoubleLinkedList itself and of the unit tests/benchmarks, is available.
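The core of such a list is small. A minimal sketch of the queue/stack part of the API (not the actual DoubleLinkedList code, which covers much more of the Array API):

```javascript
// A minimal doubly linked list with Array-compatible
// push/pop/shift/unshift and a length property.
class DoubleLinkedList {
  constructor() { this.head = null; this.tail = null; this.length = 0; }
  push(value) {            // append at the tail
    const node = { value, prev: this.tail, next: null };
    if (this.tail) this.tail.next = node; else this.head = node;
    this.tail = node;
    return ++this.length;
  }
  pop() {                  // remove from the tail
    if (!this.tail) return undefined;
    const node = this.tail;
    this.tail = node.prev;
    if (this.tail) this.tail.next = null; else this.head = null;
    this.length--;
    return node.value;
  }
  unshift(value) {         // prepend at the head
    const node = { value, prev: null, next: this.head };
    if (this.head) this.head.prev = node; else this.tail = node;
    this.head = node;
    return ++this.length;
  }
  shift() {                // remove from the head
    if (!this.head) return undefined;
    const node = this.head;
    this.head = node.next;
    if (this.head) this.head.prev = null; else this.tail = null;
    this.length--;
    return node.value;
  }
}
```

All four operations touch only the ends of the list, so they are O(1), while Array shift()/unshift() must move every element.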

Disclaimer

This is a purely theoretical, academic and nerdy experiment. The DoubleLinkedList offers no advantages over the built-in Array, except for possible performance advantages in edge cases. The disadvantages compared to Array are:

  • Lower performance in some cases
  • Fewer features / limited API
  • Less tested and proven
  • An extra dependency, possibly longer application loading time

So, my official recommendation is that you read this post and perhaps look at the code for learning purposes. But I really doubt you will use my code in production (although you are welcome to).

Benchmarks

Benchmarks are tricky. In this case there are three kinds of benchmarks:

  1. Benchmarks using array[i] to get the item at an index. This is horrible for the linked list. I wrote no such benchmarks.
  2. Benchmarks testing map(), reduce() and filter(), which I wrote but which consistently show no relevant or interesting differences between the built-in Array and my DoubleLinkedList (my code is essentially as fast as the standard library array code, which on the one hand is impressive, and on the other hand is a reason not to use it).
  3. Benchmarks where my DoubleLinkedList does fine, mostly ones that depend heavily on push(), pop(), shift() and unshift().

The only thing I present below is (3). I have nothing for (1), and (2) shows nothing interesting.

The machines are, in order, a Hades Canyon i7 NUC, an old i5 NUC, a newer Celeron NUC, an Acer Chromebook R13 (with an ARMv8 CPU), a Raspberry Pi v3, and a Raspberry Pi v2. The Chromebook runs ChromeOS, the i7 runs Windows, and the rest run Linux.

My benchmarks use Math.random() to create test data. That was not very smart of me because the variation between test runs is significant. The below numbers (milliseconds) are the median value of running each test 41 times. You can see for yourself that the values are quite consistent.

The tested algorithms

The push(), pop(), shift(), unshift() tests use the array/list as a queue and push 250k “messages” through it, keeping the queue at roughly 10k messages.
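The queue test can be sketched like this (queueBenchmark is a hypothetical simplification of the actual benchmark code):

```javascript
// Push `total` messages through a queue, keeping roughly `depth` queued:
// fill the queue first, then push and shift in lockstep, then drain it.
function queueBenchmark(queue, total, depth) {
  let pushed = 0, popped = 0;
  while (pushed < depth) queue.push(pushed++);
  while (pushed < total) { queue.push(pushed++); queue.shift(); popped++; }
  while (popped < total) { queue.shift(); popped++; }
  return popped;
}

const t0 = Date.now();
queueBenchmark([], 250000, 10000);
console.log('Array push/shift 250k:', Date.now() - t0, 'ms');
```

The same function runs unchanged against the DoubleLinkedList, since it only uses push() and shift().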

The mergesort() test is a mergesort built on top of the datastructures using push()/shift().

The sort() test is the standard Array.sort(), versus a mergesort implementation for DoubleLinkedList (it has less overhead than mergesort(), since it does not create new objects for every push()).

Benchmark result

                   Node 8   ================ Node 10 ================
(ms)                NUCi7    NUCi7   NUCi5   NUC-C     R13   RPiV3   RPiV2
unshift/pop 250k
  Array               679      649    1420    1890    5216   11121    8582
  D.L.L.                8       13      10      20      40     128     165
push/shift 250k
  Array                37       31      31      49     143     388     317
  D.L.L.               10       12      10      19      44     115     179
mergesort 50k
  Array               247      190     300     466    1122    3509    3509
  D.L.L.               81       88     121     244     526    1195    1054
sort 50k
  Array                53       55      59     143     416    1093     916
  D.L.L.               35       32      42      84     209     543     463

What do we make of this?

  • For array, push/shift is clearly faster than unshift/pop!
  • It is possible to implement a faster sort() than Array.sort() of the standard library. However, this may have little to do with my linked list (I may get an even better result if I base my implementation on Array).
  • I have seen this before with other Node.js code but never published it: the RPiV2 (ARMv7 @900MHz) is faster than the RPiV3 (ARMv8 @1200MHz).
  • I would have expected my 8th generation i7 NUC (NUC8i7HVK) to outperform my older 4th generation i5 NUC (D54250WYK) by more than it does.

More performance findings

One thing I thought could give good performance was a case like this:

x2 = x1.map(...).filter(...).reduce(...)

where every function creates a new Array just to be destroyed very soon. I implemented mapMutate and filterMutate for my DoubleLinkedList, that reuse existing List-nodes. However, this gave very little. The cost of the temporary Arrays above seems to be practically insignificant.
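The idea behind mapMutate is to overwrite the values in the existing nodes rather than allocating a new list for the result. A sketch of the concept, assuming nodes of the form { value, prev, next } (not the actual implementation):

```javascript
// mapMutate: apply fn to every node's value in place, reusing the
// existing nodes instead of allocating a new list for the result.
function mapMutate(list, fn) {
  let node = list.head;
  let i = 0;
  while (node) {
    node.value = fn(node.value, i++);
    node = node.next;
  }
  return list;
}
```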

However, for my DoubleLinkedList:

dll_1 = DoubleLinkedList.from( some 10000 elements )
dll_1.sort()
dll_2 = DoubleLinkedList.from( some 10000 elements )

Now
dll_1.map(...).filter(...).reduce(...) // slower
dll_2.map(...).filter(...).reduce(...) // faster

So reusing the list nodes, which I thought would be a cost saver, turns out to produce cache misses instead: after sorting, the nodes of dll_1 are presumably no longer laid out in allocation order in memory.

Using the Library

If you feel like using the code, you are most welcome. The tests run with Node.js and first run unit tests (quickly) and then benchmarks (slower). As I wrote earlier, there is some Math.random() in the tests, and on rare occasions statistically unlikely events occur, making the tests fail (I will not make this mistake again).

The code itself is just for Node.js. There are no dependencies and it will require minimal work to adapt it to any browser environment of your choice.

The code starts with a long comment specifying what is implemented. Basically, you use it just as Array, with the exceptions/limitations listed. There are many limitations, but most reasonable uses should be fairly well covered.

Conclusion

It seems to make no sense to replace Array with a linked list in JavaScript (Stroustrup was right). If you are using Array as a queue, be aware that push/shift is much faster than unshift/pop. It would surprise me if push/pop were not much faster than unshift/shift for a stack.

Nevertheless, if you have a (large) queue/list/stack and all you do is push, pop, shift, unshift, map, filter, reduce and sort, go ahead.

There is also a concatMutate in my DoubleLinkedList. That one is very cheap, and if you for some reason do array.concat(array1, array2, array3) very often perhaps a linked list is your choice.
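The reason concatMutate is so cheap is that concatenating two linked lists is just pointer splicing, independent of list length. A sketch of the idea, assuming nodes { value, prev, next } and lists with head/tail/length (not the actual implementation):

```javascript
// concatMutate: append list b to list a in O(1) by splicing pointers.
// b is emptied in the process (hence "mutate").
function concatMutate(a, b) {
  if (!b.head) return a;
  if (!a.tail) {
    a.head = b.head;
    a.tail = b.tail;
  } else {
    a.tail.next = b.head;
    b.head.prev = a.tail;
    a.tail = b.tail;
  }
  a.length += b.length;
  b.head = b.tail = null;
  b.length = 0;
  return a;
}
```

Compare this with Array.prototype.concat(), which has to copy every element of every argument into a new array.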

It should come as no surprise, but I was still surprised, that sort() (mergesort, in my case) was so easy to implement on a linked list.

On RPiV2 vs RPiV3

I have written several times before about how the 900MHz ARMv7 of the RPiV2 completely outperforms the 700MHz ARMv6 of the RPiV1. It is about 15 times faster, and it is not completely clear why the difference is so big (it is that big for JavaScript, not for typical C code).

The RPiV3 is not only 300MHz faster than the RPiV2; it also has a 64-bit ARMv8 CPU, compared to the 32-bit ARMv7 CPU of the RPiV2. Yet the V3 delivers worse performance than the V2.

One reason could be that the RPi does not have that much RAM, nor very fast RAM, and that the price of 64-bit is simply not worth it. For now, I have no other idea.

References

An article about sorting in V8: https://v8.dev/blog/array-sort. Very interesting read. But I tried Node version 11 that comes with V8 version 7, and the difference was… marginal at best.

Micro service separation

Let’s say we have a simple service that we can start:

$ ls
my-service my-data
$ ./my-service -d my-data -p 8080

As we interact with the service over HTTP on 8080 it stores data in the my-data folder. This may seem simplified but it is basically how any network (web, file, directory, database and so on) service works.

Micro Services

Let’s say you have 4 different such services responsible for different things (like html/UI, storage, log and authentication) and they work together: well, you basically have a Micro Service architecture.

All your different services can have a binary and a data folder. They can all exist in the same folder. You can start, stop and update them independently. If a service is “heavy” you can run several instances of it. The services need to know about each other and listen to different ports, but that is just a matter of configuration and it can be automated.

Separation of micro services

While the simple approach above works (and it works surprisingly well), you may run into issues such as:

  1. you want to be sure two services can’t mess with each others data
  2. a service may be heavy and should run on separate hardware
  3. the services have common dependencies but it must be possible to update them separately (dll hell)
  4. the services may not even run on the same operating systems
  5. two services may use some resource that would cause a conflict if they shared it (say both use the same windows registry keys)
  6. different teams may be responsible for different services, and they should neither be able to mess with each other nor blame each other

Clearly, running all services on the same computer, as the same user, in the same folder is not good enough in these scenarios. What options do we have?

Separate Hardware

Traditionally, especially in the Windows world, each service got its own computer (I refer to custom application services; clearly, Windows itself comes with multiple standard services running).

In Windows, you typically install something with an install wizard. It updates and stores stuff in different places in the system (the Registry, the Windows folder, Program Files and perhaps more). Multiple services may not coexist completely safely. So each service gets a Windows server that you back up entirely in case of disaster. This is expensive, wasteful and complicated.

Virtual Machines

VMWare was founded in 1998 and VMWare workstation was released in 1999. It changed everything, especially on Windows. Instead of having multiple computers you could have one physical computer running multiple virtual computers. Such a virtual computer “thought” it was a real computer. It needed special device drivers for the virtual hardware it ran on.

This is great. But you end up duplicating megabytes, perhaps gigabytes, of system files. Installation and configuration of a computer is not trivial. Even if you automate it, there are many details that need to be right. And a virtual computer may need to swap/page, and does so to what it thinks is a physical drive, but which is just a file on the host computer.

Just different users

You may run your different services in the home directories of different users. In theory that could work in Windows, but it would be quite an unusual setup.

In *NIX it would mostly work fine. You can have multiple terminals and log in as multiple users at the same time. If you are root you can easily write scripts to become any user you like, and execute whatever you want in their home directory.

Also, in *NIX most software can actually be built, installed and run in a regular user home directory. Often you just build and install as the regular user:

$ ./configure --prefix=/home/service01
$ make
$ make install

Basically, *NIX is already Micro Service-ready.

Chroot

For some purposes, running in a home directory may not be optimal. Instead, you may want to run the application in an environment where everything it needs, and nothing else, is in / (the root). There is a command called chroot that allows you to do this.

But chroot is not perfect. First, it is not entirely safe (there are ways to break out of it). Second, you need to populate /bin, /lib and /etc with everything you need, and that may not be obvious. Third, only the service runs in the chroot – the administrator or devops team still access the computer normally, and they are neither restricted to the chroot nor limited to seeing only it.

Containers

While all the above methods can be made to work for a microservice architecture, much effort has been made to come up with something even better, especially for deploying applications to the cloud: containers.

Containers and the tools around them focus much on development, deployment and automation. They are made for DevOps. It is cheap to configure, create, run and discard containers.

Application containers (Docker) are quite similar to a chroot, but they exist on Windows too. The point is putting everything an application needs, and nothing else, into the container, so you can easily move it, reconfigure it, duplicate it, and so on, without touching the operating system itself. The issue of having exactly the right OS version, with the right patches, and the right versions of the right dependencies installed, is much simplified when you can create a stable container that you can run on any machine capable of running containers.

System containers (LXC) are quite similar to a virtual machine. But while a virtual machine emulates hardware and runs a complete kernel, a system container just emulates a kernel (that may require some contemplation). It has all the advantages of a Linux virtual machine on Linux, but less of the costs.

Conclusion and Summary

Containers are popular, and for good reasons. But they are also a bit hyped. At the end of the day, you have your service code, and when you run it, it (perhaps) works on local data. That is it. You can separate, scale, isolate, secure, manage, deploy, monitor and automate this on different levels:

  1. Run your services in different folders (most light weight)
  2. Run your services as different users
  3. Run your services in chroots
  4. Create application containers for your services
  5. Create system containers for your services
  6. Create virtual machines for your services
  7. Get physical machines for your services (most heavy weight)

You can clearly mix and match strategies (you may very well need to).

And the price of a Raspberry Pi is so low, that even #7 can be very cheap for many purposes.

Doublethink 2019

Doublethink is: the acceptance of contrary opinions or beliefs at the same time, especially as a result of political indoctrination

As I listen to politicians and influencers in 2019 there are very many contradictory things that they seem to believe, and that I am expected to also believe.

I publish the list below not to have a discussion about every single item. Perhaps you agree about some and think I got others wrong (which is of course possible). I publish this list to show that it is common that politicians and other influencers hold and express contradictory opinions at the same time, and that we are supposed to accept and follow this.

I am convinced that each and every one of us should ideally hold zero contradictory beliefs at any given time. So even if you only agree about some of my items, we should be able to agree that we have a systemic problem with doublethink.

Much of this doublethink happens in the name of Political Correctness. Welcome to 1984.

We are supposed to believe…

that Nazism is worse than Communism despite the number of dead tells a very different story

that taxation is useful to restrict things like tobacco, alcohol and emissions, but not causing any harm to real businesses and employment

that they care about reducing CO2-emissions, despite they don’t want any nuclear power

that nuclear power is very dangerous, despite coal causes more deaths monthly on the planet than nuclear accidents ever caused

that the unequal distribution of power and money under capitalism will disappear when even more power is centralized with politicians to control not only government, but also companies under socialism

that Israel is a terrible apartheid state, despite there is a significant minority of arabs and muslims in Israel, enjoying a good living standard and having more democracy and human rights than arabs and muslims anywhere else in the middle east

that they stand up against all racism – nazism most of all, except when it comes to antisemitism among people from the middle east

that the hijab (or variants of same purpose) is a symbol of women’s freedom when many women in and from the middle east tell a very different story

that the white European man brought slavery and slave trade to the world, when the arab slave trade had been going on for 1000 years (both from Africa and Europe) and the market and infrastructure was already there in Africa when Europeans for a few hundred years did slave trade (a horrible thing)

that capitalism is the worst threat to the environment, when the atrocities when it comes to destroying the environment in the Soviet Union are unprecedented

that Europeans brought all cruelty and misery to the american continent, despite human sacrifice and very authoritarian societies were massively widespread among the natives before the continent was discovered (the Europeans also did horrible things)

that socialism is in strong opposition to nazism, when clearly nazism is a form of socialism, and Marx coined the term “the jewish question” himself

that the left is against hatred and violence, when their own rhetoric against capitalists, white men and other opponents is very aggressive

that all criminals deserve another chance and can be good citizens and humans, despite the fact that psychopathy exists and there is no known working treatment

that anti-fascism is a democratic mindset, however the Berlin Wall was stated to be an anti-fascist measure and it had very little to do with fascism and everything to do with oppressing people in a socialist state

that capitalism is bad to people, despite it is practically unheard of people leaving free market nations for socialist nations, while millions of people have fled socialism

that Islam is a religion like others, and a religion of peace, despite it has been at war with itself, at war with its own people, at war with its women, and at war with its neighbors since the days of Mohammed

that the free market is to blame for lack of housing, when the housing market is one of the few markets that are heavily regulated, and while most free markets show no lack of options for relatively poor people: cheap clothes, food, fast food, entertainment, air travel, furniture are provided by global brands like H&M, McDonald’s, Lidl, Netflix, Ryanair and IKEA (how about letting those companies work with lack of housing?)

that ISIS is not representative of Islam despite they just follow the footsteps of Mohammed which is clearly a virtue in Islam

that Islamism is a separate thing from Islam, despite the Koran considers the law, Sharia, central to the life of a muslim, and despite Erdogan has said that the separation of Islam and Islamism is an ugly western construction

that Islam is not a threat to freedom, democracy and human rights, despite there are virtually no muslim majority countries that are free, democratic and respect human rights

that Mohammed is a respectable prophet, despite he was one of the most cruel humans in history

that the environmentalists care about science when it comes to human caused climate change, but when it comes to GMO and studies finding disadvantages with organic farming they ignore scientific results

that later muslim aggressions have been a reasonable response to the crusades, when the crusades are minor isolated events compared to the many centuries of war, imperialism and slavery that muslims had brought upon the Middle East, Europe and Africa before the crusades

that women are structurally and systematically oppressed by men in the west, despite women live 5 years longer, spend less than 20% time in prison, commit far fewer suicides, experience far fewer workplace accidents, and generally have better mental health and more friends than men do

that feminists care about women who have a hard time, despite they don’t listen at all to what prostitutes say or want

that very few women go into tech jobs because of discrimination and sexist attitudes, despite women a century ago were restricted from most careers and (very impressively) managed to make it into medicine, teaching, law, economics, politics, and so on (feminists today are implying that tech men are worse than other men which is speculative, and also injury-on-insult since women tend to reject nerds in the first place)

that democratic socialism has anything to do with democracy, despite all socialist states have ended up seriously authoritarian, many of them with the word democratic in their names

that the west, USA and modern Europe, are the only evil empires, despite the history of the world is the history of empires (Chinese, Persian, Roman, Muslim, Aztec, Ottoman, Russian, and so on) that conquered nations and enslaved people.

that we are very certain about the catastrophic effects of climate change, despite previous threats: peak oil, HIV, the ozone layer, nuclear power, nuclear weapons, world wars, a new ice age, turned out to be exaggerated (in fact humans have always thought they live just before the end of time)

that men and women are practically the same, when the biological differences (both physical, psychological, and when it comes to abilities and talents) are clearly significant (and this is also proven scientifically in numerous studies)

that equality of outcome is desirable, when the way to achieve it necessarily is very authoritarian and restrictive to individual freedom and choice

that anxiety of climate change is a virtue, while anxiety of terrorism and criminality is a sin

that nationalism is inherently evil, despite no welfare state has ever existed beyond the scope of a nation

that women are for all purposes equal to men, except when they do horrible things like joining ISIS, then they are to be understood as passive victims and they can play the woman-card

that the anti-racist left are anti racist at all, when they often hold and spread very antisemitic attitudes and support antisemitic (muslim) organisations

that feminism is about fighting the patriarchy, when it is mostly indifferent to outright oppression of women in/from non-western cultures

that feminists have valid reasons to embrace people like Linda Sarsour (an advocate of Sharia in USA), when they reject people like Ayaan Hirsi Ali (an ex-muslim standing up for human rights)

that political correctness is more important (when it comes to respect of minorities and other cultures) than the words of Martin Luther King: History will have to record that the greatest tragedy of this period of social transition was not the strident clamor of the bad people, but the appalling silence of the good people.

that cows, chicken and pigs will be happily liberated by vegetarians, when the animals will instead be practically extinguished

that christianity and islam are mostly the same, despite christianity has been reformed on several occasions and christians mostly accept and respect different interpretations of their religion, while Islam is inherently resilient to reformation, has failed to reform but rather tends to fall back into fundamentalism, and muslims mostly don’t accept the interpretations of other muslims.

that socialism is a fine idea, which it perhaps was just like Thalidomide (Neurosedyn in Swedish), but after seeing the catastrophic effects of applying it, it is nothing less than heartless, cruel and evil to make another attempt (but socialism has, unlike capitalism, science and most other activities, absolutely no feedback loop so it is applied over and over again with the same horrible result – only the propaganda is refined)

that intolerant extremists to the right (fascists and nationalists) are considered an absolute evil and threat and must not even be talked to, but the returning murderers and rapists of ISIS are supposed to be respectfully integrated (getting more support than any other refugee or ISIS victim could dream of)

that the idea of socialism, which ultimately is to reward those who do wrong and punish those who do right, can ever lead to a good society despite any workplace, relationship, raising of children or keeping pets would fail horribly following the same inferior idea (I dare you to start encouraging bad behavior and punishing good behavior with people or pets around you)

that the problem is that some people are very rich, when in fact poverty globally is being effectively pushed back (except mostly in a few unfortunate authoritarian and/or socialist countries); caring about improving the situation for the poor is empathetic, caring about taking from the rich is only jealousy (and shows ignorance of basic economics)

that gender theory and political correctness are good things, when they are just the racial biology and racial hygiene of our time (the purpose is to classify and group people based on origin and physical attributes, and then to create a conforming population for the utopia, not to respect human rights or diversity)

that women have absolute right to their own bodies (abortion now legal up to day of birth in some places), unless they want to make profit on sex or sexuality

that there is anything reasonable about transgender women competing with other women in elite sports, despite the obvious fact that they have a massive unfair advantage

that gender is a social construction, although we are born with a sexual orientation

that Che Guevara is a fighter for freedom and equality for the left, when he was a homophobic, racist, chauvinistic murderer

that while abortion is an absolute right (my body my choice), surrogacy is unacceptable

that criticism of Islam is Islamophobic, while criticism of Israel is not antisemitic

In Sweden, we are supposed to believe…

that it will reduce emissions to build high speed railroads to the north to replace a few daily flights (emissions from construction will exceed those from the flights for the foreseeable future)

that criminality can not be reduced by longer prison sentences, despite criminals answering that they did not even recognize they got a punishment at all for their crime

that there is evidence longer prison sentences don’t work to reduce crime, when it has never been tried in modern times in Sweden (references are usually to studies from the US where already draconian prison sentences were made even longer, which obviously says very little about what happens if criminals start being punished at all, or from a very low level)

that socialism made Sweden rich, when in fact the foundation of wealth in Sweden was built before the second world war by a low-tax, laissez-faire economy (that later came to slow down as taxes rose under socialism)

that socialism brought social security to the working class in Sweden, despite the workers having organized it themselves as private insurance (which the socialist government later nationalized and took credit for)

that we have world class healthcare, despite people who have lived abroad often being shocked by Swedish health care

that they care about the well-being of animals, when they want wolves to hunt freely, causing massive cruelty to wild and domestic animals wherever they go

that they care about vulnerable people, when they are mostly unwilling to protect ordinary people – even abused women – from criminals with a long record of offences

that the left (Swedish V) is inherently against racism, when they were the only party who did not oppose Hitler at the time Nazi Germany conquered Norway and Denmark.

that the left (Swedish V) is democratic and against imperialism, when they were the only party not to support the Polish Solidarity movement at the time.

that restricting honest citizens’ and hunters’ access to weapons even further will make the country any safer, despite legal weapons being virtually unheard of in crime cases (except rare self-defense cases)

that they care about ethnic diversity and indigenous people, when Sweden blatantly ignores the UN Declaration on the Rights of Indigenous Peoples when it comes to the Sami.

About Venezuela, we are supposed to believe…

that USA is to blame for the catastrophe in Venezuela, despite Russia and China having far more dealings with the socialist regime

that it is about USA wanting the oil, when the USA is rather self-sufficient

that Russia has some moral ground when they object to interfering in the internal affairs of other nations, like Venezuela, given their recent history in Georgia, Ukraine and Chechnya, and a longer tradition of occupying neighbouring states

that they care about potential climate refugees in a distant future, while they don’t give a shit about refugees from socialist Venezuela

that the crimes the socialist regime in Venezuela commits against human rights and its own population (censorship, torture, oppression, socialist market regulation and currency regulations resulting in lack of everything, blocking humanitarian aid, ignoring the constitution, establishing colectivos to harass and brutalize people, and so on) are somehow reasonable given a narrative that USA is working against chavism and the socialist movement

that the socialist regime had inherently good intentions towards its own population when it ran welfare programs in the past, despite the same regime now being completely indifferent to the suffering of its own people, even rejecting international emergency humanitarian aid (the regime only cares about its loyal supporters; everyone else who doesn’t support the socialist cause can suffer)

that things are mostly fine in Venezuela, despite UNHCR reporting that more than 3 million people have fled the country in a few years (3 million people had left the GDR before the Berlin Wall was built to prevent it)

Crostini on Acer R13

Update 20190409: Activated Developer mode and installed Crouton again. It works perfectly for me.
Update 20190329: Got build 73.0.3683.88 four days late, Chrome OS 73, still no improvement.
Update 20190306: Got build 72.0.3626.122, problems remain.
Update 20190225: Got build 72.0.3626.117, problems remain.

Finally Crostini is available on the stable channel of ChromeOS for Acer R13 (elm platform 72). Unfortunately, the experience is still not what I had hoped.

I get a container as expected, but after a while problems start: it crashes and fails to start again. This is how to destroy the virtual machine (termina), and with it the container (penguin) inside:

crosh> vmc stop termina
crosh> vmc destroy termina
crosh> vmc start termina
(termina) chronos@localhost ~ $ lxc list
To start your first container, try: lxc launch ubuntu:18.04
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

When I have done this, I can start the terminal app, it takes a while, and I get:

zo0ok@penguin:~$ uname -a
Linux penguin 4.19.4-02480-gd44d301822f0 #1 SMP PREEMPT Thu Dec 6 17:48:31 PST 2018 aarch64 GNU/Linux

That is good. I run apt-get update and apt-get upgrade successfully.

Trying to install a real terminal

However, when I try to install gnome-terminal “it” crashes.

(termina) chronos@localhost ~ $ [ERROR:utils.cc(50)] Failed to read message size from socket: Resource temporarily unavailable
[ERROR:vsh_client.cc(186)] Failed to receive message from server: Resource temporarily unavailable
crosh>

Both the virtual machine (termina) and the container (penguin) crashed. I can start termina again, but penguin is dead.

Yesterday, when running apt-get install, I got “Illegal instruction” repeatedly. I can make some semi-qualified guesses based on that:

  1. Something with the virtualization/containerization layer is not working properly on ARM64 yet.
  2. The Debian guest OS is built in a way that is not compatible with my machine, at least not inside a container in a VM.

I tried (as suggested in a message above) to set up Ubuntu instead:

(termina) chronos@localhost ~ $ lxc launch ubuntu:18.04
Creating the container
Container name is: set-kitten
Starting set-kitten
(termina) chronos@localhost ~ $ lxc list
+------------+---------+-----------------------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------------+---------+-----------------------+------+------------+-----------+
| set-kitten | RUNNING | 100.115.92.201 (eth0) | | PERSISTENT | 0 |
+------------+---------+-----------------------+------+------------+-----------+

Then it turned out to be necessary to use a little trick to set the password in Ubuntu, and log in as usual.

(termina) chronos@localhost ~ $ lxc exec set-kitten -- /bin/bash
root@set-kitten:~# passwd ubuntu
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
root@set-kitten:~# exit

(termina) chronos@localhost ~ $ lxc console set-kitten
To detach from the console, press: <ctrl>+a q
Ubuntu 18.04.2 LTS set-kitten console
set-kitten login: ubuntu
Password:

Again, apt-get update and apt-get upgrade worked just fine. But when I tried to install gnome-terminal, all looked fine for a while, until:

Setting up libcdparanoia0:arm64 (3.10.2+debian-13) …
Setting up libblockdev-loop2:arm64 (2.16-2) …
[ERROR:utils.cc(50)] Failed to read message size from socket:
Resource temporarily unavailable
[ERROR:vsh_client.cc(186)] Failed to receive message from server: Resource temporarily unavailable
crosh>

From here it got worse, and all I could think of was to start over again.

(termina) chronos@localhost ~ $ lxc list
+------------+---------+------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------------+---------+------+------+------------+-----------+
| set-kitten | STOPPED | | | PERSISTENT | 0 |
+------------+---------+------+------+------------+-----------+
(termina) chronos@localhost ~ $ lxc start set-kitten
Error: Missing source '/run/sshd/penguin/authorized_keys' for disk 'ssh_authorized_keys'

Back to basics

What if the problem is related to me wanting a GUI terminal? I started over with a new default penguin container (using the vmc destroy termina trick I mentioned in the beginning). Unfortunately, when I ran apt-get upgrade in my new penguin system I got the impression that many things were left over from before anyway, and apt-get upgrade crashed.

So I decided to:

  1. Powerwash the Acer R13
  2. First thing when it came up again, install Linux
  3. apt-get update
  4. apt-get upgrade (CRASHED)

Trying with Archlinux

I gave an arch-container a try. Installing archlinux is as easy as:

lxc launch images:archlinux

I installed a few tools (ssh, git, nodejs) and that was fine. Then I tried to git clone a private repository. It got stuck half way, but Ctrl-C allowed me to restart and all was good. Then I installed some node-packages with npm install and I got the familiar:

[ERROR:utils.cc(50)] Failed to read message size from socket: Transport endpoint is not connected [ERROR:vsh_client.cc(186)]
Failed to receive message from server: Transport endpoint is not connected
crosh>

…that is, the container and the virtual machine both crashed.

Other people…

It seems I am not alone according to this reddit thread (that I have also posted to).

Conclusion

Unfortunately, I must say that Crostini is not at all stable or useful on the Acer R13, despite now being in the stable channel.

At this point, when I get very irregular crashes, I must ask myself if there is anything wrong with my Chromebook. However, apart from Crostini I never had any problems with it. And for several months I ran it in Developer mode with Crouton, always rock solid.

The good thing is possibly that this should not be a major bug with architectural implications. If they (Google) have made it all this way with Crostini for ARM64, I don’t think they will abandon it. This will silently be fixed some day and forgotten.

So unfortunately, I can not recommend the Acer R13 (or any other ARM device) at the moment if you are interested in running Linux/Crostini on your Chromebook. Crouton should still be good, I guess, though it has been a while since I used it.

Where to ‘use strict’ with Object.freeze()?

I have coded JavaScript for a short enough time that I consider ‘use strict’ a mandatory and obvious feature of the language. I always use it, unless I forget to.

A while ago I became aware of Object.freeze(). I have been thinking about different ways to exploit this (strict) feature for a while and I now have a very good use case: freeze input data in unit tests to ensure my tested functions don’t accidentally change it (pure functions are good, pure functions don’t change their input, and it is hard to really guarantee a function in JavaScript is pure).

Imagine I am writing a function that calculates the average and I have a test for it.

const averageOfArray1 = (a) => {
  let s = 0;
  for ( let i=0 ; i<a.length ; i++ ) s += a[i];
  return s/a.length;
};

describe('test avg', () => {
  it('should give the average value of 2', () => {
    const a = [1,2,3];
    assert.equal(2, averageOfArray1(a) );
  });
});

If averageOfArray1 mutated its input, that would be a serious bug, and the above test would not detect it. Let’s look at a different implementation:

const averageOfArray2 = (a) => {
  for ( let i=1 ; i<a.length ; i++ ) a[0] += a[i];
  return a[0]/a.length;
};

describe('test avg', () => {
  it('should give the average value of 2', () => {
    const a = [1,2,3];
    assert.equal(2, averageOfArray2(a) );
  });
});

Some genius “optimized” the function by eliminating an unnecessary variable (s), and the test still passes! However, if the test were written:

describe('test avg', () => {
  it('should give the average value of 2', () => {
    const a = Object.freeze([1,2,3]);
    assert.equal(2, averageOfArray2(a) );
  });
});

the test would fail! Much better. How does the test fail? This is what I get:

1) test avg
should give the average value of 2:

AssertionError [ERR_ASSERTION]: 2 == 0.3333333333333333
+ expected - actual
-2
+0.3333333333333333

So it appears that the first element [0] of the array was never changed, hence the return value of 0.3333. But no exception was thrown. If I instead ‘use strict’ for the entire code:

'use strict';

const assert = require('assert');

const averageOfArray2 = (a) => {
  for ( let i=1 ; i<a.length ; i++ ) a[0] += a[i];
  return a[0]/a.length;
};

describe('test avg', () => {
  it('should give the average value of 2', () => {
    const a = Object.freeze([1,2,3]);
    assert.equal(2, averageOfArray2(a));
  });
});

instead I get:

1) test avg
should give the average value of 2:
TypeError: Cannot assign to read only property '0' of object '[object Array]'
at averageOfArray2 (avg.js:12:45)
at Context.it (avg.js:20:25)

which is what I really wanted.

So it APPEARS to me that without ‘use strict’ the frozen object is not changed, but changing it just fails silently. With ‘use strict’ I get an exception right away, which leads me to the question of where I can put ‘use strict’. This is what I found:

// 'use strict';  // GOOD

const assert = require('assert');

// 'use strict'; // BAD

const averageOfArray2 = (a) => {
  // 'use strict'; // GOOD
  let i;
  // 'use strict'; // BAD
  for ( i=1 ; i<a.length ; i++ ) a[0] += a[i];
  return a[0]/a.length;
};

describe('test avg', () => {
  // 'use strict'; // BAD
  it('should give the average value of 2', () => {
    const a = Object.freeze([1,2,3]);
    assert.equal(2, averageOfArray2(a));
  });
});

That is, ‘use strict’ should be in place where the violation actually takes place. And ‘use strict’ must be placed first in whatever file or function it applies to, otherwise it is silently ignored! This is probably well known to everyone else, but it was not to me.

Conclusion

Object.freeze() is very useful for improved unit tests. However, you should use it together with a properly placed ‘use strict’, and that is in the function being tested (not only in the unit test).

And note, if you have done Object.freeze in a unit test, and someone refactors the tested function in a way that it both:

  1. Mutates the frozen object
  2. Removes or moves ‘use strict’ to an invalid place

your unit tests may still pass, even though the function is now very dangerous.

Best way to write compare-functions

The workhorse of many (JavaScript) programs is sort(). When you want to sort objects (or numbers, actually) you need to supply a compare-function. Those are nice functions because they are very testable and reusable, but sorting is also a bit expensive (perhaps the most expensive thing your program does) so you want them fast.

For the rest of this article I will assume we are sorting some Order objects based on status, date and time (all strings).

The naive way to write this is:

function compareOrders1(a,b) {
  if ( a.status < b.status ) return -1;
  if ( a.status > b.status ) return 1;
  if ( a.date < b.date ) return -1;
  if ( a.date > b.date ) return 1;
  if ( a.time < b.time ) return -1;
  if ( a.time > b.time ) return 1;
  return 0;
}

There are some things about this that are just not appealing: it is too verbose, there is a risk of a typo, and it is not too easy to read.

Another option follows:

function cmpStrings(a,b) {
  if ( a < b ) return -1;
  if ( a > b ) return 1;
  return 0;
}

function compareOrders2(a,b) {
  return cmpStrings(a.status,b.status)
      || cmpStrings(a.date  ,b.date  )
      || cmpStrings(a.time  ,b.time  );
}

Note that the first function (cmpStrings) is highly reusable, so this is shorter code. However, there is still some repetition, so I tried:

function cmpProps(a,b,p) {
  return cmpStrings(a[p], b[p]);
}

function compareOrders3(a,b) {
  return cmpProps(a,b,'status')
      || cmpProps(a,b,'date')
      || cmpProps(a,b,'time');
}

There is something nice about not repeating status, date and time, but there is something not so appealing about quoting them as strings. If you want to go more functional you can do:

function compareOrders4(a,b) {
  function c(p) {
    return cmpStrings(a[p],b[p]);
  }
  return c('status') || c('date') || c('time');
}

To my taste, that is a bit too functional and obscure. Finally, since it comes to mind and some people may suggest it, you can concatenate strings, like:

function compareOrders5(a,b) {
  return cmpStrings(
    a.status + a.date + a.time,
    b.status + b.date + b.time
  );
}

Note that in case fields “overlap” and/or have different length, this could give unexpected results.

Benchmarks

I tried the five different compare-functions on two different machines and got these kinds of results (i5 N=100000, ARM N=25000), with slightly different parameters.

In these tests I used few unique values for status and date, so that comparisons often fall through to the later fields and the entire compare function is exercised.

(ms)     i5     i5     ARM
#1      293    354     507
#2      314    351     594
#3      447    506    1240
#4      509    541    1448
#5      866    958    2492

This is quite easy to understand. #2 does exactly what #1 does, and the function call overhead is eliminated by the JIT. #3 is trickier for the JIT since a string is used to look up a property. That is true also for #4, which additionally requires a function to be generated. #5 needlessly builds two concatenated strings, when often only the first field is needed to decide the comparison.

Conclusion & Recommendation

My conclusion is that #3 may be the best choice, despite being slightly slower. I find #2 clearly preferable to #1, and I think #4 and #5 should be avoided.

ArchLinux on RPi with USB Harddrive

I have found that one of the weakest parts of a Raspberry Pi server or workstation is the SD card: it is slow and it will break sooner rather than later. There may be industrial SD cards or better SD cards, but a good old USB hard drive is just better.

With the RPi v3 it should be possible to boot straight off a USB drive! That sounded great, so I got a brand new RPi v3 B+ and a USB hard drive, and I installed ArchLinux on the hard drive, just as if it were a memory card. Fail. That did not work (with ArchLinux; Raspbian may be another story).

But there are levels of pain:

  1. All SD-card
  2. SD-card, but /home on USB harddrive
  3. USB harddrive, but /boot on SD-card
  4. All USB harddrive

I decided to try #3.

It turns out that when the RPi boots it runs u-boot (it is like the BIOS of the RPi, and of many other embedded devices). At one point u-boot reads boot.scr (from the first VFAT partition of the SD card). It had the lines:

part uuid ${devtype} ${devnum}:2 uuid

setenv bootargs console=ttyS1,115200 console=tty0 root=PARTUUID=${uuid} rw rootwait smsc95xx.macaddr="${usbethaddr}"

I figured that I could do this instead:

# part uuid ${devtype} ${devnum}:2 uuid

setenv bootargs console=ttyS1,115200 console=tty0 root=/dev/sda2 rw rootwait smsc95xx.macaddr="${usbethaddr}"

However, boot.scr has a checksum, so you can’t just edit it. But it tells you what to do: run ./mkscr. That in turn depends on mkimage, so the procedure is:

  1. Install uboot tools
    1. ARCH: pacman -S uboot-tools
    2. Ubuntu/Debian: apt-get install u-boot-tools
  2. Edit boot.txt (not boot.scr) to your liking
  3. Run: ./mkscr
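The edit itself can be sketched like this. It is demonstrated on a sample copy of the two lines in a temp directory; on the real SD card you would edit boot.txt in the boot partition and then run ./mkscr (which, as I understand it, wraps mkimage from the uboot tools) to regenerate boot.scr with a valid checksum:

```shell
#!/bin/sh
# Sketch only: demonstrates the boot.txt edit on a sample copy.
set -e
work=$(mktemp -d)

# The two relevant lines as they appear in the original boot.txt:
cat > "$work/boot.txt" <<'EOF'
part uuid ${devtype} ${devnum}:2 uuid
setenv bootargs console=ttyS1,115200 console=tty0 root=PARTUUID=${uuid} rw rootwait smsc95xx.macaddr="${usbethaddr}"
EOF

# Comment out the dynamic PARTUUID lookup and hardcode the USB drive:
sed -i 's|^part uuid|# part uuid|' "$work/boot.txt"
sed -i 's|root=PARTUUID=[^ ]*|root=/dev/sda2|' "$work/boot.txt"

cat "$work/boot.txt"
# On the real SD card: make this edit in boot.txt in place,
# then run ./mkscr there to regenerate boot.scr.
```

Note that /dev/sda2 assumes the USB drive shows up as the first (and only) USB disk; with several drives attached the naming may differ.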

Now only /boot is on the SD-card. That is quite ok with me. There is very little I/O to /boot, so the SD-card should survive. If I want to I can make a regular simple backup by simply copying /boot to a zip-file or something, and restore that zip-file to any SD-card.

There seems to be no need to edit anything else (like fstab).

Well, the bad thing is that it did not work out 100% as I had hoped. The good thing is that this approach should work with any RPi, not just the RPi v3 that supports USB boot.

Best Train Simulator 2019

I have some personal enthusiasm for trains, and in recent years part of that has been playing Train Simulator on PC. That is the game that used to be called Railworks and that is currently named Train Simulator 2019. While I have spent much time with it, I also have mixed feelings.

In 2019 there are two alternatives to TS2019 that I have tested and that I will write about: Train Sim World and Trainz Railroad Simulator 2019.

My experience with Dovetail Train Simulator (2019)

I got Train Simulator because I wanted to try to drive trains. The game has developed over the years but there are some annoyances.

The game has some quirks and bugs. The physics, engines, wagons, signals, AI and scenario conditions sometimes don’t work the way you would expect.

The game is also rather unforgiving. One little mistake can ruin a scenario so you can’t even continue. If that was passing a red light, ok. But sometimes I am just a little late, a little early, I connected or disconnected the wrong wagon, I went into the wrong siding or something like that.

The combination of bugs and unforgivingness is rather frustrating. A little imperfection in the game I can perhaps accept as a lack of good simulation, but when it ruins the scenario completely, it is worse.

The game has a competitive aspect (it is a game) where you drive scenarios and get scores. This is particularly unforgiving. I decouple a wagon and for some reason (a bug?) I get an “operational error”, am penalised with -750 points (1000 is the maximum), and have no choice but to abort the scenario. Also, speeding is penalised heavily. This is annoying for two reasons: first, the timetable is often ridiculously tight; second, it is not uncommon that the maximum allowed speed changes unexpectedly.

You can read about the outdated graphics of TS2019, and that is true, but it does not ruin my experience. You can read about all the expensive DLC, but that is your choice (I bought some, but almost everything on sale). What I find more annoying is that I buy a nice piece of DLC and it comes with very few scenarios. That is where the (Steam) Workshop comes in, and there are quite many scenarios (of varying quality) to download.

I found that creating scenarios was often more fun than driving myself, and I have contributed some 44 scenarios on Steam Workshop. If driving is quirky, creating scenarios is kind of black magic (the problem is I need to test a scenario, and when it fails after 40 minutes, I need to guess what’s wrong and drive again for 40 minutes until I know if it works – a horrible development and debugging experience).

It seems to me it would be very possible to deliver a better Train Simulator game!

On Realism

It is easy to talk about realism. But is it really what we want? My experience…

  • Some routes allow for long eventless sessions. That is the realistic truth about driving a train, but how entertaining is it?
  • A real challenge when driving a train is braking, and planning your braking. The weight and length of the train matter, as do other factors. In the real world a train engineer makes calculations about braking distances. They are not going to be driving a new train, with unknown weight, on a new track, on a tight time schedule. Yet in a train simulator this is what we do, because we want (much) variation (it is a somewhat boring game anyway).
  • A real engineer knows the line well, and has special physical documentation about the line available. And he has studied this beforehand. You don’t do that in a train simulator.
  • A real engineer spends much time checking things like brakes and wheels. And there is much waiting.
  • You can have a realistic “regulator” that you can operate in the locomotive cab. That will look realistic in one way. But a real engineer would not point at it and look at it with a mouse; he just happens to have his hand there in the first place. User-friendliness, where man and machine become one, is good simulation to me.
  • Real(istic) timetables are good, but not when it is almost impossible to arrive on time in the simulator.

My point is that I don’t want a realistic simulator. I want a simulator that gives me the feeling I am driving a train. And I want the time I spend with my computer to be more eventful, entertaining and challenging than the average work hour of a train engineer. I also want it somewhat more forgiving, with support for things that are easier in the real world.

Train Sim World

Train Sim World is produced by the same company (Dovetail) as Train Simulator 2019. It appears they thought of it as a replacement for Train Simulator 2019, but it also appears that for now the games exist side by side. It is not clear that Train Sim World will ever replace, or even survive, Train Simulator 2019.

The good:

  • It looks (the graphics) better than the alternatives.
  • It may be the most “polished” option (also available for Playstation and Xbox, which gives you a hint).
  • If you get a “package” at discounted price on Steam (EUR25 for 4 routes) it is quite good value.

The bad:

  • It does not look that good; it is still computer graphics with obvious artifacts and problems. Also, the sound is not too convincing and the surroundings are pretty dead.
  • Walking around (in the scenarios) does not appeal to me, and it is not well made enough to add to the realism of the game.
  • Menus are a bit messy.
  • A quite limited number of scenarios, but plenty of “services”; I think that contributes to (even) fewer events, less action and less storytelling.
  • The routes seem small, with very little action or room outside the mainline (very linear).
  • Occasional glitches like “what do I do now?”, “what happens next?” or “how do I do that?” (driving a service, I was done, told to get off, the train drove away by itself with no visible driver and no comments, and then nothing… I had to just quit).
  • It lacks something. It is not a bit dirty, noisy and rough… just too smooth and clean.
  • So far, no possibility for user generated content. It is promised, and it will be based on Unreal, so it seems to be very technically demanding. I myself would prefer to be able to make scenarios with a story easily, without changing anything about the route or the other assets at all.
  • Unreal (which is to thank for the better graphics) seems to be a more complex (expensive) development environment; perhaps this will in the future limit the availability of routes and assets, and keep prices high (pure speculation).

I gave Train Sim World a first try, wrote a very negative review and refunded it, but after a few weeks I gave it a second try, and now I have a more balanced opinion about it.

Trainz Railroad Simulator 2019

Years ago I obviously did research and opted for Train Simulator rather than Trainz. Now that I was a bit disappointed with Train Simulator and rather disappointed with Train Sim World I felt I had to give Trainz 2019 a try.

My expectations, based on marketing and what I had read, were:

  • Better graphics than Train Simulator, but perhaps not as good as Train Sim World.
  • More creator-, community- and sharing-oriented (which appealed to my preference for making scenarios).
  • It’s a railroad simulator, rather than a train driver simulator.

I must say right away that I am quite disappointed. I ended up paying EUR 70 for Trainz, and EUR 25 for Train Sim World, and that does not reflect the value of what I got.

Download Station

Trainz comes with “its own Steam Workshop, Download Station”. This is the worst part of it. Hundreds of assets, organised alphabetically, with virtually no filtering and no community/feedback/rating function. Unless I completely missed something, this is shit. My use case is that I want to see if someone created a nice 30 min session for one of the premium routes that came with my purchase (and that has no extra dependencies). Trainz seems to live in a world where people download zip-files from ftp-servers and spend the effort of maintaining their virtual asset library like a stock portfolio. I am tempted to make a few sessions myself and share them here, on my blog, but why?

Graphics

There is something idyllic, picturesque, beautiful and friendly about Trainz that is missing in Train Simulator and Train Sim World. There are gorgeous screenshots from Trainz out there. But when it comes to actual game performance on my actual computer (a NUC Hades Canyon), Trainz is the worst. I have spent no small amount of time optimizing my graphics settings (and there are many settings to play with).

Quality

To my disappointment the routes come with quite few sessions. The beautiful route from Edinburgh to Aberdeen (perhaps just to Dundee) has two sessions: a passenger service with the same Deltic locomotive going both ways. These two scenarios take 1h30min each to drive. And the one I did try did not work in CAB (realistic) drive mode, because for some reason the Deltic can not pull those wagons at any speed whatsoever. Isn’t it reasonable to expect, when a new EUR 70 release is made after 7 years, that the sessions have been tested at least once, and work?

Then there was another beautiful session on the Cornish mainline where a 2MT steam locomotive pulls ~25 freight wagons and it just can’t make it up the grades. I asked in the forum; it turned out I had managed to get further than most people, but the suggestion was to just try another locomotive (edit the session). Why release a session with the wrong locomotive in the first place?

If driving steam locomotives in realistic mode can be a challenge in Train Simulator (often a frustrating one), in Trainz it feels… not realistic. Perhaps I need more practice, but it is very… unsmooth.

Other things

There is no support for a gamepad, although I found a little piece of software called AntiMicro which works decently well for my purposes.

I really miss the look-out-through-the-side-window camera view.

I appreciate that I can see the status of the next signal in the HUD.

When I have completed a session, it is not remembered (marked as completed), so I made my own list.

A good thing about Trainz is that it is more forgiving than Train Simulator. I ran out of boiler pressure, but then I could switch to simple driving mode and at least complete the session.

I get the feeling that for people who already own and love the old Trainz this is an upgrade. But for a new player it is a rough experience.

Conclusion and recommendation

Unfortunately I think none of the games I have written about lives up to the expectations you should allow yourself to have in 2019. And I am not aware of a better game in the genre.

Clearly this genre appeals to enthusiasts who want to make their own assets and modify the game, and clearly Train Simulator and Trainz are based on old technology that has not aged too well (and people are reluctant to abandon their assets). Train Sim World, being based on Unreal, has not yet been able to deliver a workshop or sharing experience at all.

If you are curious about how it is to drive a train, get Train Sim World (and an Xbox controller if you get it on PC; I know nothing about the Playstation/Xbox experience). Sit comfortably, turn up the volume, have some coffee (or whatever you drink) and do your best to enjoy the experience. Spend time with the tutorials and don’t get too frustrated if you get stuck.

If you want to have your own digital train layout, and play with it (dispatch and control multiple trains), get Trainz, and make sure to have a powerful enough computer.

If you think that the Steam Workshop is a nice idea, where you can share scenarios (and other assets) and communicate with other people about them, get Train Simulator 2019. Cost/price aside, there are very many routes (and extra locomotives) available for Train Simulator 2019.

Train Simulator 2019 now supports 64-bit mode. Technically it’s not… hot… but it is being improved. Train Sim World looks better, but it is not that much better. Honestly, folks who make a living reviewing computer games say: “TS2019 looks so old, but TSW is built on Unreal like all the other cool games, much better.” But for your total train simulation experience, the difference is… marginal.

I would not be too surprised if the Train Sim World editor never happens. If it is released, I would not be surprised if it is too complex and a critical portion of contributors and enthusiasts never switch. I am sceptical about the advice to enthusiasts to “download the UE4 Editor from Epic and start learning”. I doubt I will contribute scenarios if I have to get into a real 3D studio to place some trains and make some timetables/rules.

I would hope that Trainz gets a real workshop experience where you can easily share assets in a social way and where you don’t need to worry too much about dependencies. And I would hope that Trainz manages to polish their game, test it properly, and provide a solid graphics experience.


Trainz Log

I decided to try Trainz (now that 2019 is out) and I think it is useful to keep track of my progress, so I will do it here.

Gamepad

I prefer to drive my train with a gamepad. I found a little free software called AntiMicro which allowed me to map my Xbox gamepad to relevant Trainz keys. It was easy and I recommend it.

Graphics

My computer is a NUC Hades Canyon. It has an Intel Core i7 CPU, AMD Radeon RX Vega M GPU, 16GB of RAM and a 512GB SSD. This computer cannot at all play Trainz 2019 in “Ultra”; instead I have to tune down the performance settings quite a lot. I have found (and I may change my opinion when I have experimented more) that I want to run Trainz in full resolution (1920×1080) and I think the lowest anti-aliasing (2x) makes the most sense. I allow myself to use “Clutter + Turf FX” and High details. Different routes are supposed to be more or less demanding on the GPU.

Cornish Mainline and Branches

A trip to Falmouth: Completed with 2 stars.
Freight Delivery: This session is tricky and I started a thread on the Trainz forum. I don’t think the 2MT is capable of pulling 23 wagons all the way; I got completely stuck after 7 miles on a 2%+ grade. I modified the session and used a 4200 tank locomotive instead, but I got two new problems: 1) AI trains going in the other direction all stand still, 2) when I finally arrived, the session never ended (or continued). Perhaps I broke something when I modified it.
Helston freight run: Completed with 5 stars.
Helston passenger run: Completed with 4 stars.
Mainline passenger service: Completed with 1 star (!), on time, gorgeous!
St Ives passenger run: Completed with 5 stars.

ECML Edinburgh – Dundee

09-10 Dundee – Kings Cross: Completed with 5 stars (but can’t drive in realistic CAB mode)

Sebino Lake

Maintenance Service: Completed with 5 stars (in DCC, can’t get the train rolling in Cab mode)