When to (not) use Web Workers?

Web Workers are a mature, simple, standardised and widely compatible technology that allows multithreaded JavaScript applications in the web browser.

I am not going to write about how to use Web Workers (check the excellent MDN article). I am going to write a little about when and why to use them, and when not to.

First, Web Workers are about performance. And performance is typically not the best thing to think about first when you code something.

Second, when you have performance problems and you throw more cores at the problem, your best possible speedup is x2, x4 or xN. In 2018, 4 cores are quite common, which means that in the optimal case you can make your program 4 times faster by using Web Workers. Unfortunately, if it was not fast enough to begin with, chances are a 4x speedup will not help much. And the cost of that 4x speedup is 4 times more heat, faster battery drain, and perhaps other applications suffering. A more efficient algorithm can often produce a 10-100x speedup without hurting the maintainability of the program too much (and there are very many ways to make a non-optimised program faster).

Let us say we have a web application. The user clicks “Show report”, the GUI locks/blocks for 10s and then the report displays. The user might accept that the GUI locks, if it is just for 1-2 seconds. Or the user might accept that the report takes 10s to compute, if it shows up little by little and the program does not appear hung. The way we could deal with this in JavaScript (which is single-threaded and asynchronous) is to break the 10s report calculation into small pieces (say 100 pieces each taking 100ms) and after calculating each piece call window.setTimeout, which allows the UI to update (among other things) before calculating the next piece of the report. Perhaps a more common and practical approach is to divide the 10s job into logical parts: fetch data, make calculations, make report. But this would not much improve the locked GUI situation, since some (or all) parts still take significant (blocking) time.
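
A minimal sketch of the chunking idea; computePiece and displayReport are hypothetical stand-ins for the real report code:

function computeReportInPieces(piece) {
  computePiece(piece);                    // hypothetical: ~100ms of blocking work
  if ( piece + 1 < 100 ) {
    // Yield to the event loop so the GUI can update before the next piece
    window.setTimeout(function() { computeReportInPieces(piece + 1); }, 0);
  } else {
    displayReport();                      // hypothetical: render the finished report
  }
}

computeReportInPieces(0);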

If we could send the entire 10s job to a Web Worker, our program GUI would be completely responsive while the report is generated. Now, here is the key limitation of a Web Worker (which is also what allows it to be simple and safe):

Data is copied to the Worker before it starts, and copied from the Worker when it has completed (rather than being passed by reference).

This means that if you already have a lot of data, it might be quite expensive to copy it to the Web Worker, and it might actually be cheaper to just do the job where the data already is. In the same way, since there is some overhead in calling the Web Worker, you cannot send too many small pieces of work to it, because you will occupy yourself with sending and receiving messages rather than just doing the job right away.
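
To make the copy semantics concrete, here is a minimal sketch of the round trip, assuming a separate worker.js file with the handler shown in the comment:

// main.js: the array is copied (structured clone) in both directions
var worker = new Worker('worker.js');
worker.onmessage = function(e) {
  console.log('result from worker: ' + e.data);
};
worker.postMessage([5, 3, 1, 4, 2]);      // copied TO the worker

// worker.js:
// onmessage = function(e) {
//   var a = e.data;                      // this is a copy of the array
//   a.sort(function(x, y) { return x - y; });
//   postMessage(a[2]);                   // the median, copied FROM the worker
// };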

This leaves us with the obvious candidates for Web Workers (you can Google for more):

  • Expensive searches (like chess moves or travelling salesman solutions)
  • Encryption (but chances are you should not do it in JavaScript in the first place, for security reasons)
  • Spell and grammar checking (I don’t know much about this)
  • Background network jobs

These candidates are not relevant to most applications. What would be useful is to send packages of work (arrays), like streams in a functional programming way: map(), reduce(), sort(), filter().

I decided to write some Web Worker tests based on sort(). Since I cannot (easily, and there are probably good reasons for that) write JavaScript in WordPress, I wrote a separate page with the application. Check it out now:

So, for 5 seconds I try to do the following job as many times as I can, while I keep track of how much the GUI is suffering:

  1. create an array of 10001 random numbers: O(n)
  2. sort it: O(n log n)
  3. get the median (array[5000]): O(1)

The expensive part is step 2, the sort (well, I actually have not measured step 1 vs step 2). If the ratio of work done per byte sent is high enough, it can be worth sending the job to a Web Worker.
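
For reference, a sketch of the job itself (the harness on the linked page may differ in details):

function medianJob() {
  var i;
  var a = new Array(10001);
  for ( i = 0 ; i < 10001 ; i++ ) a[i] = Math.random();  // 1: O(n)
  a.sort(function(x, y) { return x - y; });              // 2: O(n log n)
  return a[5000];                                        // 3: O(1), the median
}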

If you run the tests yourself I think you will see that the first Web Worker tests, which outsource all of 1-2-3, are quite ok. But this basically means giving the Web Worker no data at all and, when it has done a significant amount of work, receiving just a few numbers back. This is more Web Worker friendly than chess, where at least the board would need to be sent.

If you then run the tests that outsource just sort() you see significantly lower throughput. How suitable is sort()? Well, sorting 10k ~ 2^13 elements should require each element to be compared (accessed) about 13 times. And no data is sent that is not needed by the Web Worker. Just as a counterexample: if you send an order to get back the sum of its lines, most of the order data is ignored by the Web Worker, and it only needs to access each line value once; much, much less suitable than sort().

Findings from tests
I find that sort(), being O(n log n), on an array of numbers is far too fast to be worth outsourcing to a Web Worker. You need to find a much more “dense” problem to benefit from a Web Worker.

Islands of data
If you can design your application in such a way that one Web Worker maintains its own full state and just shares small selected parts occasionally, that could work. The good thing is that this would also be clean encapsulation of data and separation of responsibilities. The bad thing is that you probably need to design with the Web Worker in mind quite early, and this kind of premature optimization is often a bad idea.
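
A minimal sketch of the idea; the file and message names are hypothetical:

// main.js: the worker owns the full state; only small summaries are copied
var statsWorker = new Worker('stats-worker.js');
statsWorker.onmessage = function(e) {
  console.log('order count: ' + e.data.count);
};
statsWorker.postMessage({ type: 'add', order: { lines: [12, 7, 3] } });
statsWorker.postMessage({ type: 'summary' });

// stats-worker.js:
// var orders = [];                       // the full state lives in the worker
// onmessage = function(e) {
//   if ( e.data.type === 'add' ) orders.push(e.data.order);
//   if ( e.data.type === 'summary' ) postMessage({ count: orders.length });
// };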

This could mean letting a Web Worker do all your I/O. But if most data that you receive is needed in your application, and most data you send comes straight from your application, the benefit is very questionable. And if most data you receive is not needed in your application, perhaps you should not receive so much data in the first place. Even if you process your incoming data quite heavily (validating, integrating with current state, precalculating), I would not expect it to come close to the computational intensity of my sort().

Conclusions
The simplicity and safety of Web Workers is unfortunately also their biggest limitation. The primary reason for using a Web Worker should be performance, and even for artificial problems it is hard to get any benefit.

Note to self: never try-catch more than necessary!

I wrote a function, and then a unit test, and the unit test was good.
Then I called the function from my real project, and it failed!

I isolated the problem and thought I had found a bug in V8 (except that after many years as a programmer I have learnt it is never the compiler’s fault).

This was my output:

$ node bug.js 
Test good
main: err=Not JSON

This is my simplified (faulty) code:

function callSomething(callback) {
  var rawdata = '{ "a":"1" }';
  var jsondata; 

  try {
    jsondata = JSON.parse(rawdata);
    callback(null,jsondata);
  } catch (e) {
    callback('Not JSON', null);
  }
}

function test() {
  callSomething(function(err,data) {
    if ( err ) console.log('Test bad: ' + err);
    console.log('Test good');
  });
}

function main() {
  var result = {
    outdata : {}
  };

  callSomething(function(err,data) {
    if ( err ) {
      console.log('main: err=' + err);
    } else {
      result.outata.json = data;
      console.log('main: json=' + JSON.stringify(result.outdata.json));
    }
  });
}

test();
main();

How can the test not fail when main fails?

Well, here is the correct output

$ node nodebug.js 
Test good
main: json={"a":"1"}

from the corrected main function:

function main() {
  var result = {
    outdata : {}
  };

  callSomething(function(err,data) {
    if ( err ) {
      console.log('main: err=' + err);
    } else {
//    result.outata.json = data;
      result.outdata.json = data;
      console.log('main: json=' + JSON.stringify(result.outdata.json));
    }
  });
}

The misnamed property (outata) threw an Error inside the callback, which was (unintentionally) caught by the try in callSomething, causing the anonymous callback function to be called a second time, this time with err set, but to the wrong error.

It would have been better to write:

function callSomething(callback) {
  var rawdata = '{ "a":"1" }';
  var jsondata; 

  try {
    jsondata = JSON.parse(rawdata);
  } catch (e) {
    callback('Not JSON', null);
    return;
  }
  callback(null,jsondata);
}

and the misnamed property error would have crashed the program in the right place.

Conclusion
Don’t ever try more things than necessary. And if you need to try several lines, consider a separate try for each, as sketched below.
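
A sketch of that pattern, with transform as a hypothetical second risky step:

function callSomething(callback) {
  var rawdata = '{ "a":"1" }';
  var jsondata;
  var outdata;

  try {
    jsondata = JSON.parse(rawdata);       // only the parse is guarded
  } catch (e) {
    callback('Not JSON', null);
    return;
  }

  try {
    outdata = transform(jsondata);        // hypothetical: another step that may throw
  } catch (e) {
    callback('Transform failed', null);
    return;
  }

  callback(null, outdata);
}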

Minification of a real web application

I have built and I maintain a reasonably large (AngularJS) web application, and here follow a few notes on the effect of minification.

I start with the findings:

                            Uncompressed         GZIP     Minified    Min+GZIP

App 1:  Size        (kb)            1130         1130          843         841
        Transferred (kb)            1150          375          861         308
        Load time    (s)             2.8          1.6          2.7         1.7

App 2:  Size        (kb)             708          708          659         659
        Transferred (kb)             721          359          672         347
        Load time    (s)             4.0          3.5          3.1         3.5

Conclusions
You should always enable gzip on the server. It is faster to compress and send less data than to send the uncompressed data. The benefits of gzip are huge and there are no negative side effects.

Minification saves some bandwidth (and, if unlike me you do it ahead of time, some loading time). But unless your code consists mostly of comments, the effects are marginal (although that can be a big saving if you use very much bandwidth or are looking for the fastest possible load times).

Also, gzip tends to be good at exactly what minification can easily do, so while the effect of minification alone is quite significant, the effect of minification on top of gzip is smaller.

Behind the figures
The figures above come from the Firefox load time, measured over the internet.

  • App1: About 100 files are served, mostly .js (a few .html and .css)
  • App2: About 80 files are served, mostly .js (a few .html and .css)
  • App1: Angular is always pre-minified: 165kb, gzipped to 67kb
  • App2: Angular+modules is always pre-minified: 298kb, gzipped to 127kb
  • App2 contains a few fonts which are neither minified nor gzipped (142kb)
  • Files served by Node.js
  • Files minified by custom Node.js code in real time
  • Files gzipped by nginx in real time
  • Not everything is initiated when Load is complete (more html-files are loaded dynamically as user navigates, and data is loaded from APIs on demand)

Implications of minification
Minification (and possibly packaging of code) has more implications than gzip. Possible negative side effects are:

  • A build process is not strictly needed for web development, but minification is often done as part of a build process, increasing complexity of development, testing and deployment.
  • Testing and development is made harder when debugging minified code (although there are tools to mitigate this).
  • More aggressive minification can have unexpected results

The minification code I run in Node.js, when I serve a file, basically just does two things (see the sketch after the list):

  • Removes all white space at the beginning and end of lines
  • Removes all comments
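
A minimal sketch of that strategy (not my exact production code; note that this naive version would, for example, destroy string literals that happen to contain //):

function minify(source) {
  // Replace block comments with an equal number of newlines,
  // so that line numbers are preserved for debugging
  source = source.replace(/\/\*[\s\S]*?\*\//g, function(m) {
    return m.replace(/[^\n]/g, '');
  });
  // Strip // comments and trim white space at both ends of each line
  return source.split('\n').map(function(line) {
    return line.replace(/\/\/.*$/, '').trim();
  }).join('\n');
}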

The nice thing about this simple minification strategy is that everything that is obviously just waste is removed at a low cost, but the code is for all practical purposes completely unchanged (even line numbers are preserved, to not complicate debugging). Also, developers should feel free to write as many comments as they like in the code, yet comments should never be served in a public-facing application. More powerful minification comes at higher costs, and the effects are probably mostly lost after gzip.

I guess every project and system has a sweet spot when it comes to minification, and I think my simple minification strategy makes sense for my needs.

Programming paradigms rock and suck!

I wrote a few articles about why functional programming sucks (1, 2) and why some functional libraries suck (3, 4). Obviously the titles were a little clickbait, and I think anyone reading the articles understood that I argue that it is the mindless hype that sucks, not the (FP) paradigm itself. Anyway, writing those articles, getting some feedback on them, and programming on my projects gave me more thoughts.

Let’s say we have these programming paradigms (a small code contrast follows the list):

  • Functional Programming: Pure, testable functions, avoiding state and variables.
  • Object Oriented Programming: Encapsulating data inside objects, exposing simplified, safe interfaces
  • Imperative Programming: Explicitly instructing the computer step by step what to do and how to do it, changing state (or Procedural Programming if you like)
  • Declarative Programming: Expressing your problem as data, and something else takes that data and produces what you want
  • State Machines: Global data and distinct states: your program reacts to input, transitions between states and modifies its data
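
To make the contrast concrete, here is a trivial sum in a functional and an imperative style (a sketch for illustration only):

var orders = [ { value: 10 }, { value: 20 }, { value: 30 } ];

// Functional: no mutated variables, easy to test in isolation
var totalFP = orders
  .map(function(o) { return o.value; })
  .reduce(function(sum, v) { return sum + v; }, 0);

// Imperative: explicit steps and state, little overhead
var totalImp = 0;
for ( var i = 0 ; i < orders.length ; i++ ) {
  totalImp += orders[i].value;
}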

As you perhaps understand, I do not aspire to be a computer scientist. This is all very practical.

I find – and I don’t know if you will find this obvious or outrageous – that all these paradigms have strengths and weaknesses, and that a reasonably sized program or system will need to use a combination of them.

Here follow some strengths and weaknesses:

Functional Programming Strengths

  • High reusability
  • Highly testable code
  • Compact code
  • Transformation, or streaming, of raw data matches the reality of networked services

Functional Programming Weaknesses

  • Obscurity – some code can be very cryptic and hard to read
  • Over engineering – attempting to make super reusable code can complicate simple things
  • Avoiding state – sometimes you have a state and you need to deal with it (and with FP there is a risk you try to avoid reality rather than face it head on)
  • Performance – FP comes with some overhead
  • Algorithmic complexity – it can be hard to understand the algorithmic complexity, or to write an efficient implementation, and this can lead to performance problems

Object Oriented Programming Strengths

  • Encourages clear APIs
  • Encourages reusability
  • Encourages information modelling
  • Allows refactoring of implementation

Object Oriented Programming Weaknesses

  • It adds little (no) value and significant overhead to serialize/deserialize data as it is sent/received and stored/loaded
  • Inheritance is a questionable concept
  • Ideally, objects have only one-way-dependencies, but in the real world this creates difficult, artificial design problems and complexity
  • It adds code and concepts that may cost more than you practically get from it: public/private declaration, getters/setters

Imperative Programming Strengths

  • Simple and straightforward
  • Little overhead: high performance
  • Algorithms explicitly implemented: high performance
  • Works the same in many different languages

Imperative Programming Weaknesses

  • Bad testability: if you have too big procedures with too many side effects
  • Bad maintainability: if you end up writing too big, complex modules
  • You can end up writing much code, copying and pasting
  • It takes discipline

Declarative Programming Strengths

  • Can be very compact
  • Can be very easy to read (if you understand/accept the “language”)
  • Can allow for quickly adding or changing features

Declarative Programming Weaknesses

  • Performance may be bad and/or unpredictable
  • Although compact, it can be very cryptic
  • Debugging can be very hard

State Machine Strengths

  • Can deal with complex states
  • Suitable for error handling, recovery
  • Can deliver efficiency at system level

State Machine Weaknesses

  • Requires proper analysis and design upfront
  • May be hard to refactor or change
  • Complex (represents and deals with complexity rather than hides it or lets someone else take care of it)

I now imagine a simple mobile/web application (like a little online betting site). There is data storage, server side application logic, authentication, HTTP APIs, clients loading data, client side application logic, UX, and the user saving/updating data to the system.

Server Side
The server itself should be thought of as a state machine. It can be up and good, but it can also be starting or shutting down. It can have bad or missing connectivity to other services (authentication, payment, data feeds). It can be in maintenance mode or perhaps in test or debug mode. It needs to hold and renew cached data. All these things can dramatically affect how a simple API call is handled! Failing to deal with this in a structured way can lead to very complicated API implementations or severe performance or stability problems.

If there are adapters connecting to other systems or the storage, these may very well be implemented in an Object Oriented way. They expose a simple and safe API and they hide a lot of implementation.

The server side business logic is fed with input from the storage or cache, and its output is sent over the network to clients (or the other way around). Such business logic should be designed in a Functional way: it should be clear what it does and what data it uses, and it can and should be testable.

The implementation of the business logic or the adapters may be performance critical and non-trivial. Imperative programming can be used here – inside what is exposed as Functional or Object Oriented code. It must not leak. But every bolt and nut in an FP function does not have to be implemented using FP principles, and every internal part of an Object need not be built on the principles of Object Orientation.

Finally, the definition of the APIs and the access rules can be Declared. Other code can execute these rules and declarations behind the scenes.

Client Side
The client also needs to be thought of as a state machine first. Is the user authenticated and logged in? Have we received all data, or are we still waiting for something? Have we received the latest updates or have there been connection problems? Does the user have edited, dirty state that is not saved to the server? Are we having errors saving data that we are retrying? Where in the application is the user, and what settings, configurations or policies apply to the user? You need to deal with all these things. If you try to break it into many small modules with no mutual dependencies you will find that it is very hard. Put the damn global state somewhere, and accept that it is a global state. Make it all publicly readable for anything that cares. Changing the state is a completely different business that must only be done via specific interfaces (the entire state machine can appear like an Object).
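
A minimal sketch of that idea, with hypothetical names: the state is globally readable, but changes only happen through specific functions.

// Hypothetical global client state: readable by anything that cares
var appState = {
  user: null,           // null until authenticated
  connection: 'init',   // 'init' | 'online' | 'offline'
  dirty: false          // unsaved local edits?
};

// Changing the state is only done via specific interfaces (the transitions)
function loginSucceeded(user) {
  appState.user = user;
  appState.connection = 'online';
}

function connectionLost() {
  appState.connection = 'offline';
}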

Now Declare your user interface: the pages, buttons, colors and everything else. Write code that consumes these declarations and produces the UX. To the state machine this code may appear Object Oriented, and your UX components are probably some kind of OO objects, reusable across the pages.

Whenever you can, break stuff out into Functional, pure, testable functions.

And when it comes to getting stuff done, as long as your code is contained within Objects or Functions and they do not leak: write simple and efficient Imperative code if that is the best option.

The GUI can be described in data rather than code in a declarative way.

Conclusion
It is quite pointless to talk about (for example) Functional programming as better or worse than anything else. It simply depends. And for programs/systems of non-trivial size and complexity, mixing programming paradigms is fine.

Flight-Assist Astromech

Finally, the fix for the X-wing has arrived. I have written before here and here about my thoughts on fixing the X-wing.

The Flight-Assist Astromech costs one point and lets you make a free barrel roll or boost, unless you have a target in sight and range.

I think it is a good and well balanced upgrade, with uses outside the T65 X-wing.

T65 X-wing
I think for low cost generic X-wings (Rookie Pilots at 21p) the Flight-Assist Astromech is the best upgrade (previous options were R2 and Targeting Astromech). Does it also bring the Rookie Pilot back to play? Well, I don’t compete. I think it compensates for the worst weaknesses of the X-wing in a sensible way. In the first round(s) of the game you can boost to keep up speed with a T70 or other ships, and you have more options when it comes to arriving at the battle. In the first round of fire it is more likely that the T65 is actually in the fight, rather than behind or with all enemies out of arc. In the battle you now have options besides the 4-U-turn, and you can re-engage quicker and more effectively.

I tried a list I call Return of the X-wing that you find here.

How about T65 aces (Luke, Wedge, Wes)? Having a high pilot skill means your opponents fly first, and you may not be allowed to use Flight-Assist. I can see situations where BB-8 (however unique) would be better. No doubt Luke, Wedge and Wes are better with Flight-Assist than without. The most obvious situation is a first/second round strike where a boost or barrel roll can make them just reach. I have not tried this yet.

Finally, Flight-Assist seems to be of no significant benefit to Biggs, which was important.

T70 X-wing
The T70 already has boost. Targeting Astromech is great with its 3 red maneuvers, while Flight-Assist is not. Also, Poe likes R5-P9 and Nien Nunb likes R3-A2. I think the T70 still has many options.

Y-wing
Flight-Assist Astromech can specifically not be used with turret weapons. This makes it interesting to combine with BTL-A4 to produce a more dog-fighting capable Y-wing. I think this was a clever Y-wing upgrade!

ARC-170
I think the ARC-170 is perhaps the ship that benefits the most from Flight-Assist. With a maneuver dial similar to the T65 and no turret (unlike the Y-wing) it can really use boost or barrel roll. But I also think (after flying it only once) that Flight-Assist works very well with the auxiliary firing arc. Normally it is quite hard to make good use of the tail gun, but if you can boost or barrel roll to get your enemy behind you it is a different story. And later, with a boost, it is much easier to get back into the fight. I tried this with Norra in the 10th (and so far unnamed) Horton Salm squad.

E-wing
I doubt Flight-Assist will bring the generic E-wings to the tables. And when it comes to Corran Horn, he can really use a regenerating Astromech (R2-D2 or R5-P9). Perhaps Etahn A’baht could use it, though.

Conclusion
I think Flight-Assist is a very good upgrade (I hope it is not too good, because while I want an upgrade for my T65s I still want a balanced game). There are old Astromechs that become even less relevant now (R2-F2, R3, R5). I think Flight-Assist helps where it was most needed, without just making the T65 more like the T70.

Y-wing: Now, on top of my list, I want new unique Y-wing pilots. There are several to choose from within the Star Wars canon universe.
Z95: The Headhunter needed this flight assistance just as badly as the T65 did.

Fast binary search without Division

Arrays are simple data structures, but finding elements in them is not fast. However, if you have a sorted array, a favourite algorithm is Binary Search.

When it comes to JavaScript there is no binary search in the standard library. Standard functions like find and indexOf simply don’t cut it for many purposes, being O(n).

Rosetta Code has two implementations of binary search for JavaScript. However, binary search is based on division (by 2: you split the search interval in half every iteration). In programming languages with integer math, division by 2 is performed with a single shift, so it is a cheap operation (division is generally expensive). In JavaScript, integer division by two looks like:

mid = Math.floor((lo + hi) / 2);

This is just… unacceptable. Perhaps V8 recognizes the pattern and replaces it with the right thing; I don’t care – floating point math has nothing to do with binary search, and this is ugly.

Perhaps I can do better? Well, I actually can!

Instead of dividing the actual interval by 2 in every iteration you can use the powers of two – 32, 16, 8, 4, 2, 1 – and search with them. Let’s say the array is 41 elements long. Start by checking element 32. If it is too small, try 32+8 (skipping 32+16, since 48 > 41); if it is too large, try 16. That way you essentially do a binary search without a division.

Since I don’t want to do heap allocation for every search, I calculate the powers of two before declaring the function. In the code below my maximum value is 2^31, so I can search arrays of sizes up to 2^32-1. If you don’t like that limit you can raise 32 to whatever you like (but at 52-53 you are running into new problems, and well before that you will run out of RAM).

function powers_init(s) {
  var i;
  var r = new Array(s);
  r[0] = 1;
  for ( i=1 ; i<s ; i++ ) {
    r[i] = 2*r[i-1];
  }
  return r;
}

var powers = powers_init(32);  // [1,2,4,8,16,32,...

function binary_search_divisionless(a, value) {
  var pix = 0;
  var aval;
  var offset0 = 0;
  var offset1;

  if ( a[0] === value ) return 0;
  while ( powers[pix+1] < a.length ) pix+= 1;

  while ( 0 <= pix ) {
    offset1 = offset0 + powers[pix];
    if ( offset1 < a.length ) {
      aval = a[offset1];
      if ( value === aval ) {
        return offset1;
      } else if ( aval < value ) {
        offset0 = offset1;
      }
    }
    pix--;
  }

  return -1;
}

It is perhaps not obvious why I check a[0] before doing anything else. The function returns from the main while loop as soon as it finds what it is looking for. So a[offset0] does not contain the value in later iterations when offset0 > 0. However, this is not guaranteed in the beginning when offset0 = 0, and it is not automatically tested at the end. So I explicitly test it first.

Benchmark
Below follows relative performance in time for different array sizes.

Array Size:                  10      100     1000    10000   100000
=======================================================================
Standard Library indexOf   1.65     2.02    11.89    94.00   790.00
Rosetta Code Recursive     1.32     1.48     1.84     1.81     2.02
Rosetta Code Imperative    1.18     1.08     1.41     1.43     1.45
Divisionless               1.00     1.00     1.00     1.00     1.00
   - unrolled              0.93     0.76     0.82     0.79     0.78

I am satisfied that my code is consistently faster than the alternatives. What surprised me is that binary search clearly wins even for short arrays (10). The little loop before the big loop can be unrolled for a significant performance gain:

//while ( powers[pix+ 1] < a.length ) pix+= 1;
  if ( powers[pix+16] < a.length ) pix+=16;
  if ( powers[pix+ 8] < a.length ) pix+= 8;
  if ( powers[pix+ 4] < a.length ) pix+= 4;
  if ( powers[pix+ 2] < a.length ) pix+= 2;
  if ( powers[pix+ 1] < a.length ) pix+= 1;

This is also the reason why 32 was a particularly good size for the powers-of-two array. This is the kind of optimization I would normally not let into my code. However, if you make much use of binary search on the Node.js server side of an application, go ahead.

It is also worth noting that the following patch doubles the execution time of my code!

//  if ( offset1 < a.length ) {
//    aval = a[offset1];
    if ( undefined !== ( aval = a[offset1] ) ) {

General Compare Function
With little modification, the algorithm can be used with a compare function, for arrays of objects other than numbers (or strings); a sketch follows below.
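
A sketch of what that could look like (not code I have benchmarked); it reuses the powers array from above and assumes a cmp(x, y) function returning negative/zero/positive, like Array.prototype.sort expects:

function binary_search_divisionless_cmp(a, value, cmp) {
  var pix = 0;
  var c;
  var offset0 = 0;
  var offset1;

  if ( 0 === cmp(a[0], value) ) return 0;
  while ( powers[pix+1] < a.length ) pix+= 1;

  while ( 0 <= pix ) {
    offset1 = offset0 + powers[pix];
    if ( offset1 < a.length ) {
      c = cmp(a[offset1], value);
      if ( 0 === c ) {
        return offset1;
      } else if ( c < 0 ) {
        offset0 = offset1;
      }
    }
    pix--;
  }

  return -1;
}

// Usage: find an object by id in an array sorted on id
// binary_search_divisionless_cmp(users, { id: 42 },
//                                function(x, y) { return x.id - y.id; });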

Division
Since my algorithm only uses powers of 2, I can do division in JavaScript without using Math.floor. It is quite an easy change to eliminate the powers array and divide by two instead. It turns out it makes very little difference in performance (doing integer / 2 instead of an array lookup). However, when I added a (meaningless) Math.floor() around the division, performance dropped to that of the iterative version from Rosetta Code. So my intuition to avoid it was correct.

I checked the Lodash binary search code (sortedIndexOf) and it uses bit shift >>> to divide by two. Admittedly, the Lodash code is slightly faster than my code.
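
For reference, the shift-based midpoint pattern looks like this (a sketch of the pattern, not the exact Lodash source):

var lo = 0, hi = 100;
// Unsigned right shift by one halves and truncates in a single integer
// operation: equivalent to Math.floor((lo + hi) / 2) as long as lo + hi < 2^32
var mid = (lo + hi) >>> 1;   // 50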

Conclusion
You should have a well implemented and tested binary search function available in your toolbox (admittedly, use Lodash for this). This is not the right place to lose milliseconds. For very small arrays you can argue that indexOf works equally well, but the cost of using binary search is insignificant.

Upgrading OpenWRT to LEDE

A bit late, but I wanted to upgrade OpenWRT 15.05 to LEDE 17.01.4.

It worked perfectly for my WDR 4900. The OpenWRT-to-LEDE rebranding caused no problems.

I basically followed my own upgrade instructions.
I also took advantage of adding files and folders to /etc/sysupgrade.conf. Those were automatically kept during the upgrade, which is nice.

Conclusion (based on one successful upgrade): if you are an old OpenWRT fan there is no reason to fear LEDE or to wait for a new OpenWRT release before upgrading.

Syncthing v0.14.40, Raspberry Pi, 100% CPU

I think Syncthing is an amazing piece of software, but I ran into problems last week.

I have a library of 10 different folders, 120000 files, 42000 directories and 428GB of data.

I thought that was a little bit too much for my RPi V1 (Syncthing 0.14.40, Arch Linux), because it constantly ran at 100%. I raised the Rescan Interval to several hours (so it would finish before starting over).

After startup it took about 10-15 min to get the web GUI up, and about an hour to scan all folders for the first time. Well, that is ok, but after that it still constantly used 100% CPU despite all folders being “up to date”.

It turned out it crashed and started over. I found panic logs in .config/syncthing and error messages in .config/syncthing/index-v0.14.0.db/LOG.

Some errors indicated Bad Magic Number and Checksum Corruption. The usual reason for these seems to be hardware problems (!?!).

I upgraded my RPi V1 to an RPi V2, with little success. Then I found that I had similar problems on another RPi V2. So, after shutting down Syncthing, I tried the quite scary:

  $ syncthing -reset-database      ( does not start syncthing )      
  $ syncthing                      ( start syncthing )

After several hours of scanning everything seems to work perfectly!
Let us see how long that lasts.

Eee in 2017

I came up with a possible use for my Asus EeePC 701! A challenge is to find a Linux distribution that works well with it: Lubuntu 16.04 LTS Alternative 32-bit seems good.

Lubuntu 16.04.3 is released, but for the moment I got 16.04.1 with the alternative download. After installation about 1200MB was available (I created no swap, despite warnings, since I have 2GB RAM) on my 4GB SSD.

It turned out the full upgrade (to 16.04.3) required too much temporary space and filled up my drive. You can do two things to prevent this:

  • Mount /var/cache/apt on a USB drive while upgrading
  • Uninstall packages

When it comes to finding unnecessary packages, it’s up to you. I uninstalled cups (no need to print), abiword (no need to write documents), gnumeric (no need to do Excel work) and many fonts (mostly Thai and Japanese).

Developer lost in Windows

Admittedly, I am not a Microsoft fan, but Windows is quite a fine operating system nowadays. This article is not about complaining about Windows.

This article is also not about native Windows development for Windows. That is, if you use Windows, Visual Studio and C# (or other languages native to Windows development) to produce software targeting Windows, this article is not for you.

I and other programmers use Linux, OS X or possibly BSD to develop software meant to be OS independent. Our core tools are perhaps bash and a terminal emulator. We use tools like grep, head, tail, curl, iconv, sed, bc, emacs, vim, ssh, nc, git or svn on a daily basis (without them we are in a foreign land, not understanding the customs). Depending on programming language we use compilers and interpreters like gcc, python, perl, php, nodejs and sbcl. In Linux, OS X or BSD this all comes very naturally, but sometimes we find ourselves using a Windows computer:

  1. We just happen to have a second Windows computer that we want to use
  2. Company policy requires us to use Windows
  3. We want to be capable of working in Windows
  4. We want our project to work fine in Windows
  5. Perhaps we are Windows developers/users who need/want to do unix-like development without getting a separate computer.

This article is for you!

The good news is that basically everything you wish to do can be done with free software (also on Windows). It is also good that there is plenty of good, stable software to choose from.

The bad news is that finding just what is right for you and making it feel as simple and smooth as you are used to can be time consuming, difficult and frustrating.

Embracing Windows
To be productive in Windows you should (at least to some degree) embrace Windows. One approach is to do as the Windows developers. This would mean using cmd.exe (the old DOS-shell) or Powershell, and learn/embrace what comes with it. You would perhaps install only node/php/python native for Windows and git. I encourage you to try this, but I will not write more about it.

This might be the best solution if you develop in JavaScript or Java AND basically all tools you use are part of a JavaScript/Java ecosystem. Perhaps git is the only thing you need apart from what you install with npm.

Avoiding Windows
One approach is to just use Windows to access (and possibly run a virtual) Linux (system). It can work better or worse depending on your situation. A full screen VM might be the best solution if Linux GUI tools are central to your development, or if you are anyway very used to working with VMs. SSH to a remote system might be the best solution if mobility is not a problem. However, some development gets more complicated when different things are on different IP addresses, and security becomes more relevant when everything is not on 127.0.0.1. I will not write more about this.

Make Windows Unix-like
So, our goal is to feel reasonably at home in Windows by installing what is missing. It is always tricky to divide things into clear categories, but I would say you will want a stack of four layers:

  1. A Terminal Emulator: Windows already comes with one, but chances are you will not find it good enough. I have very modest demands but I expect copy-paste to work nicely and I expect multiple tabs. This works perfectly in any default Linux desktop and Mac OS X, not so in Windows.
  2. Editor: Unless you prefer to use vim or emacs directly in the terminal, you want to install a decent or familiar text editor (Notepad and Wordpad are not suitable for serious programming).
  3. The standard UNIX/GNU tools: Windows does not come with bash, head, tail, grep, sed, vim, bc and most other tools you take for granted. The old (DOS) equivalents are simply inferior. The new (Powershell) equivalents are… well… let’s just say it is a steep learning curve and your bash scripts will not run.
  4. Interpreter/Compiler: Windows does not come with gcc, python, perl, nodejs or php. However, they are most often available as separate downloads for Windows. Such a native Windows version may be slightly different from the Unix version you are used to (command line options, especially those related to paths, may differ).

The short version is that you can install things like Cygwin or Windows Subsystem for Linux and you might just be fine with it! Or not. If you are bored of reading, try it out now.

To make a more informed choice you need to consider what types of binaries you want to run (and perhaps produce).

Native Windows Binaries
A Native Windows binary can (with some luck) be copied from a Windows computer to another and run. To interact with Windows (and you) it uses the APIs of Windows (typically Win32). It probably expects paths to be formed like in Windows (c:\tmp rather than /tmp).

Native Linux Binaries
A Native Linux binary was typically built on a Linux system to run on that Linux system (like Debian). Until recently it would not run on Windows; however, Microsoft put a massive effort into Windows Subsystem for Linux, which allows you to run Linux programs (including bash) directly in Windows 10 (only). This is not perfect though. A Linux filesystem is quite different from a Windows filesystem, so access between the two filesystems is limited and may require some thinking. This is perhaps the best approach if you are targeting Linux (like development for Docker), but it is obviously a bad approach if you want your program to work on a Windows server or other Windows computers.

Hybrid / Cygwin
There is an old (as in proven and reliable) project called Cygwin. It is basically a DLL file that translates all (most) Linux system calls into native Windows calls. This means that the unmodified source code of (most) programs written for Unix can be compiled with Cygwin, for Cygwin, and run on a Windows computer that has the Cygwin DLL installed. There are some drawbacks. First, performance suffers (from barely at all to quite a lot, depending on what you do). Second, for more advanced software, especially with GUI or heavy on network (like Apache), the hybrid solution can feel like the worst of two worlds. Third, access to the entire filesystem is smooth, but when it comes to access rights it sometimes does not work perfectly (files created by Cygwin get weird, even broken, permissions).

Now, to complicate things, there is a project called MSYS2 that maintains a fork of Cygwin, very similar to Cygwin. Cygwin or MSYS2 can be included/embedded in other projects (such as cmder). If you install multiple unix-compatibility suites on your system it can get confusing.

Choosing binary type
At first glance Windows Subsystem for Linux, or Cygwin, seems very attractive. But let’s assume that we do web development in Python. If you go with Windows Subsystem for Linux you will need to run a webserver (apache, lighttpd) inside that subsystem. To me, configuring, starting and stopping services inside this subsystem is not attractive. What could possibly go wrong? Well, a lot of things. With Cygwin you can probably make Windows IIS invoke Cygwin Python (if you really don’t care about performance), because running Cygwin Apache sounds creepy (it can be done though). If, on the other hand, you install Python built for Windows, you get the real thing. All Windows/Python documentation and forum information suddenly applies to you. But then you end up with a Cygwin shell where everything is just like in Linux, except Python is not (and Cygwin comes with Python too, so you can end up with two versions of Python, with different features).

I would hesitate to run apache inside Cygwin and even more so inside Windows Subsystem for Linux. But I always also hesitate to do anything with IIS. Perhaps the best thing is to install Apache and Python for Windows (not depending on Cygwin) and just find the tools you need to edit your files?

The same reasoning can apply to PHP, nodejs or whatever you do.

Most configurations can probably be made to work. But you just want your Windows computer as simple and standard as possible, and you want your usual Linux (or OS X or BSD) stuff to just work the way it usually does. This is really not a matter of right or wrong; it is more a matter of taste (what kind of worthless errors do you want to solve, just because you are a foreigner in Windows doing stuff not the way it was really meant to be done?).

Maintainability and Management
One thing is to make it work. Another thing is that it keeps working week after week, month after month. Yet another thing is to keep your software up to date and be able to add new tools along the way. You will find that some of the software I mention in this post does not come with the Windows installer/uninstaller you are used to.

There is something called Chocolatey for Windows, which is a package manager dealing with installing, upgrading and uninstalling software on Windows in a uniform way. I don’t know much about it, and I will not write more about it.

While unix programs typically have a dotfile with configuration (there are a few places to look, though), Windows programs typically use the registry. When it comes to unix software adapted for Windows you never really know: registry, config file, both or… it depends? And if a config file, where is it?

While unix programs can usually be installed in the home directory of an unprivileged user, or in /opt, programs in Windows often require administrative privileges to spread their files a little bit everywhere.

The more stuff you try and throw out, the more garbage you have left on your computer, which could possibly interfere with a newer version of something you install in the future. Keep this in mind. One day you will install some exotic software that does not work as expected, and you won’t know if some old garbage on your computer caused it.

Git
Git is a very popular version control system. Originally designed for Linux, it is today (?) the officially preferred system also for native Windows development. Git is so popular (or demanding?) that you can get it bundled for Windows together with some of your favourite tools. This might be just enough for you!

  • posh-git : git for powershell
  • git for Windows : based on Git for Windows SDK
  • cmder : based on MSYS2
  • …more?

cmder
I have tried cmder and I don’t like it. It is an ugly install of just unpacking a huge zip file. MSYS2 itself is hidden inside the cmder folder, so I don’t feel comfortable managing it on my own. There seems to be no upgrade strategy (except throwing it all away and downloading the latest version). Git is run from one shell (a traditional Windows shell with a lambda prompt) but an msys2/bash (identical to Cygwin) shell is started separately. I don’t want to change console to run git: I run git all the time. But it might be perfect for you (many people like cmder).

Cygwin
Cygwin is nice because it comes with an installer (setup.exe) that is also a package manager. It has a lot of packages, and it is capable of installing things like apache as Windows services. My experience is that I am too lazy to download the latest setup.exe, and too lazy to run setup.exe regularly. Sometimes you end up with old versions and upgrade problems.

My disappointment with Cygwin is that it comes with its own (compared to standard Windows) terminal, Mintty, that still does not have tabs. I also do nodejs development, and nodejs is not a Cygwin package, so I need to use standard Windows node. This sucks a little because node in Windows behaves slightly differently from node in Linux/OS X (particularly when it comes to where packages go), so the Cygwin experience is a bit broken when you start using Windows node and (perhaps particularly) npm.

Also, I like bash scripts and they tend to run significantly slower in Cygwin than in Linux (process forking is extremely cheap in Linux and rather expensive in Windows, so with the Cygwin overhead it can get rather slow for heavy scripts).

As I now try to update and configure my Cygwin environment for my Node.js project I find:

  • I use Cygwin wget to download setup.exe (so I get it where I want it, rather than in Downloads) to update Cygwin. When I double-click it (to run it) permissions are wrong and I can’t execute it. It is an easy fix, but compared to OS X / Linux this is awkward.
  • I run node from Cygwin. I get no prompt (>). It turns out node.exe does not recognize Cygwin/bash as a terminal and I need to run node.exe -i.
  • Symlinks keep being a mystery. There are some kind of symlinks in Windows now, Cygwin seems to try to use them, but the result is not consistent.

For a Terminal with tabs, check Fatty below.

MinGW & MSYS
While the idea of Cygwin is to provide a Posix-compliant environment on Windows, the MinGW/MSYS project was about porting unix tools (perhaps particularly a gcc-based C/C++ build environment) to run natively on Windows. According to the Wikipedia page of MinGW, this is pretty much abandoned.

Gow: GNU on Windows
Gnu on Windows is a lightweight alternative to Cygwin. It appears not to have been updated since early 2014 (when 0.8.0 came out), and the Windows Subsystem for Linux seems to be one reason it is less relevant. I will not write more about it.

MSYS2
MSYS2 is the successor to MSYS, and (surprise) it is based on Cygwin. I tried it quickly and I find that:

  • It seems safe to install side by side with Cygwin
  • Using the MSYS2 is very similar to Cygwin
  • Instead of the Cygwin GUI package installer, MSYS2 uses pacman from Arch (if you much prefer that, go with MSYS2)
  • MSYS2 has some emphasis on MinGW32 and MinGW64. As I understand it this is about being able to use MSYS2 to build native Windows software from C/C++ code (if you do this in Cygwin, you end up with a Cygwin dll dependency)

So, for my purposes MSYS2 seems to be quite equivalent to Cygwin. Expect the same annoyances as I mentioned for Cygwin above.

Windows Subsystem for Linux
If you try to run bash from a Windows 10 command line you will probably get something like:

-- Beta feature --
This will install Ubuntu on Windows, distributed by Canonical
and licensed under its terms available here:
https://aka.ms/uowterms
In order to use this feature you must have Developer Mode enabled.
Press any key to continue...

Note that this can be quite confusing if you have installed some other bash.exe on your system. If you unexpectedly get the above message, check your PATH and make sure you invoke the right bash executable.

Installation is very easy (activate Developer mode and run bash); after giving username+password you are actually good to go! If you are used to Debian/Ubuntu you will feel surprisingly at home.

I find my Windows files in /mnt/c (not too surprising).
I find my Linux home files in c:\Users\zo0ok\AppData\Local\lxss\home\zo0ok.
(copying files there from Windows did not make them appear in Linux though)

So, if you want to edit files using a Windows GUI editor, they need to be in Windows-land, and that is obviously not the optimal environment for your project.

In general it works very well though. My node services had no problems listening to localhost:8080 and accepting incoming http requests from a Windows web browser.

If you are not happy with Ubuntu or you want more control of your Linux environment you will need to do further research. Ideally, Windows Subsystem for Linux has most of the advantages of a virtual machine, but none of the drawbacks. However, depending on what you really do and need, it can turn out to have most of the drawbacks and few of the advantages instead.

Fatty
The Mintty terminal that comes with Cygwin is ok, but it does not support tabs. There are different alternatives, and a simple one is Fatty (it is really Mintty with tabs). Installing Fatty requires doing a git clone and compiling it yourself. If you are brave you can download fatty-1.6.exe from me.

The web page for fatty tells you how to make a desktop shortcut, but it did not work for me. What works for me is to set Shortcut target: “C:\cygwin64\bin\fatty.exe -”. Simple as that. I think I will be quite fine with fatty, actually.

Making Fatty run Windows Subsystem for Linux was trickier (as in no success) though.

ConEmu
ConEmu seems to be the ultra powerful flexible console. After 5 minutes I have still not found out how to change the font size.

ConsoleZ
ConsoleZ is good. Under Edit->Settings>Tabs you can add your own shell types.

Cygwin:                       Shell = C:\cygwin64\bin\bash.exe --login
Windows Subsystem for Linux:  Shell = bash.exe

Apart from that, ConsoleZ is reasonably easy to configure and it stays out of your way (I hide the toolbar, status bar and search bar).

Editors
I am fine with vim in the console. However, there are many fine editors for Windows:

  • Visual Studio Code (at no cost)
  • Atom
  • Notepad++

Conclusions
I have had Windows as a 2nd/3rd platform for many years and I can see that the game has changed a bit. Microsoft has started supporting Ubuntu on Windows, and at the same time the native options (like MSYS) are fading away. There are reasons to think general development is becoming more and more Unix-based.

I would say:

  • Posh-git : Powershell is your thing, and you don’t care about unix tools
  • Git for Windows : You want Git, but you don’t care much about other unix tools
  • Cygwin : You want plenty of choices of unix tools in Windows
  • MSYS2 : You like pacman (Arch) or you want to build native Windows C/C++ binaries using free software
  • Windows Subsystem for Linux : You have a Windows 10 computer but you want to keep your Linux development separate from Windows

If you use Cygwin and just want tabs, get Fatty. Otherwise ConsoleZ is good. Chocolatey is more a Windows power user tool than something you need to provide unix capabilities.

In the past I have mostly been using Cygwin (with mixed feelings). Lately, when I heard about the options (cmder, posh-git, Git for Windows, MSYS2), I got the feeling that it is rather hard to configure an optimal environment. Now, however, I have come to realize that the differences are not very big. Most options are hybrids based on Cygwin and/or with Cygwin embedded (perhaps under the name MSYS2). For Windows developers not used to Unix it is good with things like Git for Windows that just come with the basic Unix tools, with no need to think about it. For developers with a Unix background it makes more sense to run Cygwin or MSYS2 (or Windows Subsystem for Linux). The days of unix tools built natively for Windows are over, and that is probably a good thing.

What you need to think about is your compiler, interpreter and/or web server.