
Lambda Functions considered Harmful

Decades ago engineers wrote computer programs in ways that modern programmers scoff at. We learn that functions were long, global variables were used frequently and changed everywhere, variable naming was poor, and gotos jumped across the program in ways that were impossible to understand. It was all harmful.

Elsewhere mathematicians were improving on Lisp, and functional programming was developed: pure, stateless, provable code focusing on what to do rather than how to do it. Functions became first-class citizens, and they could even be anonymous lambda functions.

Despite the apparent conflict between object-oriented, functional and imperative programming, there are some universally good things:

  • Functions that are not too long
  • Functions that do one thing well
  • Functions that have no side effects
  • Functions that can be tested, and that also are tested
  • Functions that can be reused, perhaps even being general
  • Functions and variables that are clearly named

So, how are we doing?

Comparing different styles
I read code and I talk to people who have different opinions about what is good and bad code. I decided to implement the same thing following different principles and discuss the different options. I particularly want to explore different ways to do functional programming.

My language of choice is JavaScript because it allows different styles, it requires quite little code to be written, and many people should be able to read it.

My artificial problem is that I have two arrays of N numbers. One number from each array can be added in NxN different ways. How many of these are prime? That is, for N=2, if I have [10,15] and [2,5] I get [12,15,17,20], of which one number (17) is prime. In all code below I decide if a number is prime in the same simple way.
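Every implementation below should thus pass this minimal check of the N=2 example:

  if ( 1 !== primecount([10,15], [2,5]) )
    throw new Error('primecount failed for the N=2 example');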

Old imperative style (imperative)
The old imperative style would use variables and loops. If I had goto in JavaScript I would use goto instead of setting a variable (p) before I break out of the inner loop. This code allows nothing to be tested or reused, although the function itself is testable, reusable and pure (for practical purposes and correct input, just as all the other examples).

  const primecount = (a1,a2) => {
    let i, j;
    let d, n, p;
    let retval = 0;


    for ( i=0 ; i<a1.length ; i++ ) {
      for ( j=0 ; j<a2.length ; j++ ) {
        n = a1[i] + a2[j];
        p = 1;
        for ( d=2 ; d*d<=n ; d++ ) {
          if ( 0 === n % d ) {
            p = 0;
            break;
          }
        }
        retval += p;
      }
    }
    return retval;
  }

Functional style with lambda-functions (lambda)
The functional programming equivalent would look like the below code. I have focused on avoiding declaring variables (which would lead to mutable state), instead using the higher order function reduce to iterate over the two lists. This code also allows no parts to be tested or reused. In a few lines of code there are three unnamed functions, none of them trivial.

  const primecount = (a1,a2) => {
    return a1.reduce((sum1,a1val) => {
      return sum1 + a2.reduce((sum2,a2val) => {
        return sum2 + ((n) => {
          for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
          return 1;
        })(a1val+a2val);
      }, 0);
    }, 0);
  };

Imperative style with separate test function (imperative_alt)
The imperative code can be improved by breaking out the prime test function. The advantage is clearly that the prime function can be modified in a cleaner way, and it can be tested and reused. Also note that the usefulness of goto disappeared, because return fulfills the same task.

  const is_prime = (n) => {
    for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
    return 1;
  };

  const primecount = (a1,a2) => {
    let retval = 0;
    for ( let i=0 ; i<a1.length ; i++ )
      for ( let j=0 ; j<a2.length ; j++ )
        retval += is_prime(a1[i] + a2[j]);
    return retval;
  };

  const test = () => {
    if ( 1 !== is_prime(19) ) throw new Error('is_prime(19) failed');
  };

Functional style with lambda and separate test function (lambda_alt)
In the same way, the reduce+lambda code can be improved by breaking out the prime test function. That function, but nothing else, is now testable and reusable.

  const is_prime = (n) => {
    for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
    return 1;
  };

  const primecount = (a1,a2) => {
    return a1.reduce((sum1,a1val) => {
      return sum1 + a2.reduce((sum2,a2val) => {
        return sum2 + is_prime(a1val+a2val);
      }, 0);
    }, 0);
  };

  const test = () => {
    if ( 1 !== is_prime(19) ) throw new Error('is_prime(19) failed');
  };

I think I can do better than any of the four above examples.

Functional style with reduce and named functions (reducer)
I don’t need to feed anonymous functions to reduce: I can give it named, testable and reusable functions instead. A challenge with reduce is that it is not very intuitive. filter can be used with any has* or is* function that you may already have. map can be used with any x_to_y function or some get_x_from_y getter or reader function that are also often useful. sort requires a cmpAB function. But reduce? I decided to name the below functions that are used with reduce reducer_*, and it works quite nicely.

The first one, reducer_count_primes, simply counts primes in a list. That is (re)usable and testable all by itself. The next function, reducer_count_primes_for_offset, is less likely to be generally reused (with offset=1 it considers 12+1 to be prime, but 17+1 is not), but it makes sense and it can be tested. Doing the same trick one more time with reducer_count_primes_for_offset_array and we are done.

These functions may never be reused. But they can be tested, and that is often a great advantage during development. You can build up your program part by part, and every step is a little more potent but still completely pure and testable (I remember this from my Haskell course long ago). This is how to solve hard problems using test driven development, and to have all tests in place when you are done.

  const is_prime = (n) => {
    for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
    return 1;
  };

  const reducer_count_primes = (s,n) => {
    return s + is_prime(n);
  };

  const reducer_count_primes_for_offset = (o) => {
    return (s,n) => { return reducer_count_primes(s,o+n); };
  };

  const reducer_count_primes_for_offset_array = (a) => {
    return (s,b) => { return s + a.reduce(reducer_count_primes_for_offset(b), 0); };
  };

  const primecount = (a1,a2) => {
    return a1.reduce(reducer_count_primes_for_offset_array(a2), 0);
  };

  const test = () => {
    if ( 1 !== [12,13,14].reduce(reducer_count_primes, 0) )
      throw new Error('reducer_count_primes failed');
    if ( 1 !== [9,10,11].reduce(reducer_count_primes_for_offset(3), 0) )
      throw new Error('reducer_count_primes_for_offset failed');
    if ( 2 !== [2,5].reduce(reducer_count_primes_for_offset_array([8,15]),0) )
      throw new Error('reducer_count_primes_for_offset_array failed');
  };

Using recursion (recursive)
Personally I like recursion. I think it is easier to use than reduce, and it is great for async code. The bad thing with recursion is that your stack will eventually get full (if you don't know what I mean, try my code, available below) for recursion depths that are far from unrealistic. My problem can be solved in the same step-by-step, test driven way using recursion.

  const is_prime = (n) => {
    for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
    return 1;
  };

  const primes_for_offset = (a,o,i=0) => {
    if ( i === a.length )
      return 0;
    else
      return is_prime(a[i]+o) + primes_for_offset(a,o,i+1);
  }

  const primes_for_offsets = (a,oa,i=0) => {
    if ( i === oa.length )
      return 0;
    else
      return primes_for_offset(a,oa[i]) + primes_for_offsets(a,oa,i+1);
  }

  const primecount = (a1,a2) => {
    return primes_for_offsets(a1,a2);
  };

  const test = () => {
    if ( 2 !== primes_for_offset([15,16,17],2) )
      throw new Error('primes_for_offset failed');
  };
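To see the stack limitation in practice, something like the following should throw a RangeError (assuming a default V8 stack, which in my experience allows recursion depths on the order of 10000):

  try {
    primes_for_offset(new Array(100000).fill(1), 1);
  } catch (e) {
    console.log(e.message);  // typically "Maximum call stack size exceeded"
  }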

Custom Higher Order Function (custom_higher_order)
Clearly reduce is not a perfect fit for my problem since I need to nest it. What if I had a reduce-like function that produced the sum of all NxN possible pairs from two arrays, given a custom value function? Well, that would be quite great and it is not particularly hard either. In my opinion this is a very functional approach (despite it being implemented with for-loops). All the functions written are independently reusable in a way not seen in the other examples. The problem with higher order functions is that they are pretty abstract, so they are hard to name, and they need to be general enough to ever be reused for practical purposes. Nevertheless, if I see it right away, I can do it. But I don’t spend time inventing generic stuff instead of solving the actual problem at hand.

  const is_prime = (n) => {
    for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
    return 1;
  };

  const combination_is_prime = (a,b) => {
    return is_prime(a+b);
  };

  const sum_of_combinations = (a1,a2,f) => {
    let retval = 0;
    for ( let i=0 ; i<a1.length ; i++ )
      for ( let j=0 ; j<a2.length ; j++ )
        retval += f(a1[i],a2[j]);
    return retval;
  };

  const primecount = (a1,a2) => {
    return sum_of_combinations(a1,a2,combination_is_prime);
  };

  const test = () => {
    if ( 1 !== is_prime(19) )
      throw new Error('is_prime(19) failed');
    if ( 0 !== combination_is_prime(5,7) )
       throw new Error('combination_is_prime(5,7) failed');
    if ( 1 !== sum_of_combinations([5,7],[7,9],(a,b)=> { return a===b; }) )
       throw new Error('sum_of_combinations failed');
  };

Lambda Functions considered harmful?
Just as there are many bad and some good applications for goto, there are both good and bad uses for lambdas.

I actually don't know if you, the reader, agree with me that the second example (lambda) offers no real improvement over the first example (imperative). On the contrary, it is arguably conceptually more complex to nest anonymous functions than to nest for-loops. I may have done the lambda example wrong, but there is much code out there written in that style.

I think the beauty of functional programming is the testable and reusable aspects, among other things. Long, or even nested, lambda functions offer no improvement over old spaghetti code there.

All the code and performance
You can download my code and run it using any recent version of Node.js:

$ node functional-styles-1.js 1000

The argument (1000) is N, and if you double N, execution time shall quadruple. I did some benchmarks and your results may vary depending on plenty of things. The below figures are just one run for N=3000, but nesting reduce clearly comes at a cost. As always, if what you do inside reduce is quite expensive the overhead is negligible. But using reduce (or any of the built-in higher order functions) for the innermost and tightest loop is wasteful.

 834 ms : imperative
 874 ms : custom_higher_order
 890 ms : recursive
 896 ms : imperative_alt
1015 ms : reducer
1018 ms : lambda_alt
1109 ms : lambda

Other findings on this topic
Functional Programming Sucks


Vue components in Angular

I have an application written in AngularJS (v1) that I keep adding things to. Nowadays I prefer to write new code for Vue.js rather than AngularJS but rewriting the entire AngularJS application is out of the question.

However, when the need for a new page (controller in AngularJS) shows up, it is quite simple to write a Vue component instead.

The AngularJS-html looks like this:

<div ng-if="page.showVue" id="{{ page.getVueId() }}"></div>

You may not have exactly “page” but if you have an AngularJS-application you know how to do this.

Your parent Angular controller needs to initiate Vue.

page.showVue = true;
var vue      = null;
var vueid    = null;

page.getVueId = function() {
    if ( !vueid ) {
        vueid = 'my_vue_component_id';
        var vueload = {
            el: '#' + vueid,
            template : '<my_vue_component />',
            data : {}
        };
        $timeout(function() {
            vue = new Vue(vueload);
        });
    }
    return vueid;
};

At some point you may navigate away from this vue page and then you can run the code:

vue.$destroy();
page.showVue = false;
vue          = null;
vueid        = null;

The way everything works is that when Angular wants to “show Vue” it sets page.showVue=true. This in turn activates the div, which needs an ID. The call to page.getVueId() will generate a Vue component (once), but initiate it only after Angular has shown the parent div with the correct id (thanks to $timeout).

You may use a router or have several different Vue-pages in your Angular-application, and you obviously need to adjust my code above for your purposes (so every id is unique, and every component is initiated once).

I suppose (but I have not tried) that it is perfectly fine to have several different Vue-components mounted in different places in your Angular application. But I think you are looking for trouble if you want Vue to use (be a parent for) Angular controllers or directives (as children).

Vue.js is small enough that this will come at a quite acceptable cost for your current Angular application and it allows you to write new pages or parts in Vue in an existing AngularJS application.

Webpack: the shortest tutorial

So, you have some JavaScript that requires other JavaScript using require, and you want to pack all the files into one. Install webpack:

$ npm install webpack webpack-cli

These are my files (a main file with two dependencies):

$ cat main.js 

var libAdd = require('./libAdd.js');
var libMult = require('./libMult.js');

console.log('1+2x2=' + libAdd.calc(1, libMult.calc(2,2)));


$ cat libAdd.js 

exports.calc = (a,b) => { return a + b; };


$ cat libMult.js 

exports.calc = (a,b) => { return a * b; };

To pack this

$ ./node_modules/webpack-cli/bin/cli.js --mode=none main.js
Hash: 639616969f77db2f336a
Version: webpack 4.26.0
Time: 180ms
Built at: 11/21/2018 7:22:44 PM
  Asset      Size  Chunks             Chunk Names
main.js  3.93 KiB       0  [emitted]  main
Entrypoint main = main.js
[0] ./main.js 141 bytes {0} [built]
[1] ./libAdd.js 45 bytes {0} [built]
[2] ./libMult.js 45 bytes {0} [built]

and I have my bundle in dist/main.js. This bundle works just like the original main.js:

$ node main.js 
1+2x2=5
$ node dist/main.js 
1+2x2=5

That is all I need to know about Webpack!

Background
I like the old way of building web applications: including every script with a src-tag. However, occasionally I want to use code I don't write myself, and more and more often it comes in a format that I cannot easily include with a src-tag. Webpack is a/the way to make it “just” a JavaScript file that I can do what I want with.

Arrow functions in JavaScript: A strategy

Arrow functions have been a part of JavaScript since ES6. They are typically supported where you run JavaScript, except in Internet Explorer. To be clear, arrow functions are:

(a,b) => a+b

instead of

function(a,b) { return a+b }

I like to make things simple, and

  1. my code sometimes runs on Internet Explorer
  2. arrow functions offer shorter and simplified syntax in some cases, but fundamentally you can write the same code with function
  3. I like to not have a build step (babel, webpack and friends) for a language that really does not and should not need one

so, until now I have simply avoided them (and kind of banned them, along with other ES6 features) in code and software I am responsible for.

However

  1. arrow functions (as part of ES6) are here to stay
  2. they offer some advantages
  3. Internet Explorer will go away.

so, it makes sense to have a strategy for when to use arrow functions.

What I find on the Internet
The Internet is full of sources telling you how you can use arrow functions, how to write them, what are the pros, cons and pitfalls, and what they cannot do.

  • The key difference is how arrow functions work with this.
  • The syntax is shorter especially for single argument (needs no parenthesis), single statement (needs no return), functions.
  • Arrow functions don’t work well with object-oriented things (such as constructors and prototype functions)

In short, there are some cases where you can’t use arrow functions, some cases where they offer some real advantages, but in most cases it makes little real difference.
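To illustrate the key difference with this mentioned above, a minimal sketch (the object and its methods are made up): an arrow function inherits this from its enclosing scope, while a regular function gets its own.

const counter = {
  count: 0,
  startGood: function() {
    // arrow function: this is inherited from startGood, so this === counter
    setInterval(() => { this.count++; }, 1000);
  },
  startBad: function() {
    // regular function: this is NOT counter here (undefined or the global object)
    setInterval(function() { this.count++; }, 1000);
  }
};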

Arrow functions allow you to chain sort().filter().map() in very compact ways. With simple single statement arrow functions it is quite nice. But if the arrow functions become multiple lines I think it is poor programming.
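For example, a compact chain like this reads well (persons and its fields are made up):

const names = persons.filter(p => p.active).map(p => p.name).sort();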

What I don’t really find on the Internet
I don’t really find good advice on when to use arrow functions and when not to use arrow functions. I mean, when I program, I make decisions all the time:

  • Should I break this code out into a function?
  • Should this be an object (prototype style) or just data?
  • Should I break this code into its own module?
  • Should I write tests for this?
  • Should I allow a simple, slower algorithm, or should I add effort and complexity to write my code faster?
  • What should be the scope of these variables?
  • Should this be a parameter or can it be hard coded?
  • Can I make good use of map/reduce/every and friends, or is it better I just use a loop?
  • Naming everything…
  • …and so on…

Using, or not using, an arrow function is also a choice. How do I make that choice to ensure my code is good? I don’t really find very clear guidelines or style guides on this.

Lambda functions in other languages
Other languages have lambda functions. Those are special case anonymous functions. The thing I find peculiar about the use of arrow functions in JavaScript is that they are often used instead of function, when a standard function – not a lambda – would have been the obvious choice in other languages.

Intention
For practical purposes most often function and () => {} are interchangeable. And I guess you can write any JavaScript program using only arrow functions.

When you write code, it mostly does not matter what you use.
When you read code, it comes down to understanding the intention of the writer.

So I think good use of arrow functions is a way that makes the intention of the code as clear as possible. I want clear and consistent guidelines.

Using arrow functions in well defined cases shows more intention and contributes to more clear code than never using them.

I tend to read arrow functions as being a strong marker for functional programming. I find it confusing when arrow functions are used in code that breaks other good core principles of functional programming.

The strongest cases
The strongest cases for arrow functions I can see:

Minimal syntax (no () or {} required), and never worth breaking such a function out.

names = stuffs.map(stuff => stuff.name);

Callback: the arguments (error, data) are already given by openFile and the callback function cannot have a meaningful this. Also, for most practical purposes, the callback needs to use closure to access data in the parent scope, so it cannot be a named function declared elsewhere.

openFile('myFile', (error, data) => {
  ... implementation
});

When it makes little difference
For a regular function it makes no difference:

const swapNames = (a,b) => {
  let tmp = a.name;
  a.name = b.name;
  b.name = tmp;
}

The function alternative would be:

function swapNames(a,b) {

and is actually shorter. However, I can appreciate with arrows that it is completely clear from the beginning that a binding of this can never happen, that it cannot be used as a constructor and that there can be no hidden arguments (accessed via arguments).

Confused with comparison
There are cases when arrow functions can be confused with comparison.

// The intent is not clear
var x = a => 1 ? 2 : 3;
// Did the author mean this
var x = function (a) { return 1 ? 2 : 3 };
// Or this
var x = a <= 1 ? 2 : 3;

Obfuscate with higher order functions
Higher order functions (map, reduce, filter, sort) are nice and can improve your code. But, carelessly used they can be confusing and obfuscating.

These problems are not the fault of () => {} in itself, but rather a consequence of arrow functions making higher order functions a little too popular.

I have, for example, seen things like:

myArray.map(x => x.print())

map() should not have a side effect. It is outright obfuscating to feed a function that has a side effect into map(). And side effects have nothing to do with functional programming in the first place.

I have also seen reduce() and filter() being used when every(), some() or find() would have been the right choice. It is obfuscating, it is expensive, and it produces more code than necessary.
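For example (a sketch with made-up data): checking for existence with filter() scans the whole array, while some() can stop at the first match.

// obfuscating and expensive: filter() always scans the whole array
const found1 = 0 < myArray.filter(x => x.id === wantedId).length;
// clear and cheap: some() stops at the first match
const found2 = myArray.some(x => x.id === wantedId);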

The use of arrow functions with higher order functions is only appropriate when the correct higher order function is used.

The abusive cases
Non-trivial anonymous functions that could clearly be named, reused and tested are clearly bad code:

myStuff.sort((a,b) => {
  if ( a.name < b.name ) return -1;
  if ( a.name > b.name ) return  1;
  if ( a.id   < b.id   ) return -1;
  if ( a.id   > b.id   ) return  1;
  return 0;
});

especially when the code is duplicated or the parent function is large.

An arrow-friendly policy
Admittedly, after doing my research I feel happier with arrow functions than I thought I would.

I suggest (as long as your runtime supports it) using arrow functions as the default function type. The reason for this is that they do less. I think the standard behavior of arguments, this and the OOP-concepts (prototype and constructors) should be optional and require explicit use (of function).

Just as one-line if-statements and if-statements without {} should be used carefully (I tend to abuse them myself), I think the same applies to arrow functions.

I think this is excellent:

names = stuffs.map(stuff => stuff.name);

but apart from those common simple cases I think the full syntax should be used for clarity:

const compareItems = (a,b) => {
  if ( a.name < b.name ) return -1;
  if ( a.name > b.name ) return  1;
  if ( a.id   < b.id   ) return -1;
  if ( a.id   > b.id   ) return  1;
  return 0;
};

(don't try to be clever by omitting (), {}, or return).

The use of function should be reserved for

  • constructors
  • prototype functions
  • functions that need the standard behavior of this
  • functions that do things with arguments
  • source files where function is used exclusively since before

Basic good functional programming practices should be especially respected when using arrow functions:

  • Don't duplicate code: break out anonymous functions to named functions when appropriate
  • Don't write long functions: break out anonymous functions to named functions when appropriate
  • Avoid side effects and global variables
  • Use the correct higher order function for the job

Also, obviously, take advantage of OOP and function when appropriate!

Callback functions
I think anonymous callback functions should generally be kept short.

const doStuff = () => {
  readFile('myFile', (error, data) => {
    if ( error )
      console.log('readFile failed: ' + error);
    else
      doStuffWithData(data);
  });
};

const doStuffWithData = (data) => {
  ...
};

Performance
In principle, I see no reason why arrow functions should not be at least as fast as regular functions. In practice, the current state of JavaScript engines could be disappointing; I don't know.

However, a named static function is typically faster than an anonymous inline function. The JIT can typically optimize a function better the more it is run, so named and reusable functions are preferred.

I have made no benchmarks on arrow functions.

Feedback
I will start using arrow functions when I write new code and I feel enthusiastic about it. I will probably come across things I have not thought about. Do you have any thoughts on this? Let me know!

Vue.js: loading template html files

Update 2018-05-27: A few months have passed since I wrote this post. I have used my solution/library for several real applications and it has worked very well. So everything looks exactly as it did when I posted v0.1, and that is a good thing. There are obviously improvement opportunities and probably limitations/bugs. But for my purposes I have not encountered any problems to fix. And nobody has notified me of needed fixes.

You may want to code your Vue.js application in such a way that your html templates are in separate html files, but you still do not want a build/compile step. Well, the people writing Vue don't want you to do this, but it can easily be done.

All you need is to download this single js file and include it in your Vue.js web page. All instructions and documentation required are found in the js file.

VueWithHtmlLoader-library
I wrote a little library that simply does what is required in a rather simple way. I will not hold you back and I will show you by example immediately:

  • A Rock-paper-scissors Vue-app, all in 1 file: link
  • A Rock-paper-scissors Vue-app, modularised with separate html/js files: link
  • Source of VueWithHtmlLoader library: link

These are the code changes needed to use VueWithHtmlLoader:

 * 1) After including "vue.js", and
 *    before including your component javascript files,
 *    include "vuewithhtmlloader.js"
 *
 * 2) In your component javascript files
 *    replace: Vue.component(
 *       with: VueWithHtmlLoader.component(
 *
 *    replace: template: '...'
 *       with: templateurl: 'component-template.html' (replace with your url)
 *
 * 3) The call to "new Vue()" needs to be delayed, like:
 *    replace: var myVue = new Vue(...);
 *       with: var myVue;          
 *             function initVue() {
 *               myVue = new Vue(...);
 *             }
 *             VueWithHtmlLoader.done(initVue);

My intention is that the very simple Rock-paper-scissors-app shall work as an example.

Disclaimer: the library is newly written and tested only with this application. The application is written primarily to demonstrate the library. The focus has been clarity and simplicity. Please feel free to suggest improvements to the library or the application, but keep in mind that it was never my intention to follow all best practices. The purpose of the library is to break a Vue best practice.

What the library does:

  1. It creates a global object: VueWithHtmlLoader
  2. It provides a function: VueWithHtmlLoader.component() that you shall use instead of Vue.component() (there may be unsupported/untested cases)
  3. When using VueWithHtmlLoader.component(), you can provide templateurl: 'mytemplate.html' instead of template: 'whatever Vue normally supports'
  4. The Vue()-constructor must be called after all templateurls have been downloaded. To facilitate this, place the code that calls new Vue() inside a function, and pass that function to VueWithHtmlLoader.done()
  5. The library will now load all templateurls. When an html template is successfully downloaded over the network Vue.component() is called normally.
  6. When all components are initiated, new Vue() is called via the provided function
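Putting the steps together, a component file and the startup code could look something like this (the names are my own, a sketch based on the list above):

/* my-component.js: a component with its template in a separate html file */
VueWithHtmlLoader.component('my-component', {
  templateurl : 'my-component.html',   // instead of template: '...'
  data : function() { return { greeting : 'Hello' }; }
});

/* main.js: delay new Vue() until all templates are downloaded */
var myVue;
function initVue() {
  myVue = new Vue({ el : '#app' });
}
VueWithHtmlLoader.done(initVue);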

Apart from this, you can and should use the global Vue object normally for all other purposes. There may be more things that you want to happen after new Vue() has been called.

The library has no dependencies (it uses XMLHttpRequest directly).

Background
Obviously there are people (like me) with an AngularJS (that is v1) background who are used to ng-include and like it. We see Vue as a better, smaller AngularJS for the future, but we want to keep our templates in separate files without a build step.

I also expect many developers with various backgrounds to try out Vue.js. They may also benefit from a simple way to keep templates in separate files without worrying about a build tool.

As I see it, there are different sizes of applications (and sizes of team and support around them).

  1. Small single-file applications: I think it is great that Vue supports simple single-file applications (with x-template if you want), implemented like my game above. This has a niche!
  2. Applications that clearly require modularization, but optimizing loading times is not an issue, and you want to use the simplest tools available (keep html/js separate to allow standard editor support and not require a build step). AngularJS (v1) did this nicely. I intend Vue to do it nicely too with this library.
  3. Applications built by people or organizations that already use Webpack and such tools, or applications that are so demanding that such tools are required.

I fully respect and understand the Vue project does not want to support case 2 out of the box and that they prefer to keep the Vue framework small (and as fast as possible).

But I sense some kind of arrogance in articles like 7 Ways To Define A Component Template in Vue.js. I mean, 1 and 2 are only useful for very small components. 3 is only useful for minimal applications that don't require modularization. 4 has very narrow use cases. 5 is insane for normal development (however, I can see cases where you want to output/generate it). And 6 and 7 require a build step.

8. Put the damn HTML in an HTML-file and include it? Nowhere to be seen.

The official objection to 8 is obviously performance. I understand that pre-compiling your html instead of serving html that the client will compile is faster. But compared to everything else this overhead may be negligible. And that is what performance is all about: focusing on what is critical and keeping everything else simple. My experience is that loading data into my applications takes much more time than loading the application itself.

The Illusion of Simplicity
AngularJS (v1) gave the illusion of simplicity. You just wrote JavaScript-files and (almost) HTML-files, the browser loaded everything and it just worked. I know this is just an illusion and a lot happens behind the scenes. But my experience is that this illusion works well, and it does not leak too much. Vue.js is so much simpler than AngularJS in so many ways. I think my library can keep my illusion alive.

Other options
There is a thread on Stack Overflow about this and there are obviously other solutions. If you want to write .vue-files and load them there is already a library for that. For my solution I was inspired by the simple jquery example, but: 1) it is nice to not have a jquery dependency, 2) it is nice to keep the async stuff in one place, 3) the delayed call of new Vue() seems forgotten.

Feedback, limitations, bugs…
If you have suggestions for improvements or fixes of my library, please let me know! I am happy to make it better and I intend to use it for real applications.

I think this library suits some but not all (or even most) Vue.js applications. Let's not expect it to serve very complex requirements or applications that would actually benefit more from a Webpack treatment.

TODO and DONE

  • A minified version – I have not really decided on ambition/obfuscation level
  • Perhaps change loglevel if minified Vue is used? or not.
  • I had some problems with comments in html-files, but I failed to reproduce them. I think <!-- comments --> should definitely be supported.

JavaScript: Sets, Objects and Arrays

JavaScript has a new (well well) fancy Set data structure (that does not come with functions for union, intersection and the like, but whatever). A little while ago I tested Binary Search (also not in the standard library) and I was quite impressed with the performance.
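(If you need union or intersection they are easy enough to write yourself; a minimal sketch:)

const union        = (s1, s2) => new Set([...s1, ...s2]);
const intersection = (s1, s2) => new Set([...s1].filter(x => s2.has(x)));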

When I code JavaScript I often hesitate about using an Array or an Object. And I have not started using Set much.

I decided to make some tests. Let's say we have pseudo-random natural numbers (like 10000 of them). We then want to check if a number is among the 10000 numbers or not (if it is a member of the set). A JavaScript Set does exactly that. A JavaScript Object just requires you to do obj[314] = true and you are basically done (the key gets converted to a string, though). For an Array you just push(314), sort the array, and then use binary search to see if the value is there.

Obviously, if you often add or remove values, (re)sorting the Array will be annoying and costly. But quite often this is not the case.
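In code, the three membership tests look like this (a sketch; lodash.sortedIndex provides the binary search, just as in my benchmark code below):

var lodash = require('lodash');

var values = ['17', '42', '314'].sort();         // Array: keep it sorted
var obj    = {};
values.forEach(function(v) { obj[v] = true; });  // Object: one key per member
var set    = new Set(values);                    // Set: just add the members

var v = '314';
var inArray  = v === values[lodash.sortedIndex(values, v)];  // binary search
var inObject = true === obj[v];
var inSet    = set.has(v);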

The test
My test consists of generating N=10000 random unique numbers (with distance 1 or 2 between them). I then insert them (in a kind of pseudo-random order) into an Array (and sort it), into an Object, and into a Set. I measure this time as an initiation time (for each data structure).

I repeat. So now I have 2xArrays, 2xObjects, 2xSets.

This way I can test both iterating and searching with all combinations of data structures (and check that the results are the same and thus correct).

Output of a single run: 100 iterations, N=10000, on a Linux Intel i5 and Node.js 8.9.1 looks like this:

                         ====== Search Structure ======
(ms)                        Array     Object      Set
     Initiate                1338        192      282
===== Iterate =====    
        Array                 800         39       93
       Object                 853        122      170
          Set                1147         82      131

By comparing columns you can compare the cost of searching (and initiating the structure before searching it). By comparing rows you can compare the cost of iterating over the different data structures (for example, iterating over Set while searching Array took 1147ms).

These results are quite consistent on this machine.

Findings
Some findings are very clear (I guess they are quite consistent across systems):

  • Putting values in an Array, sorting it, and then searching it, is much slower and makes little sense compared to using an Object (or a Set)
  • Iterating an Array is a bit faster than iterating an Object or Set, so if you are never going to search, an Array is faster
  • The newer and more specialized Set offers little advantage to good old Objects

What is more unclear is why iterating over Objects is faster when searching Arrays, but iterating over Sets is faster when searching Objects or Sets. What I find is:

  • Sets seem to perform comparably to Objects on Raspberry Pi, ARMv7.
  • Sets seem to underperform more on Mac OS X

Obviously, all this is very unclear and can vary depending on CPU-cache, Node-version, OS and other factors.

Smaller and Larger sets
These findings hold quite well for smaller N=100 and larger N=1000000. The Array, despite being O(n log n), does not get much worse for N=1000000 than it already was for N=10000.

Conclusions and Recommendation
I think the conservative choice is to use Arrays when order is important or you know you will not look for a member based on its unique id. If members have unique IDs and are not ordered, use Object. I see no reason to use Set, especially if you target browsers (support in IE is still limited in early 2018).

The Code
Here follows the source code. Output is not quite as pretty as the table above.

var lodash = require('lodash');

function randomarray(size) {
  var a = new Array(size);
  var x = 0;
  var i, r;
  var j = 0;
  var prime = 3;

  if ( 50   < size ) prime = 31;
  if ( 500  < size ) prime = 313;
  if ( 5000 < size ) prime = 3109;

  for ( i=0 ; i<size ; i++ ) {
    r = 1 + Math.floor(2 * Math.random());
    x += r;
    a[j] = '' + x;
    j += prime;
    if ( size <= j ) j-=size;
  }
  return a;
}

var times = {
  arr : {
    make : 0,
    arr  : 0,
    obj  : 0,
    set  : 0
  },
  obj : {
    make : 0,
    arr  : 0,
    obj  : 0,
    set  : 0
  },
  set : {
    make : 0,
    arr  : 0,
    obj  : 0,
    set  : 0
  }
}

function make_array(a) {
  times.arr.make -= Date.now();
  var i;
  var r = new Array(a.length);
  for ( i=a.length-1 ; 0<=i ; i-- ) {
    r[i] = a[i];
  }
  r.sort();
  times.arr.make += Date.now();
  return r;
}

function make_object(a) {
  times.obj.make -= Date.now();
  var i;
  var r = {};
  for ( i=a.length-1 ; 0<=i ; i-- ) {
    r[a[i]] = true;
  }
  times.obj.make += Date.now();
  return r;
}

function make_set(a) {
  times.set.make -= Date.now();
  var i;
  var r = new Set();
  for ( i=a.length-1 ; 0<=i ; i-- ) {
    r.add(a[i]);
  }
  times.set.make += Date.now();
  return r;
}

function make_triplet(n) {
  var r = randomarray(n);
  return {
    arr : make_array(r),
    obj : make_object(r),
    set : make_set(r)
  };
}

function match_triplets(t1,t2) {
  var i;
  var m = [];
  m.push(match_array_array(t1.arr , t2.arr));
  m.push(match_array_object(t1.arr , t2.obj));
  m.push(match_array_set(t1.arr , t2.set));
  m.push(match_object_array(t1.obj , t2.arr));
  m.push(match_object_object(t1.obj , t2.obj));
  m.push(match_object_set(t1.obj , t2.set));
  m.push(match_set_array(t1.set , t2.arr));
  m.push(match_set_object(t1.set , t2.obj));
  m.push(match_set_set(t1.set , t2.set));
  for ( i=1 ; i<m.length ; i++ ) {
    if ( m[0] !== m[i] ) {
      console.log('m[0]=' + m[0] + ' != m[' + i + ']=' + m[i]);
    }
  }
}

function match_array_array(a1,a2) {
  times.arr.arr -= Date.now();
  var r = 0;
  var i, v;
  for ( i=a1.length-1 ; 0<=i ; i-- ) {
    v = a1[i];
    if ( v === a2[lodash.sortedIndex(a2,v)] ) r++;
  }
  times.arr.arr += Date.now();
  return r;
}

function match_array_object(a1,o2) {
  times.arr.obj -= Date.now();
  var r = 0;
  var i;
  for ( i=a1.length-1 ; 0<=i ; i-- ) {
    if ( o2[a1[i]] ) r++;
  }
  times.arr.obj += Date.now();
  return r;
}

function match_array_set(a1,s2) {
  times.arr.set -= Date.now();
  var r = 0;
  var i;
  for ( i=a1.length-1 ; 0<=i ; i-- ) {
    if ( s2.has(a1[i]) ) r++;
  }
  times.arr.set += Date.now();
  return r;
}

function match_object_array(o1,a2) {
  times.obj.arr -= Date.now();
  var r = 0;
  var v;
  for ( v in o1 ) {
    if ( v === a2[lodash.sortedIndex(a2,v)] ) r++;
  }
  times.obj.arr += Date.now();
  return r;
}

function match_object_object(o1,o2) {
  times.obj.obj -= Date.now();
  var r = 0;
  var v;
  for ( v in o1 ) {
    if ( o2[v] ) r++;
  }
  times.obj.obj += Date.now();
  return r;
}

function match_object_set(o1,s2) {
  times.obj.set -= Date.now();
  var r = 0;
  var v;
  for ( v in o1 ) {
    if ( s2.has(v) ) r++;
  }
  times.obj.set += Date.now();
  return r;
}

function match_set_array(s1,a2) {
  times.set.arr -= Date.now();
  var r = 0;
  var v;
  var iter = s1[Symbol.iterator]();
  while ( ( v = iter.next().value ) ) {
    if ( v === a2[lodash.sortedIndex(a2,v)] ) r++;
  }
  times.set.arr += Date.now();
  return r;
}

function match_set_object(s1,o2) {
  times.set.obj -= Date.now();
  var r = 0;
  var v;
  var iter = s1[Symbol.iterator]();
  while ( ( v = iter.next().value ) ) {
    if ( o2[v] ) r++;
  }
  times.set.obj += Date.now();
  return r;
}

function match_set_set(s1,s2) {
  times.set.set -= Date.now();
  var r = 0;
  var v;
  var iter = s1[Symbol.iterator]();
  while ( ( v = iter.next().value ) ) {
    if ( s2.has(v) ) r++;
  }
  times.set.set += Date.now();
  return r;
}

function main() {
  var i;
  var t1;
  var t2;

  for ( i=0 ; i<100 ; i++ ) {
    t1 = make_triplet(10000);
    t2 = make_triplet(10000);
    match_triplets(t1,t2);
    match_triplets(t2,t1);
  }

  console.log('TIME=' + JSON.stringify(times,null,4));
}

main();

When to (not) use Web Workers?

Web Workers are a mature, simple, standardised, compatible technology allowing multithreaded JavaScript applications in the web browser.

I am not going to write about how to use Web Worker (check the excellent MDN article). I am going to write a little about when and why to (not) use Web Worker.

First, Web Workers are about performance. And performance is typically not the best thing to think about first when you code something.

Second, when you have performance problems and you throw more cores at the problem, your best speedup is x2, x4 or xN. In 2018 it is quite common to have 4 cores, and that means in the optimal case you can make your program 4 times faster by using Web Workers. Unfortunately, if it was not fast enough from the beginning, chances are a 4x speedup is not going to help much. And the cost of a 4x speedup is that 4 times more heat is produced, the battery will drain faster, and perhaps other applications will suffer. A more efficient algorithm can often produce a 10-100 times speedup without making the maintainability of the program suffer too much (and there are very many ways to make a non-optimised program faster).

Let us say we have a web application. The user clicks “Show report”, the GUI locks/blocks for 10s and then the report displays. The user might accept that the GUI locks, if just for 1-2 seconds. Or the user might accept that the report takes 10s to compute, if it shows up little by little and the program does not appear hung. The way we could deal with this in JavaScript (which is single-threaded and asynchronous) is to break the 10s report calculation into small pieces (say 100 pieces each taking 100ms), and after calculating each piece calling window.setTimeout, which allows the UI to update (among other things) before calculating the next piece of the report. Perhaps a more common and practical approach is to divide the 10s job into logical parts (fetch data, make calculations, make report), but this would not much improve the locked-GUI situation, since some (or all) parts still take significant (blocking) time.
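A sketch of that piece-by-piece pattern (calculatePiece and showReport are made-up names):

function calculateReport(piece) {
  if ( 100 === piece ) {
    showReport();                    // all pieces done
    return;
  }
  calculatePiece(piece);             // ~100ms of blocking work
  window.setTimeout(function() {     // yield, so the UI can update
    calculateReport(piece + 1);
  }, 0);
}
calculateReport(0);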

If we could send the entire 10s job to a Web Worker, our program GUI would be completely responsive while the report is generated. Now, the key limitation of a Web Worker (which is also what allows it to be simple and safe) is:

Data is copied to the Worker before it starts, and copied from the Worker when it has completed (rather than being passed by reference).

This means that if you already have a lot of data, it might be quite expensive to copy that data to the web worker, and it might actually be cheaper to just do the job where the data already is. In the same way, since there is some overhead in calling the Web Worker, you can’t send too many too small pieces of work to it, because you will occupy yourself with sending and receiving messages rather than just doing the job right away.

This leaves us with obvious candidates for web workers (you can use Google):

  • Expensive searches (like chess moves or travelling salesman solutions)
  • Encryption (but chances are you should not do it in JavaScript in the first place, for security reasons)
  • Spell and grammar checker (I don’t know much about this).
  • Background network jobs

This is not too useful in most cases. What would be useful would be to send packages of work (arrays), like streams in a functional programming way: map(), reduce(), sort(), filter().

I decided to write some Web Worker tests based on sort(). Since I cannot (easily, and there are probably good reasons) write JavaScript in WordPress, I wrote a separate page with the application. Check it out now.

So, for 5 seconds I try to do the following job as many times as I can, while I keep track of how much the GUI is suffering:

  1. create an array of 10001 random numbers: O(n)
  2. sort it: O(n log n)
  3. get the median (array[5000]): O(1)

The expensive part is step 2, the sort (well, I actually have not measured 1 vs 2). If the ratio of work done per byte sent is high enough, then it can be worth sending the job to a Web Worker.

If you run the tests yourself I think you will see that the first Web Worker tests, which outsource all of 1-2-3, are quite OK. But this basically means giving the Web Worker no data at all and, when it has done a significant amount of work, receiving just a few numbers. This is more Web Worker friendly than chess, where at least the board would need to be sent.

If you then run the tests that outsource just sort() you see significantly lower throughput. How suitable is sort()? Well, sorting 10k ≈ 2^13 elements should require each element to be compared (accessed) about 13 times. And no data is sent that is not needed by the Web Worker. Just as a counterexample: if you send an order to get back the sum of its lines, most of the order data is ignored by the Web Worker, and it just needs to access each line value once; much, much less suitable than sort().

Findings from tests
I find that sort(), being O(n log n), on an array of numbers is far too fast to be outsourced to a Web Worker. You need to find a much more “dense” problem to benefit from a Web Worker.

Islands of data
If you can design your application in such a way that one Web Worker maintains its own full state and just shares small selected parts occasionally, that could work. The good thing is that this would also be clean encapsulation of data and separation of responsibilities. The bad thing is that you probably need to design with the Web Worker in mind quite early, and this kind of premature optimization is often a bad idea.

This could be letting a Web Worker do all your I/O. But if most data that you receive is needed in your application, and most data you send comes straight from your application, the benefit is very questionable. And if most data you receive is not needed in your application, perhaps you should not receive so much data in the first place. Even if you process your incoming data quite heavily (validating, integrating with current state, precalculating), I would not expect it to come very close to the computational intensity of my sort().

Conclusions
Unfortunately, the simplicity and safety of the Web Worker is also its biggest limitation. The primary reason for using a Web Worker should be performance, and even for artificial problems it is hard to get any benefit.

Note to self: never try-catch more than necessary!

I wrote a function, and then a unit test, and the unit test was good.
Then I called the function from my real project, and it failed!

I isolated the problem and thought I had found a bug in V8 (except after many years as a programmer I have learnt it is never the compiler's fault).

This was my output:

$ node bug.js 
Test good
main: err=Not JSON

This is my simplified (faulty) code:

function callSomething(callback) {
  var rawdata = '{ "a":"1" }';
  var jsondata; 

  try {
    jsondata = JSON.parse(rawdata);
    callback(null,jsondata);
  } catch (e) {
    callback('Not JSON', null);
  }
}

function test() {
  callSomething(function(err,data) {
    if ( err ) console.log('Test bad: ' + err);
    console.log('Test good');
  });
}

function main() {
  var result = {
    outdata : {}
  };

  callSomething(function(err,data) {
    if ( err ) {
      console.log('main: err=' + err);
    } else {
      result.outata.json = data;
      console.log('main: json=' + JSON.stringify(result.outdata.json));
    }
  });
}

test();
main();

How can the test not fail when main fails?

Well, here is the correct output

$ node nodebug.js 
Test good
main: json={"a":"1"}

of the correct code main function:

function main() {
  var result = {
    outdata : {}
  };

  callSomething(function(err,data) {
    if ( err ) {
      console.log('main: err=' + err);
    } else {
//    result.outata.json = data;
      result.outdata.json = data;
      console.log('main: json=' + JSON.stringify(result.outdata.json));
    }
  });
}

The misnamed property caused an Error which was (unintentionally) caught, causing the anonymous callback function to be called once more, this time with err set, but to the wrong error.

It would have been better to write:

function callSomething(callback) {
  var rawdata = '{ "a":"1" }';
  var jsondata; 

  try {
    jsondata = JSON.parse(rawdata);
  } catch (e) {
    callback('Not JSON', null);
    return;
  }
  callback(null,jsondata);
}

and the misnamed property error would have crashed the program in the right place.

Conclusion
Don’t ever try more things than necessary. And if you need to try several lines, consider making a separate try for each.

All JavaScript objects are not equally fast

One thing I like with JavaScript and NodeJS is to have JSON in the entire stack. I store JSON on disk, process JSON data server side, send JSON over HTTP, process JSON data client side, and the web GUI can easily present JSON (I work with Angular).

As a result of this, all objects are not created the same. Let's say I keep track of Entries: I have an Entry-constructor that initiates new objects with all fields (no more, no less). At the same time I receive Entry-objects as JSON-data over the network.

A strategy is needed:

  1. Have a mix of raw JSON-Entries and Objects that are instanceof Entry
  2. Create real Entry-objects from all JSON-data
  3. Only work with raw JSON-Entries

Note that if you don’t go with (2) you can’t use prototype, expect objects to have functions or use instanceof to identify objects.

Another perhaps not obvious aspect is that performance is not the same. When you create a JavaScript object using new, the runtime actually creates a class with fast-to-access properties. Such object properties are faster than:

  • an empty object {} with properties set afterwards
  • an object created with JSON.parse()
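A sketch of the difference, with a made-up Entry that has just two fields:

function Entry(id, name) {
  this.id   = id;
  this.name = name;
}

var raw  = JSON.parse('{"id":1,"name":"first"}');  // strategy (3): raw JSON-Entry
var real = new Entry(raw.id, raw.name);            // strategy (2): real Entry-object

console.log(real instanceof Entry);  // true
console.log(raw  instanceof Entry);  // false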

I wrote a program to test this. The simplified explanation is that I obtained an array of objects that I then sorted/calculated a few (6) times. For a particular computer and problem size I got these results:

TIME   PARAMETER   DESCRIPTION
3.3s       R       Produce random objects using "new"
4.4s       L       Load objects from json-file using JSON.parse()
3.0s       L2      json-file, JSON.parse(), send raw objects to constructor
3.2s       L3      load objects using require() from a js-file

I will be honest and say that the implementation of the compare-function sent to sort() matters. Some compare functions suffered more or less from different object origins. Some compare functions are more JIT-optimised and faster the second run. However, the consistent finding is that raw JSON-objects are about 50% slower than objects created with new and a constructor function.

What is not presented above is the cost of parsing and creating objects.

My conclusion from this is that unless you have very strict performance requirements you can use the raw JSON-objects you get over the network.

Below is the source code (for Node.js). Apart from the parameters R, L, L2 and L3 there is also a S(tore) parameter. It creates the json- and js-files used by the Load options. So typically run the program with the S option first, and then the other options. A typical run looks like this:

$ node ./obj-perf.js S
Random: 492ms
Store: 1122ms

$ node ./obj-perf.js R
Random: 486ms
DISTS=110463, 110621, 110511, 110523, 110591, 110515 : 3350ms
DISTS=110463, 110621, 110511, 110523, 110591, 110515 : 3361ms
DISTS=110463, 110621, 110511, 110523, 110591, 110515 : 3346ms

$ node ./obj-perf.js L
Load: 376ms
DISTS=110463, 110621, 110511, 110523, 110591, 110515 : 4382ms
DISTS=110463, 110621, 110511, 110523, 110591, 110515 : 4408ms
DISTS=110463, 110621, 110511, 110523, 110591, 110515 : 4453ms

$ node ./obj-perf.js L2
Load: 654ms
DISTS=110463, 110621, 110511, 110523, 110591, 110515 : 3018ms
DISTS=110463, 110621, 110511, 110523, 110591, 110515 : 2974ms
DISTS=110463, 110621, 110511, 110523, 110591, 110515 : 2890ms

$ node ./obj-perf.js L3
Load: 1957ms
DISTS=110463, 110621, 110511, 110523, 110591, 110515 : 3436ms
DISTS=110463, 110621, 110511, 110523, 110591, 110515 : 3264ms
DISTS=110463, 110621, 110511, 110523, 110591, 110515 : 3199ms

The columns with numbers (like 110511) are checksums calculated between the sorts. They should be equal between runs; apart from that they don't matter.

const nodeFs = require('fs');

function Random(seed) {
  this._seed = seed % 2147483647;
  if (this._seed <= 0) this._seed += 2147483646;
}

Random.prototype.next = function () {
  return this._seed = this._seed * 16807 % 2147483647;
};

function Timer() {
  this.time = Date.now();
}

Timer.prototype.split = function() {
  var now = Date.now();
  var ret = now - this.time;
  this.time = now;
  return ret;
};

function Point() {
  this.a = -1;
  this.b = -1;
  this.c = -1;
  this.d = -1;
  this.e = -1;
  this.f = -1;
  this.x =  0;
}

function pointInit(point, rand) {
  var p;
  for ( p in point ) {
    point[p] = rand.next() % 100000;
  }
}

function pointLoad(json) {
  var p;
  var point = new Point();
  for ( p in point ) {
    point[p] = json[p];
  }
  return point;
}

function pointCmp(a,b) {
  return pointCmpX[a.x](a,b,a.x);
}

function pointCmpA(a,b) {
  if ( a.a !== b.a ) return a.a - b.a;
  return pointCmpB(a,b);
}

function pointCmpB(a,b) {
  if ( a.b !== b.b ) return a.b - b.b;
  return pointCmpC(a,b);
}

function pointCmpC(a,b) {
  if ( a.c !== b.c ) return a.c - b.c;
  return pointCmpD(a,b);
}

function pointCmpD(a,b) {
  if ( a.d !== b.d ) return a.d - b.d;
  return pointCmpE(a,b);
}

function pointCmpE(a,b) {
  if ( a.e !== b.e ) return a.e - b.e;
  return pointCmpF(a,b);
}

function pointCmpF(a,b) {
  if ( a.f !== b.f ) return a.f - b.f;
  return pointCmpA(a,b);
}

var pointCmpX = [pointCmpA,pointCmpB,pointCmpC,pointCmpD,pointCmpE,pointCmpF];

function pointDist(a,b) {
  return Math.min(
    (a.a-b.a)*(a.a-b.a),
    (a.b-b.b)*(a.b-b.b),
    (a.c-b.c)*(a.c-b.c),
    (a.d-b.d)*(a.d-b.d),
    (a.e-b.e)*(a.e-b.e),
    (a.f-b.f)*(a.f-b.f)
  );
}

function getRandom(N) {
  var i;
  var points = new Array(N);
  var rand   = new Random(14);

  for ( i=0 ; i<N ; i++ ) {
    points[i] = new Point();
    pointInit(points[i], rand);
  }
  return points;
}

function test(points) {
  var i,j;
  var dist;
  var dists = [];

  for ( i=0 ; i<6 ; i++ ) {
    dist = 0;
    for ( j=0 ; j<points.length ; j++ ) {
      points[j].x = i;
    }
    points.sort(pointCmp);
    for ( j=1 ; j<points.length ; j++ ) {
      dist += pointDist(points[j-1],points[j]);
    }
    dists.push(dist);
  }
  return 'DISTS=' + dists.join(', ');
}

function main_store(N) {
  var timer = new Timer();
  points = getRandom(N);
  console.log('Random: ' + timer.split() + 'ms');
  nodeFs.writeFileSync('./points.json', JSON.stringify(points));
  nodeFs.writeFileSync('./points.js', 'exports.points=' +
                                      JSON.stringify(points) + ';');
  console.log('Store: ' + timer.split() + 'ms');
}

function main_test(points, timer) {
  var i, r;
  for ( i=0 ; i<3 ; i++ ) {
    r = test(points);
    console.log(r + ' : ' + timer.split() + 'ms');
  }
}

function main_random(N) {
  var timer = new Timer();
  var points = getRandom(N);
  console.log('Random: ' + timer.split() + 'ms');
  main_test(points, timer);
}

function main_load() {
  var timer = new Timer();
  var points = JSON.parse(nodeFs.readFileSync('./points.json'));
  console.log('Load: ' + timer.split() + 'ms');
  main_test(points, timer);
}

function main_load2() {
  var timer = new Timer();
  var points = JSON.parse(nodeFs.readFileSync('./points.json')).map(pointLoad);
  console.log('Load: ' + timer.split() + 'ms');
  main_test(points, timer);
}

function main_load3() {
  var timer = new Timer();
  var points = require('./points.js').points;
  console.log('Load: ' + timer.split() + 'ms');
  main_test(points, timer);
}

function main() {
  var N = 300000;
  switch ( process.argv[2] ) {
  case 'R':
    main_random(N);
    break;
  case 'S':
    main_store(N);
    break;
  case 'L':
    main_load();
    break;
  case 'L2':
    main_load2();
    break;
  case 'L3':
    main_load3();
    break;
  default:
    console.log('Unknown mode=' + process.argv[2]);
    break;
  }
}

main();

JavaScript: await async

With Node.js version 8 there is finally a truly attractive alternative to good old callbacks.

I was never a fan of promises, and implementing await-async as a library is not pretty. Now that await and async are keywords in JavaScript, things change.

The below program demonstrates a simple async function doing IO: ascertainDir. It creates a directory, but if it already exists no error is thrown (if there is already a file with the same name, no error is thrown either, and that is a bug, but it will do for the purpose of this article).

There are four modes of the program: CB (callback), PROMISE, AWAIT-LIB and AWAIT-NATIVE. Creating a folder (x) should work. Creating a folder in a nonexistent folder (x/x/x) should fail. Below is the output of the program, and as you see the end result is the same for the different asynchronous strategies.

$ node ./await-async.js CB a
Done: a
$ node ./await-async.js CB a/a/a
Done: Error: ENOENT: no such file or directory, mkdir 'a/a/a'

$ node ./await-async.js PROMISE b
Done: b
$ node ./await-async.js PROMISE b/b/b
Done: Error: ENOENT: no such file or directory, mkdir 'b/b/b'

$ node ./await-async.js AWAIT-LIB c
Done: c
$ node ./await-async.js AWAIT-LIB c/c/c
Done: Error: ENOENT: no such file or directory, mkdir 'c/c/c'

$ node ./await-async.js AWAIT-NATIVE d
Done: d
$ node ./await-async.js AWAIT-NATIVE d/d/d
Done: Error: ENOENT: no such file or directory, mkdir 'd/d/d'

The program itself follows:

     1	var nodefs = require('fs')
     2	var async = require('asyncawait/async')
     3	var await = require('asyncawait/await')
     4	
     5	
     6	function ascertainDirCallback(path, callback) {
     7	  if ( 'string' === typeof path ) {
     8	    nodefs.mkdir(path, function(err) {
     9	      if (!err) callback(null, path)
    10	      else if ('EEXIST' === err.code) callback(null, path)
    11	      else callback(err, null)
    12	    })
    13	  } else {
    14	    callback('mkdir: invalid path argument')
    15	  }
    16	};
    17	
    18	
    19	function ascertainDirPromise(path) {
    20	  return new Promise(function(fullfill,reject) {
    21	    if ( 'string' === typeof path ) {
    22	      nodefs.mkdir(path, function(err) {
    23	        if (!err) fullfill(path)
    24	        else if ('EEXIST' === err.code) fullfill(path)
    25	        else reject(err)
    26	      })
    27	    } else {
    28	      reject('mkdir: invalid path argument')
    29	    }
    30	  });
    31	}
    32	
    33	
    34	function main() {
    35	  var method = 0
    36	  var dir    = 0
    37	  var res    = null
    38	
    39	  function usage() {
    40	    console.log('await-async.js CB/PROMISE/AWAIT-LIB/AWAIT-NATIVE directory')
    41	    process.exit(1)
    42	  }
    43	
    44	  switch ( process.argv[2] ) {
    45	  case 'CB':
    46	  case 'PROMISE':
    47	  case 'AWAIT-LIB':
    48	  case 'AWAIT-NATIVE':
    49	    method = process.argv[2]
    50	    break
    51	  default:
    52	    usage();
    53	  }
    54	
    55	  dir = process.argv[3]
    56	
    57	  if ( process.argv[4] ) usage()
    58	
    59	  switch ( method ) {
    60	  case 'CB':
    61	    ascertainDirCallback(dir, function(err, path) {
    62	      console.log('Done: ' + (err ? err : path))
    63	    })
    64	    break
    65	  case 'PROMISE':
    66	    res = ascertainDirPromise(dir)
    67	    res.then(function(path) {
    68	      console.log('Done: ' + path)
    69	    },function(err) {
    70	      console.log('Done: ' + err)
    71	    });
    72	    break
    73	  case 'AWAIT-LIB':
    74	    (async(function() {
    75	      try {
    76	        res = await(ascertainDirPromise(dir))
    77	        console.log('Done: ' + res)
    78	      } catch(e) {
    79	        console.log('Done: ' + e)
    80	      }
    81	    })());
    82	    break
    83	  case 'AWAIT-NATIVE':
    84	    (async function() {
    85	      try {
    86	        res = await ascertainDirPromise(dir)
    87	        console.log('Done: ' + res)
    88	      } catch(e) {
    89	        console.log('Done: ' + e)
    90	      }
    91	    })();
    92	    break
    93	  }
    94	}
    95	
    96	main()

Please note:

  1. The anonymous function on line 74 would not be needed if main() itself was async()
  2. The anonymous function on line 84 would not be needed if main() itself was async
  3. A function that returns a Promise() (line 19) works as an async function without the async keyword.

Callback
Callback is the old simple method of dealing with asynchronous things in JavaScript. A major complaint has been “callback hell”: if you call several functions in sequence it can get rather messy. I can agree with that, BUT I think each asynchronous call deserves its own error handling anyway (and with proper error handling other options tend to be equally tedious).

Promise
I don't think using a promise (lines 66-71) is very nice. It is of course a matter of habit. One thing is that not all requests on the success path are actual successes in real life, and not all errors are errors (like in ascertainDir). Very commonly you make an http-request which itself is good, but the data you receive is not good, so you want to proceed with error handling. This means that the fulfill case needs to execute the same code as the reject case for some “successful” replies. Promises can be chained, but that typically results in ignoring proper error handling.
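For example (fetchOrder and the reply format are made up), the fulfill branch ends up doing error handling too:

fetchOrder(url).then(function(reply) {
  if ( reply.error )                  // the http request was fine, the data was not
    return handleError(reply.error);
  handleOrder(reply.order);
}, function(err) {
  handleError(err);                   // same handling as in the "success" path
});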

asyncawait library
I think the syntax of the asyncawait library is rather horrible, but it works as a proof of concept for the real thing.

async await native keywords
With the async/await keywords in JavaScript, suddenly asynchronous code can be handled just like in Java or C#. Since it is familiar it is appealing! No doubt it is clean and practical. I would hesitate to mix it with callbacks or promises, and would rather wait until I can do a complete rewrite.

Common sources of bugs in JavaScript are people trying to return from within (callback/promise) functions, people not realising the rest of the code continues to run after the asynchronous call, or things related to variable scopes. I guess in most cases await/async makes these things cleaner and easier, but I would expect problems where it causes unexpected effects when not properly used.

Finally, if you start using the async/await keywords there is no polyfill or fallback for older browsers (maybe Babel can do that for you). As usual, IE seems to lag behind, and you can forget about Node v6 (or earlier). Depending on your situation, this could be a show stopper or no issue at all.

Watch something?
For more details, I can recommend this video on 5 architectures of asynchronous JavaScript.