Lambda Functions considered Harmful

Decades ago engineers wrote computer programs in ways that modern programmers scorn. We learn that functions were long, global variables were used frequently and changed everywhere, variable naming was poor, and gotos jumped across the program in ways that were impossible to follow. It was all harmful.

Elsewhere, mathematicians were building on Lisp, and functional programming was developed: pure, stateless, provable code focusing on what to do rather than how to do it. Functions became first-class citizens, and they could even be anonymous lambda functions.

Despite the apparent conflict between object-oriented, functional and imperative programming, some things are universally good:

  • Functions that are not too long
  • Functions that do one thing well
  • Functions that have no side effects
  • Functions that can be tested, and that also are tested
  • Functions that can be reused, perhaps even general ones
  • Functions and variables that are clearly named

So, how are we doing?

Comparing different styles
I read code and I talk to people who have different opinions about what is good and bad code. I decided to implement the same thing following different principles and discuss the different options. I particularly want to explore different ways to do functional programming.

My language of choice is JavaScript because it allows different styles, it requires rather little code, and many people should be able to read it.

My artificial problem is this: I have two arrays of N numbers each. One number from each array can be added in N×N different ways. How many of these sums are prime? That is, for N=2, if I have [10,15] and [2,5] I get the sums [12,15,17,20], of which one number (17) is prime. In all code below I decide whether a number is prime in the same simple way.

Old imperative style (imperative)
The old imperative style would use variables and loops. If I had goto in JavaScript I would use it instead of setting a variable (p) before breaking out of the inner loop. This code allows nothing inside it to be tested or reused, although the function itself is testable, reusable and pure (for practical purposes and correct input, just as all the other examples).

  const primecount = (a1,a2) => {
    let i, j;
    let d, n, p;
    let retval = 0;


    for ( i=0 ; i<a1.length ; i++ ) {
      for ( j=0 ; j<a2.length ; j++ ) {
        n = a1[i] + a2[j];
        p = 1;
        for ( d=2 ; d*d<=n ; d++ ) {
          if ( 0 === n % d ) {
            p = 0;
            break;
          }
        }
        retval += p;
      }
    }
    return retval;
  };

Functional style with lambda-functions (lambda)
The functional-programming equivalent would look like the code below. I have focused on avoiding variable declarations (which would lead to mutable state) and instead used the higher-order function reduce to iterate over the two lists. This code, too, allows no parts to be tested or reused. In a few lines of code there are three unnamed functions, none of them trivial.

  const primecount = (a1,a2) => {
    return a1.reduce((sum1,a1val) => {
      return sum1 + a2.reduce((sum2,a2val) => {
        return sum2 + ((n) => {
          for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
          return 1;
        })(a1val+a2val);
      }, 0);
    }, 0);
  };

Imperative style with separate test function (imperative_alt)
The imperative code can be improved by breaking out the prime test into a separate function. The advantage is clearly that the prime function can be modified in a cleaner way, and it can be tested and reused. Also note that the usefulness of goto disappears, because return fulfills the same task.

  const is_prime = (n) => {
    for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
    return 1;
  };

  const primecount = (a1,a2) => {
    let retval = 0;
    for ( let i=0 ; i<a1.length ; i++ )
      for ( let j=0 ; j<a2.length ; j++ )
        retval += is_prime(a1[i] + a2[j]);
    return retval;
  };

  const test = () => {
    if ( 1 !== is_prime(19) ) throw new Error('is_prime(19) failed');
  };

Functional style with lambda and separate test function (lambda_alt)
In the same way, the reduce+lambda code can be improved by breaking out the prime test function. That function, but nothing else, is now testable and reusable.

  const is_prime = (n) => {
    for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
    return 1;
  };

  const primecount = (a1,a2) => {
    return a1.reduce((sum1,a1val) => {
      return sum1 + a2.reduce((sum2,a2val) => {
        return sum2 + is_prime(a1val+a2val);
      }, 0);
    }, 0);
  };

  const test = () => {
    if ( 1 !== is_prime(19) ) throw new Error('is_prime(19) failed');
  };

I think I can do better than any of the four above examples.

Functional style with reduce and named functions (reducer)
I don’t need to feed anonymous functions to reduce: I can give it named, testable and reusable functions instead. A challenge with reduce is that it is not very intuitive. filter can be used with any has* or is* function that you may already have. map can be used with any x_to_y function, or some get_x_from_y getter or reader function that is also often useful. sort requires a cmpAB function. But reduce? I decided to name the functions below that are used with reduce reducer_*. It works quite nicely.

The first one, reducer_count_primes, simply counts primes in a list. That is (re)useful and testable all by itself. The next function, reducer_count_primes_for_offset, is less likely to be generally reused (with offset=1 it considers 12+1 to be prime, but not 17+1), but it makes sense and it can be tested. Doing the same trick one more time with reducer_count_primes_for_offset_array and we are done.

These functions may never be reused. But they can be tested, and that is often a great advantage during development. You can build up your program part by part, and every step is a little more potent but still completely pure and testable (I remember this from my Haskell course long ago). This is how to solve hard problems using test-driven development and to have all tests in place when you are done.

  const is_prime = (n) => {
    for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
    return 1;
  };

  const reducer_count_primes = (s,n) => {
    return s + is_prime(n);
  };

  const reducer_count_primes_for_offset = (o) => {
    return (s,n) => { return reducer_count_primes(s,o+n); };
  };

  const reducer_count_primes_for_offset_array = (a) => {
    return (s,b) => { return s + a.reduce(reducer_count_primes_for_offset(b), 0); };
  };

  const primecount = (a1,a2) => {
    return a1.reduce(reducer_count_primes_for_offset_array(a2), 0);
  };

  const test = () => {
    if ( 1 !== [12,13,14].reduce(reducer_count_primes, 0) )
      throw new Error('reducer_count_primes failed');
    if ( 1 !== [9,10,11].reduce(reducer_count_primes_for_offset(3), 0) )
      throw new Error('reducer_count_primes_for_offset failed');
    if ( 2 !== [2,5].reduce(reducer_count_primes_for_offset_array([8,15]),0) )
      throw new Error('reducer_count_primes_for_offset_array failed');
  };

Using recursion (recursive)
Personally I like recursion. I think it is easier to use than reduce, and it is great for async code. The bad thing about recursion is that your stack will eventually get full (if you don't know what I mean, try my code, available below) for recursion depths that are far from unrealistic. My problem can be solved in the same step-by-step, test-driven way using recursion.

  const is_prime = (n) => {
    for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
    return 1;
  };

  const primes_for_offset = (a,o,i=0) => {
    if ( i === a.length )
      return 0;
    else
      return is_prime(a[i]+o) + primes_for_offset(a,o,i+1);
  };

  const primes_for_offsets = (a,oa,i=0) => {
    if ( i === oa.length )
      return 0;
    else
      return primes_for_offset(a,oa[i]) + primes_for_offsets(a,oa,i+1);
  };

  const primecount = (a1,a2) => {
    return primes_for_offsets(a1,a2);
  };

  const test = () => {
    if ( 2 !== primes_for_offset([15,16,17],2) )
      throw new Error('primes_for_offset failed');
  };
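
To see the stack limitation in practice (a sketch; the exact limit depends on the JavaScript engine and its stack size), a long enough array makes the recursion fail:

  // Each element adds a stack frame (the calls are not tail calls), so with a
  // large array this throws "RangeError: Maximum call stack size exceeded"
  // on a default Node.js configuration (assumption: default stack size).
  primes_for_offset(new Array(1000000).fill(10), 1);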

Custom Higher Order Function (custom_higher_order)
Clearly reduce is not a perfect fit for my problem, since I need to nest it. What if I had a reduce-like function that produced the sum over all N×N possible pairs from two arrays, given a custom value function? That would be quite great, and it is not particularly hard either. In my opinion this is a very functional approach (despite being implemented with for-loops). All the functions written are independently reusable in a way not seen in the other examples. The problem with higher-order functions is that they are pretty abstract, so they are hard to name, and they need to be general enough to ever be reused for practical purposes. Nevertheless, if I see one right away, I write it. But I don't spend time inventing generic stuff instead of solving the actual problem at hand.

  const is_prime = (n) => {
    for ( let d=2 ; d*d<=n ; d++ ) if ( 0 === n % d ) return 0;
    return 1;
  };

  const combination_is_prime = (a,b) => {
    return is_prime(a+b);
  };

  const sum_of_combinations = (a1,a2,f) => {
    let retval = 0;
    for ( let i=0 ; i<a1.length ; i++ )
      for ( let j=0 ; j<a2.length ; j++ )
        retval += f(a1[i],a2[j]);
    return retval;
  };

  const primecount = (a1,a2) => {
    return sum_of_combinations(a1,a2,combination_is_prime);
  };

  const test = () => {
    if ( 1 !== is_prime(19) )
      throw new Error('is_prime(19) failed');
    if ( 0 !== combination_is_prime(5,7) )
       throw new Error('combination_is_prime(5,7) failed');
    if ( 1 !== sum_of_combinations([5,7],[7,9],(a,b)=> { return a===b; }) )
       throw new Error('sum_of_combinations failed');
  };

Lambda Functions considered harmful?
Just as there are many bad and some good applications for goto, there are both good and bad uses for lambdas.

I actually don't know if you, the reader, agree with me that the second example (lambda) offers no real improvement over the first example (imperative). On the contrary, it is arguably more complex conceptually to nest anonymous functions than to nest for-loops. I may have done the lambda example wrong, but there is much code out there written in that style.

I think the beauty of functional programming is the testable and reusable aspects, among other things. Long, or even nested, lambda functions offer no improvement over old spaghetti code there.

All the code and performance
You can download my code and run it using any recent version of Node.js:

$ node functional-styles-1.js 1000

The argument (1000) is N, and if you double N, execution time should quadruple, since the work grows as N×N. I did some benchmarks, and your results may vary depending on plenty of things. The figures below are just one run for N=3000, but nesting reduce clearly comes at a cost. As always, if what you do inside reduce is expensive, the overhead is negligible. But using reduce (or any of the built-in higher-order functions) for the innermost and tightest loop is wasteful.

 834 ms : imperative
 874 ms : custom_higher_order
 890 ms : recursive
 896 ms : imperative_alt
1015 ms : reducer
1018 ms : lambda_alt
1109 ms : lambda
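
For reference, a harness along these lines (a sketch, not my actual benchmark file; the random input generation is an assumption) reproduces the shape of the experiment:

  // Build two arrays of N random numbers and time primecount
  // (primecount is any of the implementations above).
  const N = Number(process.argv[2] || 1000);
  const rand = () => 2 + Math.floor(Math.random() * 1000000);
  const a1 = Array.from({ length: N }, rand);
  const a2 = Array.from({ length: N }, rand);

  const t0 = Date.now();
  const count = primecount(a1, a2);
  console.log(count + ' primes in ' + (Date.now() - t0) + ' ms');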

Other findings on this topic
Functional Programming Sucks


Fix broken Marshall Stanmore

First, I want to be clear: this post is about fixing a broken Marshall Stanmore speaker by turning it into an active loudspeaker. It is not about restoring its original functionality.

My Marshall Stanmore died after about two years of little use. One day it simply did not turn on. Completely dead. It seems to be a common fate of those loudspeakers and there seems to be no easy fix. I opened up the loudspeaker and quite quickly decided that I would not be able to repair it.

I felt very certain that the loudspeaker elements themselves were not broken. The loudspeaker looks and sounds quite good, and it is against my nature to just throw such a thing away. So I started looking for ways to make a working active loudspeaker of it (allowing me to use it with an iPhone or as a computer speaker). Since I thought this was a fun project I was willing to put some time and effort into it. But a brand new Marshall Stanmore is 200 Euros, so the fix had to be significantly cheaper than that.

2.1
The Stanmore is a 2.1 loudspeaker. It has two tweeters and one woofer. The cutoff frequency is 2500 Hz, meaning that the tweeters are responsible for frequencies above 2500 Hz and the woofer for the lower frequencies. There are different ways to properly produce 2.1 audio from a 2.0 signal. If I remember correctly the tweeters are rated at 2x20W and the woofer at 40W. I don't know the impedance (Ohm).

The thing not to do
It is not a good idea to simply wire L and R together and connect them to the woofer. Regardless of whether you do this before or after the amplifier, you will drive current into components that are only supposed to produce a signal, and this can destroy your equipment (your smartphone or computer pre-amp, or your amplifier).

Cutoff filters
There are special cutoff filters to split a signal into a lower and a higher part. I looked into this first, but it seemed a bit too advanced (expensive and complicated) for my project, and the problem of mixing L+R remains.

2.1 Amplifiers
There are 2.1 amplifiers to buy. The problem is that they are designed for use with a subwoofer (very low frequencies), not a woofer crossing over at 2500 Hz. This may or may not be a problem.

Mono
If I had a mono amplifier (one that accepts stereo input and produces mono output) I could connect all three loudspeakers to the same output. Since the distance between the tweeters is less than 25cm I don't think the lack of stereo tweeters will matter. However, it was not very easy to find suitable mono amplifiers (or “bridged amplifiers” that can be used as mono amplifiers).

Two-trick-solution
In the end I decided to go for a simple solution based on two parts.

First, at the pre-amp stage, it is very easy to convert stereo to mono. The only things needed are two resistors (470 Ohm, or something close to that).

Second, a 2.0 amplifier can drive the tweeters on one channel and the woofer on the other (that is, 40W on each channel).

Cleaning out the Stanmore
I removed (unscrewed) the back of my Stanmore. When I was done with it, the only things that remained (and in place) were:

  • The box itself (except the back of it).
  • The three loudspeaker elements, with as long cables as possible.
  • The top gold-colored control panel (because removing it would not make anything prettier) and the board attached to it (because it was hard to remove).
  • The cable (black+white+red) from the 3.5mm connection on top of the loudspeaker.
  • The 4 red cables from the on/off-switch.

What I also needed
This is a list of other components I used

Assembly
I neatly connected everything in a way that it fits nicely inside the Stanmore.

  • DC power to two red cables connected to the Stanmore power switch
  • The other two red cables to the Jtron board (make sure not to reverse polarity!)
  • One Jtron channel connected to yellow+black of the woofer
  • One Jtron channel connected to red/blue+black of the tweeters
  • Black of the 3.5mm connector to Jtron input (middle)
  • Red/white of the 3.5mm connector connected via two 470 Ohm resistors
  • From between the resistors, connect to Jtron input (left and right)

This is what I got:

As you can see the Jtron is pretty small.

For now my laptop DC supply is outside the Stanmore and there is just a little hole in the back for the cable.

Operating
The power switch on top is operational and I connect my audio source to the 3.5mm connection on top. The Jtron knobs work as expected (there is no balance).

About the Jtron
The Jtron was a very good price, and I thought 2x50W was kind of optimal for me. Also, it is a digital amplifier with high power efficiency (little excess heat). There are obviously many other options.

Serial vs Parallel
I connected my tweeters in parallel. I suppose they could have been wired in series instead. Perhaps series would have been safer: the impedance would be 4x higher, which would be less demanding on the Jtron.

Review
Well, I shall not review my own work. To be honest I have not fitted a new back panel yet, and I think not having it in place is far from optimal for audio quality. Despite that, the Stanmore sounds very decent. It plays loud enough for me (perhaps louder than before). You probably want to experiment with bass/treble until satisfied. The way I use it (with an iPhone) I set a loud preset volume and mostly use the iPhone to control volume.

What I have lost compared to the original Stanmore is the RCA input, Bluetooth, and the volume/treble/bass controls on top of the unit. I can live with that.




Making use of Nokia N8 in 2019

I made a serious attempt to make good use of my Nokia N8 in 2014, but I gave up on it and put it in a box. Now it has turned out that I need a regular phone for answering incoming phone calls, and I had kept my Nokia N8 in its box these last years. Also good: I had flashed it with Belle Delight 6.4 before putting it in the box, so without any effort I had a clean and (reasonably) up-to-date mobile.

After a few hours of charging I inserted a mini-SIM and turned my old friend on.

I replaced my Nokia N8 with a Sony mobile, but in recent years I have used a regular iPhone. This is what I find now:

  • The N8 is small (small display, narrow, short, light, but a little fat). The size is great in the hand, but the screen is small (especially for browsing and typing).
  • There is a sluggishness in the UI. I remember this was unfortunately true when I bought the N8, and it did not go away with the much-needed but too-little-too-late upgrades that came for it (Anna, Belle).
  • The “Home Screen” as a separate UI from the folder-based navigation is unfortunately a bit awkward compared to iOS (and the home screen was also a too-little-too-late feature of the Symbian upgrades).
  • The web browser has outdated certificates and is basically useless. I suggest you go to m.opera.com immediately, download and install Opera, and use it exclusively. Opera is fine, but everything feels really tiny on the screen. Unfortunately I have some certificate problems with Opera as well.
  • Multitasking? Well, kind of. I managed to crash Opera without being able to kill it, and had to restart the mobile.

This is still the way I remember Symbian. It feels a bit like Win95 or MacOS 9 when you are used to Windows 2000 or MacOS X. It is not fast/snappy, not entirely stable and a bit awkward.

Still, the N8 could have been an amazing mobile even in 2019, if it had a good enough operating system and applications. That is never going to happen. I could dream or speculate, but whatever… life goes on.

Nevertheless, my Nokia N8 is back to real duty in my home and it may very well be for another 3-4 years. It feels a bit like a time machine and it brings me good memories. I will perhaps write more about it, and I would perhaps sell it to a genuine enthusiast, but I guess a perfectly fine N8 is easy to find.

Vue components in Angular

I have an application written in AngularJS (v1) that I keep adding things to. Nowadays I prefer to write new code for Vue.js rather than AngularJS but rewriting the entire AngularJS application is out of the question.

However, when the need for a new page (a controller in AngularJS) shows up, it is quite simple to write a Vue component instead.

The AngularJS-html looks like this:

<div ng-if="page.showVue" id="{{ page.getVueId() }}"></div>

You may not have exactly “page” but if you have an AngularJS-application you know how to do this.

Your parent Angular controller needs to initiate Vue.

page.showVue = true;
var vue      = null;
var vueid    = null;

page.getVueId = function() {
    if ( !vueid ) {
        vueid = 'my_vue_component_id';
        var vueload = {
            el: '#' + vueid,
            template : '<my_vue_component />',
            data : {}
        };
        $timeout(function() {
            vue = new Vue(vueload);
        });
    }
    return vueid;
};
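
For completeness: the template '<my_vue_component />' above assumes a globally registered component. A minimal sketch (with hypothetical content) could look like:

// Registered before the Angular controller runs (a sketch; Vue 2 global API)
Vue.component('my_vue_component', {
    template: '<div>Hello from Vue inside AngularJS</div>'
});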

At some point you may navigate away from this vue page and then you can run the code:

vue.$destroy();
page.showVue = false;
vue          = null;
vueid        = null;

The way everything works: when Angular wants to “show Vue” it sets page.showVue=true. This in turn activates the div, which needs an ID. The call to page.getVueId() will create the Vue component (once), but initiate it only after Angular has rendered the parent div with the correct id (thanks to $timeout).

You may use a router or have several different Vue pages in your Angular application, and you obviously need to adjust my code above for your purposes (so that every id is unique, and every component is initiated once).

I suppose (but I have not tried) that it is perfectly fine to have several different Vue components mounted in different places in your Angular application. But I think you are looking for trouble if you want Vue to use (be a parent of) Angular controllers or directives (as children).

Vue.js is small enough that this comes at a quite acceptable cost for your current Angular application, and it allows you to write new pages or parts in Vue inside an existing AngularJS application.

Webpack: the shortest tutorial

So, you have some JavaScript that requires other JavaScript using require, and you want to pack all the files into one. Install webpack:

$ npm install webpack webpack-cli

These are my files (a main file with two dependencies):

$ cat main.js 

var libAdd = require('./libAdd.js');
var libMult = require('./libMult.js');

console.log('1+2x2=' + libAdd.calc(1, libMult.calc(2,2)));


$ cat libAdd.js 

exports.calc = (a,b) => { return a + b; };


$ cat libMult.js 

exports.calc = (a,b) => { return a * b; };

To pack this:

$ ./node_modules/webpack-cli/bin/cli.js --mode=none main.js
Hash: 639616969f77db2f336a
Version: webpack 4.26.0
Time: 180ms
Built at: 11/21/2018 7:22:44 PM
  Asset      Size  Chunks             Chunk Names
main.js  3.93 KiB       0  [emitted]  main
Entrypoint main = main.js
[0] ./main.js 141 bytes {0} [built]
[1] ./libAdd.js 45 bytes {0} [built]
[2] ./libMult.js 45 bytes {0} [built]

and I have my bundle in dist/main.js. This bundle works just like the original main.js:

$ node main.js 
1+2x2=5
$ node dist/main.js 
1+2x2=5
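
If you prefer a shorter command, a minimal webpack.config.js (a sketch relying on webpack 4 defaults; not something I strictly needed) lets you run the CLI without arguments:

// webpack.config.js: with webpack 4, output defaults to dist/main.js
module.exports = {
  mode: 'none',
  entry: './main.js'
};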

That is all I need to know about Webpack!

Background
I like the old way of building web applications: including every script with a src tag. However, occasionally I want to use code I have not written myself, and more and more often it comes in a format that I cannot easily include with a src tag. Webpack is a (or the) way to make it “just” a JavaScript file that I can do what I want with.

Linux Apps on Chromebook R13! Finally!

For a while I have owned an Acer R13 Chromebook that I have occasionally used as a development computer. I have been using Crouton, which is quite OK, but it always felt like something was missing.

Lately there have been talks and writings about Crostini, container technology on Chrome OS, which enables Linux applications to run in Chrome OS. This differs from Crouton in several ways:

  1. The Chromebook does not need to start/run in unsafe developer mode.
  2. There is a Terminal App, instead of running Crosh in the browser

This is perhaps not huge, but it is a step closer to making Chrome OS more universally usable.

New features are added to Chrome OS at different times for different devices. Just now (according to this thread), “Linux” was added to the Development channel for the Acer R13. I can confirm it. I changed channels to Beta (R71) and got nothing. I then switched to the Dev channel (R72) and now I finally have Linux (Beta).

I guess in a few weeks R72 will have made it to Stable. If everything seems fine I will probably switch back to Stable, disable Developer mode and never touch Crouton again.

The terminal gives me a standard Debian Stretch system. The terminal itself is very minimal. I am aware of these shortcuts:

  • Ctrl + : make text larger
  • Ctrl – : make text smaller
  • Ctrl Shift P : open preferences
  • Ctrl Shift N : open a new terminal window

Compared to the terminal applications I am used to in Debian or macOS this is pretty basic (and a limitation of Chrome OS is that you can't just install another terminal application easily).

Unfortunately, I tried to install something (the screen command) using apt-get, and something went wrong. apt-get did not finish, and when I opened another terminal it crashed. A restart of the computer fixed this, though. Later, during more real work, things crashed on me again. So there is clearly a reason for this to be in Development rather than Beta or Stable today, but it is very promising and fun nevertheless.

A new MacMini! Finally!

After well over four years Apple upgraded the MacMini!

Like most other people I have only read about it, and I may never own one. But I can have opinions about it anyway (as I have had before)!

First, it is clearly a fine machine. If you buy it, put it on your desk and make use of it, it will probably serve you well for many years.

I think it is a shame that the SSD is (as it seems as I write this) not a standard replaceable M.2/PCIe unit. It would be easy and trivial, and cost nothing, to make it a user-replaceable part (just like the memory appears to be). To me, this is a reason not to buy it. A non-replaceable SSD is worse than non-replaceable RAM.

Previously, especially in its earlier versions, the MacMini has been quite limited. It has seemed that Apple did not want it to compete with the MacPro or the iMac in any way. Now it is possible to equip a MacMini with an i7 CPU, 64GB of RAM and a lot of SSD storage. I think Apple has accepted that professionals will get it for doing real work (Adobe stuff and perhaps programming, I would guess). That is a somewhat interesting shift.

However, the entry-level MacMini costs twice as much as the entry-level MacMini did when it was at its cheapest. Apple seems to have abandoned the idea of selling it to regular consumers on a limited budget or as a dedicated media machine. I think that is a pity. Also, if it cost $449 it would be less of a problem that the storage is not replaceable.

The GPU is not really for gaming, and I keep wondering why Apple does not release a computer suitable for Steam and market it with “pro gamers approve: it runs the top 10 games at 1920×1080”. However, external GPUs are becoming a real thing, and perhaps a quite standard MacMini with external storage and an external GPU could be a reasonable machine for gaming. But I don't regret buying my Hades Canyon for gaming.

A “cheap” Apple laptop still seems like the best option for staying in the Apple ecosystem, but the new MacBook Air is also quite pricey (and how could they get rid of MagSafe?). I owned my first Mac in 1993 and I am typing this on a MacBook Air 11, but it may be the last Apple computer I ever own.

NUC Hades Canyon Review

Computers don’t have to be large anymore. Apple has the MacMini and the MacPro is also very compact. You can get a long way with a Raspberry Pi nowadays. And I particularly like Intel NUCs.

How about gaming? Occasionally I play Windows games (using Steam) that require a gaming computer. I needed to replace my old gaming computer (an Intel i5 2450 @ 3.1GHz, I think, with a Radeon 9000 graphics card) and decided to give the gaming NUC a try: the Hades Canyon, or NUC8i7HVK.

It's a barebone machine the size of a broadband router, so I got an M.2 SSD (500GB) and RAM (2x8GB) and installed Windows 10 on it (the natural choice for gaming; I have also heard this NUC does not work well with Linux).

Well, after a week with this machine I like it. Installation was smooth. It looks quite good and it is very small. Most of the time it is completely silent. Sometimes during gaming the fans spin up, but it does not sound worse than my old desktop (quite the opposite, I would say). It is obviously not the most powerful gaming machine but it replaced my old machine with no trouble.

Well, for benchmarks and details, read a “real” review.

I am satisfied with the Hades Canyon as a gaming computer. It will be interesting to see if I am happy with it in a few years, or if it turns out to have a short service life.

Arrow functions in JavaScript: A strategy

Arrow functions have been a part of JavaScript since ES6. They are typically supported wherever you run JavaScript, except in Internet Explorer. To be clear, an arrow function is:

(a,b) => a+b

instead of

function(a,b) { return a+b }

I like to make things simple, and

  1. my code sometimes runs on Internet Explorer
  2. arrow functions offer shorter and simplified syntax in some cases, but fundamentally you can write the same code with function
  3. I like not having a build step (babel, webpack and friends) for a language that really does not and should not need one

so, until now I have simply avoided them (and kind of banned them, along with other ES6 features) in code and software I am responsible for.

However

  1. arrow functions (as part of ES6) are here to stay
  2. they offer some advantages
  3. Internet Explorer will go away.

so, it makes sense to have a strategy for when to use arrow functions.

What I find on the Internet
The Internet is full of sources telling you how you can use arrow functions, how to write them, what are the pros, cons and pitfalls, and what they cannot do.

  • The key difference is how arrow functions work with this (see the sketch after this list).
  • The syntax is shorter, especially for single-argument (needs no parentheses), single-statement (needs no return) functions.
  • Arrow functions don't work well with object-oriented things (such as constructors and prototype functions).
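
A minimal sketch of the this difference (hypothetical counter object; the interval callbacks are only there to expose the binding):

const counter = {
  count: 0,
  startArrow() {
    // The arrow function takes `this` from the enclosing scope,
    // so this.count is counter.count and the counter increments.
    setInterval(() => { this.count++; }, 1000);
  },
  startPlain() {
    // A plain function gets its own `this` (whatever the environment
    // provides), so counter.count is never updated here.
    setInterval(function () { this.count++; }, 1000);
  }
};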

In short, there are some cases where you can’t use arrow functions, some cases where they offer some real advantages, but in most cases it makes little real difference.

Arrow functions allow you to chain sort().filter().map() in very compact ways. With simple single-statement arrow functions it is quite nice. But if the arrow functions become multiple lines, I think it is poor programming.
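
For example, a compact chain over hypothetical data:

const people = [
  { name: 'ann', active: true,  score: 7 },
  { name: 'bob', active: false, score: 9 },
  { name: 'cat', active: true,  score: 3 }
];

const topNames = people
  .filter(p => p.active)             // keep active people
  .sort((a, b) => b.score - a.score) // highest score first
  .map(p => p.name);                 // ['ann', 'cat']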

What I don’t really find on the Internet
I don’t really find good advice on when to use arrow functions and when not to use arrow functions. I mean, when I program, I make decisions all the time:

  • Should I break this code out into a function?
  • Should this be an object (prototype style) or just data?
  • Should I break this code into its own module?
  • Should I write tests for this?
  • Should I allow a simple, slower algorithm, or should I add effort and complexity to write my code faster?
  • What should be the scope of these variables?
  • Should this be a parameter or can it be hard coded?
  • Can I make good use of map/reduce/every and friends, or is it better to just use a loop?
  • Naming everything…
  • …and so on…

Using, or not using, an arrow function is also a choice. How do I make that choice to ensure my code is good? I don’t really find very clear guidelines or style guides on this.

Lambda functions in other languages
Other languages have lambda functions. Those are special-case anonymous functions. The thing I find peculiar about the use of arrow functions in JavaScript is that they are often used instead of function, where a standard function, not a lambda, would have been the obvious choice in other languages.

Intention
For practical purposes, function and () => {} are most often interchangeable. And I guess you could write any JavaScript program using only arrow functions.

When you write code, it mostly does not matter what you use.
When you read code, it comes down to understanding the intention of the writer.

So I think good use of arrow functions is a way that makes the intention of the code as clear as possible. I want clear and consistent guidelines.

Using arrow functions in well defined cases shows more intention and contributes to more clear code than never using them.

I tend to read arrow functions as a strong marker for functional programming. I find it confusing when arrow functions are used in code that breaks other good core principles of functional programming.

The strongest cases
The strongest cases for arrow functions I can see:

Minimal syntax (no () or {} required), and never worth breaking such a function out:

names = stuffs.map(stuff => stuff.name);

Callback: the arguments (error, data) are already given by openFile, and the callback function cannot have a meaningful this. Also, for most practical purposes, the callback needs closure access to data in the parent scope, so it cannot be a named function declared elsewhere:

openFile('myFile', (error, data) => {
  ... implementation
});

When it makes little difference
For a regular function it makes no difference:

const swapNames = (a,b) => {
  let tmp = a.name;
  a.name = b.name;
  b.name = tmp;
}

The function alternative would be:

function swapNames(a,b) {

and is actually shorter. However, with arrows I can appreciate that it is completely clear from the beginning that a binding of this can never happen, that the function cannot be used as a constructor, and that there can be no hidden arguments (accessed via arguments).

Confused with comparison
There are cases when arrow functions can be confused with comparison.

// The intent is not clear
var x = a => 1 ? 2 : 3;
// Did the author mean this
var x = function (a) { return 1 ? 2 : 3 };
// Or this
var x = a <= 1 ? 2 : 3;

Obfuscate with higher order functions
Higher-order functions (map, reduce, filter, sort) are nice and can improve your code. But carelessly used they can be confusing and obfuscating.

This is not the fault of () => {} in itself. But it is a consequence of arrow functions making higher-order functions too popular.

I have, for example, seen things like:

myArray.map(x => x.print())

map() should not have side effects. It is outright obfuscating to feed a function that has a side effect into map(). And side effects have no place in functional programming in the first place.

I have also seen reduce() and filter() being used when every(), some() or find() would have been the right choice. It is obfuscating, it is expensive, and it produces more code than necessary.
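
A sketch of that kind of misuse, with hypothetical data:

const items = [ { id: 1, active: false }, { id: 2, active: true } ];

// Obfuscating and wasteful: builds an intermediate array and scans everything
const anyActive = items.filter(x => x.active).length > 0;

// Clear, and stops at the first match
const anyActiveBetter = items.some(x => x.active);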

The use of arrow functions with higher order functions is only appropriate when the correct higher order function is used.

The abusive cases
An anonymous function that is non-trivial and could clearly be named, reused and tested is clearly bad code:

myStuff.sort((a,b) => {
  if ( a.name < b.name ) return -1;
  if ( a.name > b.name ) return  1;
  if ( a.id   < b.id   ) return -1;
  if ( a.id   > b.id   ) return  1;
  return 0;
});

especially when the code is duplicated or the parent function is large.

An arrow-friendly policy
Admittedly, after doing my research I feel happier with arrow functions than I thought I would.

I suggest (as long as your runtime supports them) using arrow functions as the default kind of function. The reason is that they do less. I think the standard behavior of arguments and this, and the OOP concepts (prototype and constructors), should be optional and require explicit use (of function).

Just as one-line if-statements and if-statements without {} should be used carefully (I tend to abuse them myself), I think the same applies to arrow functions.

I think this is excellent:

names = stuffs.map(stuff => stuff.name);

but apart from those common simple cases I think the full syntax should be used for clarity:

const compareItems = (a,b) => {
  if ( a.name < b.name ) return -1;
  if ( a.name > b.name ) return  1;
  if ( a.id   < b.id   ) return -1;
  if ( a.id   > b.id   ) return  1;
  return 0;
};

(don't try to be clever by omitting (), {}, or return).

The use of function should be reserved for the following (see the sketch after this list):

  • constructors
  • prototype functions
  • functions that need the standard behavior of this
  • functions that do things with arguments
  • source files where function has been used exclusively since before
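
For instance (a minimal sketch with hypothetical names), constructors and prototype functions still need function:

// An arrow function has no own `this` and cannot be called with `new`,
// so `function` is required here.
function Point(x, y) {
  this.x = x;
  this.y = y;
}

Point.prototype.norm = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y);
};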

Basic good functional programming practices should be especially respected when using arrow functions:

  • Don't duplicate code: break out anonymous functions into named functions when appropriate
  • Don't write long functions: break out anonymous functions into named functions when appropriate
  • Avoid side effects and global variables
  • Use the correct higher order function for the job

Also, obviously, take advantage of OOP and function when appropriate!

Callback functions
I think anonymous callback functions should generally be kept short.

const doStuff = () => {
  readFile('myFile', (error, data) => {
    if ( error )
      console.log('readFile failed: ' + error);
    else
      doStuffWithData(data);
  });
};

const doStuffWithData = (data) => {
  ...
};

Performance
In principle, I see no reason why arrow functions should not be at least as fast as regular functions. In practice, the current state of JavaScript engines could be disappointing; I don't know.

However, a named static function is typically faster than an anonymous inline function. A JIT can typically optimize a function better the more it is run, so named and reusable functions are preferred.

I have made no benchmarks on arrow functions.

Feedback
I will start using arrow functions when I write new code and I feel enthusiastic about it. I will probably come across things I have not thought about. Do you have any thoughts on this? Let me know!

Syncthing crashes on RPi and Arch Linux

One of my Syncthing servers started crashing (again). It is a Raspberry Pi v2 running Arch Linux. Syncthing was at 0.14.44 (I think); I upgraded and got 0.14.48.1. Still not stable.

So I downloaded the Syncthing binary from the Syncthing project itself instead of using the one that comes with Arch Linux. That seems to work better.

While trying different things I did a database reset:

$ syncthing -reset-database     (does not start syncthing)
$ syncthing

This is not the first time Syncthing has misbehaved on a Raspberry Pi, and I am beginning to question whether it is so smart to store my files on a Raspberry Pi with a USB drive.