## Functional Programming Sucks! (it is slow)

Update 2018-08-26: I got into a discussion about never using loops. The code under discussion checked whether a word is a palindrome (the same word when read backwards). I wrote some simple performance tests, and you can draw your own conclusions here.

Update 2017-12-05: I added a new test in the end that came from real code.
Both things are true: functional code is still slower, and Node.js v8 is tightening the gap.

Update 2017-07-17: Below I present numbers showing that functional code is slower than imperative code. It seems this has changed with newer versions of Node.js: functional code has not become faster, but imperative code has become slower. You can read a little more about it in the comments. I will look more into this. Keep in mind that the findings below may be more accurate for Node.js v4-6 than for v8.

Functional programming is very popular with contemporary JavaScript programmers. As I have written before, Functional programming sucks and functional libraries for JavaScript also suck.

In this post I will explain more why Functional Programming sucks. I will start with the conclusion. Read on as long as you want more details.

### Functional Programming practices are bad for performance

It is very popular to feed lambda functions to map(), reduce(), filter() and friends. If you do this carelessly, the performance loss is significant.
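To make the pattern concrete, here is a tiny sketch of my own (not from the benchmark code) showing the same sum-of-squares computation in both styles. The chained version allocates an intermediate array and invokes a lambda per element; the loop does neither:

```javascript
// Hypothetical example: summing squares two ways.
const xs = [1, 2, 3, 4];

// FP style: map() allocates an intermediate array, and a lambda is
// invoked per element by map() and again per element by reduce().
const fpSum = xs.map(x => x * x).reduce((acc, v) => acc + v, 0);

// Imperative style: one loop, no allocations, no function calls.
let impSum = 0;
for (let i = 0; i < xs.length; i++) impSum += xs[i] * xs[i];
```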

It is also popular to work with immutable data. That is, you avoid functions that change (mutate) current state (side effects) and instead you produce a new state (a pure function). This puts a lot of pressure on the garbage collector and it can destroy performance.
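A minimal sketch of that difference (my own illustration, not from the benchmark code):

```javascript
// Immutable update: every append allocates a fresh array that the
// garbage collector must eventually reclaim.
const append = (state, x) => state.concat([x]);

// Mutable update: the array is changed in place; no garbage is produced.
const appendInPlace = (state, x) => { state.push(x); return state; };

const a = [1, 2];
const b = append(a, 3);   // b is a new array; a is untouched by this call
appendInPlace(a, 3);      // a itself is modified
```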

### The Benchmark Problem

Sometimes I entertain myself solving problems on Hackerrank.com. I particularly like the mathematical challenges in the Project Euler section (Project Euler is an independent organisation; HackerRank turns its challenges into programming challenges).

This article refers to Project Euler 32. I will not go into details, but the solution is basically:

1. Generate all permutations of the numbers 1,2,3,4,5,6,7,8,9 (there are 9! of them)
2. For each permutation, check if it is “good” (very few are; for example, the permutation 3,9,1,8,6,7,2,5,4 is good because 39 × 186 = 7254)
3. Print the sum of the good instances

The first two steps give good benchmark problems. I have made different implementations of (1) and (2) and then compared the results.

### Benchmark Results

I have three different permutation generators (all recursive functions):

1. Pure function, immutable data (it may not be strictly pure)
2. Function that mutates its own internal state, but not its input
3. Function that mutates shared data (no allocation/garbage collection)

I also have three different test functions:

1. Tests the original Project Euler problem
2. Simplified test using reduce() and a lambda function
3. Simplified test implemented with a standard loop

I benchmarked on two different systems using Node.js version 6. I have written elsewhere that Node.js performance on Raspberry Pi sucks.

| (seconds) | Euler test / Pure gen | Euler test / Semi gen | Euler test / Shared gen | Functional test / Shared gen | Imperative test / Shared gen | Imperative test / Pure gen |
|---|---|---|---|---|---|---|
| Raspberry Pi v1 (ARMv6 @ 700MHz) | 69 | 23 | 7.4 | 21 | 3.7 | 62 |
| MacBook Air (Core i5 @ 1400MHz) | 0.77 | 0.29 | 0.13 | 0.40 | 0.11 | 0.74 |

Comparing columns 1-2-3 shows the performance of different generators (for Project Euler test)
Comparing columns 4-5 shows the performance of two different test functions (using fast generator)
Comparing columns 5-6 shows the performance of two different generators (for fast simple test)

This shows that the benefit of using shared/mutable data (not running the garbage collector) instead of immutable data is about 5x in performance on the Intel CPU, and even more on the ARM. Likewise, the cost of using reduce() with a lambda function is more than 3x in overall performance on the Intel CPU, and even more on the ARM.

For both the test function and the permutation generator, making either of them functional-slow significantly slows down the entire program.

The conclusion is that unless you are quite sure your code will never be performance critical, you should avoid functional programming practices. It is a lot easier to write imperative code from the start than to scale out your architecture later when your code does not perform.

However, the pure immutable implementation of the permutation generator is arguably much simpler than the iterative (faster) counterpart. When you look at the code you may decide that the performance penalty is acceptable to you. When it comes to reduce() with a lambda function, I think the imperative version is both easier to read and much faster.

Please notice that if your code consists of nice, testable, replaceable parts without side effects, you can optimize later on. The functional principles are most valuable at a higher level: if you define your functions so that they behave like nice FP functions, it does not matter if they are implemented using imperative principles (for performance).
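As a sketch of what I mean (a hypothetical function, not from the benchmark): unique() below behaves like a pure function from the caller's point of view (no visible side effects, same input always gives same output), even though it uses a loop and mutable state internally:

```javascript
// FP-style contract, imperative implementation.
function unique(list) {
  var seen = {};
  var out = [];
  var i;
  for ( i=0 ; i<list.length ; i++ ) {
    if ( !seen[list[i]] ) {
      seen[list[i]] = true;   // mutable state, but private to this call
      out.push(list[i]);
    }
  }
  return out;
}
```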

### Generating Permutations

I used the following simple method for generating permutations. I start with two arrays and send them to my permute-function:

```
head = [];
tail = [1,2,3,4];
```

My permute-function checks if tail is empty; if so, it tests/evaluates head.
Otherwise it generates four new pairs of head and tail (one for each element in tail):

```
permute( [1] , [2,3,4] )
permute( [2] , [1,3,4] )
permute( [3] , [1,2,4] )
permute( [4] , [1,2,3] )
```

The difference in implementation is:

• The pure version generates all of the above 8 arrays as new arrays, using standard array functions.
• The semi-pure version generates its own 2 arrays (head and tail) and then uses a standard loop to change the values of the arrays between the (recursive) calls to permute.
• The shared version simply creates a single head-array and 9 tail-arrays (one for each recursion step) up front. It then reuses these arrays throughout the 9! iterations. (They are not global variables; they are hidden and private to the permutation generator.)

### The simplified test

The simplified test checks if the array is sorted: [1,2,3,4]. Among all the permutations there is always exactly one that is sorted. It is a simple test to implement (especially with a loop).

```
// These functions are part of a "test-class" starting like:
function testorder1() {
  var cnt = 0;

  // Functional test
  this.test = function(p) {
    if ( false !== p.reduce(function(acc,val) {
      if ( false === acc || val < acc ) return false;
      return val;
    }, -1)) cnt++;
  };

  // Iterative test (much faster); in the full program below it lives
  // in its own test-class, testorder2
  this.test = function(p) {
    var i;
    for ( i=1 ; i<p.length ; i++ ) {
      if ( p[i] < p[i-1] ) return;
    }
    cnt++;
  };
}
```

I tried to optimise the functional reduce() version by breaking out a named function. That did not help. I also tried to let the function always return the same type (it now returns false OR a number), but that made no difference either.
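For reference, this is roughly what the named-function variant looked like (a sketch; in the real code it sat inside the test-class and bumped a counter instead of returning a boolean):

```javascript
// The comparison step broken out into a named function, so the very
// same function object is passed to reduce() on every call.
function ordered(acc, val) {
  if ( false === acc || val < acc ) return false;
  return val;
}

// Returns true iff p is sorted in ascending order (values > -1 assumed,
// which holds for permutations of 1..9).
function isSortedReduce(p) {
  return false !== p.reduce(ordered, -1);
}
```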

### All the code

For those who want to run this themselves or compare the permutation functions, here is the entire program.

As mentioned above, the slowest (immutable data) permutation function is a lot smaller and easier to understand than the fastest (shared data) implementation.

```
'use strict';

// UTILITIES

function arrayToNum(p, s, e) {
  var r = 0;
  var m = 1;
  var i;
  for ( i=e-1 ; s<=i ; i-- ) {
    r += m * p[i];
    m *= 10;
  }
  return r;
}

function arrayWithZeros(n) {
  var i;
  var a = new Array(n);
  for ( i=0 ; i<a.length ; i++ ) a[i] = 0;
  return a;
}

// PERMUTATION ENGINES

function permutations0(n, callback) {
}

// IMMUTABLE (SLOWEST)

function permutations1(n, callback) {
  var i;
  var numbers = [];
  for ( i=1 ; i<=n ; i++ ) numbers.push(i);
  permute1([],numbers,callback);
}

function permute1(head, tail, callback) {
  if ( 0 === tail.length ) {
    callback(head);
    return;
  }

  tail.forEach(function(t, i, a) {
    permute1(head.concat(t),
             a.slice(0,i).concat(a.slice(i+1)),
             callback);
  });
}

// MUTATES ITS OWN DATA, BUT NOT ITS ARGUMENTS

function permutations2(n, callback) {
  var i;
  var numbers = [];
  for ( i=1 ; i<=n ; i++ ) numbers.push(i);
  permute2([],numbers,callback);
}

function permute2(head, tail, callback) {
  if ( 0 === tail.length ) {
    callback(head);
    return;
  }
  var h2 = head.concat(tail[0]);
  var t2 = tail.slice(1);
  var i  = 0;
  var tmp;

  while (true) {
    permute2(h2, t2, callback);
    if ( i === t2.length ) return;
    tmp             = h2[h2.length-1];
    h2[h2.length-1] = t2[i];
    t2[i]           = tmp;
    i++;
  }
}

// MUTATES ALL DATA (INTERNALLY) (FASTEST)

function permutations3(n, callback) {
  var i;
  var head  = arrayWithZeros(n);
  var tails = new Array(n+1);

  for ( i=1 ; i<=n ; i++ ) {
    tails[i] = arrayWithZeros(i);
  }

  for ( i=1 ; i<=n ; i++ ) {
    tails[n][i-1] = i;
  }

  function permute3(x) {
    var j;
    var tail_this;
    var tail_next;
    var tmp;
    if ( 0 === x ) {
      callback(head);
      return;
    }
    tail_this = tails[x];
    tail_next = tails[x-1];

    for ( j=1 ; j<x ; j++ ) {
      tail_next[j-1] = tail_this[j];
    }
    head[n-x] = tail_this[0];

    j=0;
    while ( true ) {
      permute3(x-1);

      j++;
      if ( j === x ) return;

      tmp            = head[n-x];
      head[n-x]      = tail_next[j-1];
      tail_next[j-1] = tmp;
    }
  }

  permute3(n);
}

// TEST FUNCTIONS

function testprint() {
  this.test = function(p) {
    console.log(JSON.stringify(p));
  };

  this.done = function() {
    return 'Done';
  };
}

// CHECKS IF PERMUTATION IS ORDERED - FUNCTIONAL (SLOWEST)

function testorder1() {
  var cnt = 0;

  this.test = function(p) {
    if ( false !== p.reduce(function(acc,val) {
      if ( false === acc || val < acc ) return false;
      return val;
    }, -1)) cnt++;
  };

  this.done = function() {
    return cnt;
  };
}

// CHECKS IF PERMUTATION IS ORDERED - IMPERATIVE (FASTEST)

function testorder2() {
  var cnt = 0;

  this.test = function(p) {
    var i;
    for ( i=1 ; i<p.length ; i++ ) {
      if ( p[i] < p[i-1] ) return;
    }
    cnt++;
  };

  this.done = function() {
    return cnt;
  };
}

// TEST FUNCTION FOR PROJECT EULER 32

function testeuler() {
  var sums = {};

  this.test = function(p) {
    var w1, w2, w;
    var m1, m2, mx;
    w =  Math.floor(p.length/2);
    w1 = 1;
    w2 = p.length - w - w1;

    while ( w1 <= w2 ) {
      m1 = arrayToNum(p,     0, w1      );
      m2 = arrayToNum(p,    w1, w1+w2   );
      mx = arrayToNum(p, w1+w2, p.length);

      if ( m1 < m2 && m1 * m2 === mx ) {
        sums['' + mx] = true;
      }

      w1++;
      w2--;
    }
  };

  this.done = function() {
    var i;
    var r = 0;
    for ( i in sums ) {
      r += +i;
    }
    return r;
  };
}

// MAIN PROGRAM BELOW

function processData(input, parg, targ) {
  var test = null;
  var perm = null;

  switch ( parg ) {
  case '0':
    perm = permutations0;
    break;
  case '1':
    perm = permutations1;
    break;
  case '2':
    perm = permutations2;
    break;
  case '3':
    perm = permutations3;
    break;
  }

  switch ( targ ) {
  case 'E':
    test = new testeuler();
    break;
  case 'O1':
    test = new testorder1();
    break;
  case 'O2':
    test = new testorder2();
    break;
  case 'P':
    test = new testprint();
    break;
  }

  perm(+input, test.test);
  console.log(test.done());
}

function main() {
  var input = '';
  var parg = '1';
  var targ = 'E';
  var i;

  for ( i=2 ; i<process.argv.length ; i++ ) {
    switch ( process.argv[i] ) {
    case '0':
    case '1':
    case '2':
    case '3':
      parg = process.argv[i];
      break;
    case 'E':
    case 'O1':
    case 'O2':
    case 'P':
      targ = process.argv[i];
      break;
    }
  }

  process.stdin.resume();
  process.stdin.setEncoding('ascii');
  process.stdin.on('data', function (s) {
    input += s;
  });

  process.stdin.on('end', function () {
    processData(input, parg, targ);
  });
}

main();
```

This is how I run the code (use a value lower than 9 to test fewer than 9! permutations):

```
### Project Euler Test: 3 different permutation generators ###
$ echo 9 | time node projecteuler32.js 3 E
45228
8.95user ...
$ echo 9 | time node projecteuler32.js 2 E
45228
25.03user ...
$ echo 9 | time node projecteuler32.js 1 E
45228
70.34user ...

### Simple check-order test, two different versions. Fastest permutations. ###
$ echo 9 | time node projecteuler32.js 3 O1
1
23.71user ...
$ echo 9 | time node projecteuler32.js 3 O2
1
4.72user ...
```

(the timings here may not exactly match the above figures)

### Update 2017-12-05

Admittedly, I sometimes find map() and filter() handy, and I try to use them when they make the code clearer. I came to a situation where I wanted to split a list in two (one list with valid objects and one with invalid). That is a simple if/else with a push() in each branch, or two calls to filter(). Then it turned out that I also wanted to split the valid objects into two lists: good and ugly. The slightly simplified code is:

```
function goodBadUgly_1(list) {
  var i, c;
  var ret = {
    good : [],
    bad  : [],
    ugly : []
  };
  for ( i=0 ; i<list.length ; i++ ) {
    c = list[i];
    if ( !validateItem(c) )
      ret.bad.push(c);
    else if ( uglyItem(c) )
      ret.ugly.push(c);
    else
      ret.good.push(c);
  }
  return ret;
}

function goodBadUgly_2(list) {
  return {
    good : list.filter(function(c) {
      return  validateItem(c) && !uglyItem(c);
    }),
    bad  : list.filter(function(c) {
      return !validateItem(c);
    }),
    ugly : list.filter(function(c) {
      return  validateItem(c) &&  uglyItem(c);
    })
  };
}
```

On my not too powerful x64 CPU, with a list of about 1000 items, the non-FP version took 6ms and the FP version took 16ms (second run, to let the JIT do its job). This was with Node 8.9.1. With Node 6.11.3 the FP version was slower and the non-FP version ran at almost the same speed (quite consistent with my comment from 2017-07-17 at the beginning).

You may think that of course the FP code is slower: it calls validateItem several times for every item and uglyItem twice for every valid item. Yes, that is true, and that is also my point! When you do FP you avoid (storing intermediate results in) variables. This results in extra work being done a lot of the time. How would YOU implement this in FP style?
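For what it is worth, one FP-leaning answer I can think of is a single pass with reduce() into an accumulator object. This is a sketch; validateItem and uglyItem are the hypothetical predicates from the text, passed in as parameters only to keep the example self-contained:

```javascript
// Single-pass split: each item is visited once and each predicate is
// called at most once per item.
function goodBadUgly_reduce(list, validateItem, uglyItem) {
  return list.reduce(function(acc, c) {
    if ( !validateItem(c) ) acc.bad.push(c);
    else if ( uglyItem(c) ) acc.ugly.push(c);
    else acc.good.push(c);
    return acc;
  }, { good : [], bad : [], ugly : [] });
}
```

Note that the accumulator is mutated between iterations, so a purist would probably not call this functional; it just wears reduce() as a coat.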

This is 10 ms: does it matter? Well, first it is only 1000 objects.

If you do this in a Web GUI when a user clicks a button, the user will wait 10ms longer for everything to be updated. 10ms is not a lot. But if this multiplies (because you have a longer list) or adds up (because you are doing other things in a slower-than-necessary way) the UX will suffer.

If you do this server side, 10ms is a lot. In Node.js you have just one thread, so this overhead eats 1% of all available CPU time per request each second. If you get 10 requests per second, 10% of the CPU is wasted just because you prefer FP style.

This is one of those cases where FP has the same computational complexity but is a constant factor slower. Sometimes it is even worse.

### Update 2018-08-26: Palindrome test

Someone suggested that I check whether a word is a palindrome using “functional programming” and a reverse() function instead of a loop. The alternatives are:

```
const isPalindromeStd = (str) => {
  let a=0;
  let b=str.length-1;
  while ( a<b ) {
    if ( str[a] !== str[b] ) return false;
    a++; b--;
  }
  return true;
};

const reverseStr = (str) => {
  return Array.from(str).reverse().join('');
};

const isPalindromeRev = (str) => {
  return str === reverseStr(str);
};
```

Admittedly, the loop-less version has some simplistic appeal. Both implementations are equally testable and they are used identically, thus allowing the rest of your program to be equally "functional". When it comes to performance, testing on a file of common passwords, Std is 20x faster than the Rev implementation. In fact, for practical purposes Std is on average O(1) while Rev is O(n): Std can return at the first mismatching pair of characters, while Rev always materializes and reverses the entire string. You can see for yourself here.

### All FP-sucks related articles

• Functional Programming Sucks!
• Underscore.js sucks! Lodash sucks!
• Functional Programming Sucks! (it is slow) (this one)
• Lodash Performance Sucks!

### Comments

1. You may want to re-run your tests. My results were completely different. I ran these tests on a MacBook Air (1.6 GHz Intel Core i5), using Node 6.11 and Node 7.8 (no difference between the two). I threw in a console timer to measure the actual execution time (otherwise we’d be including the loading of Node itself), and they all ran in under 10ms. So performance-wise they are equal. You can see the screenshots here:

http://imgur.com/a/ALOd5

You seem not to have taken into account Node V8’s JIT optimizations, namely loop unrolling, which works the same whether the loop is imperative or functional. See the following YouTube link for more info from one of the V8 engineers, where he explains that the majority of JavaScript benchmarking is useless because of the JIT optimizer. In essence, once the optimizer kicks in, we’re actually testing nothing.

Lastly, your functional variant is not pure since it modifies state (var cnt) outside its scope (I saw you mentioned it, just clarifying). Therefore using the Array.reduce method is needlessly verbose when you really want to bail out if the array is not sorted. There is no need to create a new array like reduce does. Performance is the same, as you can see in the images. It’s just ugly looking for no reason at all.

EDIT: See below for an improved version using every() instead of reduce().

If you feel my test methods were off, let me know how you'd like them to be executed.

2. John, appreciate your time and effort! I should have been more clear about how to run the tests 😳

Try:
$ echo 9 | time node projecteuler32.js 1 E

(then you can replace 1 with 2 or 3, and E with O1 or O2)

9 means that 9! permutations are tested.
You can obviously use a lower number. The reason for the strange input format is that this started as a HackerRank exercise reading stdin rather than arguments.

I doubt they all run in 10ms this way. I would much appreciate it if you ran your tests again and let me know what you find.

3. Interesting video about (micro)optimization. However, I don’t think I fell into those traps (I was quite aware of all of them). In my case I read input data (a side effect) and make calculations based on it. All permutations are unique, so they can’t easily be optimized away. In the end I print the result (a side effect), which can never be optimized away.

It is very unlikely, even for the simple check-if-array-is-ordered scenario, that the JIT compiler understands that I generate all permutations exactly once, and that exactly one permutation is the ordered one.

4. Here are the new results. I rewrote the FP function to be more expressive and faster. I also left out your Project Euler function because that’s not really testing functional programming, but rather whether mutations are faster than immutable objects (no one disagrees with that). As far as I know, FP does not forbid side effects or mutations, just asks that one contains them if they are needed.

So really, for the tests to be fair, they should be FP vs imperative using similar data structures. These are the results I got for your sorting function variants:

input 8: (results in seconds)

Immutable permutations:
–FP Variant 0.131s
–Imperative 0.134s

Mutates its own data, but not its args:
–FP Variant 0.052s
–Imperative 0.057s

Mutates all data (internally)
–FP Variant 0.048s
–Imperative 0.022s

input 9: (results in seconds)

Immutable permutations:
–FP Variant 0.943s
–Imperative 0.844s

Mutates its own data, but not its args:
–FP Variant 0.337s
–Imperative 0.308s

Mutates all data (internally)
–FP Variant 0.159s
–Imperative 0.039s

So as you can see, using the FP paradigm with an input of 8, the differences are negligible at no more than ~25ms.

Using an input of 9, the differences are greater, but still negligible at no more than ~120ms. Whereas you were getting difference of ~300-500ms.

So to say that FP is slow is incorrect according to my results. If the algorithm calls for side effects or mutations, then their use must be contained. For the other 99.99% of the app that is not performance critical, FP + immutability should run more than fast enough.

If I made a mistake somewhere, again, let me know as I’m still learning about FP. Here’s the FP testOrder version I used that ran 100ms faster.

```
this.test = p => {
  if (
    p.every((el, idx, arr) => {
      if (idx === 0) return true;
      return el < arr[idx - 1];
    }) !== false
  ) {
    cnt += 1;
  }
};
```
5. Hi John!

I do appreciate the feedback!

First I want to say that the mystery deepens: I failed to reproduce the original 0.11s of my imperative version. It turns out (roughly) that the imperative code runs at different speeds on different Node versions:
Node 5: 0.11s
Node 6: 0.21s
Node 8: 0.40s
With Node 8 the “imperative advantage” is essentially gone, but not because the functional approach got faster: the imperative code got slower. I find this strange.

I do appreciate your contribution of every() rather than reduce(). That makes the FP code algorithmically equivalent to the imperative code, and it is faster than the other functional alternatives.

Now, to my eye the every() implementation is the hardest to read of them all. I find that the usage of idx and arr breaks the cleanliness of an FP approach. This is one thing I think is problematic with FP: it makes it hard to produce clean solutions with the right computational complexity. reduce() is more readable and clean, but obviously not optimal; every() does the right thing, but it is no longer clean.

You write that “FP does not forbid side effects or mutations, just asks that one contains them if they are needed”. I appreciate your pragmatic approach. I think my argument is that FP is quite hard (to do right) and when there is a dogmatic preference for “pure functions” or “use map(), reduce() for everything” the result is not good.

6. John, another thing…

“the differences are negligible”

For many purposes this may be true. And I am not asking for stupid micro-optimization: I just prefer simple, efficient code over less efficient code.

Even though the differences are small (milliseconds, in a benchmark), when you have code that turns out to be performance critical (or just too slow), “small differences everywhere” can actually make a real difference.

If you have a loop calling a function, and that function is executed tens of thousands of times per second, and it in turn operates on an array in a non-optimal way, it can start making a real difference.

I do like:
1) Simple code
2) The right, efficient algorithm
Often I find FP fails to deliver either of these. And as a bonus, you get more overhead with FP as well.

7. There’s a lot to cover, so bear with me till the end.

I tested this in Node 6.11 and 7.8 and I got very similar results in both. The differences you’re seeing between the Node versions are most likely due to the V8 optimization engine. In the video I posted in my first comment, one of Google’s V8 engineers mentioned that Angular claimed their implementation was faster than a previous iteration, when in reality a V8 bug had been corrected. It’s very likely also the case with the imperative test being slower in Node 8: it appears the loop is not being unrolled, possibly bailing out of optimization, leading to the imperative test being slower.

Also, I’m not being pragmatic about using mutations. It’s part of FP thinking. Those who claim mutations must never exist don’t truly understand FP. Here is an article and a university course lecture to back me up regarding the use of mutation in FP.

What FP is “against” is uncontrolled mutation and side effects (I put “against” in quotes because totally eliminating side effects is impossible).

Notice he says “uncontrolled”. That’s the main distinction. Every side effect and mutation must be contained and controlled.

Mutation is not uniformly evil. We’ve seen that we cannot create cyclic data (graphs) without it, for example. Many programs also require that you use mutation to maintain information.

As far as every being harder to read, I disagree. Especially if you write the tests in a true FP style, with one operation per function.

```
this.test = p => {
  if (isSorted(p) !== false) {
    count += 1;
  }
};

const isSorted = p =>
  p.every((element, index, array) => {
    if (index === 0) return true;
    return element < array[index - 1];
  });

// or

const isSorted = p =>
  p.every((element, index, array) =>
    (index === 0 ? true : element < array[index - 1]));
```

While there is more to read, it is much clearer in intent vs a basic loop, or even your reduce variation. Another benefit of FP is knowing what each function does at first glance. Looking at the test function I can tell that if the array is sorted, we increase count by 1. The sorted function simply tells me to check that every element is sorted correctly, and if it's not, to return false and bail out.

Your reduce version, while functional (and less verbose?), is also needlessly mentally complex.

- First, you have to read the end of the function call to find its starting point.
- then check if acc === false, if not
- check if the current value < acc and return false, if not
- return the current value and use it as the acc.
- then repeat for each element until you have a single, final value.

When debugging one must keep all this state in a mental construct, which means more work than needs to be done. Especially since your function can return either false or a numerical value.

Whereas the every version only does as much work as necessary (true to FP paradigm). IOW:

- go through every element (el),
- check if the index === 0 and return true. If not,
- check if the element is less than the previous element in the array (array[index - 1]), if it's not
- return false and bail out (which is why it's faster).

Much easier to reason about, fewer steps, more declarative, and with no state to keep track of. Hence no deciphering of intent, or extra mental load.

And the overhead is minimal according to my tests. Which is why I decided to comment after running your tests. The main performance difference is in mutable state, not FP style code. And as my tests show, for performance critical ops, we can drop back to mutation. If it still needs even more micro-optimizations, then drop back to imperative. But the use cases for those are going to be minimal and contained. Which is the main benefit of the FP paradigm. Controlled mutations and side effects.

8. I cleaned up the code the way I think you meant it to be 🙂 Otherwise please let me know! It is not my intention to make you look bad by editing your comments!

I usually just use
<pre>
</pre>
…good old HTML that happens to work.

I am not sure if you really mean:

```
element < array[index-1]
```

Should it be the other way around? (well, it matters little)

Anyway, I am much confused about this slowdown with newer versions of Node. I will look more into it and try to make something more clear out of it. It may of course be that all my benchmarks were nonsense from the beginning (as the video implied), but I really doubt it.

I will make use of your functional code and think about it.

I also much appreciate the link about mutable state and FP – I have not read it yet!

9. John, I wrote a simpler test program.

$ node ./benchmark.js 7

```
'use strict';

function simpleReduce(array, func, initval) {
  var i;
  var v = initval;
  for ( i=0 ; i<array.length ; i++ ) {
    v = func(v, array[i]);
  }
  return v;
}

// UTILITIES

function arrayWithZeros(n) {
  var i;
  var a = new Array(n);
  for ( i=0 ; i<a.length ; i++ ) a[i] = 0;
  return a;
}

// PERMUTATIONS

function Counter(n) {
  this.max = n;
  this.digits = arrayWithZeros(n);

  this.next = function() {
    var i=0;

    while ( i<this.max ) {
      this.digits[i]++;
      if ( this.max !== this.digits[i] ) return true;
      this.digits[i] = 0;
      i++;
    }
    return false;
  };
}

// TEST FUNCTIONS

function testorderPrint(p) {
  console.log('    ' + p);
  return true;
}

function testorderReduce(p) {
  return p.reduce(function(acc,val) {
    if ( false === acc || val <= acc ) return false;
    return val;
  }, -1);
}

function testorderEvery(p) {
  return p.every((el, idx, arr) => {
    if (idx === 0) return true;
    return el > arr[idx - 1];
  });
}

function testorderSimpleReduce(p) {
  return simpleReduce(p, function(acc,val) {
    if ( false === acc || val <= acc ) return false;
    return val;
  }, -1);
}

function testorderImperative(p) {
  var i;
  for ( i=1 ; i<p.length ; i++ ) {
    if ( p[i] <= p[i-1] ) return false;
  }
  return true;
}

var tests = {
//  'Print'         : testorderPrint,
  'Imperative'    : testorderImperative,
  'SimpleReduce'  : testorderSimpleReduce,
  'Every'         : testorderEvery,
  'Reduce'        : testorderReduce
};

function testrun(testsize, testfunc) {
  var c = new Counter(testsize);
  var r = 0;

  do {
    if ( false !== testfunc(c.digits) ) r++;
  } while ( c.next() );
  return r;
}

function testruns(testsize) {
  var t;
  var t0, t1;
  var r;
  for ( t in tests ) {
    t0 = Date.now();
    r = testrun(testsize, tests[t]);
    t1 = Date.now();
    console.log('  ' + (t1-t0) + ' ms : ' + t + ' (r=' + r + ')');
  }
}

function main() {
  var testsize = 0;

  switch ( process.argv.length ) {
  case 0:
  case 1:
  case 2:
    break; // very unlikely
  case 3:
    testsize = +(process.argv[2]);
    break;
  default:
    console.log('Only one argument allowed');
    return;
  }

  console.log('** Warming up (5) **');
  testruns(5);

  if ( 0 < testsize && testsize < 16 ) {
    console.log('** Actual Run (size=' + testsize + ') **');
    testruns(testsize);

    console.log('** Actual Run (size=' + testsize + ') **');
    testruns(testsize);
  }
}

main();
```

Apart from simplifying it, I also wanted to isolate the testorder function more, so now the permutation function is trivial and cheap.

10. Yeah, you corrected it properly. I used the code tag, not the pre. That’s prob why it came out weird. I really do appreciate you taking the time to test this out with me. Like I said, I’m just getting into FP and running these tests is really helping me. Hope it’s the same for you. 😀

Now, I did find a bug in the Counter function that for some reason is slowing down the FP code. I added a console.time to the function as shown in the code block below, and here’s the output I got. I used a size of 8 because the bug caused 9 perms to crash on my machine.

** Actual Run (size=8) **
counter: 1882.894ms
1884 ms : Every (r=1)

counter: 511.657ms
512 ms : Imperative (r=1)

counter: 2295.055ms
2295 ms : SimpleReduce (r=1)

counter: 4444.126ms
4445 ms : Reduce (r=1)

As you can see, if you discount the time spent in the counter function, the execution times are consistent with my previous findings. I can’t trace the root of the bug though. It looks like it shouldn’t make a difference, so it’s probably a V8 optimization issue, more than a FP issue. Let me know if you get the same results.

```
function Counter(n) {
  console.time('counter');
  this.max = n;
  this.digits = arrayWithZeros(n);

  this.next = function () {
    let i = 0;

    while (i < this.max) {
      this.digits[i]++;
      if (this.max !== this.digits[i]) return true;
      this.digits[i] = 0;
      i++;
    }
    console.timeEnd('counter');
    return false;
  };
}
```
11. Ok, I’m back. lol. After extensive testing, it’s not a bug after all; rather, your new algorithm iterates over many more arrays than before, and I foolishly assumed you had just refactored the original. For a size of 6 there are 46,656 different arrays. For 7 there are 823,543. For 8, a total of 16,777,216.

I must say, this is a pretty clever algorithm that eventually broke the native every method. So I asked around and I came to understand that we weren’t inherently testing FP, instead we were testing Javascript’s functional algorithm. So I modified lodash’s version to decrement vs increment the loop, which gained me an extra 80ms and it produces results which are much, much better. The updated code is at the end. Here are my new results:

** Actual Run (size=6; 46,656 arrays) **
3 ms : Imperative (r=1)
4 ms : Every Lodash (r=1)
22 ms : Native Every (r=1)

** Actual Run (size=7; 823,543 arrays) **
41 ms : Imperative (r=1)
79 ms : Every Lodash (r=1)
122 ms : Native Every (r=1)

** Actual Run (size=8; 16,777,216 arrays) **
835 ms : Imperative (r=1)
1530 ms : Every Lodash (r=1)
2485 ms : Native Every (r=1)

So as you can see, the differences are still negligible up until 823,543 iterations of an array with 7 items. Even at 16,777,216 iterations it’s still less than a 1 second difference. The native Every method however falls behind for reasons unbeknownst to me.

My takeaway is that the native Javascript methods may not be adequate performance-wise. Actually, they most definitely aren’t. But that doesn’t mean FP is inherently slow since it boils down to the algorithm used. And as you can see below, we maintain pure functions and still don’t lose the clarity of intent or the reusability of our functions.

Lodash Every refactored:

```
const every = (array, predicate) => {
  let index = array == null ? 0 : array.length;

  while (--index > 0) {
    if (!predicate(array[index], index, array)) {
      return false;
    }
  }
  return true;
};
```
```

testOrderEvery callback:

```
const testorderEveryLodash = testArr =>
  every(testArr, (el, idx, arr) => el > arr[idx - 1]);
```
12. So, I ignore your thoughts about timing inside the counter function, right?

I did some tests the other day replacing the test-order function with a native C++ function. That was slower. It seems the transition between JavaScript JIT code and native code is not free. I think that is why rewriting reduce() and every() in JavaScript beats the standard library!
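A minimal sketch of how to reproduce that comparison yourself (a hypothetical harness, not the original benchmark; exact timings vary wildly between Node.js versions):

```javascript
// Sketch: native Array.prototype.every vs. a plain-JavaScript rewrite.
// The predicate mirrors the ordering test used in this discussion.
const everyJs = (array, predicate) => {
  for (let i = 0; i < array.length; i++) {
    if (!predicate(array[i], i, array)) return false;
  }
  return true;
};

// A large sorted array so the predicate never short-circuits.
const data = Array.from({ length: 1000000 }, (_, i) => i);
const pred = (el, idx, arr) => idx === 0 || el > arr[idx - 1];

console.time('native every');
const nativeResult = data.every(pred);
console.timeEnd('native every');

console.time('rewritten every');
const rewrittenResult = everyJs(data, pred);
console.timeEnd('rewritten every');

console.log(nativeResult === rewrittenResult); // both agree: true
```

Whether the rewrite actually wins, and by how much, is exactly the part that has shifted between Node.js versions.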

When it comes to “negligible”… in real code you wouldn’t run the same testorder on millions of arrays over and over again. But when real code gets slow (and it does), some time is spent on value-adding things and some time is spent on overhead (using map() instead of for()).

If you run code when your user clicks a button in the GUI: is 5ms negligible? 25ms? 100ms?
If you run code for every HTTP request, 5ms of overhead significantly limits the number of requests per second you can handle.

I am not saying 16M element array is reasonable. But you may have 5-10MB of data in memory and you may need to scan through it several times to update your GUI or produce an HTTP response.

When it comes to your optimized every(): The good thing is that you could eliminate the expensive index === 0 branch (it gets expensive since it is evaluated on every call). But FP is about clarity and being generic first. Now you have reinvented every() and created an everyExceptTheFirst(). That is neither clear nor generic 🙂 I think already using arr[idx - 1] is… well, not as clear as just using one “el” at a time.

The clear thing would perhaps have been to invent an everyPair() function, which would allow something like:

```
const testorderX = testArr =>
  everyPair(testArr, (e0, e1) => e0 < e1);
```
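A minimal sketch of such an everyPair() (hypothetical code, not from the original test suite): it hands the callback adjacent elements directly, so the callback never needs the idx or arr parameters.

```javascript
// everyPair: like every(), but the predicate sees each adjacent pair.
const everyPair = (array, predicate) => {
  for (let i = 1; i < array.length; i++) {
    if (!predicate(array[i - 1], array[i])) {
      return false;
    }
  }
  return true; // vacuously true for arrays with fewer than 2 elements
};

// The ordering test expressed against pairs (hypothetical name):
const isStrictlyIncreasing = arr => everyPair(arr, (e0, e1) => e0 < e1);

console.log(isStrictlyIncreasing([1, 2, 3])); // → true
console.log(isStrictlyIncreasing([1, 3, 2])); // → false
```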

But if there is often a need for a specialised replacement of map(), reduce(), every() then the benefit of clarity and being generic is a bit lost.

The imperative code is:
a) arguably as simple as any functional code
b) fastest
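For reference, a reconstruction (not necessarily the author’s exact code) of the imperative order test being discussed: one loop, one early exit, no allocations.

```javascript
// Imperative ordering test: true if the array is strictly increasing.
const testorderImperative = p => {
  for (let i = 1; i < p.length; i++) {
    if (p[i] <= p[i - 1]) {
      return false; // early exit on the first out-of-order pair
    }
  }
  return true;
};

console.log(testorderImperative([1, 2, 3])); // → true
console.log(testorderImperative([2, 2, 3])); // → false
```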

I don’t mind FP in general, but it is important to see both its value and its limitations.

Look here: There is a "Functional Style (recommended)" and an "Imperative Style".
It just does not make sense.
https://www.rosettacode.org/wiki/Luhn_test_of_credit_card_numbers#Scala

13. I’ll address each of your points. But first, to clarify, we are no longer talking about whether FP is problematically slow, but entering the realm of subjective coding preferences.

1.
Even under your examples of GUI interaction or data management, I’d still say it’s negligible in our multi-core world. FP is thread safe, so spreading your operations over multiple cores would be trivial. Besides, Erlang is an FP language and powers cellphone switches because it guarantees up to nine nines of uptime. That’s 99.9999999% uptime (roughly 30ms of downtime per year). And I can’t think of many heavier operational loads than cellphone switches. So I already knew that FP works at scale. I was more interested in its use for webapps and UI, which this test helped me settle on.

For parallelism, I’ll refer you back to my previous comment’s link, but this time in reference to parallel computing at Google

A friend of mine is on the Google Chrome team. These are his comments on the role of functional programming (versus mutation-based programming) at Google.

What should the type of a search method be? Assume we want (roughly) the same results each time we search on the same terms.

String -> results

What’s the type of a typical command in a system for searching for flights (for example)?

flight-constraints -> void

Both of these operations search massive amounts of data. We therefore want to distribute the computation into smaller chunks that different machines can run in parallel. Which of these two type contracts is easier to parallelize?

Relying on state complicates handling many real-world problems:
Functional code is easy to parallelize; mutation-based code is not.

The average PC stays up roughly 2 months without crashing. If you build your infrastructure on thousands of cheap machines, you’ll have one failing every few minutes. You must be able to move computations to new machines when a machine goes down.

Testing is much harder with shared state or lots of local state.

How do you schedule updates to highly-shared data structures when machines might die mid-computation? Functions have to return effects (ie, operations to perform on data). Now, highly-parallel algorithms such as map-reduce can work with the results.
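A tiny sketch of that “functions return effects” style (hypothetical names): the pure function only describes an update, and a separate step applies it, so a computation lost on a dead machine can simply be rerun elsewhere.

```javascript
// Pure: returns a description of an operation instead of mutating state.
const incrementCounter = (name, by) => ({ op: 'add', key: name, amount: by });

// A separate, replaceable step applies one effect to a state snapshot.
const applyEffect = (state, effect) =>
  effect.op === 'add'
    ? { ...state, [effect.key]: (state[effect.key] || 0) + effect.amount }
    : state;

// Effects are plain data, so they can be collected, shipped, and reduced.
const effects = [incrementCounter('hits', 1), incrementCounter('hits', 2)];
console.log(effects.reduce(applyEffect, {})); // → { hits: 3 }
```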

2.
You claim I reinvented every into an everyFirst function. That’s completely wrong. I’ve decoupled my every function and made it reusable, which is one of the major benefits of FP. Check this out. (I also removed the extra element because, at least to you, it was harder to read.)

```
// Checks if every element is 2x the previous
const testDouble = testArr =>
  every(testArr, (_, idx, arr) => arr[idx] === arr[idx - 1] * 2);

// Checks if every element is sorted in non-decreasing order
const testSorted = testArr =>
  every(testArr, (_, idx, arr) => arr[idx] >= arr[idx - 1]);
```

I could literally make any test that takes advantage of my every function to compare each element in an array. You’d have to rewrite your imperative code for all possible use cases. And repetition is a code smell. To make an everyPair function defeats the purpose of FP and reusable code, just for the sake of using a few less letters.

You also say using el <= arr[idx - 1] is “not as clear as just using one ‘el’ at a time.” Yet, your imperative code shows this:

p[i] <= p[i - 1]

It’s literally the exact same notation (except I used four more letters?). But that’s just personal preference at this point. Nothing about either is inherently more difficult to read.

3.
I don’t know Scala, but that looks needlessly complex. And that code isn’t vetted; some random person on the internet wrote it, so it can’t be used as the bible of what FP code should look like. Instead, look at this imperative vs FP code and tell me which is more readable. Yet, the poster claims some of his students still found the original easier to understand. It’s just a matter of “learning to think in FP”. And we all know that programmers don’t like learning new ways of doing things. 😆

https://softwareengineering.stackexchange.com/a/158721/6605

4.
Most importantly, I’d say that FP is hard to do right. Not because it’s inherently hard, but because most people claiming to be FP’ers are not really coding in the paradigm. And that is probably because the first paradigm most of us learn is imperative. I must reiterate:

Using functional methods is not the definition of Functional Programming

And that’s where I see a lot of people go wrong. To claim that is akin to claiming to use OOP simply because one uses objects. OOP without inheritance is like FP without modular, reusable functions. Non-existent.

So really, the FP I wrote is:
1. Just as readable as the imperative version
2. More modular, leading to reusable code, hence less places for bugs
3. Just as fast except for micro-optimizations

I’m sorry, but according to my test results, FP sucking because it’s slow is not a reality.

I do thank you for setting up this test because I learned a lot trying to make them equal. Your last test was especially convincing and proved to me that FP in Javascript is not performant, not because of FP, but because of ECMA’s algorithms. Should I need something more performant, refactoring the methods won’t be an issue.

In the end it comes down to preference in style. What you find readable, I find verbose, and vice-versa. What you find easier to debug, I may not. And that’s ok. But objectively speaking, FP is on par with imperative.

14. John, you are very reasonable and mostly correct 🙂

“Using functional methods is not the definition of Functional Programming.
And that’s where I see a lot of people go wrong. To claim that is akin to
claiming to use OOP simply because one uses objects. OOP without inheritance
is like FP without modular, reusable functions. Non-existent.”

My problem is I work with (relatively junior) programmers in JavaScript exclusively (and JavaScript is not multithreaded anywhere ever).
I see code like this (really):

```
a.forEach(function(v0, i0, a0) {
  b.forEach(function(v1, i1, a1) {
    if (i0 == i1) {
      ...
    }
  });
});
```

…and programmers don’t want to replace it with a single for-loop because for-loops are bad and FP is great and lambda functions are cool.

(to me) A functional approach would be to write a NAMED PURE function of the true loop body (…).
If the loop itself happens to be a while-loop, a for-loop or something else: use the simplest solution. Not two nested forEach.
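A sketch of that refactoring with hypothetical names, assuming the elided inner body only did work when i0 == i1 (as the condition suggests): the loop body becomes a named pure function, and the two nested forEach calls collapse into one plain loop.

```javascript
// combine() is a named pure function standing in for the elided "..." body.
const combine = (v0, v1) => ({ left: v0, right: v1 });

// One loop over matching indices replaces the O(n*m) nested forEach.
const combineAll = (a, b) => {
  const result = [];
  const n = Math.min(a.length, b.length);
  for (let i = 0; i < n; i++) {
    result.push(combine(a[i], b[i]));
  }
  return result;
};

console.log(combineAll([1, 2], ['x', 'y']));
// → [ { left: 1, right: 'x' }, { left: 2, right: 'y' } ]
```

The named pure function is what stays testable and reusable; whether the loop around it is for, while, or something else is an implementation detail.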

And I just hear: its better, its testable, its fast enough…

I always pick the most simple algorithm and implementation.
In the original code there was a “pure, nothing mutable” implementation vs an “all mutable” implementation.
I much prefer the pure functional code because it is so much easier. If it is not fast enough I can optimize it later.

I am fine with FP. Really.
But “FP-hype” is making real programmers write a lot of horrible code “in the name of FP”.
I just want to call out this madness.

My little series of FP-sucks is about making people think and consider (and understand) what they really do. And admittedly I found nothing on Google when I searched for FP sucks so I decided to fill that spot.

I think it is highly confusing to make a new “every” function that omits the first element.
I think (just like you) it is great to make and reuse an “everyExceptTheFirst” function (or an everyPair function). But I am sceptical of the widespread use of the index and array parameters because that was not how they were meant to be used.

15. First, anyone who’s using a nested forEach loop should be shot on sight 👿 That’s the worst piece of evil to bestow on others’ eyes. FP is about small, modular, decoupled code. Emphasis on small. Your approach is actually more functional than theirs, I’d say. I doubt there’s anything decoupled in their implementation.

I honestly thank you for challenging me to think this thoroughly. You say there are no “FP sucks” articles out there, and I’d counter there’s very little “this is what FP actually is”. I agree that nearly all of it is hype, with little substance. This gitbook cleared things up for me and mostly stayed away from math and hype. I’d suggest anyone that’s lost on FP concepts read it.