Author Archives: zo0ok

On Grit and becoming a better programmer

I have read the book Grit by Angela Duckworth. It brought some obvious (or not) things to my attention:

1. To really get better at something you need to challenge yourself, try more difficult things, not just repeatedly do what you are already capable of. (my words, not a quote from the book)

If you think of an athlete, a high jumper, this seems very obvious (you are never going to jump 2.00m if you keep practicing 1.50m every day).

2. Mastering something is about allowing yourself to dig deeper, getting more depth and seeing more details (than a novice).

If you think of a sports commentator (making remarks about subtle technical details in figure skating or gymnastics), this also seems fairly obvious.

What are programmers told to learn
I often hear advice to programmers about how to learn and work. I would say it is mostly about trying new things:

  • Learn new programming languages
  • Learn new tools and libraries (that simplify things)

While these are obviously good things to learn, neither is particularly challenging, nor is it really about going deeper. And when it comes to tools and libraries that simplify things, you perhaps trade deep understanding for easy productivity.

It is not very often I hear the advice to try to solve a hard and challenging problem using tools you already know very well. And it is also not very common that I hear the advice to go into very intricate details about anything.

Programmers seem to value, or be valued by, knowledge about things allowing them to find shortcuts or ready building blocks:

  • Libraries and frameworks – to write no code, less code or avoid writing difficult code
  • Different programming languages – to pick a language that makes it easy
  • Patterns and methodology – to avoid difficult technical analysis and design
  • …and soft skills, of course

All these skills are quite easily described in a CV. But none of it is particularly difficult or challenging.

What is truly hard about programming
To implement a correct solution a programmer needs to:

  • Understand the problem or problem domain correctly (and perhaps completely)
  • Come up with a solution and software architecture that can actually solve the problem
  • Go through the labour of correctly crafting the required code
  • …and this is of course done iteratively, learning and adapting along the way (because getting everything right from the beginning is often impossibly hard), so you need to accept making imperfect or insufficient decisions that at least lead you in the right direction for now

This can perhaps be called problem solving or seniority in a CV, but problem solving is a rather abstract cliché, and seniority is often measured in years more than anything else. It can also appear to be covered by things like requirements analysis, patterns, TDD and agile. But those things are about how to plan, facilitate and manage the difficult things. You can know a lot about TDD without being able to write test cases that describe the problem domain correctly, and without being able to implement an algorithm that solves the problem sufficiently well.

A balanced training
Back to athletes. Golfers (let's call them athletes) used to not be very fit. Then came Tiger Woods. Since then, all (top) golfers go to the gym (to be able to compete). To me, this is like programming: you can be a good programmer, but if you don't know git you are simply not very competitive.

But golfers spend most of their time mastering their swing (or in the gym, or with a shrink). They don't also do horseback riding, pole vaulting and marathon running. Or if they do, they at least don't think it is key to becoming a better golfer. But when it comes to programmers, this is often what we do: learn a new language, a new framework or a new service. As if it would significantly make us better (even though it is no challenge at all, just some time spent).

No similes (or metaphors) are perfect. Golf is not programming. Most programmers don’t aspire to be among the best in the world. But I think the question is worth asking:

Do I, as a programmer, have the right mix of hard/challenging practice and trying/learning new stuff?

Learning in the workplace
In our workplaces, they don't want us to work with things that are so challenging that we might very well fail. That is a project risk. And IT projects fail far too often for anyone to be satisfied. It is not at all strange that organisations want us to work with things we already know, and otherwise mitigate the risk by making things easier. But do we learn this way? And do we, 5-10 years down the road, reach our potential and develop the capabilities that would benefit us the most?

Is there a genuine conflict between making things as easy and productive as possible on one hand and improving skills on the other?

For whom do we learn?
I don’t know if programmers who challenge themselves and become masters with deep knowledge are rewarded for it. I don’t know if most organisations even want such programmers. I already hear the complaints:

  • No one else understands her code (but if the problem was THAT hard, what was the poor master going to do?)
  • She is just inventing things instead of using standard tools
  • She is not following best practices

Also, who will ever know what a great job such a programmer does? It is like:

  1. This integration routine is unstable and too slow, can you try to fix it? (let's say it is very hard)
  2. Master fixes it in 4 days
  3. Some suspicious comments: yeah sure?!
  4. After the next week no one remembers, and it's just taken for granted that it works the way it always should have

Don’t do as they say they do, do as they do!
I can’t back this up, but I have the feeling that the best programmers we know are people who challenged themselves with insane projects. But what we hear is that programmers are valued by the number of technologies they know.

I would think that smart organisations know how to identify and appreciate a master. And I think master programmers eventually find themselves in those smart organisations. But I think it happens mostly below the radar.

Example: git
Before git there was Subversion (which improved on CVS) and a number of commercial version control systems. These were, like all tools, both appreciated and hated, and using them was best practice.

Now Master Torvalds was not happy. He challenged existing technologies and designed and wrote his own system: git.

However, what I find fascinating here is that he wrote git in C. People complained about it of course. But git was fine because Torvalds

  1. deeply knew the problem domain,
  2. designed git very well,
  3. implemented it in a language he mastered.

It is like this: you could hardly argue for implementing a system like git in C, but in the end git could not have been better (smaller, faster, more portable) had it been implemented in any other language.

I guess for the rest of us the question is always:

  1. should we use a proven solution and take the easy path?
  2. should we invent our own solution possibly using the crude tools we master?

But if we are never arrogant enough to go for #2, how will we ever grow to be able to go for #2 when it is really needed of us?

The Hard Way
There is a (somewhat infamous) book series and online courses about learning to code the hard way. Many programmers like C/C++ perhaps partly because the fact that it is difficult and even a bit unsafe is fun. I think somehow JavaScript has the same appeal in a different way.

Many hackers seem to be struggling with the impossible even though it is hardly worth it from a rational perspective.

I sometimes entertain myself with Hackerrank.com (especially Project Euler). Some challenges are truly hard (so I have not solved them). Some I have solved after weeks of struggle (often using Lisp or C). I used to judge myself, thinking it was an absolute waste of time. On top of everything it made me a bit stressed and gave me occasional sleeping problems because I could not stop thinking about a problem. I am about to reconsider it. And perhaps it is the really hard challenges, that I fail to solve properly, that I should focus on.

Conclusion
I have left a number of unanswered questions in this post. I don’t have answers. But I think it is worth considering the question: do I reach my potential as a programmer the way I try to learn?

VIM: Disable autoindent

More and more often I find that Vim comes with auto-indentation enabled. I don't want that.

Perhaps the best way to fix this annoyance is to add the following to your .vimrc file.

" Switch off all auto-indenting
set nocindent
set nosmartindent
set noautoindent
set indentexpr=
filetype indent off
filetype plugin indent off

I found these exact lines here.

Acer Chromebook R13: 3. As a Linux development workstation

I have got an Acer Chromebook R13 and I will write about it from my perspective.

1. Background
2. As a casual computer
3. As a Linux development workstation (this post)

As a Linux development workstation
I switched my Chromebook to Development mode and everything that follows depends on that.

In ChromeOS you can hit CTRL-ALT-T to get a crosh shell. If you are in Development mode you can run shell to get a regular “unix” shell. You now have access to all of ChromeOS. It looks like this:

crosh> shell
chronos@localhost / $ ls /
bin     dev  home  lost+found  mnt  postinst  root  sbin  tmp  var
debugd  etc  lib   media       opt  proc      run   sys   usr
chronos@localhost / $ ls ~
'Affiliation Database'          login-times
'Affiliation Database-journal'  logout-times
Bookmarks                       'Media Cache'
Cache                           'Network Action Predictor'
Cookies                         'Network Action Predictor-journal'
Cookies-journal                 'Network Persistent State'
'Current Session'               'Origin Bound Certs'
'Current Tabs'                  'Origin Bound Certs-journal'
databases                       'Platform Notifications'
data_reduction_proxy_leveldb    Preferences
DownloadMetadata                previews_opt_out.db
Downloads                       previews_opt_out.db-journal
'Download Service'              QuotaManager
'Extension Rules'               QuotaManager-journal
Extensions                      README
'Extension State'               'RLZ Data'
Favicons                        'RLZ Data.lock'
Favicons-journal                'Service Worker'
'File System'                   'Session Storage'
GCache                          Shortcuts
'GCM Store'                     Shortcuts-journal
GPUCache                        Storage
History                         'Sync App Settings'
History-journal                 'Sync Data'
'History Provider Cache'        'Sync Extension Settings'
IndexedDB                       'Sync FileSystem'
'Last Session'                  Thumbnails
'Last Tabs'                     'Top Sites'
local                           'Top Sites-journal'
'Local App Settings'            'Translate Ranker Model'
'Local Extension Settings'      TransportSecurity
'Local Storage'                 'Visited Links'
log                             'Web Data'
'Login Data'                    'Web Data-journal'
'Login Data-journal'
chronos@localhost / $ uname -a
Linux localhost 3.18.0-16387-g09d1f8eebf5f-dirty #1 SMP PREEMPT Sat Feb 24 13:27:17 PST 2018 aarch64 ARMv8 Processor rev 2 (v8l) GNU/Linux
chronos@localhost / $ df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/root                1.6G  1.4G  248M  85% /
devtmpfs                 2.0G     0  2.0G   0% /dev
tmp                      2.0G  248K  2.0G   1% /tmp
run                      2.0G  456K  2.0G   1% /run
shmfs                    2.0G   24M  1.9G   2% /dev/shm
/dev/mmcblk0p1            53G  1.3G   49G   3% /mnt/stateful_partition
/dev/mmcblk0p8            12M   28K   12M   1% /usr/share/oem
/dev/mapper/encstateful   16G   48M   16G   1% /mnt/stateful_partition/encrypted
media                    2.0G     0  2.0G   0% /media
none                     2.0G     0  2.0G   0% /sys/fs/cgroup
tmpfs                    128K   12K  116K  10% /run/crw

This is quite good! But we all know that starting to install things and modifying such a system can cause trouble.

Now, there is a tool called Crouton that allows us to install a Linux system (Debian or Ubuntu) into a chroot. We can even run X if we want. So, I would say that for doing development work on your Chromebook you have (at least) 5 options:

  1. Install things directly in ChromeOS
  2. Crouton: command line tools only
  3. Crouton: xiwi – run X and (for example) XFCE inside a ChromeOS window
  4. Crouton: X – run X side by side with ChromeOS
  5. Get rid of ChromeOS and install (for example) Arch instead

I will explore some of the options.

#2. Crouton command line tools only
For the time being, I don’t really need X and a Window Manager. I am fine (I think) with the ChromeOS UI and UX. After downloading crouton I ran:

sudo sh ./crouton -n deb-cli -r stretch -t cli-extra

This gave me a Debian Stretch system without X, named deb-cli (in case I want to have other chroots in the future). Installation took a few minutes.

To access Debian I now need to

  1. CTRL-ALT-T : to get a crosh shell
  2. crosh> shell : to get a ChromeOS unix shell
  3. $ sudo startcli : to get a shell in my Debian Stretch system

This is clearly a sub-optimal way to get a shell tab (and closing the shell takes 3x exit). However, it works very well. I installed Node.js (for ARMv8) and in a few minutes I had cloned my Node.js git project, installed npm packages, run everything and even pushed some code. I ran a web server on 127.0.0.1 and I could access it from the browser just as expected (so this is much smoother than a virtual machine).

For my purposes I think this is good enough. I am not very tempted to get X up and running side-by-side with ChromeOS. However, I obviously would like things like shortcuts and virtual desktops.

Actually, I think a chroot is quite good. It does not modify the base system the way package managers for OS X tend to do. I don’t need to mess with PATH and other variables. And I get a more complete Debian system compared to just the package manager. And it is actually the real Debian packages I install.

I installed Secure Shell and Crosh Window, allowing me to change some default parameters of the terminal (by hitting CTRL-SHIFT-P), so at least I don't need to adjust the font size for every terminal.

#4. Crouton with XFCE
Well, this was going so well that I decided to try XFCE as well.

sudo sh ./crouton -n deb-xfce -r stretch -t xfce,extensions

It takes a while to install, but when done just run:

sudo startxfce4

The result is actually pretty nice. You switch between ChromeOS and XFCE with CTRL-ALT-SHIFT-BACK/FORWARD (the buttons next to ESC). The switching is a little slow, but it gives you a (quite needed) virtual desktop. Install crouton extensions in ChromeOS to allow copy-paste. A good thing is that I can run:

sudo enter-chroot -n deb-xfce

to enter my xfce chroot without starting X and XFCE. So, for practical purposes, I can have an X chroot but I don't need to start X if I don't want to.

screen
After a while I have uninstalled XFCE and I only use crouton with cli. The terminal (part of the Chrome browser) is a bit sub-optimal. My idea is to learn to master screen, however:

$ screen
Cannot make directory '/run/screen': Permission denied

This is easily fixed though (link):

mkdir ~/.screen
chmod 700 ~/.screen

# add to .bashrc
export SCREENDIR=$HOME/.screen

# and a vim "alias" I found handy
svim () { screen -t $1 vim $1; }

I found that I get problems when I edit UTF-8 files in Vim, in screen, in crouton, in a crosh shell. Without screen there are also issues, but slightly fewer. It seems to be a good idea to add the following line to .vimrc:

set encoding=utf8

It improves the situation, but there are still a few glitches.

Now at least screen works. It remains to be seen if I can master it.

lighttpd
I installed lighttpd just the normal Debian way. It does not start automatically, but the normal way works:

$ sudo service lighttpd start

If you close your last crouton-session without stopping lighttpd you get:

$ exit
logout
Unmounting /mnt/stateful_partition/crouton/chroots/deb-cli...
Sending SIGTERM to processes under /mnt/stateful_partition/crouton/chroots/deb-cli...

That stopped lighttpd after a few seconds, but I guess a manual stop is preferred.

Performance
I have written about NUC vs RPi before and, to be honest, I was worried that my ARM Chromebook would be closer to the poor performance of the RPi than to the decent performance of the NUC. This turned out not to be a problem; the Acer R13 is generally fast enough.

After a few Node.js tests, it seems the Acer Chromebook R13 is about 5-6 times faster than an RPi V2.

A C-program (some use of 64-bit double floats, little memory footprint) puts it side-by-side with my Celeron/NUC:

                (s)
RPi V1        142
RPi V2         74
Acer R13       12.5
Celeron J3455  13.0
i5-4250U        7.5

Benchmarks are always tricky, but I think this gives an indication.

Acer Chromebook R13: 2. As a casual computer

I have got an Acer Chromebook R13 and I will write about it from my perspective.

1. Background
2. As a casual computer (this post)
3. As a Linux development workstation

As a casual computer

My general impressions of the Acer Chromebook R13 are positive. The display is good (I am not used to Full HD on a laptop) and the build quality in general is more than acceptable.

What works well, quite literally out of the box:

  1. English language with non-English keyboard
  2. Connect to 5GHz WiFi
  3. Editing Google Docs, Facebook, Youtube
  4. Google Play Store for Android Apps (required a restart for a system upgrade)
  5. Spotify App (in Mobile App format), streaming audio via Bluetooth to external speaker
  6. Netflix App (failed to mirror/play to external display)
  7. Netflix Web Page (could display video on TV over HDMI)
  8. Writing this blog post…
  9. Switch to tablet mode, use touch and type on virtual keyboard on display (well, it sucks compared to a real keyboard, but it works as could be expected)
  10. Printing to a local network printer: CUPS comes preinstalled (there are other options as well, but for me CUPS is perfect)
  11. Importing photos from a micro-sd-card taken with a camera. VERY rudimentary (crop/rotate/brightness) editing available.

The good
So far my impression is that the performance is very acceptable. I used some JavaScript-heavy web pages and it was surprisingly good.

The not so good
Compared to my MacBook Air, the touchpad is not as nice. Scrolling web pages is more… jerky? I would have preferred the keyboard closer to the display and the touchpad farther away from me. At least the touchpad is nicely centered. To be fair, the touchpad is at least as good as on more expensive PC laptops.

Performance and Benchmarks
My own Web Worker Test indicates my MacBook Air (1.4GHz Intel i5) is about 2-3 times faster (both computers using Chrome browser). However, on OS X, Safari seems to be much faster than Chrome browser on some tests and outperforms the Chromebook up to 10x on some tests. This is quite pure JavaScript number crunching.

My own String Compare Test indicates the MacBook Air is about 50% faster (Chrome browser in both cases).

Things not quite there
I have been using my Chromebook more or less daily and there isn’t much I actually miss. But here is a short list (that may grow or shrink over time).

  • A graph plotter/calculator: Grapher in OS X is not amazing but better than what I found for Chrome OS. So far I have tried Plot and Graph Functions and Desmos Graphing Calculator

Developer mode
So far I have not touched the Developer mode. Everything is completely standard and I will leave it like that for a while.

Acer Chromebook R13: 1. Background

I have got an Acer Chromebook R13 and I will write about it from my perspective.

1. Background (this post)
2. As a casual computer
3. As a Linux development workstation

Background
For the last 20 years I have used OS X (since 10.0), Windows (since NT4), and many Linux distributions. These systems all have their pros and cons. In recent years, Chromebooks running Chrome OS (which is Linux) have appeared. They are typically cheap and built for the cloud. However, there are two things that make them particularly interesting:

  1. Chromebooks (modern ones) can run Android Apps
  2. Chromebooks are much used in schools, so the children of today will start looking for jobs in a few years, perhaps knowing only Chromebooks

I am too curious not to want one (perhaps mostly to be disappointed).

A few years ago I thought about getting a Chromebook, but at the time I felt it was not going to satisfy me. I bought a MacBook Air 11 instead, which is a great laptop for my purposes. However, I agree less and less with what Apple does, and I would rather have a native Linux laptop than a Mac.

There are several reasons why I bought an Acer Chromebook R13 as my first Chromebook:

  • It has got good reviews (although it is not the latest Chromebook on the market).
  • I like the quality aluminium build (it almost reminds me of my Titanium PowerBook G4).
  • It has a touchscreen and can be used as a tablet or in tent mode.
  • It should run Android Apps very well with its ARM CPU.

I am enthusiastic and curious about the ARM CPU for several reasons. I like an underdog and after Spectre/Meltdown I think that we need all possible alternatives to Intel. I am also curious to see if the ARM performs decently enough for my needs (and I might get disappointed).

I hope to get decent quality and some new opportunities compared to MacBook Air.

As a standard user
Most of the time I am a very ordinary computer user. I browse the internet, pay my bills, send and receive emails, watch Youtube, write something using Google Docs and I do some basic photo editing. I kind of expect the Chromebook to do this just as well as my MacBook Air.

As a programmer
I am a programmer. I mostly code JavaScript for Node.js and the web, but I also code C, C++, Lisp, Python, Bash, or whatever I feel like (mostly for fun, sometimes for work). I don’t use very advanced tools (mostly Vim, actually) and I really feel comfortable with a Linux shell. Even Mac OS X with its many package managers feels foreign. Not to talk about how I am lost in Windows.

I understand Chrome OS is Linux. It comes with a terminal. It has a Developer mode. And I can install almost anything I want using crouton (or so I have read).

My hope is that my Chromebook, for most practical purposes, will work like Linux the way I expect (more so than OS X). My hope is also that the ARM CPU will have reasonable JavaScript performance. I may end up disappointed.

Raspbian – kerberos not found

I have this very strange error on my RPi V2 with Raspbian (8.0). I suspect I will throw away the memory card and never fix it, but I will document the error for future reference.

My problem was that curl, ssh, sshd suddenly did not work. When I start the web browser I get “I/O error”. This screenshot shows (at least a symptom of) the problem.

I tried to reinstall ssh and curl:

$ apt-get install --reinstall curl

and that did not help.

Apart from this, the system works ok. It shuts down and starts properly. No I/O errors in dmesg. I doubt I will ever figure this one out. It seems the system is corrupt at the disk level, probably an SD-card problem, and a new install on a new SD-card is the only way forward.

Upgrading Qnap TS109 from Jessie to Stretch

I have an old Qnap TS109 NAS that has been running Debian since long. I have previously written about my upgrade to Wheezy and Jessie.

The upgrade to Stretch is the same procedure, and it was fine in the end, but…

In a very late phase of the upgrade I got:

update-initramfs: Generating /boot/initrd.img-4.9.0-5-marvell
flash-kernel: installing version 4.9.0-5-marvell

The initial ramdisk is too large. This is often due to the unnecessary inclusion
of all kernel modules in the image. To fix this set MODULES=dep in one or both
/etc/initramfs-tools/conf.d/driver-policy (if it exists) and
/etc/initramfs-tools/initramfs.conf and then run 'update-initramfs -u -k 4.9.0-5-marvell'

Not enough space for initrd in MTD 'RootFS1' (need 4210887 but is actually 4194304).

Well, the MODULES=dep thing does not help, but there is another fix: you can compress your initramfs image. The procedure is described at the end of troubleshooting TS109.

The very short procedure is (I recommend you read the real article above):

echo "COMPRESS=xz" > /etc/initramfs-tools/conf.d/compress
apt-get install xz-utils
update-initramfs -u

Apart from that little problem, Stretch is just fine on QNAP TS109.

Vue.js: loading template html files

You may want to code your Vue.js application in such a way that your html templates are in separate html files, but you still do not want a build/compile step. Well, the people writing Vue don't want you to do this, but it can easily be done.

VueWithHtmlLoader-library
I wrote a little library that simply does what is required in a rather simple way. I will not hold you back and I will show you by example immediately:

  • A Rock-paper-scissors Vue-app, all in 1 file: link
  • A Rock-paper-scissors Vue-app, modularised with separate html/js files: link
  • Source of VueWithHtmlLoader library: link

These are the code changes needed to use VueWithHtmlLoader:

 * 1) After including "vue.js", and
 *    before including your component javascript files,
 *    include "vuewithhtmlloader.js"
 *
 * 2) In your component javascript files
 *    replace: Vue.component(
 *       with: VueWithHtmlLoader.component(
 *
 *    replace: template: '...'
 *       with: templateurl: 'component-template.html' (replace with your url)
 *
 * 3) The call to "new Vue()" needs to be delayed, like:
 *    replace: var myVue = new Vue(...);
 *       with: var myVue;          
 *             function initVue() {
 *               myVue = new Vue(...);
 *             }
 *             VueWithHtmlLoader.done(initVue);

My intention is that the very simple Rock-paper-scissors app shall work as an example.

Disclaimer: the library was just written and has been tested only with this application. The application is written primarily to demonstrate the library. The focus has been clarity and simplicity. Please feel free to suggest improvements to the library or the application, but keep in mind that it was never my intention to follow all best practices. The purpose of the library is to break a Vue best practice.

What the library does:

  1. It creates a global object: VueWithHtmlLoader
  2. It provides a function: VueWithHtmlLoader.component() that you shall use instead of Vue.component() (there may be unsupported/untested cases)
  3. When using VueWithHtmlLoader.component(), you can provide templateurl:'mytemplate.html' instead of template:'whatever Vue normally supports'
  4. The Vue()-constructor must be called after all templateurls have been downloaded. To facilitate this, place the code that calls new Vue() inside a function, and pass that function to VueWithHtmlLoader.done()
  5. The library will now load all templateurls. When an html template is successfully downloaded over the network Vue.component() is called normally.
  6. When all components are initiated, new Vue() is called via the provided function

Apart from this, you can and should use the global Vue object normally for all other purposes. There may be more things that you want to happen after new Vue() has been called.

The library has no dependencies (it uses XMLHttpRequest directly).
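The heart of such a loader is just bookkeeping: count outstanding template downloads and fire the user's init function when the counter reaches zero. Here is a simplified, hypothetical sketch of that idea (this is not the actual library source; the real library uses XMLHttpRequest, while this sketch takes the fetch function as a parameter so the async part stays in one place):

```javascript
// Simplified sketch of "delay new Vue() until all templates are
// loaded". All names here are hypothetical, not the real library API.
const TemplateLoader = {
  pending: 0,          // number of templates still downloading
  doneCallback: null,  // the function that will call new Vue()

  // Called once per component. fetchTemplate(url, cb) is injected;
  // register(name, html) stands in for Vue.component(name, {...}).
  component(name, url, fetchTemplate, register) {
    this.pending++;
    fetchTemplate(url, (html) => {
      register(name, html);
      this.pending--;
      if (this.pending === 0 && this.doneCallback) this.doneCallback();
    });
  },

  // Pass the function that calls new Vue() here; it runs when
  // nothing is pending anymore.
  done(callback) {
    this.doneCallback = callback;
    if (this.pending === 0) callback();
  }
};
```

The real library adds error handling and the actual XMLHttpRequest plumbing, but the "count down, then init" logic is the essential part.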

Background
Obviously there are people (like me) with an AngularJS (that is v1) background who are used to ng-include and like it. We see Vue as a better, smaller AngularJS for the future, but we want to keep our templates in separate files without a build step.

As I see it, there are different sizes of applications (and sizes of team and support around them).

  1. Small single-file applications: I think it is great that Vue supports simple single-file applications (with x-template if you want), implemented like my game above. This has a niche!
  2. Applications that clearly require modularization, but where optimizing loading times is not an issue, and you want to use the simplest tools available (keep html/js separate to allow standard editor support and not require a build step). AngularJS (v1) did this nicely. I intend Vue to do it nicely too, with this library.
  3. Applications built by people or organizations that already use Webpack and such tools, or applications that are so demanding that such tools are required.

I fully respect and understand the Vue project does not want to support case 2 out of the box and that they prefer to keep the Vue framework small (and as fast as possible).

But I sense some kind of arrogance in articles like 7 Ways To Define A Component Template in Vue.js. I mean, methods 1 and 2 are only useful for very small components. 3 is only useful for minimal applications that don't require modularization. 4 has very narrow use cases. 5 is insane for normal development (however, I can see cases where you want to generate it). And 6 and 7 require a build step.

8. Put the damn HTML in an HTML-file and include it? Nowhere to be seen.

The official objection to 8 is obviously performance. I understand that pre-compiling your html, instead of serving html that the client will compile, is faster. But compared to everything else this overhead may be negligible. And that is what performance is all about: focusing on what is critical and keeping everything else simple. My experience is that loading data into my applications takes much more time than loading the application itself.

The Illusion of Simplicity
AngularJS (v1) gave the illusion of simplicity. You just wrote JavaScript-files and (almost) HTML-files, the browser loaded everything and it just worked. I know this is just an illusion and a lot happens behind the scenes. But my experience is that this illusion works well, and it does not leak too much. Vue.js is so much simpler than AngularJS in so many ways. I think my library can keep my illusion alive.

Other options
There is a thread on Stackoverflow about this, and there are obviously other solutions. If you want to write .vue files and load them, there is already a library for that. For my solution I was inspired by the simple jquery example, but: 1) it is nice not to have a jquery dependency, 2) it is nice to keep the async stuff in one place, 3) the delayed call of new Vue() seems forgotten.

Feedback, limitations, bugs…
If you have suggestions for improvements or fixes of my library, please let me know! I am happy to make it better and I intend to use it for real applications.

I think this library suits some but not all (or even most) Vue.js applications. Let's not expect it to serve very complex requirements or applications that would actually benefit more from a Webpack treatment.

TODO and DONE

  • A minified version – I have not really decided on ambition/obfuscation level
  • Perhaps change loglevel if minified Vue is used? or not.
  • I had some problems with comments in html files, but I failed to reproduce them. I think <!-- comments --> should definitely be supported.

JavaScript: Sets, Objects and Arrays

JavaScript has a new (well, well) fancy Set data structure (that does not come with functions for union, intersection and the like, but whatever). A little while ago I tested Binary Search (also not in the standard library) and I was quite impressed with the performance.

When I code JavaScript I often hesitate about using an Array or an Object. And I have not started using Set much.

I decided to make some tests. Let's say we have pseudo-random natural numbers (like 10000 of them). We then want to check if a number is among the 10000 numbers or not (if it is a member of the set). A JavaScript Set does exactly that. A JavaScript Object just requires you to do set[314] = true and you are basically done (the key gets converted to a string, though). For an Array you just push(314), sort the array, and then use binary search to see if the value is there.

Obviously, if you often add or remove values, (re)sorting the Array will be annoying and costly. But quite often this is not the case.
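A minimal sketch of the three approaches, with a hand-rolled binary search (the variable names and values are just illustrative, not from the benchmark below):

```javascript
// Membership check three ways: Object, Set, and sorted Array + binary search.
// binarySearch is hand-rolled since it is not in the standard library.
function binarySearch(arr, v) {
  var lo = 0, hi = arr.length - 1;
  while (lo <= hi) {
    var mid = (lo + hi) >> 1;
    if (arr[mid] === v) return true;
    if (arr[mid] < v) lo = mid + 1;
    else hi = mid - 1;
  }
  return false;
}

var values = [271, 314, 42];

var obj = {};                                    // Object: keys become strings
values.forEach(function (v) { obj[v] = true; });

var set = new Set(values);                       // Set: keeps the number type

var arr = values.slice();                        // Array: sort once, then search
arr.sort(function (a, b) { return a - b; });

console.log(!!obj[314], set.has(314), binarySearch(arr, 314)); // true true true
console.log(!!obj[7], set.has(7), binarySearch(arr, 7));       // false false false
```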

The test
My test consists of generating N=10000 random unique numbers (with distance 1 or 2 between them). I then insert them (in a kind of pseudo-random order) into an Array (and sort it), into an Object, and into a Set. I measure this time as an initiation time (for each data structure).

I repeat. So now I have 2xArrays, 2xObjects, 2xSets.

This way I can test both iterating and searching with all combinations of data structures (and check that the results are the same and thus correct).

Output of a single run (100 iterations, N=10000, on Linux with an Intel i5 and Node.js 8.9.1) looks like this:

                         ====== Search Structure ======
(ms)                        Array     Object      Set
     Initiate                1338        192      282
===== Iterate =====    
        Array                 800         39       93
       Object                 853        122      170
          Set                1147         82      131

By comparing columns you can compare the cost of searching (and initiating the structure before searching it). By comparing rows you can compare the cost of iterating over the different data structures (for example, iterating over Set while searching Array took 1147ms).

These results are quite consistent on this machine.

Findings
Some findings are very clear (I guess they are quite consistent across systems):

  • Putting values in an Array, sorting it, and then searching it is much slower and makes little sense compared to using an Object (or a Set)
  • Iterating an Array is a bit faster than iterating an Object or a Set, so if you are never going to search, an Array is faster
  • The newer and more specialized Set offers little advantage over good old Objects

What is more unclear is why iterating over Objects is faster when searching Arrays, but iterating over Sets is faster when searching Objects or Sets. What I find is:

  • Sets seem to perform comparably to Objects on Raspberry Pi, ARMv7.
  • Sets seem to underperform more on Mac OS X

Obviously, all this is very unclear and can vary depending on CPU cache, Node version, OS and other factors.

Smaller and Larger sets
These findings hold quite well for smaller N=100 and larger N=1000000. The Array approach, despite its O(n log n) sort, does not get much worse for N=1000000 than it already was for N=10000.

Conclusions and Recommendation
I think the conservative choice is to use Arrays when order is important or you know you will not look for a member based on its unique id. If members have unique IDs and are not ordered, use Object. I see no reason to use Set, especially if you target browsers (support in IE is still limited in early 2018).

The Code
Here follows the source code. Output is not quite as pretty as the table above.

var lodash = require('lodash');

// Generate 'size' unique increasing numbers (distance 1 or 2 apart), stored as
// strings, written to the array in pseudo-random order using a prime stride.
function randomarray(size) {
  var a = new Array(size);
  var x = 0;
  var i, r;
  var j = 0;
  var prime = 3;

  if ( 50   < size ) prime = 31;
  if ( 500  < size ) prime = 313;
  if ( 5000 < size ) prime = 3109;

  for ( i=0 ; i<size ; i++ ) {
    r = 1 + Math.floor(2 * Math.random());
    x += r;
    a[j] = '' + x;
    j += prime;
    if ( size <= j ) j-=size;
  }
  return a;
}

// Accumulated times (ms): times[iterated structure][searched structure],
// where 'make' is the initiation time for each structure.
var times = {
  arr : {
    make : 0,
    arr  : 0,
    obj  : 0,
    set  : 0
  },
  obj : {
    make : 0,
    arr  : 0,
    obj  : 0,
    set  : 0
  },
  set : {
    make : 0,
    arr  : 0,
    obj  : 0,
    set  : 0
  }
}

function make_array(a) {
  times.arr.make -= Date.now();
  var i;
  var r = new Array(a.length);
  for ( i=a.length-1 ; 0<=i ; i-- ) {
    r[i] = a[i];
  }
  r.sort();
  times.arr.make += Date.now();
  return r;
}

function make_object(a) {
  times.obj.make -= Date.now();
  var i;
  var r = {};
  for ( i=a.length-1 ; 0<=i ; i-- ) {
    r[a[i]] = true;
  }
  times.obj.make += Date.now();
  return r;
}

function make_set(a) {
  times.set.make -= Date.now();
  var i;
  var r = new Set();
  for ( i=a.length-1 ; 0<=i ; i-- ) {
    r.add(a[i]);
  }
  times.set.make += Date.now();
  return r;
}

function make_triplet(n) {
  var r = randomarray(n);
  return {
    arr : make_array(r),
    obj : make_object(r),
    set : make_set(r)
  };
}

function match_triplets(t1,t2) {
  var i;
  var m = [];
  m.push(match_array_array(t1.arr , t2.arr));
  m.push(match_array_object(t1.arr , t2.obj));
  m.push(match_array_set(t1.arr , t2.set));
  m.push(match_object_array(t1.obj , t2.arr));
  m.push(match_object_object(t1.obj , t2.obj));
  m.push(match_object_set(t1.obj , t2.set));
  m.push(match_set_array(t1.set , t2.arr));
  m.push(match_set_object(t1.set , t2.obj));
  m.push(match_set_set(t1.set , t2.set));
  for ( i=1 ; i<m.length ; i++ ) {
    if ( m[0] !== m[i] ) {
      console.log('m[0]=' + m[0] + ' != m[' + i + ']=' + m[i]);
    }
  }
}

function match_array_array(a1,a2) {
  times.arr.arr -= Date.now();
  var r = 0;
  var i, v;
  for ( i=a1.length-1 ; 0<=i ; i-- ) {
    v = a1[i];
    if ( v === a2[lodash.sortedIndex(a2,v)] ) r++;
  }
  times.arr.arr += Date.now();
  return r;
}

function match_array_object(a1,o2) {
  times.arr.obj -= Date.now();
  var r = 0;
  var i;
  for ( i=a1.length-1 ; 0<=i ; i-- ) {
    if ( o2[a1[i]] ) r++;
  }
  times.arr.obj += Date.now();
  return r;
}

function match_array_set(a1,s2) {
  times.arr.set -= Date.now();
  var r = 0;
  var i;
  for ( i=a1.length-1 ; 0<=i ; i-- ) {
    if ( s2.has(a1[i]) ) r++;
  }
  times.arr.set += Date.now();
  return r;
}

function match_object_array(o1,a2) {
  times.obj.arr -= Date.now();
  var r = 0;
  var v;
  for ( v in o1 ) {
    if ( v === a2[lodash.sortedIndex(a2,v)] ) r++;
  }
  times.obj.arr += Date.now();
  return r;
}

function match_object_object(o1,o2) {
  times.obj.obj -= Date.now();
  var r = 0;
  var v;
  for ( v in o1 ) {
    if ( o2[v] ) r++;
  }
  times.obj.obj += Date.now();
  return r;
}

function match_object_set(o1,s2) {
  times.obj.set -= Date.now();
  var r = 0;
  var v;
  for ( v in o1 ) {
    if ( s2.has(v) ) r++;
  }
  times.obj.set += Date.now();
  return r;
}

function match_set_array(s1,a2) {
  times.set.arr -= Date.now();
  var r = 0;
  var v;
  var iter = s1[Symbol.iterator]();
  // the values are non-empty strings, so a falsy value means the iterator is done
  while ( ( v = iter.next().value ) ) {
    if ( v === a2[lodash.sortedIndex(a2,v)] ) r++;
  }
  times.set.arr += Date.now();
  return r;
}

function match_set_object(s1,o2) {
  times.set.obj -= Date.now();
  var r = 0;
  var v;
  var iter = s1[Symbol.iterator]();
  while ( ( v = iter.next().value ) ) {
    if ( o2[v] ) r++;
  }
  times.set.obj += Date.now();
  return r;
}

function match_set_set(s1,s2) {
  times.set.set -= Date.now();
  var r = 0;
  var v;
  var iter = s1[Symbol.iterator]();
  while ( ( v = iter.next().value ) ) {
    if ( s2.has(v) ) r++;
  }
  times.set.set += Date.now();
  return r;
}

function main() {
  var i;
  var t1;
  var t2;

  for ( i=0 ; i<100 ; i++ ) {
    t1 = make_triplet(10000);
    t2 = make_triplet(10000);
    match_triplets(t1,t2);
    match_triplets(t2,t1);
  }

  console.log('TIME=' + JSON.stringify(times,null,4));
}

main();

When to (not) use Web Workers?

Web Workers are a mature, simple, standardised and compatible technology that allows multithreaded JavaScript applications in the web browser.

I am not going to write about how to use Web Workers (check the excellent MDN article). I am going to write a little about when and why to (not) use them.

First, Web Workers are about performance. And performance is typically not the best thing to think about first when you code something.

Second, when you have performance problems and throw more cores at the problem, your best possible speedup is x2, x4 or xN. In 2018, 4 cores are quite common, which means that in the optimal case you can make your program 4 times faster by using Web Workers. Unfortunately, if it was not fast enough from the beginning, chances are a 4x speedup is not going to help much. And the cost of a 4x speedup is that 4 times more heat is produced, the battery will drain faster, and perhaps other applications will suffer. A more efficient algorithm can often produce a 10-100x speedup without making the maintainability of the program suffer too much (and there are very many ways to make a non-optimised program faster).

Let us say we have a web application. The user clicks “Show report”, the GUI locks/blocks for 10s and then the report displays. The user might accept that the GUI locks, if just for 1-2 seconds. Or the user might accept that the report takes 10s to compute, if it shows up little by little and the program does not appear hung. The way we could deal with this in JavaScript (which is single-threaded and asynchronous) is to break the 10s report calculation into small pieces (say 100 pieces, each taking 100ms) and after calculating each piece call window.setTimeout, which allows the UI to update (among other things) before calculating another piece of the report. Perhaps a more common and practical approach is to divide the 10s job into logical parts: fetch data, make calculations, make report. But this would not much improve the locked GUI situation, since some (or all) parts still take significant (blocking) time.
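The chunked setTimeout approach could be sketched roughly like this (the function names are mine, for illustration only, not from any real application):

```javascript
// Break a long calculation into small pieces, yielding between pieces
// so the UI gets a chance to update.
function makeReportCalculator(items, pieceSize) {
  var i = 0, sum = 0;
  return function step() {                // do one small piece of the work
    var end = Math.min(i + pieceSize, items.length);
    for ( ; i < end ; i++ ) sum += items[i];
    return i < items.length ? null : sum; // null = more pieces remain
  };
}

// In the browser, window.setTimeout between pieces lets the GUI breathe:
function runChunked(step, done) {
  var result = step();
  if ( result === null ) setTimeout(function () { runChunked(step, done); }, 0);
  else done(result);
}

var data = [];
for ( var k = 0 ; k < 250 ; k++ ) data.push(1);
runChunked(makeReportCalculator(data, 100), function (sum) {
  console.log('report done, sum=' + sum); // 3 pieces of 100, 100, 50 items
});
```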

If we could send the entire 10s job to a Web Worker, our program's GUI would be completely responsive while the report is generated. Now, the key limitation of a Web Worker (which is also what allows it to be simple and safe):

Data is copied to the Worker before it starts, and copied from the Worker when it has completed (rather than being passed by reference).

This means that if you already have a lot of data, it might be quite expensive to copy that data to the Web Worker, and it might actually be cheaper to just do the job where the data already is. In the same way, since there is some overhead in calling the Web Worker, you cannot send it too many small pieces of work, because you will occupy yourself with sending and receiving messages rather than just doing the job right away.
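A rough way to get a feeling for the copy cost is to clone a payload yourself. Here a JSON round-trip stands in for the structured clone the browser performs when posting a message; it is an approximation, not the same algorithm:

```javascript
// Approximate the cost of copying a large payload to/from a Web Worker.
var payload = new Array(100000).fill(0).map(Math.random);
var t = Date.now();
var copy = JSON.parse(JSON.stringify(payload)); // stand-in for structured clone
console.log('clone of ' + copy.length + ' numbers took ' + (Date.now() - t) + ' ms');
```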

This leaves us with obvious candidates for web workers (you can use Google):

  • Expensive searches (like chess moves or travelling salesman solutions)
  • Encryption (but chances are you should not do it in JavaScript in the first place, for security reasons)
  • Spell and grammar checker (I don’t know much about this).
  • Background network jobs

This is not too useful in most cases. What would be useful would be to send packages of work (arrays), like streams in a functional programming way: map(), reduce(), sort(), filter().

I decided to write some Web Worker tests based on sort(). Since I can not (easily, and there are probably good reasons) write JavaScript in WordPress I wrote a separate page with the application. Check it out now:

So, for 5 seconds I try to do the following job as many times as I can, while I keep track of how much the GUI is suffering:

  1. create an array of 10001 random numbers: O(n)
  2. sort it: O(n log n)
  3. get the median (array[5000]): O(1)
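The three steps could be sketched like this (my reconstruction for illustration, not the actual code of the test page):

```javascript
// The benchmark job: generate random numbers, sort them, take the median.
function medianJob(n) {
  var a = new Array(n);
  for ( var i = 0 ; i < n ; i++ ) a[i] = Math.random(); // 1. create: O(n)
  a.sort(function (x, y) { return x - y; });            // 2. sort: O(n log n)
  return a[(n - 1) >> 1];                               // 3. median: O(1)
}

var m = medianJob(10001); // index (10001-1)>>1 === 5000, the middle element
console.log(m);
```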

The expensive part is step 2, the sort (well, I actually have not measured 1 vs 2). If the ratio of work done to bytes sent is high enough, then it can be worth it to send the job to a Web Worker.

If you run the tests yourself I think you will see that the first Web Worker tests, which outsource all of 1-2-3, are quite ok. But this basically means giving the Web Worker no data at all and, when it has done a significant amount of work, receiving just a few numbers back. This is more Web Worker friendly than chess, where at least the board would need to be sent.

If you then run the tests that outsource just sort() you see significantly lower throughput. How suitable is sort()? Well, sorting 10k ~ 2^13 elements should require each element to be compared (accessed) about 13 times. And no data is sent that is not needed by the Web Worker. Just as a counter-example: if you send an order to get back the sum of its lines, most of the order data is ignored by the Web Worker, and it just needs to access each line value once; much, much less suitable than sort().
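The back-of-envelope arithmetic behind that comparison:

```javascript
// Work done per element that has to be copied to the Web Worker.
var n = 10000;
var sortAccessesPerElement = Math.log2(n); // sort(): each element compared ~13 times
var sumAccessesPerElement = 1;             // summing order lines: each value once
console.log(sortAccessesPerElement / sumAccessesPerElement); // ≈ 13x "denser"
```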

Findings from tests
I find that sort(), being O(n log n), on an array of numbers is far too fast to be worth outsourcing to a Web Worker. You need to find a much more “dense” problem to benefit from a Web Worker.

Islands of data
If you can design your application in such a way that one Web Worker maintains its own full state and just shares small selected parts occasionally, that could work. The good thing is that this would also be clean encapsulation of data and separation of responsibilities. The bad thing is that you probably need to design with the Web Worker in mind quite early, and this kind of premature optimization is often a bad idea.

This could be letting a Web Worker do all your I/O. But if most data that you receive is needed in your application, and most data you send comes straight from your application, the benefit is very questionable. And if most data you receive is not needed in your application, perhaps you should not receive so much data in the first place. Even if you process your incoming data quite a lot (validating, integrating with current state, precalculating), I would not expect it to come very close to the computational intensity of my sort().

Conclusions
The simplicity and safety of Web Workers are unfortunately also their biggest limitation. The primary reason for using a Web Worker should be performance, and even for artificial problems it is hard to get any benefit.