How to (not) set up a RPi V3 server

A few months ago I set up a server running Archlinux on a RPi V3, using a 2.5″ USB drive for the root filesystem. It is now dead.

One day I innocently ran “pacman -Suy” as usual, and afterwards it didn’t restart. After that it was very unstable for days, until the RPi V3 itself appeared to be broken. That is, I got very random errors, like different kernel panics, when trying to boot it. I let a friend try to boot LibreELEC on it (his power supply, SD card and TV) and it displayed the 4-color splash screen and the little lightning bolt indicating a power problem.

There are different ways to connect a USB HDD to a RPi.

  1. Let the RPi power the USB drive.
    (it works sometimes, you can try max_usb_current=1 in config.txt)
  2. Connect the USB drive to a “usb extra power cable” (google it) to take the load off the RPi
  3. Connect the USB drive to a USB hub that has external power
    (optionally, also power the RPi from the same USB hub)

With my unfortunate broken RPi V3, I used method (1) for a hard drive rated at 1.0A (the USB<->SATA chip probably consumes some power as well). I did use a proper original RPi power supply, but I believe I somehow stressed some component of the RPi V3 by continuously drawing more than 1.0A from a single USB port.

Findings using USB hub to power RPi + HDD

I have an old USB hub with a 2.0A rated power supply. I now use it as the only power source for a RPi + USB HDD (the USB hub and the RPi are connected to each other both ways, in a loop).

  • RPi V2 + 1.0A 1TB HDD: frequent under-voltage warnings
  • RPi V1 + 0.6A 320GB HDD: works perfectly

So, that particular USB Hub will drive my RPi V1, and I will find another solution for the RPi V2.

Conclusion

I have written several articles about using RPi as a server.

My sober and responsible conclusion must be: don’t (use Raspberry Pi for production Linux servers). It is simply not worth it. First, it is not as cheap as you might think once you have bought all the cables, adapters, cases and chargers. Second, your time may not be free. Third, performance is bad. Fourth, stability is limited, so don’t expect a very long service life.

The cheapest NUC setup is a more rational choice.

I also believe a QNAP or Synology with virtualisation technology could be a better choice than running (multiple) RPi.

Nevertheless, I never learn, and I am now replacing my broken RPi V3 with two old Raspberry Pis (V1 + V2). I mostly use them for Syncthing and backup; I guess two is better than one, and I have unused USB hard drives.

Archlinux vs Raspbian

I have come to like Archlinux for the RPi. However, the frequent and relatively large upgrades that come with a rolling distribution feel somewhat suboptimal for a low-performance system living on an SD card.

This time, I am back with Raspbian, because:

  1. Raspbian is now based on Debian 10 (buster) right from the start (I believe the Debian 9 release of Raspbian was kind of late)
  2. There is a simple, minimal Raspbian Buster Lite image suitable for servers and headless systems
  3. Creating an empty file named “ssh” in /boot before starting the system for the first time lets you ssh into the brand new Raspbian system, so you can easily install it with neither keyboard, display nor serial console

I simply have nothing to complain about with Raspbian anymore.

Dark Mode?

With macOS Mojave, Apple introduced Dark Mode. Some applications support it. I was mildly sceptical, thinking it was just some kind of fashion statement.

But there is an argument that goes something like: “if I am going to stare into a lamp all day, I want as much of it to be as dark as possible”. It makes some sense. You would not want to stare into a lamp in the first place, so why let your display default to white everywhere?

There is also an explanation of why we ended up here: designers are trained on printed design, which usually means white paper, so they prefer white backgrounds on computers as well, for aesthetic reasons. Not everyone is a designer, but we all mimic good design.

And you probably know that back in the old days computer displays were black with green text. So it is plausible that people who want to make computers more modern and appealing prefer white displays, while people who are more nerdy or old-fashioned like darkness.

What I have written so far may seem logical. But it does not matter. What matters is (from the perspective of a programmer):

  1. What is truly more ergonomic, to you?
  2. Is it enough to stick with either light or dark mode, or should you switch depending on your surrounding environment?
  3. Can you get a consistently good dark mode experience? Otherwise it is mostly annoying and better avoided entirely.
  4. How do you design your product so it appeals to your customers?

Switching your OS to a dark mode is easy. If you are using Xcode, Photoshop or some other product that supports dark mode, that is also easy. Terminal applications (frequently used by programmers) are highly customizable and often never left dark mode in the first place.

How about the browser? Well, not the browser itself, but the web pages and web applications it delivers to you. For Firefox and Chrome there is a plugin called “Dark Reader”. It works reasonably well for me. Read the FAQ/manual when you install it!

A problem is the asymmetry: when my eyes are used to bright content, a dark page with white text is no problem; but when I am used to a dark display and the entire screen suddenly turns white for some reason, it is unpleasant.

As a developer I can of course wonder: how do we want web pages to be built so they work nicely both in light and dark mode?

  1. Each web page has a dark mode (will never happen)? (A sketch of how a page could at least detect the OS preference follows after this list.)
  2. Web pages should follow good light mode practices, so they look good when using a dark mode extension?
  3. Should any web pages be coded dark?
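
For what it is worth, a page can at least detect the OS dark mode preference itself with the standard matchMedia API. A minimal sketch (the “dark” class is just my example, not from any framework):

// listen for the OS/browser dark mode preference and toggle a CSS class
var darkQuery = window.matchMedia('(prefers-color-scheme: dark)');
function applyColorScheme(isDark) {
  document.body.classList.toggle('dark', isDark); // the page CSS decides what "dark" means
}
applyColorScheme(darkQuery.matches);
darkQuery.addListener(function (e) { applyColorScheme(e.matches); });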

And as a developer, if my OS/desktop, development tools, terminal and web browser are set to dark mode… what about the web application I am currently developing? I can’t reasonably write CSS when everything I refresh is passed through a black-box dark-mode filter; that would be a very awkward development experience. So whenever I switch to the (web) application I am developing, the display will turn annoyingly white.

On Contrast

I had the idea that high contrast is easier on the eye. But I realise it is not. Pure white text on a pure black background is quite hard on my eyes. However, light grey text on a dark grey background is quite comfortable. Apple Terminal comes with a few different (color) profiles. Many of them are surprisingly colorful. I imagine I don’t want the cognitive input that colors give me; it distracts my mind, but perhaps I am wrong about that.

Xcode findings

As I start experimenting with Xcode I realise that it is a tricky beast.

Xcode 10.2.1

I realised Xcode 10.2.1 used 100%+ CPU. I fixed that by reinstalling it completely.

Reinstalling Xcode, I managed to mess up the simulators.
Error: Unable to boot device because it cannot be located on disk
Solution: Run in Terminal: xcrun simctl erase all

Xcode 7.3.1

Xcode 7.3.1 fails to start on macOS 10.14.5.

A first iOS app with Xcode 10.2.1

Ten years too late, I decided to look into iOS development. It is too late because the Klondike era of becoming a millionaire on simple apps is probably over. On the other hand, Swift has arrived and reached version 5, so it should be a good time to get started.

What I have is

  • macOS 10.14.5
  • Xcode 10.2.1
  • iPhone 6s, iOS 12.2 to deploy to
  • iPad 3, iOS 9.3.5 (obsolete by Apple standard)
  • 20 years of programming experience
  • Very limited experience with Swift 5
  • No experience with Xcode, Objective-C or macOS development

I am mostly a backend programmer who has to do HTML/CSS/JavaScript as well. Xcode is creepy. I have thought about a few approaches:

  1. Buying a book (but a challenge to find a book with relevant complexity, mix of tutorial/reference, for Xcode 10 / Swift 5)
  2. Apple’s obsolete tutorial (but I was put off by the fact that it is written for Swift 3)
  3. Just playing around with Xcode (just kidding – that is too scary)
  4. Some online course, like Udemy (but it is not my way)
  5. A simple trumpet tutorial

I went for (5). It was good, because in a few hours it took me all the way from starting Xcode to running something on my iPhone.

Building for the simulator and running works. And I managed to deploy to my iPhone (it is actually quite self-explanatory: connect the iPhone, select it as the destination in Xcode, and later, on the iPhone under Settings -> General -> Device Management, you allow the app to run).

The short version is that it all went well! But…

Obsolete iPad 3

I failed to build for my obsolete iPad 3. What happens is that all is fine, until I come to a screen asking for my password.

I type my password, and immediately building/signing fails with “Failed with exit code 1”. I can imagine a few explanations right away:

  1. I need a real developer license (not Personal Team) to do this
  2. I need an older version of Xcode to build for 9.3
    (and in that case I might need to use older project format, and perhaps not even Swift 5, I don’t know)
  3. I got some indication that with a Personal (free) developer license I can only deploy to a single test device, that would perhaps not include old devices

It actually only offers deployment target 12.2; there are no older versions in the list.

Update: Page 60 of the free Apple Book “App Development With Swift” tells clearly that a free account only supports a single device. So it is clearly a waste of time to ignore that restriction and try to deploy to my iPad.

Xcode

I have spent a few hours with this now. I wrote 4 lines of code. I have ctrl-clicked on things, dragged and dropped things, added properties to things, added resources, opened panels and used shortcuts. If you are used to things like Visual Studio it will probably feel somewhat familiar. But for me, who mostly uses Vim, it is very scary.

Update: Xcode turned out to use 100%+ CPU constantly. I completely removed it and reinstalled it, and it seemed to help.

Computer Requirements / Performance

I did these experiments on a MacBook Pro 6,2 (which officially does not support macOS 10.14). It has an SSD drive and 8GB of RAM. Building takes almost 10 seconds, but starting the simulator and loading the app takes almost a minute. The computer clearly gets warm. Neither Xcode nor the simulator consumes much memory (Activity Monitor says about 200MB each). Obviously, if you run the simulator much in your daily work, a faster CPU is worth it.

I think my 1440×900 display may be the biggest problem if I want to do anything real, though.

Conclusion

I have mixed feelings, it could be worse and better. I clearly need to find a way to be quickly guided through building different types of apps. I think I need a few days being guided through Xcode until both Xcode and the different project artifacts feel somewhat natural.

I have a simple app I want to build for myself, but right now it feels much too intimidating.

I found that Apple has released a free online book (available in their Books application) called App Development with Swift. That seems to be a good option.

Whisky Head to Head

Based on my notes below I have ranked the whiskies I have tasted:

  1. Deanston 18
  2. Old Pulteney 18
  3. Andalusia Triple Distilled
  4. Deanston Virgin Oak
  5. Glenmorangie 10
  6. Makers Mark
  7. Motörhead
  8. Jameson Black Barrel
  9. Johnny Walker White Walker
  10. Storm

Peated

Usually peated whiskies win on raw power compared to unpeated whiskies. However, that does not mean that a peated whisky is generally preferable on a given occasion. So I made a separate list.

  1. Caol Ila 12
  2. Kilchoman Machir Bay
  3. Longrow (moderately peated)
  4. Hven Tychos Star
  5. Mackmyra Svensk Rök
  6. Jura Superstition (slightly peated)

Background and Idea

The idea is to drink two different whiskies and make a few comments. I usually do this alone, in the evening, with two small drams, a glass of water and some salty snacks (like crisps).

To me, the way I experience a whisky can change from time to time. Not least, it depends on what I have eaten and drunk before I taste the whisky. I find it very hard to drink one whisky one day, and another the next day, and compare them. I also find it hard to try many whiskies at once, because my senses quickly change. So two whiskies, head to head, should be the fairest way I can compare and rate whisky.

It is not my intention to rate value-for-money. I will mostly try standard whiskies that are produced and available, and expected to have somewhat consistent quality. I think it is more interesting to find good affordable available whiskies, than to seek the ultimate bottle from a lost distillery. Occasionally I will however try a more unique, rare and expensive bottle, to see how it compares.

General Findings

I am beginning to identify categories that work for me:

  • Standard
  • Sweet
  • Peat

Notes

Deanston 18 vs Old Pulteney 18: Color very similar, Old Pulteney somewhat darker. On the nose Old Pulteney is more pleasant; sweeter and richer. Deanston is drier and slightly more chemical. Old Pulteney tastes perfectly balanced with a clear (but not overwhelming) hint of its Spanish oak casks, and a nice aftertaste. Deanston is also very nicely balanced, with (to my taste) a more dry, traditional single malt character. Both are very stable representatives of 18 year old Scotch single malt, but neither is very brave. If I have to choose I prefer the Deanston, I find it more interesting.

Jameson Black Barrel vs White Walker: Jameson has a deep sweet characteristic scent while White Walker is more subtle, a bit chemical to me. Taste impressions are quite the same; White Walker has a quite thin, somewhat sweet taste (perhaps the best I can say is that it’s not too bad considering it’s a blend). Jameson tastes of caramel, very good, but a bit too much of something. I prefer Jameson, even without considering that it is both cheaper and generally available. The reason I tried these two is that I found White Walker ice cold quite nice. I froze another blend (J&B) and it was not at all as good, and not as sweet. So I thought perhaps White Walker had a sweetness like Jameson Black Barrel, but it wasn’t so. I will try Black Barrel frozen some day (since White Walker is a limited edition).

Glenmorangie 10 vs Storm: Both rather pale in color, and light and fruity on the nose. The Storm may actually have a slightly richer aroma. Glenmorangie tastes excellent in its light simplicity, although some bitterness remains. Storm is heavier, more flavour, less fruity, a bit chemical and more bitter: it lacks a defined character. After a while, I clearly prefer Glenmorangie, despite it being lighter (usually a heavier whisky wins head to head, in my experience). Later, Glenmorangie remains flawless in its simplicity, while there is something unpleasant about Storm.

Makers Mark vs Motörhead: Unsurprisingly they are both a nice dark amber in color, very similar. Makers Mark has a much sweeter (raisin, vanilla) aroma while Motörhead is much more subtle. The same is true for the taste; Makers Mark has a fine Bourbon flavour also after drinking the drier and lighter Motörhead. They are both good. For those who like Bourbon, Makers Mark is clearly the winner. Motörhead is still a good oak-flavoured whisky, perhaps too sweet and Bourbon-like for those who don’t like that. Considering price, or not, I must say Makers Mark is the better whisk(e)y. Although, there are situations when I could prefer Motörhead.

Caol Ila 12 vs Kilchoman Machir Bay: As I expected, quite similar color and aroma, Kilchoman slightly paler. On the nose they are clearly different, but I have a hard time putting words on it. Caol Ila is heavier, more oily. Starting to taste, Kilchoman is like a sparkling firework in the mouth, very good. Caol Ila is, also when it comes to flavour, heavier, more oily and more smooth. Sometimes I love heavily peated whisky and sometimes I think it is too much. This time I like them both. Ultimately, Caol Ila comes out slightly better for being richer and more smooth, but it is very close.

Kilchoman Machir Bay vs Longrow (no age): Longrow is clearly a bit darker in color, while Kilchoman is clearly more peaty on the nose. Longrow needs water and has a balanced, somewhat dry, bitter and pale flavour (not so salty though). Kilchoman is richer in flavour and has an Islay and island character not present in Longrow (despite it being a bit peated). These two whiskies are a bit too different to compare head to head, and neither of them really benefits from being compared to the other (they both smell funny, a bit like soap, after a while). While (the young) Longrow is very good and perhaps easier to enjoy, head to head Kilchoman is much more interesting.

Deanston Virgin Oak vs Glenmorangie 10: Deanston is a bit more amber colored while Glenmorangie is not that pale. Glenmorangie is light, almost like a wine on the nose; Deanston has a distinct oak and dried fruit aroma. These impressions are well reflected in a first tasting round. Deanston is a bit more rough and raw and Glenmorangie remains subtle and sophisticated. Both are rather young single malts in the lower price segment, both are very good, but both lack perfection. I do prefer the Deanston.

Jura Superstition vs Longrow: The Jura is more golden in color but quite similar. Both have a pleasant aroma, Longrow more peaty. Tasting both head to head is a clear win to Longrow: the Jura is hardly pleasant and Longrow is quite perfect.

Andalusia Triple Distilled vs Glenmorangie 10: The Texan is much darker in color, but to the nose they are very similar: Andalusia a bit more raisins perhaps, and Glenmorangie slightly lighter. The difference in taste is more significant: Andalusia focuses on the sweet oak flavour, which is not bad at all (but a bit simple), while Glenmorangie has a wider palette of flavours (but a little bitter). I realise that Andalusia, being triple distilled, should be compared to an Irish whiskey rather than a Scotch. Head to head, Andalusia is the more pleasant whisky.

Hven Tychos Star vs Mackmyra Svensk Rök: two Swedish peated (well, at least smoky) whiskies. Hven has a somewhat darker color. They smell rather different. Mackmyra has a very clear dry smoke smell, like something burnt, almost fire, and not much else. First impression of Hven is that it has a more traditional peat aroma, but after a while I don’t know; it smells sweet. Starting to taste Mackmyra, it is surprisingly good – not very much flavour (just like its color and aroma) but not bad. I immediately add water. Hven has a much richer flavour, also surprisingly good and balanced. Mackmyra softens with some water but there is not much to discover. I prefer Hven, but it was more even than I thought, and I had lower expectations and was surprised.

Simple Mobile First Design

If you build a web site today you need to think about the experience on mobiles, tablets and desktops with different screen sizes. This is not very easy. In this article I have applications (SPAs) in mind rather than sites/pages.

If you are a real, ambitious, skilled designer with a significant budget, there is nothing stopping you from doing it right. Responsive design is dead in the sense that most often you have no choice, so it is just design.

However, you may not have that budget, skill, time and ambition, but you still need to think about vastly different screen sizes. Or perhaps you just need to build a simple native-app-like website.

Two separate implementations

In many cases I would argue that it makes sense to simply make a separate site for mobile and desktop. There are many arguments but I will give one: use cases are often very different. A desktop app is often opened, kept open for a long time, and much data may be presented and analysed on screen, in memory. A mobile app is often opened shortly, to accomplish a single task, and then closed. This means that you probably want to manage state, data and workflow very differently as well.

Bootstrap (or similar)

There are frameworks (like Bootstrap) and technologies like Flexbox to allow you to build a responsive app. Before using those, I think you should ask yourself a question.

How do you want to take advantage of more screen space?

Think of regular desktop applications (Word, Photoshop, Visual Studio) or your operating system: when you have more screen available, you can have more stuff next to each other. You can have more windows and more panels at the same time. Mostly. To a lesser degree, small things also get larger (when they benefit from it). It helps to be able to see an entire A4 page when you work with Word. But when you have an Excel sheet with 4 used columns, those columns don’t use your entire screen just because they can.

Bootstrap tends to create larger spaces between elements, and larger elements where they are not needed (dropdown <select>, input fields). I say tends to, because if you are good and very careful, you can probably do a better job than I can. But it is not automatic and it is not trivial to make it good.

What I mean is that if my calendar/table looks gorgeous when it is 400px wide, what good does it do to make it larger just because the screen gets larger? So I think a better approach to responsiveness is to say that my calendar/table takes 400px. If I have more space available, I can show something else as well.

Mobile Screen Sizes

To complicate things further, mobile phones have different screen sizes, different screen resolutions, and then there are hi-resolution screens that have different virtual and physical resolutions.

So you have your table that looks good on a “standard” mobile with 320px width. What do you want to do if the user has a better/larger screen?

  1. make it look exactly the same (just better/larger)?
  2. reactively change the way your app looks and works?

If you are opting for (2), I have to wonder: why, really?

I argue that if you pick (1) you can make development, testing, documentation and support easier, and your users will have a more consistent experience. The price is that those with a large mobile may not get the most out of it when using your app.

I propose a simple Mobile First Responsive design

What I propose is not for everyone and everywhere. It may suck for your product and project. That is fine, there are different needs.

I propose a Mobile First (Semi-)Responsive design:

  1. Pick a width (320px is fine).
  2. Design all parts, all pages, all controllers of your app for that width.
  3. On mobile, set the viewport to your width for consistent behaviour on all mobiles.
  4. Optionally, on desktop (and possibly tablets), allow pages to open next to each other rather than on top of (and hiding) each other to make some use of more screen when available.

Seems crazy? Please check out my Proof of Concept and decide for yourself! It is only a PoC. It is not a framework, not a working app, not demonstrating Vue best practices, and it is not very pretty. Under Settings (click ?) you can check/change between Desktop, Tablet and Mobile mode (there is a crude auto-discover mechanism in place but it is not perfect). You can obviously try it with “Responsive Design Mode” in your browser and that should work quite fine (except some elements don’t render correctly).

Implementation Details

First, I set (despite this is not normally a recommended thing to do):

<meta id="viewport" name="viewport" content="width=320">

Later I use JavaScript to change this to 640 on a tablet, to allow two columns. Desktops should ignore it.
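
A minimal sketch of that switch, assuming the meta tag above; the user agent test is just an example of a crude tablet check:

// on tablets, widen the layout viewport to 640px so two 320px pages fit side by side
if (/iPad|Tablet/i.test(navigator.userAgent)) {
  document.getElementById('viewport').setAttribute('content', 'width=640');
}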

Second, I use a header div fixed at the top, a footer div fixed at the bottom, and the rest of the page has corresponding margins (top/bottom).

.app_headers {
  position: fixed;
  top: 0;
  left: 0;
}
.app_header {
  float: left;
  height: 30px;
  width: 320px;
}
.app_footers {
  position: fixed;
  bottom: 0;
  left: 0;
}
.app_footer {
  float: left;
  height: 14px;
  width: 320px;
}
.app_pages {
  clear: both;
}
.app_page {
  margin-top: 30px;
  margin-bottom: 12px;
  width: 320px;
  float: left;
}

In mobile mode I just add one app_header, app_footer and app_page (div with class). But for Tablets and Desktops I can add more of them (equally many) as the user navigates deeper into the app. It is basically:

<div class="app_headers">
  <div class="app_header">
    Content of first header (to the left)
  </div>
  <div class="app_header">
    Content of second header (to the right)
  </div>
</div>
<div class="app_pages">
  <div class="app_page">
    Content of first page (to the left)
  </div>
  <div class="app_page">
    Content of second page (to the right)
  </div>
</div>
<div class="app_footers">
  <div class="app_footer">
    Content of first footer (to the left)
  </div>
  <div class="app_footer">
    Content of second footer (to the right)
  </div>
</div>

I use a little JavaScript to avoid adding too many pages side by side if the display/window is not large enough.
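
Roughly like this (a sketch; 320 is the page width from the CSS above, and the function name is my own):

// how many 320px pages fit side by side in the current window?
function maxPagesSideBySide() {
  return Math.max(1, Math.floor(window.innerWidth / 320));
}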

It is a good idea to reset margins, paddings and borders to 0 on common items.

I also found that you need a font size of 16px on the iPhone, otherwise the Apple mobile Safari browser will immediately zoom when the user edits <input> and <select> elements.

Most of the effort when I wrote my Proof of Concept went into:

  1. Getting the HTML/CSS right and as simple as possible (I am simply not good enough with HTML/CSS to just get it right)
  2. Implementing a “router” that supports this behaviour (see the sketch after this list)
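
To give an idea of what I mean by a “router” here, a very reduced sketch (all names are my own examples; the real PoC is Vue-based and does more):

// a stack of page names; on wide screens the last few are shown side by side
var pageStack = [];

function navigateTo(page) {
  pageStack.push(page);
  render();
}

function navigateBack() {
  pageStack.pop();
  render();
}

function render() {
  var max = Math.max(1, Math.floor(window.innerWidth / 320));
  var visible = pageStack.slice(-max); // only the pages that fit, deepest page last
  console.log('render pages: ' + visible.join(' | '));
}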

Being able to scroll the different pages separately would be possible, a bit more complicated, and perhaps not so desirable.

Conclusions

By exploiting the viewport you can build a web app that works fine on different mobiles, where the issue of different screen sizes and screen resolutions is largely out of your way.

The site will truly be mobile-first, but with the side-by-side-strategy presented, your users can take advantage of larger screens on non-mobiles as well.

This way, you can build a responsive app, with quite little need for testing on different devices as the app grows. You just need to keep 320px in mind, and have a clear idea about navigating your site.

First look at Swift

Apple invented the Swift programming language to make application programming for iOS and macOS a better experience. If you are new to all this (as I am), I guess there are three approaches (depending on your background):

  1. Learn with the Swift Playground App for iOS
  2. Find a book/guide/tutorial to build actual iOS apps (learning Swift along the way)
  3. Use tools that you are used to, solving problems you are familiar with, using Swift (a programmers’ approach)

I decided to just write some Swift code. There is a cool web page called Rosettacode.org with implementations of different “problems” in different languages. I started looking at Swift code there to see if I could learn anything, and decided I could do better. (Admittedly, that is quite arrogant: I had never written a line of Swift code before, and now I contribute Swift code.)

I started looking at the problem Caesar Encryption and solved it in Swift. The full code is below (in case someone changes it on Rosettacode).

I have a C/C#/Java/JavaScript background. This is what I find most notable about Swift.

Backward declaration of variables, arguments and function return types: the type comes after the name (with a colon in between).

Named parameters to functions, unless you prepend an _ to the parameter name.

Closures can be written (quite just) like in JavaScript. (see charRotateWithKey in the caesar function)

Optionals and unwrapping: a normal variable, after it is declared, must have a valid value, and the language ensures this for you. Look at the first line in the function charRotate below: the ! means that if the parameter c does not have an ASCII value, the program will terminate right there. Look at the line starting with guard in main: the language guarantees that key is a valid integer after the guard, otherwise the function (program) must exit. I am far from an expert on this, so find a better source! But you can’t do what you do in C/C#/Java/JavaScript – just hope it goes well, and if it does not, catch an exception or deal with it afterwards.

ARC rather than garbage collection or explicit memory management. This does not matter in my program, but it is worth mentioning. I first thought Swift and Rust were very similar and that it is more or less an accident that they are different languages, but I don’t really think so anymore.

The swift command can be used not only to compile a source file. It can be used to set up a swift project (directory), run tests, run the REPL (read-eval-print-loop) and more things. This seems quite nice, but I will write no more of it here.

My program below demonstrates type conversions, command arguments, usage of map and closures, string and ascii low level operations and output.

I think Swift is quite a fine language that I would be happy to use. I notice that the language has evolved quite a lot over the few years it has existed, so when you find things on the web or Stack Overflow, you might not find current best practices.

func usage(_ e:String) {
   print("error: \(e)")
   print("./caeser -e 19 a-secret-string")
   print("./caeser -d 19 tskxvjxlskljafz")
 }
  
 func charIsValid(_ c:Character) -> Bool {
   return c.isASCII && ( c.isLowercase || 45 == c.asciiValue ) // '-' = 45
 }
  
 func charRotate(_ c:Character, _ by:Int) -> Character {
   var cv:UInt8! = c.asciiValue
   if 45 == cv { cv = 96 }  // if '-', set it to 'a'-1
   cv += UInt8(by)
   if 122 < cv { cv -= 27 } // if larger than 'z', reduce by 27
   if 96 == cv { cv = 45 }  // restore '-'
   return Character(UnicodeScalar(cv))
 }
  
 func caesar(_ enc:Bool, _ key:Int, _ word:String) -> String {
   let r = enc ? key : 27 - key
   func charRotateWithKey(_ c:Character) -> Character {
     return charRotate(c,r)
   }
   return String(word.map(charRotateWithKey))
 }
  
 func main() {
   var encrypt = true
  
   if 4 != CommandLine.arguments.count {
     return usage("caesar expects exactly three arguments")
   }
  
   switch ( CommandLine.arguments[1] ) {
   case "-e":
     encrypt = true
   case "-d":
     encrypt = false
   default:
     return usage("first argument must be -e (encrypt) or -d (decrypt)")
   }
  
   guard let key = Int(CommandLine.arguments[2]) else {
     return usage("second argument not a number (must be in range 0-26)")
   }
  
   if key < 0 || 26 < key {
     return usage("second argument not in range 0-26")
   }
  
   if !CommandLine.arguments[3].allSatisfy(charIsValid) {
     return usage("third argument must only be lowercase ascii characters, or -")
   }
  
   let ans = caesar(encrypt,key,CommandLine.arguments[3])
   print("\(ans)")
 }
  
 func test() {
   if ( Character("a") != charRotate(Character("a"),0) ) {
     print("Test Fail 1")
   }
   if ( Character("-") != charRotate(Character("-"),0) ) {
     print("Test Fail 2")
   }
   if ( Character("-") != charRotate(Character("z"),1) ) {
     print("Test Fail 3")
   }
   if ( Character("z") != charRotate(Character("-"),26)) {
     print("Test Fail 4")
   }
   if ( "ihgmkzma" != caesar(true,8,"a-zecret") ) {
     print("Test Fail 5")
   }
   if ( "a-zecret" != caesar(false,8,"ihgmkzma") ) {
     print("Test Fail 6")
   }
 }
  
 test()
 main()

macOS 10.14 on unsupported MacBook Pro

Update 20190528: macOS suggested two things today. First, it found something strange about EFI and wanted to send a dump to Apple (I rejected that). Second, it wanted to install a MacBook Pro Supplemental Update. I tried it; it took a while, but the computer came up again with no need to re-patch.

20190524: All Good With 10.14.5

Update 20190524: With the latest/current version of Mojave Patcher I successfully upgraded to 10.14.5. When the upgrade restarts the computer it will eventually hang, so you need to re-patch from your Mojave Patcher USB after installing 10.14.5. All fine!

Update 20190520: Today I innocently let my computer update itself. Not smart. It crashed during the update and didn’t start afterwards. I tried to re-patch it; it still does not start. It had obviously tried to install 10.14.5, which I later installed successfully on another, supported computer.

Update 20190519: A few days ago I installed an update from macOS Mojave Patcher that was supposed to fix random kernel panics. That worked fine, and the problems that much of this article covers are fixed.

Update: I upgraded the MacBook Pro 6,2 to 8GB RAM (2×4GB bought for the purpose). I also had 2×8GB modules of slightly faster RAM: either of them alone was very unstable, and both of them together did not boot at all. I guess a single 8GB module of the correct speed would work, but I would recommend buying 2×4GB. I also replaced the hard drive with a cheap SSD. I would say the computer performs nicely for practical purposes.

Original Post

I got custody of a MacBook Pro 6,2 that has seen very little use. It is the first MacBook with an i5 CPU; it has a 320GB replaceable hard drive (a long gone feature), 4GB of RAM upgradable to 8GB, a nice 15 inch display and a very nice keyboard.

It was running macOS 10.6.8. I realised it supports 10.13 but official Apple support ends there.

So I found out about a project/software called macOS Mojave Patcher that allows you to install mac OS 10.14 Mojave on certain unsupported Macs, including the MacBook Pro 6,2. I gave it a try and I will write about my findings.

Summary

This MacBook Pro 6,2 runs macOS Mojave 10.14 quite perfectly if installed using Mojave Patcher. However, it seems absolutely critical for stable operation to disable Automatic Graphics Switching (System Preferences -> Energy Saver). This may make the computer run warmer, consume more energy and suffer shorter battery time than it would otherwise.

Below follow the details of all my findings. If you don’t want the details, you can skip to Is it worth it at the end of this post.

Attempt 1 : Clean install of 10.14

I made a clean install of 10.14 (using a Mojave Patcher USB). That did not start at all. Instead of the familiar Apple and progress bar, I just got a question mark. I think the problem is that some kind of “firmware” upgrade is needed.

Attempt 2: Install 10.12 – upgrade to 10.14

I made a clean standard install of 10.12. I am quite sure it did some kind of firmware update. Then I upgraded to 10.14 (using the Mojave Patcher USB). It took a very long time (several hours; usually it takes less than an hour). At first it appeared to be good – 10.14 started – but it turned out not to be stable. I got kernel panics of different types quite often (after just minutes of use). I also realised the upgrade had not converted HFS+ to APFS as I expected.

Attempt 3: Clean install of 10.14

I made a new clean attempt with 10.14 (on APFS) and this time the system started up as expected. But the kernel panics remained (as I read them, they were about Nvidia sometimes, audio once, crypto sometimes, APFS sometimes).

Back to 10.13

I made a clean install of 10.13 to ensure there isn’t anything wrong with the computer itself. Currently I am writing this blog post while installing Xcode (not from App Store because that is not allowed with 10.13), and the computer has been stable for a few hours.

Attempt 4: Minimal patches

I made a new clean install of 10.14 on APFS. When running the Mojave Patcher, I only selected these two patches:

  • Boot.plist Patch (Disable Platform Check)
  • SIP Disabler Patch

Installation was successful, system came up, I am writing right here right now, and here is a screenshot (installing Xcode).

I notice a few obvious differences from when I had all recommended patches:

  1. Audio does not work (I did not pick Legacy Audio Patch)
  2. GFX uses the NVIDIA GT 330M only, not the Intel card. This computer has two GPUs, and it is supposed to switch between them depending on load; now it only uses the more powerful card. Also, the display menu (top right) is not aware of resolutions as it used to be. But it seems the display is running at 1440×900, so I am happy with that. It may get warmer now, and there are occasional graphics glitches, but nothing particularly disturbing. (I did not pick Legacy Video Card Patch)
  3. The Install Patch Updater is obviously not installed, since I did not pick it.
  4. Photos (that come with macOS) crashed when I tried to edit the above picture.
  5. It does not go to sleep, neither when you close the lid nor when you choose Sleep from the Apple menu.

USB seems fine at first glance (even though I did not pick Legacy USB Support Injector). WiFi works (there was no such patch).

Installing Xcode took an eternity… after more than an hour I got impatient, restarted the computer, installed “Software Update Patch”, started the Xcode installation again, and went to bed. Next morning: Xcode installed and computer still running peacefully.

24 hours later

The computer was stable with 10.14 for an entire workday doing programming (mostly Node.js and Safari). So I decided to apply the Audio patch as well, which gives this list of patches installed:

  • Boot.plist Patch (Disable Platform Check)
  • SIP Disabler Patch
  • Legacy Audio Patch
  • Software Update Patch

So far so good: 30 minutes of Audio Play, both locally and streaming over Bluetooth.

48 hours later

After another stable work day I decided to install all patches except the Video patch. That is:

  • Boot.plist Patch (Disable Platform Check)
  • Legacy USB Support Injector
  • SIP Disabler Patch
  • Install Patch Updater
  • Legacy Audio Patch
  • Software Update Patch

When the system started, the Patch Updater wanted to install two things (related to Siri and Night Shift), and for now I rejected them.

A few hours later

After a few hours I also installed:

  • Night Shift Patch
  • Siri Patch

This is done using Patch Updater, within macOS (no need to boot on Mojave Patcher USB). It appears to cause no trouble.

The only option left

All this leaves me with just the Legacy Video Patch. There is a twist, as this system has two GPUs, and without the patch it is using only the Nvidia GPU (it is not aware of the Intel GPU).

Under System Preferences -> Energy Saver, there is an option Automatic Graphics Switching. With that one set to OFF the computer should only use the Nvidia GPU anyway. Why would I want to do that? Well, there are currently some minor graphics glitches. Those could potentially go away. Also, I have some limited functionality when it comes to video:

  • no screen resolution options (although it runs at best resolution)
  • connecting an external display does not work
  • cannot adjust screen brightness

If I install the Legacy Video Patch I will presumably get an unstable system again, but what if I also disable Automatic Graphics Switching? (To be completely honest, I am not even certain it was ever enabled.)

And finally Legacy Video Patch

It’s just been a few hours, but I installed the Legacy Video Patch (all recommended patches) and disabled Automatic Graphics Switching. The computer seems stable, and now sleep, brightness and external displays work. Also, the occasional graphics glitches seem gone.

Is this worth it?

I suppose, to some people running unsupported 10.14 rather than supported 10.13 makes sense. To me:

  • 10.14 allows latest version of Xcode
  • 10.14 has dark mode
  • 10.13 is still supported (receiving updates) by Apple (as of May 2019); this should change when 10.15 is released (probably towards the end of 2019).

There could be a 10.15 Patcher in the future. And it could work with this computer. Or not. As long as you can run the latest current macOS without too much hassle, that is probably preferable to running an unsupported version of macOS. But today (May 2019) 10.13 is not unsupported. In fact, Apple still seems to release security updates for 10.12.

But I am curious, and I think I want the latest Xcode, so here I am.

Adobe CS4 and macOS 10.14 Mojave

How about using an old unsupported CS application on current macOS?

Today I had a reason to try it out. The short version is that it seems to work with Photoshop and Illustrator, but not with Indesign.

Background

Adobe used to sell “perpetual licenses” with their Creative Suite software. Years ago they stopped doing that and changed to a subscription model with Creative Cloud. People with perpetual licenses could still use them, but support in macOS is getting more troublesome with every upgrade. A perpetual license could be used on two computers, and there was an activation/deactivation feature.

My Case

I have a friend who uses CS4 on two older Macs running unsupported versions of macOS. My friend now got a brand new MacMini (with macOS 10.14 Mojave), and ideally we wanted to deactivate CS4 on the old MacMini and activate it on the new one. That was obviously a gamble.

Activation Servers Down

It appears the Adobe Activation servers are down. Deactivation on the old MacMini was not possible. Thus proper Activation should also not be possible.

Trying an old workaround

There used to be an old workaround (clearly used for software piracy) described here. We tried installing CS4 on Mojave and that worked fine. We then applied the hack from the article, and that worked too. Photoshop and Illustrator seemed to work correctly, but Indesign did not. Indesign ran into a perhaps well-known problem with an “empty toolbox”. Also, when choosing “New”, an empty dialog window opens. This can perhaps be fixed, but we did not bother.

Restoring CS4 using Time Machine gave the same result for practical purposes: Photoshop worked but Indesign appeared broken and could not even be started.

CS5, CS6

Obviously, I can’t say anything definite about CS5 and CS6. I guess the workaround does not work. And I believe after reading some forums that the activation servers are down as well.

Conclusion

You may be able to install and use CS4 Illustrator and Photoshop on macOS 10.14 Mojave. Rumours indicate that it will not work at all in 10.15 when 32-bit support is finally dropped from macOS.

Performance, Node.js & Sorting

I will present two findings that I find strange in this post:

  1. The performance of Node.js (V8?) has clearly gotten consistently worse with newer Node.js versions.
  2. The standard library sort (Array.prototype.sort()) is surprisingly slow, often slower than a simple textbook mergesort.

My findings in this article are based on running a simple program mergesort.js on different computers and different node versions.

You may also want to read this article about sorting in Node.js. It applies to V8 version 7.0, which should be used in Node.js V11.

The sorting algorithms

There are three sorting algorithms compared.

  1. Array.prototype.sort()
  2. mergesort(), a textbook mergesort
  3. mergesort_opt(), a mergesort that I put some effort into making faster

Note that mergesort is stable but generally considered not as fast as quicksort. As far as I understand from the above article, Node.js used to use quicksort (up to V10), and from V11 uses something better called Timsort.

My mergesort implementations (2) (3) are plain standard JavaScript. Nothing fancy whatsoever (I will post benchmarks using Node.js v0.12 below).
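
For reference, a textbook top-down mergesort in plain JavaScript looks roughly like this (a sketch of the idea, not the exact code from mergesort.js):

function mergesort(arr, cmp) {
  // split into two halves, sort each, then merge
  if (arr.length < 2) return arr.slice();
  var mid = Math.floor(arr.length / 2);
  var left = mergesort(arr.slice(0, mid), cmp);
  var right = mergesort(arr.slice(mid), cmp);
  var out = [];
  var i = 0, j = 0;
  while (i < left.length && j < right.length) {
    // <= keeps the sort stable: equal elements keep their original order
    if (cmp(left[i], right[j]) <= 0) out.push(left[i++]);
    else out.push(right[j++]);
  }
  while (i < left.length) out.push(left[i++]);
  while (j < right.length) out.push(right[j++]);
  return out;
}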

The data to be sorted

There are three types of data to be sorted.

  1. Numbers (Math.random()), compared with a-b;
  2. Strings (random numbers converted to strings), compared with the default compare function for sort(), and for my mergesorts a simple a<b / a>b compare giving -1, 1 or 0
  3. Objects, containing two random numbers a=[0-9] and b=[0-999999], compared with (a.a-b.a) || (a.b-b.b). In about one case in 10 the value of b will matter; otherwise looking at the value of a is enough.

Unless otherwise written the sorted set is 100 000 elements.
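
As a sketch, the three compare functions described above look something like this (the names are mine, not necessarily those used in mergesort.js):

// numbers: plain numeric difference
function cmpNumber(a, b) { return a - b; }
// strings: simple three-way compare returning -1, 1 or 0
function cmpString(a, b) { return a < b ? -1 : (a > b ? 1 : 0); }
// objects: a decides in roughly 9 cases out of 10, b breaks the tie
function cmpObject(a, b) { return (a.a - b.a) || (a.b - b.b); }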

On Benchmarks

Well, just a standard benchmark disclaimer: I do my best to measure and report objectively. There may be other platforms, CPUs, configurations, use cases, datatypes, or array sizes that give different results. The code is available for you to run.

I have run all tests several times and reported the best value. If anything, that should benefit the standard library (quick)sort, which can suffer from bad luck.
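
The measurements are of this simple kind (a sketch of the approach, not the exact mergesort.js code):

// time one sort of a copy of the data, so every run sorts the same unsorted input
function timeSort(name, sortFn, data, cmp) {
  var copy = data.slice();
  var start = Date.now();
  sortFn(copy, cmp);
  console.log(name + ': ' + (Date.now() - start) + ' ms');
}

var numbers = [];
for (var i = 0; i < 100000; i++) numbers.push(Math.random());

timeSort('sort()', function (a, c) { a.sort(c); }, numbers, function (x, y) { return x - y; });
timeSort('merge ', function (a, c) { mergesort(a, c); }, numbers, function (x, y) { return x - y; });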

Comparing algorithms

Let’s start with the algorithms. This is Node V10 on different machines.

(ms)         ===== Numbers =====     ===== Strings =====     ===== Objects =====
             sort()  merge  m-opt    sort()  merge  m-opt    sort()  merge  m-opt
NUC i7           82     82     61       110     81     54        95     66     50
NUC i5          113    105    100       191    130     89       149     97     72
NUC Clrn        296    209    190       335    250    196       287    189    157
RPi v3         1886   1463   1205      2218   1711   1096      1802   1370    903
RPi v2          968   1330   1073      1781   1379    904      1218   1154    703

The RPi-v2-sort()-Numbers value stands out. It’s not a typo. But apart from that I think the pattern is quite clear: regardless of datatype and on different processors, the standard sort() simply cannot match a textbook mergesort implemented in JavaScript.

Comparing Node Versions

Let’s compare different Node versions. This is on a NUC with an Intel i5 CPU (4th gen), running a 64-bit version of Ubuntu.

(ms)         ===== Numbers =====     ===== Strings =====     ===== Objects =====
             sort()  merge  m-opt    sort()  merge  m-opt    sort()  merge  m-opt
v11.13.0         84    107     96       143    117     90       140     97     71
v10.15.3        109    106     99       181    132     89       147     97     71
v8.9.1           85    103     96       160    133     86       122     99     70
v6.12.0          68     76     88       126     92     82        68     83     63
v4.8.6           51     66     89       133     93     83        45     77     62
v0.12.9          58     65     78       114     92     87        55     71     60

Not only is sort() getting slower; running “any” JavaScript is also getting slower. I have noticed this before. Can someone explain why this makes sense?

Comparing different array sizes

With the same NUC, Node V10, I try a few different array sizes:

(ms)         ===== Numbers =====     ===== Strings =====     ===== Objects =====
             sort()  merge  m-opt    sort()  merge  m-opt    sort()  merge  m-opt
10 000           10      9     11         8     12      6         4      7      4
15 000            8     15      7        13     14     11         6     22      7
25 000           15     35     12        40     27     15        11     25     18
50 000           35     56     34        66     57     37        51     52     30
100 000         115    107     97       192    138     88       164    101     72
500 000         601    714    658      1015    712    670       698    589    558

Admittedly, the smaller arrays show less difference, but it is also hard to measure small values with precision. So this is from the RPi v3 and smaller arrays:

(ms)         ===== Numbers =====     ===== Strings =====     ===== Objects =====
             sort()  merge  m-opt    sort()  merge  m-opt    sort()  merge  m-opt
5 000            34     57     30        46     59     33        29     52     26
10 000           75    129     64       100    130     74        63    104     58
20 000          162    318    151       401    290    166       142    241    132
40 000          378    579    337       863    623    391       344    538    316

Again, quite consistently, I think this looks remarkably bad for the standard library sort.

Testing throughput (Version 2)

I decided to measure throughput rather than time to sort (mergesort2.js). I thought perhaps the figures above are misleading when it comes to the cost of garbage collection. So the new question is: how many shorter arrays (n=5000) can be sorted in 10s?
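
The throughput measurement is roughly of this kind (a sketch of the idea behind mergesort2.js, not its exact code):

// count how many arrays (n=5000) a sort function can finish in a 10 second window
function throughput(sortFn, cmp) {
  var count = 0;
  var end = Date.now() + 10000;
  while (Date.now() < end) {
    var arr = [];
    for (var i = 0; i < 5000; i++) arr.push(Math.random());
    sortFn(arr, cmp);
    count++;
  }
  return count;
}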

(count)      ===== Numbers =====     ===== Strings =====     ===== Objects =====
             sort()  merge  m-opt    sort()  merge  m-opt    sort()  merge  m-opt
v11.13.0       3192   2538   4744      1996   1473   2167      3791   2566   4822
v10.15.3       4733   2225   4835      1914   1524   2235      4911   2571   4811
RPi v3          282    176    300       144    126    187       309    186    330

What do we make of this? Well, the collapse in performance for the new V8 Torque implementation in Node v11 is remarkable. Otherwise I notice that for Objects and Node v10, my optimized algorithm has no advantage.

I think my algorithms are heavier on the garbage collector (than the standard library sort()), and this is why they perform relatively worse when running for 10s in a row.

If that is so, I’d still prefer to pay that price. When my code waits for sort() to finish, there is a user waiting for a GUI update, or for an API reply. I’d rather see a faster sort; when the update/reply is complete, there is usually plenty of idle time during which the garbage collector can run.

Optimizing Mergesort?

I had some ideas for optimizing mergesort that I tried out.

Special handling of short arrays: clearly, if you want to sort 2 elements, the entire mergesort function is heavier than a simple function that sorts two elements. The article about V8 sort indicated that they use insertion sort for arrays up to length 10 (I find this very strange). So I implemented special functions for 2-3 elements. This gave nothing: the same performance as calling the entire mergesort.
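
The kind of special case I mean is something like this (an illustration only, not the code I actually benchmarked):

// sort arrays of length 0-2 directly, fall back to the full mergesort otherwise
function mergesortWithShortCase(arr, cmp) {
  if (arr.length < 2) return arr.slice();
  if (arr.length === 2) {
    return cmp(arr[0], arr[1]) <= 0 ? [arr[0], arr[1]] : [arr[1], arr[0]];
  }
  return mergesort(arr, cmp);
}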

Less stress on the garbage collector: since my mergesort creates memory nodes that are discarded when sorting is complete, I thought I could keep those nodes for the next sort, to ease the load on the garbage collector. Very bad idea, performance dropped significantly.

Performance of cmp-function vs sort

The relevant sort functions are all O(n log n), but with different constant factors K. It is the K that I am measuring and discussing here. The differences are, after all, quite marginal. There is clearly another constant cost: the cost of the compare function. That seems to matter more than anything else. And in all the cases above, a “string” is just a single string of 10 characters. If you have a more expensive compare function, the relative significance of the sort() implementation will be even smaller.
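
For example, a locale-aware string compare is much heavier than the simple compares used above (localeCompare is a standard String method; the rest is my illustration):

// cheap compare, as used in the benchmarks above
function cmpSimple(a, b) { return a < b ? -1 : (a > b ? 1 : 0); }
// much more expensive compare: locale-aware collation
function cmpLocale(a, b) { return a.localeCompare(b); }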

Nevertheless, V8 is a single threaded environment and ultimately cycles wasted in sort() will result in overall worse performance. Milliseconds count.

Conclusions

Array.prototype.sort() is a critical component of the standard library. In many applications sorting may be the most expensive thing that takes place. I find it strange that it does not perform better than a simple mergesort implementation. I do not suggest you use my code, or start looking for better sort() implementations out there right away. But I think this is something for JavaScript programmers to keep in mind. However, the compare function probably matters more in most cases.

I find it strange that Node v11, with Timsort and V8 Torque, is not more of an improvement (admittedly, I didn’t test that one very much).

And finally I find it strange that Node.js performance seems to deteriorate with every major release.

Am I doing anything seriously wrong?