
USB Drives, dd, performance and No space left

Please note: sudo dd is a very dangerous combination. A little typing error and all your data can be lost!

I like to make copies and backups of disk partitions using dd. USB drives sometimes do not behave very nicely.

In this case I had created a FAT32 partition of just under 2GB on a USB memory and made it Lubuntu-bootable, with a 1GB file for saving changes to the live filesystem. The partition table:

It seems I forgot to change the partition type to FAT32 (fdisk still lists it as 83/Linux below), but the filesystem is FAT32 and that seems to work fine 😉

$ sudo /sbin/fdisk -l /dev/sdc

Disk /dev/sdc: 4004 MB, 4004511744 bytes
50 heads, 2 sectors/track, 78213 cylinders, total 7821312 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000f3a78

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *        2048     3700000     1848976+  83  Linux

I wanted to make an image of this USB drive that I can write to other USB drives. That is why I made the partition/filesystem significantly below 2GB, so all 2GB USB drives should work. This is how I created the image:

$ sudo dd if=/dev/sdb of=lubuntu.img bs=512 count=3700001
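
The count comes from the partition table above: the partition ends at sector 3700000, and since sectors are numbered from 0, 3700001 sectors cover everything from the MBR to the end of the partition. A sketch of deriving the count instead of typing it by hand (my own addition – it assumes the boot-flag column is present, as in the fdisk listing above):

$ END=$(sudo /sbin/fdisk -l /dev/sdb | awk '/^\/dev\/sdb1/ {print $4}')
$ sudo dd if=/dev/sdb of=lubuntu.img bs=512 count=$((END + 1))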

So, now I had a 1.85GB file named lubuntu.img, ready to write back to another USB drive. That was when the problems began:

$ sudo dd if=lubuntu.img of=/dev/sdb
dd: writing to ‘/dev/sdb’: No space left on device
2006177+0 records in
2006176+0 records out
1027162112 bytes (1.0 GB) copied, 24.1811 s, 42.5 MB/s

Very fishy! The write speed (42.5MB/s) is obviously too high, and the USB drive is 4GB, not 1GB. I tried with several (identical) USB drives, same problem. This has never happened to me before.

I changed strategy and made an image of just the partition table, and another image of the partition:

$ sudo dd if=/dev/sdb of=lubuntu.sdb bs=512 count=1
$ sudo dd if=/dev/sdb1 of=lubuntu.sdb1

…and restoring to another drive… first the partition table:

$ sudo dd if=lubuntu.sdb of=/dev/sdb

Then remove and re-insert the USB drive, and make sure it does not mount automatically, before you proceed with the partition.

$ sudo dd if=lubuntu.sdb1 of=/dev/sdb1

That worked! However, the write speed to USB drives usually slows down as more data is written in one go. I have noticed this before with other computers and other USB drives. I guess USB drives have some internal mapping table that does not like big files.

Finally, to measure the progress of a running dd command, send it the USR1 signal:

$ sudo kill -USR1 <PID OF dd PROCESS>
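
dd then prints its current record counts and transfer rate to its own stderr. To get an update every ten seconds, something like this should do (a sketch; pgrep -x matches the exact process name so you do not need to look up the PID yourself):

$ watch -n 10 'sudo kill -USR1 $(pgrep -x dd)'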

The above behaviour was noticed on x86 Ubuntu 13.10.

Build Node.js on Debian ARM

Update 2015-02-15: So far, I have failed to build Node.js v0.12.0 on ARMv5

I have a QNAP TS109 running Debian (port:armel, version:7), and of course I want to run node.js on it. I don’t think there are any binaries, so building from source is the way to go.

About my environment:

$ cat /etc/debian_version
7.2
$ gcc --version | head -n 1
gcc (Debian 4.6.3-14) 4.6.3
$ uname -a
Linux kvaser 3.2.0-4-orion5x #1 Debian 3.2.51-1 armv5tel GNU/Linux
$ cat /proc/cpuinfo
Processor       : Feroceon rev 0 (v5l)
BogoMIPS        : 331.77
Features        : swp half thumb fastmult edsp
CPU implementer : 0x41
CPU architecture: 5TEJ
CPU variant     : 0x0
CPU part        : 0x926
CPU revision    : 0

Hardware        : QNAP TS-109/TS-209
Revision        : 0000
Serial          : 0000000000000000

I downloaded the latest version of node.js: node-v0.10.25, and this is how I ended up compiling it (first writing build.sh, then executing it as root):

$ cat build.sh
#!/bin/sh
export CFLAGS='-march=armv5t'
export CXXFLAGS='-march=armv5t'
./configure
make install
$ sudo ./build.sh

That takes almost four hours.
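
A quick sanity check that the build produced a working binary (a hypothetical smoke test of my own; on this box process.arch should report arm):

$ node -v
v0.10.25
$ node -e 'console.log(process.arch)'
arm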

A few notes…

make install
Naturally, make install has to be run as root. When I do that, everything is built again from scratch. This is not what I expect of make install, and to me it seems like a bug. That is why I put the build lines into a little shell script and ran the entire script with sudo – even though compiling as root does not make sense.

-march=armv4 and -march=armv4t
Compiling with -march=armv4t (or with no -march at all, which I believe defaults to armv4) results in an error:

../deps/v8/src/arm/macro-assembler-arm.cc:65:3: error:
#error "For thumb inter-working we require an architecture which supports blx"

You can work around this by adding the following line above line 65 in that file:

#define CAN_USE_THUMB_INSTRUCTIONS 1

as I mentioned in my old article about building Node.js on Debian ARM.

-march=armv5te
I first tried building with -march=armv5te (since that seemed closest to the armv5tel that uname reports). The build completed, but the node binary generated a segmentation fault (however, node -h did work, so the binary was not completely broken).

I do not know if this problem is caused by my CPU not being compatible with/capable of armv5te, or, if there is something about armv5te that is not compatible with the way Debian and its libraries are built.

Install Citrix Receiver 13 on Ubuntu 13.10

In this post I will explain how I installed Citrix Receiver (version 13) on Ubuntu 13.10 (Xubuntu x64 and Lubuntu x86 – but keep reading for other Ubuntu variants too).

The quick summary
Go to the Citrix Receiver for Linux download page. Pick the generic tar.gz version under 32-bit (yes, do this even for 64-bit Ubuntu).

Then:

$ cd ~/Downloads

(the tarball has the nasty habit of not containing a top-level folder:)
$ mkdir citrixtmp
$ cd citrixtmp
$ tar -xzf ../linuxx86-13.0.0.256735.tar.gz

(install, not as root)
$ ./setupwfc
   (choose 1=install
    answer yes to all questions
    use all defaults
    finally, 3=exit installer)

Now, if you are on 64-bit Ubuntu there are some 32-bit dependencies to take care of:

$ sudo apt-get install libgtk2.0-0:i386
$ sudo apt-get install libxml2:i386
$ sudo apt-get install libstdc++6:i386
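
(the three packages can of course also be installed in one go:)
$ sudo apt-get install libgtk2.0-0:i386 libxml2:i386 libstdc++6:i386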

If, on the other hand, you are on 32-bit Ubuntu, you can instead install these packages:

$ sudo apt-get install libwebkitgtk-1.0.0
$ sudo apt-get install libxerces-c3.1

Now, (re)start your browser, log in to your Citrix Portal, open an application. Your browser should suggest you open it with wfica.sh (located in ~/ICAClient/linuxx86). Do it – it should work!

You should now be able to use your Citrix applications in a productive way from your Ubuntu computer!

If you are on 32-bit Ubuntu, you should also be able to use the GUI Self Service application (I have not figured out how to fix the webkit dependencies for 64-bit Ubuntu).

Feel free to read on for more comments and details.

What is Citrix Receiver and how do I use it

Citrix is a technology that allows an organization (your employer) to package applications (typically Windows applications) and make them available over the intranet or the internet. This way, you can run the applications on a computer without installing them on the computer itself.

I have two ways to access my Citrix Applications.

The first way is via a web based Citrix Portal. I open my web browser, enter the URL of the portal and log in. Now, in the web browser, I see all my applications as icons, and as I click the applications they start in separate windows via Citrix Receiver.

The second way is to launch the Citrix Receiver Self Service application, give the address of the Citrix servers and then authenticate. This method can enable “desktop integration” (your Citrix applications are available via your normal start menu or whatever you call it). This Self Service application is new to Citrix v13; it replaced something else in v12.

The web-browser way is easier to get working. There are unresolved dependency issues with the Self Service program in my solution above.

My #1 priority is to get a working solution at all.

Why not use the .deb packages
The deb-packages are obviously not built for Ubuntu 13.10. I believe they are built for Debian, but I have not confirmed this.

The purpose of deb-packages is to resolve all dependencies automatically. But the dependencies are wrong for Ubuntu, and you will need to “force” installation of the deb-packages. In the future, this can leave you with conflicts and confusion.

So, I prefer the generic tar.gz-installation (which also works fine without sudo/being root).

Why not use the 64-bit packages
Well, first there is no generic 64-bit package, so I would end up resolving the dependency problems with the deb-package.

Also, the 64-bit deb-package actually contains 32-bit binaries. It is just the dependencies that are configured against the 32-bit compatibility libraries in Debian (instead of the standard 64-bit libraries).

So, nothing fun with 64-bit until Citrix actually compiles a real 64-bit binary with no 32-bit dependencies.

Other versions of Ubuntu
I believe what I have written applies not only to Xubuntu, but also to Lubuntu, Kubuntu (which may require more GTK installation, as it is Qt based) and standard Ubuntu, and more. Please comment below if you experience something else.

Other versions of Linux
If you are on Debian or a Debian-derived distribution (like Crunchbang) I guess you should go with the deb-packages.

You really need the Self Service
Consider installing 32-bit Ubuntu on your 64-bit PC. Depending on what computer you have and what you do with it, this may be a quite OK idea, or a very poor one. I admit I have been running 32-bit Ubuntu on a 64-bit PC for years, at work, specifically because Citrix worked better that way (even the old Citrix Receiver 12 had this issue, even if the Self Service looked different then).

What is the difference between Receiver 12 and 13
If you use Citrix via your web browser, you will not notice much difference (if any).

The Self Service is visually very different from the old Receiver. The old one looked like something for SUN Solaris and the 80s (Motif-based). The new one looks like some kind of mobile app. I don't know which is worse. Many components are still the same.

If you currently run Citrix 12 and you are happy with it, I suggest you don't upgrade to 13.

Problems installing Citrix Receiver 12
If you want to install the old Citrix Receiver 12, have a look at my old post.

Troubleshooting
Your browser should allow you to download the ICA file (instead of launching it). Do that – it should be saved as ~/Downloads/launch.ica. Now try to start it manually with wfica.sh:

$ ~/ICAClient/linuxx86/wfica.sh ~/Downloads/launch.ica

If you are missing dependencies they should show up here.
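
Another way to spot missing libraries is to run ldd on the binary that wfica.sh wraps (a sketch – I am assuming the default install location and that the binary is named wfica):

$ ldd ~/ICAClient/linuxx86/wfica | grep 'not found'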

Final words
I consider this post “work in progress”. I’d like to

  • make Self Service work
  • confirm extra features (audio, drive mapping, etc) that might not work properly with my install above

But I hope it can be helpful even in this state. Feel free to comment!

Upgrade Lubuntu 13.04 to 13.10 on Eee 701

Lubuntu is the perfect distribution for your Eee 701. Now the time has come to upgrade to 13.10, and since I have had a few problems with that before, I was a bit reluctant to upgrade my Eee 701, especially since it only has a 4GB SSD.

Since I installed 13.04 on the Eee, the available disk space has gradually disappeared. It turns out the kernel has been upgraded several times, but the old versions have not been discarded. You only need the latest version (the one you are running – check with uname -a). If you have more linux-image packages than needed, purge them. Do the same with the linux-headers packages.

$ dpkg -l | grep linux-image
$ uname -a
$ sudo apt-get purge linux-image-3.8.0-XX
$ dpkg -l | grep linux-headers
$ sudo apt-get purge linux-headers-3.8.0-XX
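
A sketch of listing all installed kernel images except the one you are running (my own one-liner – review the output carefully before purging anything):

$ dpkg -l 'linux-image-*' | awk '/^ii/ {print $2}' | grep -v "$(uname -r)"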

When it was time for the upgrade, I had 1.6GB available on / (according to df -h). To play it safe I formatted an SD card (1GB should be enough) and mounted it on /var/cache/apt (where all downloaded packages go during the upgrade).

$ sudo apt-get clean
$ sudo mkfs.ext2 /dev/sdb1
$ sudo mount /dev/sdb1 /var/cache/apt
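
A quick check that the package cache now actually lives on the SD card:

$ df -h /var/cache/apt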

I upgraded using the normal GUI upgrade program. During the upgrade, peak disk usage (just before cleaning) was less than 550MB on the SD card at /var/cache/apt, and my / device was down to 700MB available (so my 1.6GB available in the first place should have been just enough).

The computer restarted nicely. The fact that the SD card was not immediately mounted on /var/cache/apt caused no problems. After the upgrade I only had 1.1GB available on /, though. After again purging unused linux-images I was up at 1.2GB. I wonder where the extra 400MB went; I found Firefox, which I doubt was installed in 13.04… removing it saved about 60MB.

So, the conclusion is that upgrading Lubuntu from 13.04 to 13.10 on your Eee 701 should be just fine, if you have about 1.5GB of available space on / and feel you have about 400MB to spare for the upgrade. A permanent SD card or mini USB memory that can host /home, /var, /tmp and/or /usr is of course nice.

Upgrading Qnap TS109 from Squeeze to Wheezy

Update: new instructions for upgrading Wheezy to Jessie

Now that Wheezy has been out for a while, I thought it was stable enough even for my old QNAP TS109. A great source of information for Debian on QNAPs is Martin Michlmayr, so I decided to upgrade from squeeze to wheezy using the standard Debian instructions.

Package Checking
I did not have any packages on hold, but over the years I have installed quite a few packages I don't need. So I spent some time listing and removing packages:

$ sudo dpkg -l
$ sudo apt-get purge SOMEPACKAGES

I figured it would be faster to delete them now than to upgrade them only to delete them later.

/etc/apt/sources.list
The first real upgrade-related step is fixing /etc/apt/sources.list:

deb http://ftp.se.debian.org/debian/ wheezy main
deb-src http://ftp.se.debian.org/debian/ wheezy main non-free

deb http://security.debian.org/ wheezy/updates main
deb-src http://security.debian.org/ wheezy/updates main non-free

I simply replaced ‘squeeze’ with ‘wheezy’ in all four places.
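
The same substitution can be done with a single sed command (it edits the file in place; -i.bak keeps a backup copy):

$ sudo sed -i.bak 's/squeeze/wheezy/g' /etc/apt/sources.list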

update upgrade
Now the point of no return:

$ sudo apt-get update
$ sudo apt-get upgrade

This presented me with a few challenges.

Configuring linux-base

 Boot loader configuration check needed

 The boot loader configuration for this system was not recognized. These
 settings in the configuration may need to be updated:

  * The root device ID passed as a kernel parameter;
  * The boot device ID used to install and update the boot loader.

 You should generally identify these devices by UUID or label. However,
 on MIPS systems the root device must be identified by name.

                                 <  OK  >

What is an ARM user gonna do about it? You can safely ignore this (if you are upgrading Debian on a QNAP – probably not if you are upgrading Ubuntu on your laptop!). It is supposedly grub/lilo-related, and not relevant here.

At the end of apt-get upgrade I got these messages, assuring me that my system would boot properly even after the upgrade. You should probably see something like this too, or else find out how to do it manually.

Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-2.6.32-5-orion5x
flash-kernel: installing version 2.6.32-5-orion5x
Generating kernel u-boot image... done.
Flashing kernel... done.
Flashing initramfs... done.

Sudo posed a little challenge:

Configuration file `/etc/sudoers'
 ==> File on system created by you or by a script.
 ==> File also in package provided by package maintainer.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** sudoers (Y/I/N/O/D/Z) [default=N] ? D

The “diff” told me that it intended to delete the sudoers line referring to my user; the new way is to add users to the group named sudo (in /etc/group). So I added myself to the sudo group and bravely answered ‘Y’ to the question above.
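
For reference, adding a user to the sudo group is a one-liner (myuser is a placeholder, of course):

$ sudo adduser myuser sudo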

Immediately afterwards, sudo did not work, as I was no longer in the sudoers file… However, a quick logout/login fixed that, and the group approach works just fine.

After apt-get upgrade had completed I decided to reboot my system before proceeding. For the first time ever it came up with a different IP address than usual. Obviously the DHCP client did not bother to ask for the same address anymore, and the DHCP server did not bother to hand out the same address either. So, a few nervous minutes passed before I found my QNAP on its new IP address.

apt-get dist-upgrade
Now that the system rebooted properly it was time for the final upgrade step:

$ sudo apt-get dist-upgrade

This procedure mostly works on its own, occasionally asking something.

I answered Y to this question (after reading the diff; I did not remember ever editing this file):

Configuration file `/etc/default/rcS'
 ==> File on system created by you or by a script.
 ==> File also in package provided by package maintainer.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** rcS (Y/I/N/O/D/Z) [default=N] ? y

The dist-upgrade once again replaced the kernel…

Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-3.2.0-4-orion5x
flash-kernel: installing version 3.2.0-4-orion5x
Generating kernel u-boot image... done.
Flashing kernel... done.
Flashing initramfs... done.

…so I made a final reboot. Everything seems just fine.

Ubuntu 13.10 and GeForce 8200

Update 2015-04-24: Installing 15.04 from scratch worked perfectly.
Update 2014-11-02: Upgrading from 14.04 to 14.10 worked perfectly.
Update 2014-04-27: Upgrading from 13.10 to 14.04 worked perfectly.

As I have written before, Ubuntu has been a little tricky with a GeForce 8200. How about 13.10?

Well, the Live CD (which is actually a Live DVD, because it no longer fits on a CD) works just fine, using the Nouveau driver. That is good news.

I upgraded my 13.04 to 13.10 – upgrade was fine, but X (or whatever it is in 13.10) did not start. I had to do:

  $ sudo apt-get purge nvidia-173
  $ sudo apt-get install xserver-xorg-video-nouveau

That took me to the login window, but after that, black display. At this point I tried a few things, but decided to make a clean install of 13.10 instead.

The clean install went fine. After logging in the first time I logged out again, shut down lightdm and restored my home directory from backup (using rsync). To my surprise, after starting lightdm, logging in did not work – just as after my upgrade. It turns out that after deleting ~/.cache and ~/.config/xfce4 I could log in again! So my clean reinstall was probably never needed… I will never know.

Finally, I can mention that to enable Ubuntu One in Xubuntu 13.10, the only thing I did was install ubuntu-one-control-panel-qt. Since I had restored my old home directory, both my Ubuntu One settings and my Ubuntu One files were already there, and it worked perfectly.

Connect to Office Communicator/Lync with Pidgin

This is a post I have wanted to write for a very long time 😀

My company has an Office Communicator 2010 setup, and for a long time I have tried to connect to it from my (X)Ubuntu computer. Pidgin has not worked, and installing Office Communicator 2010 in Wine has not worked either. But now… the stars were obviously aligned, I was lucky to try the right configuration, or some update to the Office Communicator plugin for Pidgin fixed something. Error messages from Pidgin are usually not very detailed or helpful.

My configuration:

  1. You need the pidgin-sipe package (Xubuntu 13.04)
  2. Username: my email address
  3. Login: username\domain
  4. Advanced:
     • Server: set this one
     • Connection Type: TCP
     • Authentication Scheme: NTLM

Of course, your OCS server might be configured differently. But perhaps this is a little helpful to someone. The way I obtained the server address was to run the netstat command on a Windows computer before and after starting and stopping the Office Communicator 2010 client.

Building Node.js on Debian ARM (old)

Update 20140130: I suggest you first have a look at my new article on the same topic.

I thought it was about time to extend my JavaScript curiosity to the server side and Node.js.

A first step was to install it on my web server, a QNAP TS-109 running Debian 6. I downloaded the latest version (v0.10.15), and did the usual:

$ ./configure
$ make

After a few hours:

../deps/v8/src/arm/macro-assembler-arm.cc:65:3: error: #error "For thumb inter-working we require an architecture which supports blx"

That is not the first time my TS-109 has been too old. However, the English translation of the above message is that you must have an ARM CPU v5 or later, and it has to have a ‘t’ in its name (at least, that is what the source says – see below). In my case:

$ uname -a
Linux kvaser 2.6.32-5-orion5x #1 Sat May 11 02:12:29 UTC 2013 armv5tel GNU/Linux

so I should be fine. I found a workaround, from which I extracted the essential part:

// We always generate arm code, never thumb code, even if V8 is compiled to
// thumb, so we require inter-working support
#if defined(__thumb__) && !defined(USE_THUMB_INTERWORK)
#error "flag -mthumb-interwork missing"
#endif

// ADD THESE THREE LINES TO macro-assembler-arm.cc

#if !defined(CAN_USE_THUMB_INSTRUCTIONS)
# define CAN_USE_THUMB_INSTRUCTIONS 1
#endif

// We do not support thumb inter-working with an arm architecture not supporting
// the blx instruction (below v5t).  If you know what CPU you are compiling for
// you can use -march=armv7 or similar.
#if defined(USE_THUMB_INTERWORK) && !defined(CAN_USE_THUMB_INSTRUCTIONS)
# error "For thumb inter-working we require an architecture which supports blx"
#endif

After adding the three lines, I just ran make again, and after a few more hours everything was fine. Next time I will try the -march or -mcpu option instead.

Inactive SSH Sessions Die

It happens that my SSH connections die after a period of inactivity. Sometimes after very long time, but sometimes quite quickly. I do not know if it has to do with routers, proxies, TCP/IP or the SSH server configuration, but I have found a remedy.

ssh -o ServerAliveInterval=10 ...your normal options

Note that this does not recover a connection lost due to network problems – it just makes sure the connection stays alive even if you don't use it for a while. There are other tools (autossh, I think) to make ssh reconnect.

This option is especially useful in combination with the -D flag; that is, when you use SSH as a SOCKS proxy. Then you typically do not use the remote console itself, just the tunnel that comes with it.
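
If you always want this behaviour, the option can also go into ~/.ssh/config instead of the command line (plain OpenSSH client configuration):

Host *
    ServerAliveInterval 10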

Ubuntu on Pentium M without PAE

Update 2014-05-25: I have now tested the new forcepae feature in 14.04 (new post) myself and it works just fine.

Update 2014-04-27: Ubuntu 14.04 has been released and it works almost “out of the box” for Pentium M CPUs that have PAE but fail to advertise it properly. This seems very relevant now that XP is out of support and many owners of fine laptops will try to find an alternative to Windows XP. If this is your situation and you are completely new to Linux/Ubuntu, I suggest you give Xubuntu a try first, and if you find it too heavy or slow, try Lubuntu. You don't have to install it to try it – you can run it “Live” from a DVD/USB memory – but it will run faster and better if you install it on your hard drive. Finally, the PAE issue: the instructions are found in the section “Installing on Pentium M laptop (with forcepae)” of the official Ubuntu PAE page. The few lines relevant to you are:

The ISO image will fail to boot (“This kernel requires the following features not present on the CPU: pae.”). If a few lines above this text there is a warning “WARNING: PAE disabled. Use parameter ‘forcepae’ to enable at your own risk!”, then you can boot by pressing Tab at the boot screen and appending the kernel parameter “forcepae” after the “--”.
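
For illustration, the edited boot line could end up looking something like this (everything before the “--” varies between images and is not something I have verified; only the appended forcepae matters):

... quiet splash -- forcepae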

You should not need to read or understand anything else from that page. I do not have any such hardware available at the moment, so I can not write a guide myself. You are welcome to comment about your success/failure here.

Update 2014-03-13: It seems some progress has been made and this should not be a problem with Ubuntu 14.04 (which should be on the way soon). There are already daily builds of Lubuntu, Xubuntu and Ubuntu for the upcoming 14.04 that are supposed to work (unclear about Kubuntu and the others). I have not tested this myself yet – currently no such hardware available. The boot is supposed to fail with a message like “kernel flag forcepae is required”, and when adding that flag at boot time things should proceed normally.

Update 2013-12-26: I successfully upgraded 13.04 with FakePAE installed to 13.10. No warnings, no errors, no problems, no hacks. And FakePAE still hacks /proc/cpuinfo correctly.

This may be relevant to you if you want to install (or upgrade to) Ubuntu 12.10 or 13.04 on a computer that reportedly lacks PAE support. It only applies to x86/32-bit AMD/Intel CPUs and Ubuntu versions.

A little background first. 32-bit computers can handle up to 2GB of RAM perfectly; in principle, if you have up to 2GB you need neither PAE nor a 64-bit CPU/OS. If you have more than 2GB of RAM, but not more than 4GB, you may benefit somewhat from PAE or a 64-bit CPU/OS. If you have more than 4GB of RAM, you should have a 64-bit CPU/OS – or, if your CPU does not support that, use PAE to make use of more than 4GB of RAM.

PAE makes the CPU/OS handle 64GB of RAM instead of 4GB (the normal 32-bit limit), but applications still only see a maximum of 4GB each (in practice, most often not more than 2GB).

Until 12.04 this was not an issue for Ubuntu users. There were both a PAE and a non-PAE 32-bit kernel available for x86 Ubuntu, and a separate 64-bit version of Ubuntu. But after 12.04 there is no longer a non-PAE kernel. This means that if you have an x86 CPU that lacks PAE support, you cannot run Ubuntu beyond 12.04 (unless you find/build a non-standard kernel).

However, there are CPUs – especially Pentium M CPUs – that have perfect PAE support but do not advertise it properly, and Ubuntu refuses to even try the PAE kernel on them (although it works perfectly).

Now, if you are still reading and you think this is relevant information to you, you should first read this article. It is written by the people who know stuff and it tells you what to do.

The rest of my article will describe some details about my experiences when I tried to upgrade a Pentium M computer from 12.04 to 12.10.

First, this is my CPU (as reported under the 12.04 non-PAE kernel). It denies PAE support (pae is not present under flags).

$ cat /proc/cpuinfo 
processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 9
model name	: Intel(R) Pentium(R) M processor 1400MHz
stepping	: 5
microcode	: 0x5
cpu MHz		: 600.000
cache size	: 1024 KB
fdiv_bug	: no
hlt_bug		: no
f00f_bug	: no
coma_bug	: no
fpu		: yes
fpu_exception	: yes
cpuid level	: 2
wp		: yes
flags		: fpu vme de pse tsc msr mce cx8 mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 tm pbe up bts est tm2
bogomips	: 1194.24
clflush size	: 64
cache_alignment	: 64
address sizes	: 32 bits physical, 32 bits virtual
power management:

I only have 768MB of RAM – I don't really need PAE, but I have to have it to run Ubuntu 12.10 or 13.04.

I first tried to install 13.04 from scratch – the installation immediately failed because of my (falsely reported) lack of PAE. So I decided (knowing that I would probably run into some problems) to upgrade from 12.04 to 12.10. The upgrade was mostly fine, but the kernel was not upgraded from 3.2 to 3.5, not all packages were completely set up, and there were broken dependencies. Still, the computer was working.

So I tried to install fake-pae using the method in the article. However, that did not really work – probably because of my broken dependencies, apt-get did not want to do anything before the problems were fixed. So I downloaded the fake-pae package manually, unpacked it, and had a look at it. The magic trick is (as root):

# cat /proc/cpuinfo | sed 's/flags\t*:/& pae/' > /tmp/cpuinfo_pae
# mount -o bind /tmp/cpuinfo_pae /proc/cpuinfo
# mount -o remount,ro,bind /proc/cpuinfo
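
A quick check that the fake flag is now visible where apt and the installer will look for it:

# grep pae /proc/cpuinfo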

You can run this yourself (without installing/downloading fake-pae). I did, and then the kernel upgrade finally worked, the dependency problems were solved, and both Ubuntu and I were happy again.

Now, with the dependency problems solved, fake-pae installed perfectly, and this is my CPU (now running 12.10):

$ cat /proc/cpuinfo 
processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 9
model name	: Intel(R) Pentium(R) M processor 1400MHz
stepping	: 5
microcode	: 0x5
cpu MHz		: 600.000
cache size	: 1024 KB
fdiv_bug	: no
hlt_bug		: no
f00f_bug	: no
coma_bug	: no
fpu		: yes
fpu_exception	: yes
cpuid level	: 2
wp		: yes
flags		: pae fpu vme de pse tsc msr mce cx8 sep mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 tm pbe up bts est tm2
bogomips	: 1194.22
clflush size	: 64
cache_alignment	: 64
address sizes	: 36 bits physical, 32 bits virtual
power management:

Notice that address sizes is now 36 bits physical – this is real information from the PAE kernel. Also note the pae flag, which is the result of fake-pae.

You should keep fake-pae installed, and hopefully you will be able to upgrade your computer in the future.

A few final words
I find this design choice of Ubuntu odd. I mean, computers with more than 4GB of RAM should use the 64-bit version of Ubuntu anyway. And seriously, how many 32-bit systems have more than 4GB of RAM anyway? I think it would make more sense not to support PAE (or more than 4GB of RAM) on x86 than to abandon non-PAE CPUs. But probably there are companies (Ubuntu customers) out there running modern 64-bit i7 CPUs in x86 PAE mode rather than in x64 mode, for reasons I do not understand. Anyway, since this problem applies to a lot of Pentium M CPUs, the PAE check probably makes more Pentium M upgrades fail than it protects Pentium Pro and earlier systems.