Alexander D. Chamberlain

 MMath

Hi! I'm Alex Chamberlain, a recent Mathematics graduate from The University of Warwick. Whilst at uni, I learnt to program in C/C++ and developed my knowledge at Bloomberg over two internships; I'll be starting there in November as a Financial Software Developer.

In my spare time, I enjoy playing canoe polo, kayaking and canoeing more generally, and volunteering within the Scout Association. Formally, I hold the role of Scout Sectional Assistant, but I have helped organise several county-level events and will continue to do so.

Wouldn't it be nice if we could cache across domains?

You probably already know that caching content in the browser can make accessing your site faster. Great! But... this only helps for repeat users. They still have to download all the files the first time.

This is obvious though, right? How could a browser possibly know what your files contain before it has ever accessed your site?

Well, it can. There has been a recent trend of using public CDNs, such as Google Hosted Libraries, Google Web Fonts and cdnjs. They allow you to link to an external address, such as http://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js. When requested, the server sends the appropriate caching headers, so that browsers will reuse the file for whichever websites request it. This speeds up the web for your users and reduces the amount of data shifted around the internet for everyone.

However, this implies a lot of trust between you, the webmaster, and the CDN owners. Personally, I trust Google not to insert malicious code, as it would severely affect its reputation, but you may not. Furthermore, some countries limit requests to certain domains, so you could inadvertently be limiting your audience. Finally, whether they are doing it or not - and I am not claiming they are - you are giving the CDNs the opportunity to track your users.

What am I proposing?

We should add a new attribute to <link> and <script> tags, called sha256. Its value should be the hex-encoded SHA-256 hash of the file you wish to use. If the browser already has a file in its cache with the same hash, it should use that; otherwise, it should request the file from the server and validate the hash.
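For illustration, here's a minimal sketch of how you might generate the value for the proposed attribute, using Python 3's standard hashlib module (the attribute itself is just the proposal above - browsers don't support it):

#!/usr/bin/python3
# Sketch: print the hex-encoded SHA-256 of a file, i.e. the value you would
# put in the proposed attribute, e.g.
#   <script src="http://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js"
#           sha256="..."></script>
import hashlib
import sys

def sha256_hex(path):
  h = hashlib.sha256()
  with open(path, 'rb') as f:
    for chunk in iter(lambda: f.read(8192), b''):
      h.update(chunk)
  return h.hexdigest()

if __name__ == "__main__":
  print(sha256_hex(sys.argv[1]))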

We still speed up the web by caching, but:

  1. We don't need to trust 3rd parties.
  2. We distribute the responsibility for serving these common files to everyone, rather than just relying on some very large players. This removes the single point of failure that is Google CDN.
  3. We reduce the number of requests, as the cache remains valid for as long as the hash matches. It's ETags on steroids... sort of.
  4. We may be improving the security of caches, as browsers can validate the contents of the cache before running it, making it that much harder for malware to sabotage your site.

How do you deliver statically compressed files using Nginx?

If you are serving static files, but not compressing them, you are being irresponsible with your bandwidth. If you are serving static files, but compressing them on the fly, you are being irresponsible with your CPU. So, how do you deliver statically compressed files using nginx?

(If you don't use nginx, you probably should... you might know better, of course.)

It's really simple and consists of just 2 steps:

  1. Compress files
  2. Configure nginx

1. Compress files

You could use gzip, if your version has a flag to keep the original file (newer versions of GNU gzip have -k/--keep). Mine didn't...

I whipped up a quick Python script to do it for me.

#!/usr/bin/python3
# Usage: gzip.py <file>
import gzip
import sys

def gzip_file(path):
  # Write a gzip-compressed copy of the file alongside the original.
  with open(path, 'rb') as f_in:
    with gzip.open('{}.gz'.format(path), 'wb') as f_out:
      f_out.writelines(f_in)

if __name__ == "__main__":
  gzip_file(sys.argv[1])

2. Configure nginx

Configuring nginx is really easy, but ngx_http_gzip_static_module must be enabled on compilation; this can be done with the --with-http_gzip_static_module option. Just add the following lines to your http/server/location block in nginx.conf (or an included file).

gzip_static on; 
gzip_http_version   1.1;
gzip_proxied        expired no-cache no-store private auth;
gzip_disable        "MSIE [1-6]\.";
gzip_vary           on; 

You can find more details on the Nginx Wiki. Once the configuration has been reloaded, nginx will automatically serve the compressed files (a quick check is sketched after the list), as long as:

  1. You have created a compressed file in the same folder, with the same name plus a .gz suffix.
  2. The user-agent supports gzip compression.
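That quick check: request a file with an Accept-Encoding: gzip header and see whether the Content-Encoding response header comes back as gzip. A minimal sketch in Python 3 (the URL is a placeholder - substitute one of your own static files):

#!/usr/bin/python3
# Sketch: check whether a URL is served gzip-compressed.
import urllib.request

req = urllib.request.Request('http://example.com/css/style.css',
                             headers={'Accept-Encoding': 'gzip'})
resp = urllib.request.urlopen(req)
# Expect 'gzip' (assuming on-the-fly gzip isn't also enabled).
print(resp.headers.get('Content-Encoding'))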

Notes

I deploy this blog using git and githooks on the server. When my post-receive hook has built the site, it then compresses every file.
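The compress-everything step is nothing clever; something along these lines does the job (the _site directory is just a placeholder for wherever your build output ends up):

#!/usr/bin/python3
# Sketch: gzip every file under the built site, keeping the originals.
# '_site' is a placeholder; point it at your build output.
import gzip
import os

for root, dirs, files in os.walk('_site'):
  for name in files:
    if name.endswith('.gz'):
      continue  # skip files that are already compressed copies
    path = os.path.join(root, name)
    with open(path, 'rb') as f_in:
      with gzip.open(path + '.gz', 'wb') as f_out:
        f_out.writelines(f_in)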

Arguement should be spelt arguement.

I recently graduated with a degree in Mathematics - I'm still proud - which clearly means I can't string a sentence together, let alone spell. It also means I like simple rules.

According to the Oxford [English] Dictionary, English has a simple rule:

The ... rule [is] that the final silent e [of a word] is kept when adding endings that begin with a consonant.

But argument ignores that rule; how rude!

TIME World recently published an article about misspelling. Ken Smith, a senior lecturer in criminology at Bucks New University, is so fed up with correcting freshers' spelling that he believes we should start accepting minor variant spellings, rather than just marking them as wrong. So, I will be using the variant spelling arguement from now on.

Setting Up an Arch Linux server on the Rackspace UK Cloud

I like Arch Linux. I have a bit of experience with Rackspace's UK cloud servers. So, I wanted to combine the two!

Easy, right? Rackspace provides an "Arch 2011.10" image; just run pacman -Syu to update it. Unfortunately not... The image is too old and numerous errors occur, including all those relating to the glibc changes.

I reached out to Rackspace support and they gave me the following list of commands to run. They worked! Well done, Rackspace... Please can you update the image now?

Of course, these should be run as root.

pacman -Sy
rm -rf /var/run/ /var/lock && pacman -Sf filesystem && init 6 #Say no to the pacman upgrade
pacman -S tzdata haveged #Say no to the pacman upgrade
pacman -U http://pkgbuild.com/~allan/glibc-2.16.0-1-x86_64.pkg.tar.xz
rm /etc/profile.d/locale.sh
pacman -S pacman
haveged -w 1024 && pacman-key --init && pkill haveged && pacman-key --populate archlinux
pacman -Rs haveged
rm -rf /lib/modules/
pacman -Rns kernel26-xen xe-guest-utilities
pacman -Su --ignore glibc
init 6
pacman -Syu base-devel
wget http://aur.archlinux.org/packages/xe-guest-utilities/xe-guest-utilities.tar.gz
tar xzvf xe-guest-utilities.tar.gz
cd xe-guest-utilities
makepkg -si --asroot

How Do I Run My Native Pacman Against A Mounted Image?

I originally posted this answer to How do I run my native pacman against a mounted image? on Unix & Linux.SE.

So, you have read How Do I Update, Upgrade And Install Software Before Flashing An Image? and were wondering whether you could use the native pacman against an ARM image instead of using an emulated version?

It turns out you can and it's not too hard. Make sure you have followed the instructions on How do I update, upgrade and install software before flashing an image? carefully and you have qemu-user-static installed correctly on the mounted system.

pacman.conf

The /etc/pacman.conf file controls pacman, and normally, we wouldn't need to edit it. However, there is a problem with the supplied pacman.conf when used in this way. It includes the directive

Include = /etc/pacman.d/mirrorlist

Unfortunately, this picks up the mirror list from your host system, which probably won't mirror ARM packages. Copy /etc/pacman.conf from your mount to an appropriate directory and replace that line with

Server = http://mirror.archlinuxarm.org/arm/$repo

You can find my adapted pacman.conf on GitHub.

Running pacman

You can now run pacman. Assuming your config file is in your pwd, run

sudo pacman -r <mount-point> --config pacman.conf -Syu

References

  1. GitHub project, forked from @Jivings' GitHub project.

Raspberry Pi Tutorial: How Do I Update, Upgrade And Install Software Before Flashing An Image?

I originally posted this answer to Is it possible to update, upgrade and install software before flashing an image? on Raspberry Pi.SE.

It seems silly to use our limited SD write cycles to upgrade the software shipped on the images. So, how can we upgrade said software before flashing the image to an SD card?

The Hard Way

Preparing your system - Debian/Ubuntu

I know this doesn't work on Ubuntu 10.04 LTS, because some of the packages are too old.

Ensure your own system is up to date.

$ sudo apt-get update
$ sudo apt-get upgrade

Install some new software

$ sudo apt-get install binfmt-support qemu qemu-user-static unzip

qemu is an ARM emulator, and qemu-user-static and binfmt-support allow us to run ARM executables without emulating the ARM kernel. (How cool is that!?!)

Preparing your system - Arch

I can't find a statically linked qemu in the Arch repositories, so we will have to compile from source.

  1. Download the latest release from http://git.savannah.gnu.org/cgit/qemu.git
  2. Unzip and run

    ./configure --disable-kvm --target-list=arm-linux-user --static

  3. Build using make and install using sudo make install.

  4. Run the following as root

    echo ':arm:M::\x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/local/bin/qemu-arm:' > /proc/sys/fs/binfmt_misc/register

    echo ':armeb:M::\x7fELF\x01\x02\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff:/usr/local/bin/qemu-armeb:' > /proc/sys/fs/binfmt_misc/register

Warning: You shouldn't run arbitrary commands you find online as root - these were taken from qemu-binfmt-conf.sh under the ARM CPU type. Please extract the commands from that file and run those.

Download and unzip the image

Go to raspberrypi.org and download the image you want. Unzip it and save the .img file somewhere useful.

$ sudo mkdir -p /images/debian-squeeze
$ sudo wget "http://files.velocix.com/c1410/images/debian/6/debian6-19-04-2012/debian6-19-04-2012.zip" -O "/images/debian-squeeze.zip"
$ sudo unzip "/images/debian-squeeze.zip" -d /images/debian-squeeze
$ sudo rm /images/debian-squeeze.zip

Find the correct partition

The .img file contains 3 partitions, including the boot partition.

$ cd /images/debian-squeeze/debian6-19-04-2012/
$ fdisk -lu debian6-19-04-2012.img
Disk debian6-19-04-2012.img: 1949 MB, 1949999616 bytes
4 heads, 32 sectors/track, 29754 cylinders, total 3808593 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000ee283

                 Device Boot      Start         End      Blocks   Id  System
debian6-19-04-2012.img1            2048      155647       76800    c  W95 FAT32 (LBA)
debian6-19-04-2012.img2          157696     3414015     1628160   83  Linux
debian6-19-04-2012.img3         3416064     3807231      195584   82  Linux swap / Solaris

We need to know the offset of the Linux partition, which in this case starts at sector 157696, and of the boot partition, which starts at sector 2048. Each sector is 512 bytes, so the root offset is 157696*512 = 80740352 bytes and the boot offset is 2048*512 = 1048576 bytes.
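The arithmetic is trivial, but if you'd rather script it, something like this (sector numbers taken from the fdisk output above) does the conversion:

#!/usr/bin/python3
# Sketch: convert fdisk start sectors into byte offsets for mount -o offset=...
SECTOR_SIZE = 512           # logical sector size from the fdisk output

boot_start_sector = 2048    # debian6-19-04-2012.img1 (boot)
root_start_sector = 157696  # debian6-19-04-2012.img2 (root)

print('boot offset:', boot_start_sector * SECTOR_SIZE)  # 1048576
print('root offset:', root_start_sector * SECTOR_SIZE)  # 80740352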

Mount the image as a loopback device

Next, we need to mount the image as a file system. This can be done using a loopback device. We use the offsets from the previous section to tell mount which partitions to mount and where. The order of these commands is important.

$ sudo mount -o loop,offset=80740352 "/images/debian-squeeze/debian6-19-04-2012/debian6-19-04-2012.img" /mnt
$ sudo mount -o loop,offset=1048576 "/images/debian-squeeze/debian6-19-04-2012/debian6-19-04-2012.img" /mnt/boot

Preparing the filesystem

We're nearly ready to chroot into our file system and start installing new software. First, we must install the emulator into our image, as it won't be available once we use chroot.

Debian/Ubuntu
$ sudo cp /usr/bin/qemu-arm-static /mnt/usr/bin/
Arch Linux
$ sudo cp /usr/local/bin/qemu-arm /mnt/usr/local/bin/
All host systems

We also need to provide access to certain other parts of the system.

$ sudo mount --rbind /dev     /mnt/dev
$ sudo mount -t proc none     /mnt/proc
$ sudo mount -o bind /sys     /mnt/sys

chroot

We are done! chroot away...

$ sudo chroot /mnt

You are now in your Raspberry Pi, but the services aren't running, etc. Be careful: you are root!

Update/Install software - Debian Image

To update the software, we use apt-get.

 # apt-get update
 # apt-get upgrade

You can also install software using apt-get install as per usual.

Update/Install software - Arch Image

To update the software, we use pacman.

 # pacman -Syu

You can also install software using pacman -S as per usual.

NOTE You can run pacman natively by following the instructions on How Do I Run My Native Pacman Against A Mounted Image?.

Exiting

You can exit the chroot by using Ctrl+D and unmount the system by running sudo umount /mnt - you will have to unmount each mount point separately, innermost first, finishing with /mnt itself.

You should remove qemu-arm-static from /usr/bin (or qemu-arm from /usr/local/bin) in the image; then it is ready to be flashed.

Final Words

This is a little long and tedious, but do it once and you'll learn loads about how this all works!

The Easy Way - piimg

I've started work on a utility for doing a lot of this for you. It is called piimg and can be found at github.com/alexchamberlain/piimg.

So far, it can mount the SD card for you by running

piimg mount /images/debian-squeeze/debian6-19-04-2012/debian6-19-04-2012.img /mnt

and unmount them again by running

piimg umount /mnt

You just need to install qemu and chroot away.

References
  1. Running ARM Linux on your desktop PC: The foreign chroot way