CrunchBang 11 on Macbook 2,1

I finally updated my Arch installation after 9 months and it was badly broken by the grub2 upgrade and the shuffling around of the bin directories. Arch had taught me a lot, but after surviving several large migrations I was tiring of having to repair my machine when sometimes I just wanted to work.

I’ve now moved to CrunchBang, booting from grub2 in BIOS legacy mode. If anyone else is having trouble getting it to boot after the installation, reboot into the live CD and run gptsync, which creates a GPT/MBR hybrid boot record.

Tivic Windrunner – Breakdown

I recently purchased a Tivic Windrunner to learn about embedded Linux platforms and have spent the last few weekends pulling apart its firmware images in the hope of producing a SnakeOS firmware image compatible with the Tivic’s update service. In the following post I will discuss the methodology used to reverse engineer the Windrunner’s firmware format and to replicate the automatic update server locally, allowing us to host our own custom firmware updates for the Windrunner to download.


Traffic Sniffing

Rather than letting the Tivic loose on the net where it could do who knows what, I connected it directly to my laptop’s ethernet port and set up a VM to host a DNS and DHCP server. By limiting the Tivic to a tightly controlled network I could control the Windrunner’s IP address and the results of its DNS requests, whilst capturing all the traffic originating from it.
Dnsmasq was my choice for the DNS and DHCP server as it is lightweight and easy to configure; my configuration files are available on my github here.
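For reference, a minimal dnsmasq configuration along these lines might look like the following (the interface name, address range and update hostname here are placeholders, not the values from my actual setup; see the github repo for the real files):

```
# Serve DHCP and DNS only on the wired port the Tivic is plugged into
interface=eth0
dhcp-range=192.168.10.50,192.168.10.100,12h

# Answer the device's update-server lookup with this laptop's address
address=/update.example.invalid/192.168.10.1

# Log every query so we can see exactly what the device asks for
log-queries
```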

With wireshark running I discovered that after the Windrunner is allocated an IP address it immediately attempts to phone home. The Windrunner asks the DNS server configured via DHCP to resolve the address. It is also worth noting that if DHCP doesn’t allocate a local DNS server, the Windrunner will send its DNS requests to either or

Once it has resolved, it makes an HTTP POST request to with the device’s MAC address as the POST payload, in the form MAC_ADDR=002XXXXXXXXX

The wireshark logs also revealed that the Windrunner registers a number of Microsoft shared folders upon connection to the local network.

Intercepting Requests

I now had a starting point, but I would need to know how the Teltel servers would respond to such requests. Rather than letting the Windrunner loose online, I started to proxy the requests between the Windrunner and the real servers. I configured dnsmasq to resolve the DNS requests to my local machine, which would accept the requests from the Windrunner and then return slightly modified responses I had collected manually from the Tivic servers.
This way I could control what the Windrunner would get from the remote service, as I particularly didn’t want the Windrunner to upgrade its firmware and potentially lock me out, or to stop looking for online updates.
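A rough sketch of that interceptor idea in Python (the port, handler path and canned XML body here are placeholders; the real responses were slightly modified copies captured from the Tivic servers):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder body; in practice this was an edited copy of a real
# Teltel reply, tweaked so the device never sees newer firmware.
CANNED_RESPONSE = b"<?xml version='1.0' standalone='yes'?><config/>"

def parse_mac(payload: bytes) -> str:
    """Pull the MAC out of a 'MAC_ADDR=002XXXXXXXXX' style payload."""
    return payload.decode().split("=", 1)[1]

class InterceptHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print("Windrunner checked in, MAC:", parse_mac(body))
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", str(len(CANNED_RESPONSE)))
        self.end_headers()
        self.wfile.write(CANNED_RESPONSE)

if __name__ == "__main__":
    # dnsmasq resolves the update hostname to this machine, so the
    # device's phone-home POST lands here instead of at Teltel.
    HTTPServer(("0.0.0.0", 80), InterceptHandler).serve_forever()
```

Binding port 80 needs root privileges; a high port plus a redirect rule works just as well for testing.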

The Windrunner’s interaction with the remote service is as follows:

Tivic -> Teltel


Tivic <- Teltel

<?xml version='1.0' standalone='yes'?>



Tivic -> Teltel


Tivic <- Teltel

Tivic -> Teltel


This revealed a number of things:
1. The URL where I could get a copy of the firmware blob.
2. How the Windrunner is configured, specifically the auto update interval and the latest firmware image file.

You can follow the firmware URL above and browse the different firmware images provided. I downloaded the one specified for the Windrunner and started to pull it apart to discover its composition.

Pulling apart the Firmware

The firmware for the Tivic is distributed as a binary blob with the unusual extension .ba. The unix tool ‘file’ doesn’t return any more information either. The tool ‘strings’, however, returns a lot more interesting bits and pieces.

# routine #######################################################
# !cj! - killall process and confirm
   local PROC_PID counter
      PROC_PID=`$BUSYBOX pidof $1`

Looks as though there are some file names, strings and even shell scripts included in the firmware image. Using the hex viewer xxd we can see that the firmware image starts with a magic string followed by some header data before a shell script starts.

0000000: 2174 656c 7465 6c2d 6468 7321 be37 2a24  !teltel-dhs!.7*$
0000010: 0000 0000 0310 c013 0000 0a64 2f00 0006  ...........d/...
0000020: 70ca 0000 0670 bf0f 0006 0020 3800 7374  p....p..... 8.st
0000030: 6172 745f 696e 7374 616c 6c2e 7368 2321  art_install.sh#!
0000040: 2f62 696e 2f73 680a 0a23 2072 6f75 7469  /bin/sh..# routi
0000050: 6e65 2023 2323 2323 2323 2323 2323 2323  ne #############
0000060: 2323 2323 2323 2323 2323 2323 2323 2323  ################
0000070: 2323 2323 2323 2323 2323 2323 2323 2323  ################
0000080: 2323 2323 2323 2323 2323 0a0a 2320 2163  ##########..# !c
0000090: 6a21 202d 206b 696c 6c61 6c6c 2070 726f  j! - killall pro
00000a0: 6365 7373 2061 6e64 2063 6f6e 6669 726d  cess and confirm
00000b0: 0a6b 696c 6c5f 7072 6f63 2829 0a7b 0a20  .kill_proc().{.
00000c0: 2020 6c6f 6361 6c20 5052 4f43 5f50 4944    local PROC_PID
00000d0: 2063 6f75 6e74 6572 0a20 2020 5052 4f43   counter.   PROC

Further examination using xxd reveals that there are in fact five file names included in the firmware image, namely , fw_install, Config, Kernel and Rootfs. The first two are probably used to perform the update and the last three are the target images.

The format of the file seems to be
The header more than likely contains checksum, versioning and file sizes for the contents of the firmware image. By doing some manual calculations we find that the header’s format is
[magic word][checksum?][version?][filename size][file size]…

At this point I was still unsure what the bytes immediately after the magic header represented. It wasn’t until I started to craft my own firmware images, using a serial connection to the board, that I received errors about bad checksums. At that point I reverse engineered an executable from the RootFS to discover how they were calculating the checksum: they simply sum the entire contents of the file bytewise. After doing this myself it was obvious that the integer immediately after the magic header was the checksum. I am still unsure what the next portion represents, but it is possibly a version, or even a count of how many files are in the firmware image after the installation elf and script.
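A sketch of reading that header in Python, under the guessed layout above (the magic string is straight from the hex dump, but the field widths, byte order and number of entries are my assumptions, not confirmed facts; check them against a real image):

```python
import struct

MAGIC = b"!teltel-dhs!"

def bytewise_sum(data: bytes) -> int:
    # The checksum the installer appears to verify: add up every
    # byte of the payload and keep the result to 32 bits.
    return sum(data) & 0xFFFFFFFF

def parse_header(blob: bytes, entries: int = 5):
    """Decode [magic][checksum?][version?] followed by `entries`
    pairs of [filename size][file size] (layout guessed from the
    manual calculations, little-endian assumed)."""
    if not blob.startswith(MAGIC):
        raise ValueError("not a Tivic firmware image")
    off = len(MAGIC)
    checksum, version = struct.unpack_from("<II", blob, off)
    off += 8
    sizes = []
    for _ in range(entries):
        name_len, file_len = struct.unpack_from("<II", blob, off)
        off += 8
        sizes.append((name_len, file_len))
    return checksum, version, sizes
```

Crafting a test image, flashing it, and watching the serial console for checksum complaints is how I validated guesses like these.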

I have created two python scripts which can be used to pack and unpack Tivic firmware images. Find them on my github as and

Examining the pieces

After reverse engineering the file format and creating some tools to pull the firmware blobs apart I could now start to examine the pieces.


I mounted the config partition using the following commands and dumped the file tree below.

modprobe mtdblock
modprobe jffs2
modprobe mtdram total_size=20000
cat /proc/mtd
dev:    size   erasesize  name
mtd0: 01388000 00020000 "mtdram test device"
dd if=Config of=/dev/mtdblock0
mount -t jffs2 /dev/mtdblock0 mnt


cron/  etc/  home/  root/



fstab           init.d/    ppp/       services
fw_env.config*  inittab     profile    TZ
group           issue            nsswitch.conf  rcS        version
host.conf*  passwd         securetty  vsftpd.conf

./etc/init.d:
rc@  rc.reboot*  rcS*  S10sys*  S20crond*  S40insmod*  S50mknod*  S99rc*



config/  plugins/

button.config@  kbs.config  led.config@  sys.conf  upnp.conf@



file Kernel
Kernel: u-boot legacy uImage, Linux Kernel Image, Linux/ARM, OS Kernel Image (Not compressed), 1056768 bytes, Tue Feb  9 20:43:22 2010, Load Address: 0x00100000, Entry Point: 0x00100040, Header CRC: 0x4D1B3158, Data CRC: 0xAA76B039


Weird squashfs build that can be unpacked with firmware-mod-kit (on a side note, whilst compiling it I found an error and had my first patch accepted by an open source project). It nicely runs through a multitude of old and discontinued squashfs builds and finds which one is the correct version. This is necessary as squashfs lived outside the kernel for quite a while before it was accepted, and there are numerous random incarnations of the filesystem out in the wild. The Tivic is using squashfs-3.3-lzma.
The file system breakdown is below. Nothing too exciting.

bin/  dev/  etc@  flash/  home@  kbsconf/  lib/  linuxrc@  media/  mnt/  mnt2/  opt/  proc/  root@  sbin/  sys/  tmp@  usr/  var/

ash@      chgrp@  cp@    df@     egrep@  fw_printenv*  grep@      kill@   ls@     mktemp@  mv@       pidof@  pwd@    sed@    sync@   true@    usleep@  watch@
busybox*  chmod@  date@  dmesg@  false@  fw_setenv@    hostname@  ln@     mkdir@  more@    netstat@  ping@   rm@     sh@     tar@    umount@  vi@
cat@      chown@  dd@    echo@   fgrep@  getopt@       ip@        login@  mknod@  mount@   nice@     ps@     rmdir@  sleep@  touch@  uname@   vsntp*

pts/  root@  snd/  sound/  usb/






group  host.conf  init.d/  inittab  passwd  services  version


button.config  led.config

fiqtest.ko  modules/


build@   modules.alias      modules.ccwmap  modules.dep.bin      modules.inputmap   modules.ofmap   modules.seriomap  modules.symbols.bin  source@
kernel/  modules.alias.bin  modules.dep     modules.ieee1394map  modules.isapnpmap  modules.pcimap  modules.symbols   modules.usbmap

drivers/  fs/  lib/  net/

cdrom/  ide/  net/  star/


ide-cd.ko  ide-core.ko  ide-disk.ko  ide-generic.ko  pci/

generic.ko  it821x.ko  pdc202xx_new.ko

ppp_async.ko  ppp_deflate.ko  ppp_generic.ko  pppoe.ko  pppox.ko  ppp_synctty.ko  slhc.ko



ext2/  ext3/  fuse/  jbd/  lockd/  nfs/










media0/  media1/  media2/  media3/

part0/  part1/  part2/  part3/  part4/  part5/  part6/  part7/










part0/  part1/  part2/  part3/  part4/  part5/  part6/  part7/









part0/  part1/  part2/  part3/  part4/  part5/  part6/  part7/









part0/  part1/  part2/  part3/  part4/  part5/  part6/  part7/









hda1/  hda2/  usbdev/  usbdev1/  usbdev2/  usbdev3/  usbdev4/  usbdev5/











full_ver  version


buttonpoll*  fw_unpack*     halt@      insmod@   klogd@     lsmod@   ntfs-3g*     poweroff@  reboot@  route@   swapon@   vconfig@
cronop*      fw_unpack.idb  ifconfig@  kbs*      ledflash*  macktkey*  modprobe@  parseMD5*  pppd*    svsd*    syslogd@  udhcpc@  watchdog@
fdisk@       getty@         init@      killkbs*  logread@   msgqueue*  pivot_root@  pppoe*   rmmod@   swapoff@  upnpd*


bin/  lib/  local/  sbin/  share/

[@    basename@  crontab@  dos2unix@  expr@  ftpget@  head@     iperf*    md5sum@  readlink@  sort@     tail@  tftp@  tr@          uniq@      wc@     whoami@
[[@   clear@     cut@      du@        find@  ftpput@  hexdump@  killall@  nc@      reset@     strings@  tee@   time@  traceroute@  unix2dos@  wget@   xargs@
awk@  cmp@       dirname@  env@       free@  fuser@   id@       logger@   printf@  rz*        sz*       test@  top@   tty@         uptime@    which@  yes@


etc/  samba/

init.d/  samba/


dfree*  samba-init.d*  smb.conf  smbpasswd  ttsmb-init.d*

bin/  etc@  lib/  man/  sbin/  share/




nmbd*  smbd*



codepage.936  codepage.950  unicode_map.936  unicode_map.950  unicode_map.ISO8859-1

brctl*  chroot@  crond@  mtd_debug*  readprofile@  telnetd@



run/  tmp/



For those wanting to have a go at cracking the root password, here is the hash from /etc/passwd:


I had a john-the-ripper instance running for 5 weeks but had no luck cracking it. Since we can run code as root with carefully crafted firmware updates, you can however blank out the hash and log in.

Hardware & Baremetal

Under the bonnet

After looking at the software side of the Tivic I started to examine the hardware. Below is a picture of the unit with the top off; I have marked the logic-level serial lines on the 5 pin header. I never investigated the other two headers, but I assume they are likely for JTAG and flash programming.

Connecting to the serial port using a TTL->RS232 converter (MAX232 or equivalent) will give you access to the U-Boot bootloader and a getty login to the Linux system that the Tivic ships with. Nothing particularly exciting here, but it makes working with the device significantly easier as it allows the Tivic to provide feedback on your actions.


I haven't spent the time to document the entire BOM, but the processor is a CNS2132 and the flash is a 16MB NANYA part. If anyone would like to provide a more comprehensive list I am more than happy to post it here.
A side note about the CNS2132: it contains an ARMv4 core which in my experience is NOT well supported by the generic cross-compilers out there, and I had to spend a significant amount of time building my own toolchain. In a later post I will detail that section of the project.

Flash map

The Windrunner ships with the following 9 flash partitions:

0x00000000-0x01000000 : "U-Boot"
0x00020000-0x00140000 : "Kernel"
0x00140000-0x00640000 : "RootFS"
0x00640000-0x00ac0000 : "Config"
0x00ae0000-0x00be0000 : "Kimage"
0x00be0000-0x00ea0000 : "Rimage"
0x00ea0000-0x00f80000 : "Cimage"
0x00f80000-0x00fa0000 : "u-boot-env"
0x00fa0000-0x00fc0000 : "dev"

1. The U-Boot partition contains the boot loader; it spans the entire flash.
2. Kernel is where the Linux kernel lives - der.
3. Where the squashfs rootfs lives.
4. Where the jffs2 config lives.
5. I think this is where the kernel image is held during updates.
6. Where the rootfs image is held during updates.
7. Where the config image is held during updates.
8. U-Boot environment variables - these are very important as they tell the boot loader how to boot the machine.
9. I think these are U-Boot environment variables used during development; I have no idea how to boot using these, however.
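Parsing the map makes it easier to sanity-check against the firmware pieces. A quick sketch using the table above (note the overlap is deliberate, since the U-Boot entry spans the whole 16MB flash, and that nothing in the table covers 0x00ac0000-0x00ae0000):

```python
# The partition table above, as printed by /proc/mtd-style output.
FLASH_MAP = """\
0x00000000-0x01000000 : "U-Boot"
0x00020000-0x00140000 : "Kernel"
0x00140000-0x00640000 : "RootFS"
0x00640000-0x00ac0000 : "Config"
0x00ae0000-0x00be0000 : "Kimage"
0x00be0000-0x00ea0000 : "Rimage"
0x00ea0000-0x00f80000 : "Cimage"
0x00f80000-0x00fa0000 : "u-boot-env"
0x00fa0000-0x00fc0000 : "dev"
"""

def partition_sizes(mtd_text: str) -> dict:
    """Map partition name -> size in bytes from 'start-end : "name"' lines."""
    sizes = {}
    for line in mtd_text.splitlines():
        span, name = line.split(" : ")
        start, end = (int(x, 16) for x in span.split("-"))
        sizes[name.strip('"')] = end - start
    return sizes

for name, size in partition_sizes(FLASH_MAP).items():
    print(f"{name:<10} {size // 1024:>6} KB")
```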


This concludes the breakdown of the CNS2132-based Tivic Windrunner. It has proven an interesting project and I hope the information provided here will help others get the most out of their Tivic. From here I have built my own toolchain, kernel and rootfs to replace the current Tivic firmware, and will provide details on that in a later post. Enjoy ~


I must remind anyone reading this breakdown that opening your Tivic Windrunner probably voids the warranty, and that making changes to the flash or bootloader, or sending your own firmware images to the device, can result in bricking it. I take no responsibility for any of the trouble you manage to get yourself into when playing around with this wonderful board.

Luakit – The beautiful browser

It’s not often you come across a piece of software that is truly amazing; today I found Luakit and it blew my mind.

I use vim at work and at home all the time to do my text editing, and have played around with a few vim-ish plugins/extensions/browsers before but have always found them lacking in some respects. Luakit, however, feels as if it was written for me; without even reading a guide an intermediate vim’er can effortlessly glide across pages with only the use of a keyboard.

If you plan to give luakit a go, here are some helpful hints:

  • The default configuration is stored in /etc/xdg/luakit/ and should be copied to ~/.config/luakit/
  • o to open, t to open in a new tab
  • when opening a page or a new tab, typing will search; leading with a search engine’s name will search using that service, e.g. :open wikipedia lua will search for lua on wikipedia
  • pressing ‘f’ will highlight all the links on a page and allow you to follow them by typing the corresponding number.
  • Quickmarks – a common feature amongst minimal browsers that allows you to assign pages to certain keys. To assign one, type ‘M’ followed by the key you wish to map the page to. Opening a quickmark is as easy as go[MARK] to open in the current tab and gn[MARK] for a new tab.

Here are the obligatory screen shots of luakit in action





CATCH.hpp – Easy Unit Testing

After coding Repost for months on multiple platforms I ended up with some fairly decent and lightweight logging, synchronisation and data classes. When dabbling in some new code I decided it would be nice to have the more generic parts of repostlib available, so I created a new library to support how I like to code; it is called rplib.

I would hardly describe myself as a pedantic coder; sometimes I can be outright sloppy when trying to find a solution. But when writing core sections of code that will undoubtedly be used time and time again, I really like to unit test. In the past I have only really used CuTest, which was fairly solid, but it is for testing C and I had a C++ library to test.

When searching for a new test framework I came across CATCH.hpp. CATCH (which stands for C++ Adaptive Test Cases in Headers) is a unit testing framework in a single header that you can immediately start writing and running unit tests with. It has no dependencies besides basic STL and all the tests you write are self registering which keeps your test code tidy and succinct.

The workflow I have found to be successful with CATCH.hpp is to create a new test folder to house the unit tests and, for each module, create a new cpp file to house the appropriate tests.
A test written for CATCH.hpp generally looks like:

    TEST_CASE("module/name", "Description of what is being tested") {
        /* Test code */
        REQUIRE(outcome == the_right_outcome);
    }

To bind all the tests together, a single file must include the define CATCH_CONFIG_MAIN before including catch.hpp.

It is in this file that CATCH.hpp will place its main function and the magic that makes the rest of the tests happen.

I did discover, however, that it was undocumented how to write your own version of the main function and subsequently call CATCH.hpp’s initialisation stages. This was particularly annoying as I was testing a library which required special initialisation steps before use, and I couldn’t explicitly call the necessary functions before the tests began.
I solved this problem in my test.cpp file, where I included the CATCH.hpp main define, by creating a test initialiser class, putting the required initialisation steps in its constructor, and then creating a global instance of the class. This ensured that the library would be initialised before any of the test code was called.

#include "catch.hpp"

** Initialisation of Library
class TestInitialiser
       /* Initialise Library here */

TestInitialiser now;

Compiling your tests creates a nice little command-line utility that allows you to select which tests to run, how to report the results and which results to report. The basic options are as follows.

testapp is a CATCH host application. Options are as follows:

    -l, --list <tests | reporters> [xml]
    -t, --test <testspec> [<testspec>...]
    -r, --reporter <reporter name>
    -o, --out <file name>|<%stream name>
    -s, --success
    -b, --break
    -n, --name <name>

This is great as I use buildbots to continually integrate code across all my platforms, and having a neat way to run the tests from the command line makes automation easy.

CATCH.hpp is definitely an impressive unit testing library. It is perhaps not as full featured as more mature libraries, but the rate at which you can blast out tests and integrate it with existing projects is remarkable. It is definitely part of my standard kit now.

So thanks to Philsquared for making this project openly available; it’s awesome.

Arch on Macbook 2,1


After much procrastination I finally decided to make the switch from OSX to a linux distro. A friend suggested Arch and I thought it’d be a nice change from the debian-ish distros I normally use. A few days later, here are some details on the stumbling blocks I came across when switching from the OSX/debian lifestyle to the light-weight, fast paced world of Arch.

Backing Up:

Normally when working on my own equipment I skip the backing-up stage because most of the stuff I have is either already backed up or worthless. I have been using my macbook since 2007, however, and even after backing up most of the important things I still couldn’t let go of my old install.

First I created a new partition on my external drive using cfdisk and, for some reason (probably because I was backing up a Mac), I chose HFS+ as the filesystem. In hindsight this was not a good choice, as the linux HFS+ driver does not handle journaling, which can cause corruption issues. I suggest using something of the ext variety instead.

After creating the partition and formatting a filesystem on it, I used clonezilla to do a full backup. Clonezilla is an awesome tool and its usage is fairly straightforward. My hard drive is 160gb and it took about 3-4hrs to complete the operation.

First Crack – Archbang:

Not being familiar with arch I decided to install a pre-baked version to take away some of the pain. Archbang looked like a good choice, with a soft gooey arch centre encrusted in a hard outer openbox exterior. However, after following the fairly short installation steps it failed to boot! After completing the installation process again I noticed that ‘mkinitcpio’ was failing to do its magic, which is apparently essential for a working arch OS. Bye bye Archbang.

Now we are getting somewhere

I quickly threw away my archbang CD and grabbed the bare bones arch install. This went far smoother and I quickly had a base install.

Packages, Packages, Packages

After the basic arch install there is NOTHING. Here is a short list of the packages I installed to get my system beyond just a bash prompt:

  • Openbox – Nice light weight window manager with a lot of supporters. I’d never used it before and, to be honest, didn’t understand the fascination with it. I have since come to realise there is some real power in being able to configure all the shortcuts and dynamically create menus.
  • Slim – Really, really simple login manager. It does the job, but I found during configuration that if I got something wrong I had to use a live cd to rescue the system, as it wouldn’t bring up an xterm instance. Probably my configuration. It looks sexy with a nice arch background.
  • PyPanel – Light weight task bar. Again completely configurable. Seems to just work.
  • yaourt – Pacman for the AUR. This allows you to easily install packages other users have uploaded to the AUR. There are a whole bunch of these tools, but this one works and it has the same syntax as pacman. You are going to want a tool to automate the AUR package process, as resolving dependencies by hand quickly becomes tiresome.
  • TrayFreq – Shows the CPU frequency in the task bar and allows user configuration.
  • Laptop-mode – Set of tools that makes your linux distro act like a laptop with respect to power etc. A lot of configuration options again; still haven’t gotten around to nutting them all out.
  • Redshift – I love f.lux on windows and mac but it seemed complicated to get it going on *nix. Redshift does the same thing and was in the package repositories. Out of the box I found it a bit too red for me, but it is easily configurable.
  • Urxvt – General terminal. I quickly fell in love with being able to run it as a daemon and have each window run as a client. The tabs are also quite nice and simple, and it is fassstt. Had trouble getting transparency working, but it’s alright now.
  • Xcompmgr – Not maintained but still the standard.
  • Nitrogen – Easy way to manage wallpapers in an arch’y way.
  • Pidgin – Love pidgin.
  • Vim – Preferred editor. You’re going to need to do a lot of editing!
  • Wicd – Light weight network configuration tool.
After the installation and configuration of these packages, arch went from a super quick bash prompt (you can feel the speed) to a super quick desktop with some nice eye candy. Now it was time to iron out the bugs.


Migration to tmux

I spend a lot of time ssh’d into an ubuntu 10.04 LTS machine where I like to use screen with some irssi sessions running, plus bits and pieces I’m fiddling around with. HOWEVER, I quickly found out that the combo of urxvt locally and screen remotely doesn’t work so well. I was continually faced with:

$TERM too long - sorry - 

Apparently this is due to a bug in screen where it doesn’t accept long $TERM environment variables, which urxvt sets. This has been fixed in later versions of screen, but it probably won’t be backported to the packages for ubuntu 10.04, and moving to a newer version of screen could lead to complications. Another solution would have been to switch to another terminal, but I liked the idea of urxvt running as a daemon and its tab functionality.

Instead I decided to use tmux, which doesn’t have the same issue but still requires a little bit of tweaking to get rolling. On the first run it returned this:

open terminal failed: missing or unsuitable terminal: rxvt-unicode-256color

This means that tmux doesn’t know how to display correctly on this terminal, and you need to share the urxvt terminfo file with the remote machine. It can be quickly uploaded using the following command.

scp /usr/share/terminfo/r/rxvt-unicode HOME:~/.terminfo/r/rxvt-unicode-256color

Then it was all good to go!


It was now time for some conky-prn to be added to the desktop. I borrowed the conky orange theme and made some tweaks for my screen; here is the result.

Adapted ConkyOrange


You can checkout my config changes at


One of the problems I have found moving to linux in the past is wireless support. It was no surprise that it didn’t work reliably out of the box. After some fiddling around I found a fairly stable config.

I chose Wicd as my network manager because of its small footprint, but found it had problems using dhcpcd (its default dhcp client), so I switched to dhclient instead. This has since dramatically improved the stability of the wireless connection.

Another gotcha that was causing random disconnects was calls to the CRDA daemon. Running dmesg after an unexplained drop out, I found the wifi connection was dropping after this message.

cfg80211: Calling CRDA to update world regulatory domain

After some reading I installed the package core/crda and configured it with the AU country code. I now had CRDA running, and calls to update the regulatory information would add this to dmesg rather than halting my wifi connection.

cfg80211: World regulatory domain updated:
cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
cfg80211: Calling CRDA for country: AU
cfg80211: Regulatory domain changed to country: AU
cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
cfg80211: (2402000 KHz - 2482000 KHz @ 40000 KHz), (N/A, 2000 mBm)
cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2300 mBm)
cfg80211: (5250000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 2300 mBm)
cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm)

Disabling the other wireless drivers and ensuring that my machine was using the ath9k driver also improved the stability.


Setting up arch results in a lot of configuration files. Something nice to do is create a git/svn/etc. repo, move all your config files into it, and then create soft links back to where they should sit. HOWEVER, note that the default arch install has the root and home folders on different partitions, so don’t move /etc/rc.conf and /etc/inittab into your repo and link them: at boot time the /home partition isn’t mounted yet, your machine will complain about not being able to find inittab, and you will need a rescue.
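A minimal sketch of that link-back step (the repo location and file list here are hypothetical; as the warning above says, only manage files that aren't needed before /home is mounted):

```python
from pathlib import Path

def link_configs(repo: Path, targets: dict) -> None:
    """Symlink each config file in `repo` to its expected location,
    keeping a .bak copy of anything already there."""
    for name, dest in targets.items():
        if dest.is_symlink() or dest.exists():
            dest.rename(dest.parent / (dest.name + ".bak"))
        dest.symlink_to(repo / name)

if __name__ == "__main__":
    home = Path.home()
    # Hypothetical repo and file list; files under /home are safe.
    link_configs(home / "configs", {
        "bashrc": home / ".bashrc",
        "Xdefaults": home / ".Xdefaults",
    })
```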

My configs can be found here


Arch is pretty awesome and looks awesome. The configuration at the start feels overwhelming, but the fact that I can take my config to other machines is nice. Being on the bleeding edge with AUR packages allows the community to quickly get each other up and running with the latest and greatest. There is also a lot of high quality support; for every problem I’ve encountered there has almost always been a wiki page, forum thread or mailing list from either arch or gentoo.

I think I will be sticking with arch for quite a while as it has a nice feel. There has definitely been a lot of work to get it up and going but you reap what you sow and I can tell my arch configs are only going to get better with time.

A lot of information can be found on the arch wiki for macbooks here