Writing as Therapy

It’s been a long time since I last blogged. I have been dealing with some big personal problems since March and haven’t had the time for technical work, or the will to blog about anything. I have, however, been doing a lot of private writing, or journalling. So I would like to talk a little about writing as therapy.

So far I have written 73,000 words in 5 months in a private journal. According to the Wikipedia entry on word count, that is nearly enough words for two full-length novels.

Every day I open a text editor and write about what has been going on, what I have been worrying about, and in particular – how I have felt. Sometimes I combine the writing with exercise, like riding my bike to a cafe with my laptop to write and think. If I don’t have my laptop I jot down a few notes on paper as thoughts enter my mind, then type them up later.

I think it helps. Perhaps getting the thoughts down means getting them out of your head. It forces you to express the ideas clearly, rather than leaving them as half-formed thoughts. It’s really interesting to go back a few months, read what you wrote, and see your thoughts and emotions evolving.

Another useful technique is writing an email that you don’t send. When there is a lot of tension it can be really difficult to write an email. It’s easy to tie yourself in knots trying to get the wording right. It’s hard to write while trying to avoid offense or unnecessary hurt. But the problem is you really need to express the bad stuff. Unfortunately if you say what you really feel, or even just mess up the wording, it can make the situation much worse.

So I have gotten into the habit of writing what I really feel, then not sending the email. Sometimes it helps to print it, or just save it as a draft. Tip: remove the intended recipient from the “To:” box – it avoids embarrassing accidents.

linux.conf.au (LCA) 2011

Every time I attend linux.conf.au (LCA) I am reminded that it has a couple of unique features:

The first is the atmosphere. It’s a really friendly, genuine, relaxed conference, so different to trade shows or academic conferences. No one is there to sell you anything or to advance their list of publications. People attend because they are interested in geeky stuff.

The second is intellectual stimulation. A lot of projects get started or greatly advanced at LCA. When I attend a talk, ideas start flowing, sometimes in completely unrelated areas. Several people said the same thing to me this week: “Well, I wasn’t really that interested in that talk, but then I got this idea about…”.

That sort of creativity and motivation has a lot of value. Here are a few highlights.

Getting Your LCA Paper Accepted

I met a guy on the paper review board and thanked him for accepting our paper. We started talking about what it takes to get an LCA paper accepted. About 70 papers were accepted this year from 180 proposals. Some tips:

  • They encourage papers from outside the Sydney/Melbourne Australian population cluster, to ensure a good representation from around Australasia and beyond.
  • LCA likes practical applications of open source, rather than your great new pre-Alpha software that only runs on your laptop. For example our Dili Village Telco proposal talks about open source helping end users in a developing country.
  • A good track record helps. Actually, it’s more accurate to say a bad track record won’t help. If you didn’t do a good job last year then you may struggle with this year’s proposal.

Many other people have written on this topic, for example Rusty Russell and Mary Gardiner. Sam Varghese also has an interesting article on how this year’s papers were selected.

I feel pretty honoured to have been accepted to talk at such a conference. To me it’s a sign that my work is on the right course.

OLPCs and Effective Learning Outcomes

I attended the OLPC talk by Sridhar Dhanapalan who is the CTO at One Laptop per Child Australia. They have deployed 5000 XOs in Australia which is pretty cool, as they operate largely as a charity without government support.

My pet interest with the XO is making sure they are effective tools for teaching. For example, students using them should have better educational outcomes, and they should make teaching easier. This stems from my interest in IT for development. In the developing world I see a lot of well meaning projects focus on technology rather than on delivering effective benefits to end users. The more exciting the technology – the bigger this problem is. It’s a really easy trap for geeks to fall into. I should know – I’ve been bitten by this one myself many times.

Education in remote Outback Australia is especially challenging.

I have a little practical experience in teaching with XOs. In 2009 I took some XOs to an Outback school and with my daughter spent a week teaching Aboriginal children with them. It was damn tough and I gained a lot of respect for teachers. The locations are remote, schools are small, staff turnover is high. Creating lesson plans and keeping kids meaningfully engaged for 1 hour a day is really hard. Compounding this were other problems, e.g. many of the children have irregular school attendance, problems at home, constant illness, and even disabilities such as infection-related hearing loss. In a 6 year old. This is Australia’s third world.

So I think what OLPC Australia is working on is part of a really, really tough problem – educating kids. There are no easy solutions, and it will take some time, plus trial and error, to effectively improve outback education using XOs.

I am concerned about the focus of XO work. If you have a hammer, everything is a nail, and the XO is a geek wonder-hammer. I would love to see more focus on wetware, and less on hardware and software – effective teaching with the XO rather than more technology development.

So at linux.conf.au 2012 it would be great to see a teacher from the Northern Territory education system who has used XOs in a classroom for 12 months. Let’s hear about what works, what doesn’t, and how the learning outcomes are being improved.

Thanks to a teacher friend for helping out with this section.

Vint Cerf

Great keynote by Vint Cerf.

He made the understatement of the decade: “I am a little bit embarrassed about the 32 bit IP address limitation, as I was the guy who made the decision.” (!) “At the time I just thought it was an experiment, but it (the Internet) went a little further than I thought….”

He spoke about bit rot, e.g. will we be able to run 20th century software in the year 3000? Or will the hardware and operating systems be long extinct?

Vint is currently hacking the Interplanetary Internet, which uses Delay Tolerant Network (DTN) technology. Space is big, so your ping time to Uncle Martin on Mars is about 40 minutes. So you need protocols that can handle this, protocols that are delay tolerant. By coincidence I attended a conference on DTNs in September. DTNs are currently used for getting data to remote communities, for example using a helicopter or scooter as a data “mule”. At that conference some people were saying that with the growth of broadband around the globe DTN technology would soon be extinct. Maybe, for terrestrial networks. However our expansion into space will happen some day and you just can’t get around the speed of light.
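The “ping time to Mars” figure is easy to sanity-check from the speed of light. Here is a small sketch; the distances are the rough extremes of the Earth-Mars range.

```python
C = 299_792_458  # speed of light in a vacuum, m/s

def round_trip_minutes(distance_km):
    """Round-trip light time, in minutes, for a one-way distance in km."""
    return 2 * distance_km * 1000 / C / 60

# Earth-Mars distance swings between roughly 55 and 400 million km
print(round(round_trip_minutes(55e6), 1))   # closest approach: about 6 minutes
print(round(round_trip_minutes(400e6), 1))  # far side of the Sun: about 45 minutes
```

So a TCP handshake with Uncle Martin times out long before he answers, which is exactly why DTN-style store-and-forward protocols are needed.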

A neat idea was re-purposing old space probes and satellites as “routers” at the end of their mission.

Rockets

Speaking of expansion into space….

Bdale Garbee’s rocketry talks are always very popular. This year there was a miniconf stream where you could build and fly your own rocket, with many people working on their rockets all week.

Here are a couple of proud Rocketeers, Tim and Joel:

Dili Village Telco Talk

Organising LCA is a huge effort. This year they had to uproot the whole conference to another venue at one week’s notice due to the Brisbane floods. As if they didn’t have enough work to do. I would also like to mention the kindness of the organisers for supporting Lemi Soares to travel from Timor Leste so we could co-present. Lemi is my partner in the Dili Village Telco project. Lemi got a lot out of LCA, and he met many people. Face to face is so much better than electronic contact.

We had a full house for our talk. As a speaker I can tell you this adds a huge amount of energy. Thank you very much to those who came along. A lot of people came up and thanked us later, which was wonderful – you should do this if you liked a talk. It means a lot to a speaker. It adds validity and importance to our work that we just don’t get when we sit behind a computer all day.

As an LCA speaker the first thing I do is check my slot and see who is speaking at the same time. Is a really well-known speaker like Rusty talking at the same time? Is it that %$^# cute robot bear that sucked my crowd away in a parallel session last year? Nope? Phew, we are set!

I have spoken many times on the Village Telco, so to keep it fresh I wanted something different for this talk. So we added a few slides that describe what it’s like to be a developing world geek. I wanted to capture the audience’s attention and transport them into the shoes of a developing world hacker for 45 minutes. This ranges from the annoying (power blackouts) to being shot at by soldiers and thrown in jail by an occupying force. This seemed to work very well – the typing on laptops slowed, and faces emerged from behind them. It set the scene for our talk, which deals with the unique telephony problems faced in the developing world.

Here are our slides in Open Office and PDF format, and here is the video. I found the picture below (of me) on the OMG Ubuntu blog.

After the talk I had an interesting suggestion for Wifi links with strong interference – use an optical long distance link such as the Ronja project.

Serval

Paul had a good talk on Serval, which uses mesh networks formed using smart phones. Now one problem with this approach is range. For example a smart phone in your pocket will be “range challenged” compared to a Mesh Potato mounted on your roof. Although if you get the Wifi broadcast rate just right all that microwave energy will keep your pants nice and warm. A possible solution (to the range issue, not warm pants) is to deploy some additional “relay” nodes at strategic locations, generally the higher the better. Paul had the brainstorm of using a tethered helium balloon – which he tried for the first time at LCA.

If there is a balloon doing something geeky, you will find Joel! Here is Joel (again!) launching the Serval Balloon with a Google G1 phone payload below:

Thanks also to Mark Jessop for putting a lot of work into this experiment, supplying the balloon, and working on the required government clearances.

Paul reports some success with the idea, but encountered similar interference problems to our networks in Dili. The omnidirectional antenna on the balloon payload was receiving packets from every Wifi radio on campus. In contrast, signals received from the balloon by radios on the ground were very good, as there was an excellent line of sight path.

Every Router a Potato

I have had an idea kicking around in my head for a while. There are a lot of 802.11bg routers getting tossed out as 802.11 technology advances. What if we could recycle them into Mesh Potatoes, then ship them off to a developing country where they would be really useful? Now, what if we did this recycling work at a linux.conf.au 2012 miniconf?

So the plan is:

  • Develop a low cost FXS interface circuit, like the $10 ATA. This would be a small daughter board that can connect to the RS232 serial port and GPIOs of any OpenWRT-capable router. Like a customised Arduino with some analog components to interface to a telephone.
  • Get people to bring or donate old routers to linux.conf.au 2012.
  • At the conference, run a tutorial session where we solder the $10 ATA daughter boards, then attach them to the routers. Maybe add another small circuit to make the 12V port robust to developing world power problems.
  • Another team could handle flashing the routers and testing. Part of this job would be developing images for a range of previously popular routers.
  • At the end of the conference, ship them all off to a developing world country. If we ship in volume it will make the shipping cost quite economical. We would really help a lot of people, recycle some e-waste, and have a lot of fun building cool hardware and hacking routers.

Like me, the people attending linux.conf.au really want to help other people. We saw that in the response to our talk, and the questions afterwards. They are fascinated by the idea of using technology to help people in the developing world. But they don’t know where to start. The Every Router A Potato (ERAP) project is a way for anyone with a router and a soldering iron to help out.

Codec 2 – Alpha Release and Voicing

In this post I talk about the Codec 2 alpha release, problems with DSP algorithms, some bugs in the voicing estimator, and why speech codec development is tough.

V0.1 Alpha Release

About a month ago I released V0.1 of Codec 2. The response has been amazing. An early release wasn’t my idea – I was tempted to keep messing around with the codec algorithm. However Bruce Perens and others on the Codec 2 mailing list encouraged me to release early. At about the same time I listened to an early MELP simulation. A few samples convinced me that the quality of Codec 2 was already getting close to that of MELP.

So I had a busy few weeks of C coding to get the alpha code into releasable form. It was mainly refactoring, integration, and writing separate encoder and decoder programs. I wasn’t looking at DSP or codec issues. After 20 years of C this sort of coding is easy for me, relaxing even.

Soon after the alpha release came a flood of patches, PayPal and equipment donations, and the project was Slashdotted!

Just after the V0.1 release Bruce presented a cool talk at the 2010 ARRL and TAPR Digital Communications Conference. Here are some Codec 2 slides which explain the project and a little about the codec algorithm. There are some notes under each slide.

An important part of Codec 2 is making speech coding algorithms accessible to everyone, rather than locked up as “secret sauce” in binary blobs or patents. So please feel free to use these slides for presentations on Codec 2 at your local Linux group or Ham Radio club.

Goals

Some broad goals for the project are emerging:

  1. A toll quality codec at 2000 to 4000 bit/s – an open source, free codec that sounds as good as 8000 bit/s G.729 at a fraction of the bit rate.
  2. A communications quality codec at 1200-2400 bit/s. The speech quality should be roughly the same as xMBE and MELP at the same bit rates.
  3. A digital radio “mode” for HF and VHF radio applications that combines Codec 2, FEC, and a modem. The target is better speech quality than Single Side Band (SSB) at equivalent SNR.

For the next few months I want to take another look at the codec algorithms, hunt down some bugs, and see if I can improve the quality. In particular I would like to work on voicing estimation and LSP quantisation.

Voicing Bugs

Codec 2 uses a model based algorithm. Rather than sending the original speech waveform, it fits the incoming speech to a model, then transmits the model parameters. Codec 2 models speech as the sum of many sine waves:

sinusoidal model

Model parameters include the pitch, the amplitude of each sine wave, and a binary flag called “voicing”.

Speech can be broadly separated into voiced (vowels like “aaaaahhh”) and unvoiced sounds (consonants like “ssssss”). In Codec 2 the voicing estimator looks at the speech signal and makes a voiced/unvoiced decision every 10ms. However, it makes some mistakes.
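The sinusoidal model above can be sketched in a few lines of Python. This is only an illustration – the pitch, harmonic amplitudes, and frame size below are made-up values, not Codec 2’s actual parameters.

```python
import math

def synthesise_frame(f0, amplitudes, n_samples, fs=8000):
    """Sum of harmonically related sine waves:
    s[n] = sum_m A_m * sin(2*pi*m*f0*n/fs)"""
    frame = []
    for n in range(n_samples):
        s = sum(a * math.sin(2 * math.pi * (m + 1) * f0 * n / fs)
                for m, a in enumerate(amplitudes))
        frame.append(s)
    return frame

# One 10ms frame (80 samples at 8kHz) of voiced speech at a 100 Hz
# pitch, with three harmonics of decreasing amplitude
frame = synthesise_frame(100.0, [1.0, 0.5, 0.25], 80)
```

Voiced speech fits this harmonic picture well; for unvoiced speech the sine wave phases are effectively random, which is why the voicing flag matters so much to the decoder.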

Mistakes are a common problem with DSP algorithms that process real world signals. You read the scientific papers full of fancy math and the algorithms all sound fantastic. But in practice, with real world signals, they all make mistakes. DSP algorithms are about 20% math, 10% coding and 70% perspiration as you grind through all the real world exceptions.

Echo cancellation is a great example of this principle. The adaptive filters used are described in many books and papers but the devil is in the real world detail. For the Oslec echo canceller we worked through the real world problems using an open source approach of collecting echo samples from alpha testers all around the world.

But back to my voicing estimator problem. Bill Cowley spotted a problem in the “shh” part of “dish” in the synthesised speech from the Codec 2 decoder. Here is a plot of the input (top) and Codec 2 output (bottom) waveforms for the “shh” part of “dish”:

The output “shh” signal is distorted. Listen to this sample, which combines the original and Codec 2 processed samples of “dish”, and see if you can hear a difference between the “shh” sounds. One of the problems in speech codec development is hearing small differences in speech samples. In this case the problem (at least to my ear) is more obvious on the plot above than by listening to the samples. It depends a lot on your speakers, the speech you are processing, and your subjective preference.

Are these subtle problems worth tracking down? I think so. Sometimes small problems become more obvious after further processing, or on other speech material. Finding out why these small errors occur leads to a better understanding of the algorithms involved.

To track down this problem I dumped the voicing estimator output to a text file and wrote an Octave script to visualise the voicing decisions:

The voicing estimator (based on the MBE algorithm [1]) outputs a Signal to Noise Ratio (SNR) in dB. These are plotted as the green crosses along the top of the speech waveform. Voiced speech should have a high SNR, unvoiced speech a low SNR. I apply a threshold to this SNR to obtain the voicing decisions, which are plotted along the bottom. I have used a 4dB threshold (red line), which sometimes declares unvoiced speech to be voiced (like the “shh” in “dish”). This error is causing the spikes on the Codec 2 output waveform.

One alternative is using a higher threshold (green line) but this causes errors in the other direction – when I tested other samples some voiced speech was declared unvoiced. Like many DSP algorithms, the voicing estimation algorithm I am using is not perfect.
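The decision logic itself is trivial – the hard part is that no single threshold works for all speech. Here is a hypothetical sketch of the trade-off (the SNR values are invented, not measured from Codec 2):

```python
def voicing_decisions(snrs_db, threshold_db):
    """Declare each 10ms frame voiced if its MBE-style SNR
    estimate exceeds the threshold."""
    return [snr > threshold_db for snr in snrs_db]

# Illustrative SNR track (dB): clearly voiced frames, then a "shh"
# sound that hovers just above the low threshold
snrs = [12.0, 10.5, 9.0, 5.2, 4.5, 2.0]

low = voicing_decisions(snrs, 4.0)   # "shh" frames leak through as voiced
high = voicing_decisions(snrs, 7.0)  # risks declaring weak voiced frames unvoiced
```

Whichever threshold you pick, some frames land on the wrong side of it – which is what motivates the post processing idea below.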

What to do? Well, I tried another voicing estimator (the auto-correlation function). This had problems with similar areas of speech to the MBE algorithm. Its output (and hence its errors) was correlated with the MBE voicing estimator.

My next attempt is to try some sort of post processing or tracking algorithm. For example the pitch estimate is usually quite stable during voiced speech but jumps around randomly during unvoiced speech. We might be able to use the pitch estimator output to determine if the voicing estimate is correct.
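One possible form of such a post processor is sketched below. It is entirely hypothetical – the 20 Hz jump threshold is a made-up number, not something I have tested – but it shows the idea of using pitch stability to veto voicing decisions.

```python
def postprocess_voicing(voiced, pitch_hz, max_jump_hz=20.0):
    """Flip a voiced frame to unvoiced when the pitch estimate jumps
    by more than max_jump_hz from the previous frame, since random
    pitch jumps usually indicate unvoiced speech."""
    out = list(voiced)
    for i in range(1, len(out)):
        if out[i] and abs(pitch_hz[i] - pitch_hz[i - 1]) > max_jump_hz:
            out[i] = False
    return out

# Stable pitch stays voiced; the wild jump in the last frame is vetoed
print(postprocess_voicing([True, True, True], [100.0, 103.0, 180.0]))
# [True, True, False]
```

A real tracker would probably need to look at more than one frame of history, but even this simple veto illustrates how a second model parameter can catch errors in the first.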

Testing Speech Codecs is Tough

Normally when we develop a program we have some way of testing it. For example if I develop a DTMF decoder I can write a program to test the decoder under varying Signal to Noise Ratio (SNR) conditions. However testing a speech codec is really hard. The ear is an imprecise instrument. I have spent hours trying to listen to small differences between two speech samples processed in slightly different ways. Fatigue sets in after a while and everything sounds the same. People disagree over the same samples. Samples sound different depending on the headphones or loudspeaker used. Loudspeakers tend to hide small differences. Sometimes a profound difference on one day is inaudible the day after.

So visualising the operation of the codec can really help. For example, the plots above helped visualise the operation of the voicing estimator and its effect on the output speech. Like a software oscilloscope for DSP signals.

Links

[1] The MBE voicing estimation algorithm is summarised in section 3.6 of my thesis. For Codec 2 we compare the first 1 kHz to an all-voiced spectrum to obtain a single voiced/unvoiced decision. Also check out the function est_voicing_mbe here.

[2] Codec 2 Project Page

ExtremeCom 2010 Part 2

Well, it was quite a walk, but we made it to Indrahar Pass at 4300m!

At this altitude we were above most of the clouds! It was like a view from a light plane at 13,000 feet.

The walk was staged over several days to give us a chance to acclimatise to the altitude. Our luggage, tents, food etc. were carried by a pack of mules, so we were just carrying day packs.

Wow, the food! Each day we would arrive at our destination to find the camp set up. We were amazed to find a “dining tent” complete with chairs and a long table that was regularly covered with lovely Indian food. This was camping with cruise-ship style catering! We were fed about 5 times a day – breakfast, morning tea, lunch, afternoon tea, dinner. Each meal was delicious, and there was always plenty of food.

We spent 4 days trekking, from Sunday to Wednesday. Tuesday was the big day. We started from our camp at 3100m at 5am. After about 6 hours of climbing we reached Indrahar Pass at 4300m. Six hours uphill over steep, rough, uneven ground was a tough trek for all of us. I adopted a slow plodding style, which meant I was one of the last to reach the top, but just getting there was enough for me. As we got closer to the top the air was much thinner, which means you run out of breath quickly. When you stop you recover fairly quickly, as your lungs collect the thin oxygen and put it back into your blood. When you start again you have energy for about 10 steps before you are out of breath again. To maintain a constant pace I would climb 0.5m, then stop for 10 seconds and take 3 big breaths, climb 0.5m, and so on. Most of us experienced mild headaches from the altitude. Coming down felt riskier – an accidental fall or twisted ankle would have been easy. We spent about 3 hours walking down over rough, stony, steep terrain.

During the entire 4 day trek our guides were very kind and helpful. They kept a close eye on us, and we even had two doctors trained in emergency medicine along for the walk! There was hot soup and sandwiches at 4300m, and a hot meal served half way down at about 3700m. The trek was incredibly well organised by Summit Adventures, who I thoroughly recommend if you need any travel arrangements in this part of India. Ten out of ten.

One of our wonderful guides:

Lunch at 3700m:

The Snow Line Chai shop between Triund and Indrahar Pass was a life saver on our way down. Just as we arrived it started raining, so about 20 of us huddled inside. We had been walking for about 10 hours but a 90 minute break sipping chai worked wonders. I am always fascinated by little shops and homes in other countries, how real people live and work is more important to me than tourist attractions. The owner of the shop spends 9 months a year there, closing down around Christmas when the snow comes. He is a kind man who looked after us very well and made nice chai.

On the first and last nights we stopped at Triund, a relatively flat spot which overlooks Dharamsala far below. Although we had been walking most of the day, the Line of Sight (LOS) distance from the villages below was only 3-4km. Out came the laptop, and I managed to connect to an AirJaldi AP down in the valley! Ping with short packets gave about 3% packet loss, and long packets (ping -s 1400) about 20% loss. Just good enough. So there I was, hanging off the edge of a mountain, doing emails over a 3.5km Wifi link with just my laptop. I even received and processed an order for my store. Quite amazing how far Wifi can go with good LOS and no interference.

Following the trek we had a pleasant two day workshop at the Tibetan Children’s Village in upper Dharamsala. The theme was communications in extreme environments, so I gave a talk and demo on the Village Telco and Mesh Potato. There were many talks on Delay Tolerant Networks – an interesting alternative to Wifi for rural connectivity. Once again we had great food, and the attendees were all very nice people who shared the common experience of the trek described above. Special mention to Anders, Ben, Mikey, and Arti for a very well organised and interesting workshop.

Links

ExtremeCom 2010 Part 1

ExtremeCom 2010 Part 1

As I write I am sitting in the AirJaldi office in upper Dharamsala, Northern India. I am here to attend ExtremeCom 2010. This is a communications conference with a twist – early tomorrow morning we head up into the Himalayas for a 4 day trek, peaking at a height of 4400m! I am not sure how I will go, as I have never been that high before. The highest point in Australia is only around 2200m. So I thought I had better write this post. Just in case…

upper dharamsala

I arrived in Dharamsala a little early to catch up with Yahel Ben David, a good friend who I met when I first visited Dharamsala in 2006. Yahel lived in Dharamsala for 11 years and was key to setting up AirJaldi, a Wifi network that delivers Internet to thousands of people in rural India. I understand that AirJaldi has around 500 radios in their Dharamsala network, serving around 2000-3000 end user computers. I am using it now and it works really well, a lot faster than I expected.

local telephone exchange in McLeod Ganj

We have already had a vigorous debate over mesh versus point-to-multipoint Wifi networks, and I have been showing off the Mesh Potato. I have really enjoyed the discussion and look forward to learning more about Wifi for developing countries while I am here.

We have also been brainstorming some ideas for battery backed power supplies for rural Wifi. Many rural locations in developing countries have mains power. However it may drop out for days, and have nasty high voltage spikes such as 1000V for < 1ms induced by electrical storms. They also experience wide variations, e.g. 60-400Vrms rather than the nominal 220Vrms. Wifi stations (especially relay stations) require battery backed power supplies that incorporate a charger and a “low voltage disconnect” that disconnects the battery when its terminal voltage gets too low. There are no suitable products on the market. So power supplies for rural Wifi are a surprisingly big problem that needs solving. If anyone is interested in working on a power supply for rural Wifi please contact me or Yahel. It is a very worthwhile project that could help a lot of people. More when I get back down from the mountain in 4 days!
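The low voltage disconnect logic itself is simple enough to sketch. The cutoff and reconnect voltages below are just typical figures for a 12V lead-acid battery, not a tested design; the key idea is the hysteresis gap between them.

```python
class LowVoltageDisconnect:
    """Disconnect the load when the battery terminal voltage sags
    below cutoff; reconnect only after it recovers above a higher
    reconnect threshold, so the load doesn't chatter on and off
    around a single trip point."""

    def __init__(self, cutoff=10.8, reconnect=12.5):
        self.cutoff = cutoff
        self.reconnect = reconnect
        self.connected = True

    def update(self, volts):
        """Feed in a voltage reading; returns True if load is connected."""
        if self.connected and volts < self.cutoff:
            self.connected = False
        elif not self.connected and volts > self.reconnect:
            self.connected = True
        return self.connected

lvd = LowVoltageDisconnect()
lvd.update(10.5)  # battery flat: load disconnected
lvd.update(11.5)  # recovering, but still below reconnect: stays off
lvd.update(12.6)  # fully recovered: load reconnected
```

In a real supply this would run on a small microcontroller or comparator circuit, with the same hysteresis implemented in hardware.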

Wifi is Hard

While I was watching the Air Stream guys clamber over my roof I was thinking about Wifi and the Village Telco.

My Wifi experience is steadily growing. I have now been involved in a couple of long distance links with directional antennas and several mesh networks. Some patterns are emerging:

  • Long distance Wifi is an inherently unreliable medium. It’s nowhere near as reliable as DSL, or mobile phones, or a TV antenna installation that “just works” for years. Expect a lot of work to set up a reliable link, and ongoing work by skilled people to maintain it.
  • Each node takes a surprisingly large amount of hard work to set up – on that Sunday we invested around 4 man-days, and the link is not complete yet. In Dili our first mesh link took over a week to set up due to interference problems. However, even in Timor Leste I can buy a SIM card and have reliable telephony (and 3G Internet) in 60 seconds.
  • There is also work to maintain Wifi links – they periodically go down for one reason or another. Once you set them up you are not finished. There is a dubious plus side to this: it means a job for life for the people running the networks!
  • One reason Wifi links are hard is the need for Line of Sight (LOS). Do not underestimate this requirement – it means pain. Wifi works fine indoors for a few tens of metres. Outdoors, once you get past 100m or so, you need Line of Sight. If a tree or other obstacle gets between you and the other node your link won’t work. This sounds easy, but in practice if you have a 25m tree, you need a 25m tower. This is very tough from a mechanical point of view – it means big, complex towers, lots of work, and physical safety issues. Even on my house we needed 12m of height, just to clear some of the local obstacles you get in a 1st world neighbourhood. This means a big, guyed mast, and man-days of effort for installation. These photos show what was required for our first node in Dili (although it got easier after that):
  • Another big issue is interference from other 2.4 GHz activity, in particular the hidden node problem. This has been my biggest problem in the Dili and Kilkenny meshes. In practice it means small packets tend to get through, but large ones do not. In an extreme case for one link in Dili we needed to resort to Ethernet cable as a Wifi link simply wouldn’t work. Directional antennas can help with this problem, but mesh routing needs omnis by definition. In practice, the mesh networks I have seen have a mixture of omni and directional links.

Mesh nodes in the Village Telco are designed to be set up by people with modest Wifi skills in remote, developing world locations, where technical help (like an Air Stream team) and 1st world hardware and Wifi equipment shops are not available. If you need one more D-shackle, a Nanostation 2, or a new grid antenna, that’s too bad – it’s expensive, and there are months of delay while it gets shipped in.

Village Telco end users are going to depend on these networks for telephone calls. In some cases it might be their only telecommunications. End users have high expectations for telephone network uptime – much higher than for the Internet. People running these networks (the Village Telco Entrepreneurs) will be investing their life savings and expecting to generate an income.

So a key challenge of the Village Telco is to take an inherently unreliable, hard to set up, hard to maintain technology (long range Wifi), and make it simple and reliable in a 3rd world environment when installed and maintained by local people.

I am gathering data on this challenge as the Dili Village Telco grows. Over the next few months we will get experience with uptime and scale up to 100 nodes. I am hoping that mesh networks will offer reliability advantages over point-to-point, statically routed Wifi links, and that installation gets smoother with experience and a denser mesh. The uptime of the 10 node pilot network to date has been good. Despite the set-up hassles, the Timorese guys are hungry for more Potatoes. The magic of free local calls makes the set-up effort worth it. I’ll post more on this (and an update on the Dili Village Telco) soon.

Last week I was chatting with Alipio, one of my friends in Timor Leste. He has experience in mesh networks and Ubuntu Linux, and was a great help with the Dili Village Telco Workshop last April. Alipio is excited about the possibilities of the Mesh Potato and Village Telco. If it can be shown to work, he wants to promote the system to the Timorese government. However he is wary – foreigners are always dropping out of the sky with magic technology that breaks 2 days after they leave. I asked him what he needs to see:

  1. The Timor Leste government wants to see sustainable, durable, renewable, up to date technology.
  2. It needs to be locally owned and operated, not driven by 1st world people.
  3. It needs to work reliably for 6-12 months before he would consider promoting it to his government.
  4. It needs to work.

2010 Travel

Wow, it’s been a busy year of travelling for me. A few days ago I was in the Flinders Ranges, about 700km north of where I live in Adelaide. This got me thinking about the travel I have been lucky enough to do this year: New Zealand, Germany, Sweden, East Timor, and China. They were all great trips, but this post is about some offbeat places that I haven’t blogged about yet.

Mount Hua

A few weeks ago I was climbing Mount Hua in China (thanks to the generosity of Atcom). Some photos of Mount Hua including the infamous “plank walk” from the China trip below:

It’s not quite as scary as it looks. Not quite. The adventurous young lady in the picture is Grace, one of my good friends from Atcom. The rest of the walk was tough but worth it with many spectacular views. It’s a 2200m climb to the mountain peaks via some very steep stairs. In fact one stair after another for about 6 hours.

One interesting difference for me was all the people. When I have done similar mountain walks in Australia or the US there are very few people. On Mount Hua there were people everywhere, and little kiosks every 500m where you can buy hot food and cold drinks, and even hotels at the top of the mountain.

I can thoroughly recommend visiting China – the Xian area I visited was great. Xian has lots of wonderful history and was once the capital of China. Compared to Westerners the Chinese take a very long view of history – 1500 years is like yesterday for them as their culture has been continuous for thousands of years. I’m still absorbing exactly what that means to your outlook on the world – I live in a country that is just over 100 years old.

Squatter Life in Berlin

In March I visited Germany to attend Cebit. It was nice to meet some of you there! After Cebit I visited Elektra who lives in a squatter community in Berlin. This is a really different way of living and fascinating for me. Rather than buy or rent homes, they live in modified commercial trailers or trucks. These have been insulated and converted into small, comfortable homes. They use solar power for electricity and small amounts of gas or wood for heating. As it’s squatted land, they pay just the small capital cost (e.g. a few thousand Euro) for the homes, rather than rent or a large mortgage. People there come from all walks of life, and have jobs just as varied as people living in conventional homes. They use mesh Wifi for Internet access (indeed many Wifi developers like Elektra live in these communities).

In the first photo you can see Elektra working on her electric recumbent bike – I took this for a fun ride while in Berlin. It cruises happily at 30 km/hr with just a little bit of pedalling.

An Open Source Life

The “open source” life I have been living over the past few years has taken me on all sorts of adventures to wonderful places. I have met many great people and made some wonderful friendships. I can trace this all back to a decision in late 2005 to open source the hardware designs I was working on. I remember at the time thinking long and hard about this decision. But there is no way I would have had these travel experiences, met these people, or built great hardware and software had I stayed in a cubicle. Open source equals a good and fortunate life.

Rowetel 2.0 Web Site

For the past two weeks I have been working on a major upgrade to my web site, and here it is! Please let me know if you find any problems on the new site. This post talks about the problems I solved during the upgrade.

I have wanted to upgrade my site for a while. I was happy with the content but it needed a better look and feel. There were also some bugs in the simple web store I was using. For example it didn’t force selection of a shipping option so I kept getting orders with no shipping. Bart from the Flukso project suggested using WordPress, as it has nice themes and a bunch of plugins for various stores. Key advantages are a unified look and feel across the blog and static pages, easier navigation, and finding the Store is now much easier.

As I was already using WordPress for my blog this sounded like a good idea. With a bit of encouragement from Rosemary (she cracked up laughing when she saw the old web site) I was off. Like a lot of jobs we put off I actually started enjoying the work after a few days.

Managing Projects by Risk

Like all projects there were some major challenges. My style of project management is to work on the riskiest tasks first. If you nail the riskiest tasks there is much less to go wrong later and the schedule becomes more predictable.

WordPress Look and Feel

The first challenge was to get my head around WordPress 3.0 and select a theme to get the look and feel I wanted. To get started I installed WordPress 3.0 on a local test machine. I fooled around with themes for a few days before settling on Atahualpa. I checked browser compatibility. I found a bug with the default WordPress 3.0 Twenty Ten theme on Firefox 2.0 – the body text on pages is offset way to the right. Atahualpa renders just fine on Firefox 2.0 and 3.x so Atahualpa is it.

Being a very geeky and not very arty person this phase was actually quite intimidating for me. I didn’t trust my judgement to come up with a good looking site. Where do I start? But like any project a good approach is to break it down into little steps, try a few things, make a few mistakes, and ask questions. I am content with the result and particularly happy with the banner. Fortunately I have lots of cool photos from 4 years of open source work.

The next challenge was the shopping cart.

Shopping Cart

I had to find a web store that could handle my weird shipping rules. For IP0X sales I don’t have per item shipping, just one fee for an entire shipment. However this is not a flat fee – I have several different shipping options (Air Mail and EMS Courier). For some products I don’t charge for shipping. I also need dual currency support at the same time on the same page. Most store applications support just one currency across the site at any one time.

Anyway my shipping rules and currency support were strange enough that none of the free WordPress shopping carts plug-ins seemed suitable. After playing with different cart plug-ins for a few days I chose to hack the WordPress Simple Paypal Shopping Cart. This was easy to use and was simple (one 700 line PHP source file), which made it easily hackable. So over a couple of days I added options for shipping and multiple currency support that exactly suited the needs of my store. The modified PHP file is here.

To add a store item to a page you insert a “short code” to the WordPress page:

  [ wp_cart:IP04 IP-PBX with 0 modules:price:399.00:currency:AUD:needs_shipping:1:anchor:#cart:end ]

This is the entry that creates the “Add to Cart” button for the IP04, like this:

I added new PHP code to support the “currency”, “needs_shipping” and “anchor” fields. The anchor field tells the cart where on the page to return after an item is added to the cart, for example “store.html#cart”. The default behaviour for this cart is to return to the top of the page which gets annoying when adding multiple products. I use the anchor to return to the shopping cart after each product is added.
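To illustrate the short code format, here is a toy parser for the colon-separated fields. This is just a Python sketch of the idea for readers – the real plugin is PHP, and the field names simply follow the ones described above:

```python
def parse_wp_cart(shortcode):
    """Parse a wp_cart short code into a dict.

    Illustrative sketch only: the actual WordPress Simple Paypal
    Shopping Cart plugin is written in PHP.
    """
    body = shortcode.strip().strip("[]").strip()
    assert body.startswith("wp_cart:") and body.endswith(":end")
    fields = body[len("wp_cart:"):-len(":end")].split(":")
    item = {"name": fields[0]}
    # After the item name, the remaining fields come in key:value pairs
    for key, value in zip(fields[1::2], fields[2::2]):
        item[key] = value
    return item

item = parse_wp_cart(
    "[ wp_cart:IP04 IP-PBX with 0 modules"
    ":price:399.00:currency:AUD:needs_shipping:1:anchor:#cart:end ]")
```

This gives a dict with the name, price, currency, needs_shipping and anchor fields that the cart logic can then act on.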

The “needs_shipping” flag needs more explanation. When one item in the cart has “needs_shipping” set the checkout button is disabled until a shipping item is added to the cart. You can see various shipping items on the IP0X Store Page. Each shipping item is just like a regular product in the store. Here is the EMS Courier short code:

  [ wp_cart:EMS Courier:price:60.00:shipping:0:currency:USD:is_shipping:1:anchor:#cart:end ]

The “is_shipping” flag tells the cart this item is a shipping item, which then enables the checkout button. Until a shipping item is selected the customer cannot proceed to the checkout. For items that have shipping included I just don’t include a “needs_shipping” flag in the item short code. I think it’s cool I can hack a store app just for my specific needs. Open source e-business.
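The checkout-enable rule above boils down to a few lines of logic. A Python sketch of the assumed rule (the plugin itself implements this in PHP):

```python
def checkout_enabled(cart):
    """Return True if the checkout button should be enabled.

    Sketch of the rule described above (assumed logic, not the
    plugin's actual PHP): if any item in the cart needs shipping,
    the cart must also contain a shipping item.
    """
    needs_shipping = any(item.get("needs_shipping") == "1" for item in cart)
    has_shipping = any(item.get("is_shipping") == "1" for item in cart)
    return (not needs_shipping) or has_shipping

cart = [{"name": "IP04", "needs_shipping": "1"}]
print(checkout_enabled(cart))   # shipping required but none selected yet
cart.append({"name": "EMS Courier", "is_shipping": "1"})
print(checkout_enabled(cart))   # shipping item added, checkout allowed
```

Items with shipping included simply omit the needs_shipping flag, so they never block the checkout button.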

Migrating Static Pages from ASCIIDOC

The V1.0 web site used ASCIIDOC to render the static pages. This was actually quite a nice system. I could edit my pages using my favourite editor on my laptop, then use a Makefile to render the pages and automatically upload them. Writing web pages in ASCIIDOC is quick and easy, and the source is human readable even before rendering.

However this meant I had a bunch of web pages in ASCIIDOC markup format that I needed to convert to plain html so I could post them into the new WordPress pages. So I wrote a simple interpreter in Perl to partially render all the pages, called a2h.pl. The output was pasted into the WordPress editor and with a few manual tweaks to the HTML I was happy with the results. I like writing little Perl scripts for these sorts of jobs. Saves a lot of time and prevents many errors that I would make with manual markup. Also some coding work made the web site migration project more interesting. But I do miss using emacs to edit my web site; these web based editors are just not the same.
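To give a flavour of what such a partial renderer does, here is a toy version in Python that handles just section headings and paragraphs. This is an illustrative sketch, not the actual a2h.pl (which is Perl and handles more of the ASCIIDOC markup):

```python
import re

def asciidoc_to_html(text):
    """Toy ASCIIDOC-to-HTML converter.

    Handles only '== Title ==' style headings and blank-line
    separated paragraphs; a sketch of the idea, not a2h.pl itself.
    """
    html = []
    for block in re.split(r"\n\s*\n", text.strip()):
        m = re.match(r"^(=+)\s*(.+?)\s*=*\s*$", block)
        if m and "\n" not in block:
            level = len(m.group(1))  # '=' is h1, '==' is h2, etc.
            html.append(f"<h{level}>{m.group(2)}</h{level}>")
        else:
            # Join wrapped source lines into a single paragraph
            html.append("<p>" + block.replace("\n", " ") + "</p>")
    return "\n".join(html)

html = asciidoc_to_html("== About ==\n\nThis page is about rowetel.com.")
```

The output is plain HTML that can be pasted straight into the WordPress page editor.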

One not so nice thing about database-driven web sites is the use of page numbers like “/blog/?page_id=434” rather than “about.html”. You can get around this with the WordPress permalinks feature but this involves some .htaccess magic on the server. Not sure if I can do that on my hosted web site so I chickened out and just used some redirect pages like “about.html” below:
<html>
<head>
<META HTTP-EQUIV="Refresh"
CONTENT="0; URL=/blog/?page_id=434">
</head>
</html>

This means all my existing links like /ucasterisk/index.html won’t break.
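With more than a handful of old static pages, writing those stubs by hand gets tedious, so they can be generated with a little script. A sketch along the lines of the little Perl scripts mentioned above (the filename to page_id mapping here is just an example):

```python
import os

# Same meta-refresh stub as the about.html example above
REDIRECT_TEMPLATE = """<html>
<head>
<META HTTP-EQUIV="Refresh"
CONTENT="0; URL=/blog/?page_id={page_id}">
</head>
</html>
"""

def write_redirects(mapping, out_dir="."):
    """Write one redirect stub per old static page.

    mapping: {"about.html": 434, ...} -- example IDs only.
    """
    for filename, page_id in mapping.items():
        with open(os.path.join(out_dir, filename), "w") as f:
            f.write(REDIRECT_TEMPLATE.format(page_id=page_id))

write_redirects({"about.html": 434})
```

Each stub immediately bounces the browser to the matching WordPress page, so old bookmarks keep working.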

Integrating Static Pages with Existing Blog Pages

Next step was to integrate my static pages with the existing blog posts. Of which there are many. My first attempt was to export the posts from the live rowetel.com/blog and then import them to the test machine using the WordPress Import/Export feature. This worked but all the post IDs were messed up after the import. So a post that had been “?p=1” was now “?p=450”. That wouldn’t do as it would break a bunch of my links.

So after some head scratching I tried another approach. I used phpMyAdmin to dump the live blog database (just like a regular wordpress backup). I then installed this data into a new database on my test machine using phpMyAdmin and fired up a fresh copy of WordPress 3.0. Which unfortunately just sat there and displayed nothing. I guess it doesn’t like starting up with a populated database, especially one from an earlier version of WordPress. I found a few ways around this:

1. Create a fresh database with nothing in it then start up WordPress 3.0. Then use phpMyAdmin to restore all of the database tables from the live site except wp_options.

2. Manually point your browser at the admin login page, “http://localhost/wordpress/wp-admin/”. For some reason this would work when the index page of the blog wouldn’t come up.

3. It’s also possible to edit the wp_options table using phpMyAdmin to change any options (like the site URL) that might be messing up WordPress when you install the database on a test machine with a different URL. Using phpMyAdmin is also handy for resetting your password when the blog won’t display.

Anyway the above approaches gave me a working WordPress 3.0 test machine that had all of my old blog posts with the correct post IDs. From there I could import my new static pages into WordPress to get the final merged site. The page_ids of the static pages were also changed during the import but as they were new it didn’t really matter – no one was linking to them yet. I wrote a bunch of little redirect files (as above) to handle redirection of the static pages.

Doing a complete install on a test machine was also great practice. As I built the test site I wrote a check list which I used when I worked on the actual live site. When it takes you 10 minutes to find some obscure theme option it’s a good idea to write it down!

Command Line e-business

When someone places an order on my store I get a PayPal email. I save the email to a file, then use a Perl script called paypal2invoice.pl that slurps up this email and converts it into an itemised invoice. Yes, I really do use the Linux command line to generate invoices!
[david@bunny invoices]$ ./paypal2invoice.pl DRR-PO-577-Luke.txt
found postal address
qty_ip04: 1 shipping: cart_total: $540.00 AUD
[david@bunny invoices]$
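To give a flavour of what a script like that does, here is a toy Python version that pulls the quantity and total out of a saved order email. This is purely illustrative: the real paypal2invoice.pl is Perl, and the line formats below are assumptions, not PayPal's actual email layout:

```python
import re

def parse_paypal_email(text):
    """Toy parser for a saved order email.

    Illustrative sketch only: field layouts like 'Quantity: 1' and
    'Total: $540.00 AUD' are assumed for this example.
    """
    order = {}
    m = re.search(r"Quantity:\s*(\d+)", text)
    if m:
        order["qty"] = int(m.group(1))
    m = re.search(r"Total:\s*\$([\d.]+)\s*([A-Z]{3})", text)
    if m:
        order["total"] = float(m.group(1))
        order["currency"] = m.group(2)
    return order

email = "Item: IP04 IP-PBX\nQuantity: 1\nTotal: $540.00 AUD\n"
order = parse_paypal_email(email)
```

From a dict like this it is a short step to filling in an itemised invoice template, which is essentially what the Perl script does.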

linux.conf.au (LCA) 2010

I have just returned from an amazing week at LCA, which was held in Wellington, New Zealand this year. I am really, really tired. A week at LCA feels like the jet lag from flying around the world a couple of times. It is just so intellectually stimulating, both during the conference, in the hallways, and after hours. I met people who had flown from Europe and the US just to attend LCA – it’s that good.

Lots of interesting ideas at LCA, I thought I’d share some of them with you:

Electric Vehicles at LCA

Wellington impressed me with its vibrant trolley bus network, and its many taxi companies driving the Prius.

I obtained my Electric Vehicle fix from Bill Dube and his amazing KillaCycle, which pulls 6 second quarter mile times and accelerates at 3G – that’s 100 km/hr in less than 1 second. My wife only pulls two Gs backing out our driveway :-) (at least it feels like that). Best of all it uses the exact same Advanced DC motor as my EV (actually two of them)! Bill was in New Zealand as a guest of the local drag racing community and attended some of LCA and exhibited the KillaCycle at the nearby Te Papa museum. An inspiring guy who is doing wonderful things for Electric Vehicles. His philosophy is to promote EVs by making people want them. He makes them want them by showing how EVs can out perform Internal Combustion (ICE) vehicles. It’s actually really easy to make a very fast electric vehicle, and the KillaCycle costs a fraction of ICE drag bikes with equivalent performance. This is because electric motors are small, don’t need a gearbox, and are all torque off the line.

Also present was Tom Parker and his electric mini, a nice AC conversion with an advanced microcontroller based Battery Management System (BMS). Tom is firmly in the “full function microcontroller per cell” camp of BMS design, compared to the simpler analog designs that some people favor. This is an interesting debate. Although I run an analog BMS I can see pros and cons in both approaches. Analysing failure paths for a BMS is an interesting exercise. Putting any software between my batteries and sudden death in a high EMI environment scares me. A “crash” in electric vehicle software (say a speed controller) can be very literal. So I like the idea of multiple analog and digital interlocks in failure paths. I considered building a uC type BMS, but I wanted my EV on the road fast, rather than go through an extended development and debug cycle.

Tom and Phillip Court are also working on the Tumanako project, which includes an open source AC speed controller for EVs, a very worthwhile project.

Keynotes

Both keynotes were very good; they really captured my attention and made me think. One part of Glyn Moody’s talk suggested the idea of open notebooks – sharing science as it develops in an open fashion. I think I have been doing just that on this blog: “open engineering” where I discuss projects I am working on as they develop. I make a point of talking about how it feels to have a bug, talk about the wins and losses, and use a narrative rather than text book style.

If you look to the right of my blog home page, you will see that these posts are consistently the most popular.

Benjamin Mako Hill had some really interesting ideas on how locked down phones, unskippable first tracks on DVDs, and other anti-features in software really mess with our lives. A nice example is cameras that won’t boot with third party batteries. Implementing these anti-features is actually a complex programming job for some poor lost souls. I mean it’s hard to lock down Vista Basic to make sure it can only run 3 applications at once.

A really scary thought is that 3 billion of us pass our most sensitive data through devices completely controlled by companies we don’t trust at all. These devices are called cell (or mobile) phones. Gives new and important meaning to telephony projects like the Village Telco and OpenBTS.

Mako’s memes are strongly aligned with the Cell-networks as a Walled Garden ideas of Steve Song.

Tridge, FOSS, and Patents

Great talk by Andrew ‘Tridge’ Tridgell on Patent Defense for Free Software. In particular how to analyse patents from an open source perspective. The key message for me was not to be frightened off by patents. Instead, we should apply the same serious analysis and rigor we apply to FOSS development to analyse patents so we can avoid them interfering with our FOSS projects. He also discussed various defenses – to my surprise the “prior art” defense is the weakest and hardest to prove. The best defense is to annihilate the specific claims of the patent. This requires careful analysis, far beyond simply scanning the patent abstract.

Furthermore, he suggests that the FOSS community make patent infringement claims so painful that closed companies wince at the thought of tangling with FOSS developers. Many patent claims are very narrow in practice so this is not as hard as it sounds. For example if a FOSS developer is hassled over a specific patent they should develop a work around and publish it. A free alternative to the patented (and presumably licensed) technique greatly reduces the value of that patent. I have written about the need for free speech codecs, an area where people constantly get spooked by patents.

This talk and a few questions to Tridge gave me a great plan for ensuring my codec2 project won’t hit any patent hassles. More on this topic in this APC mag story and of course check out the LCA talk videos when they are posted.

Village Telco at LCA 2010

I was involved in three talks at this year’s LCA. The first was presented by Joel at the business mini-conf on behalf of Atcom. Atcom are keen on building custom hardware for open source projects. This helps them create new business and I feel is “a good thing” for open source. I want to encourage the idea of hardware companies working closely with open source developers. So Joel, Edwin, and I put together a presentation on Hardware for Open Source. The presentation went well, thanks Joel.

I presented on A Big Phoney Mesh – an update on the Village Telco and Mesh Potato over the last 12 months. To keep the talk fresh I chose to talk mostly about topics that interested me, like the recent antenna experiments. I also made a point of finishing in just 30 out of the allocated 45 minutes, allowing plenty of time for questions. Too many talks run over time. You can’t inform people about your topic unless they have a chance to drive the content via questions.

We had an ambitious demo planned for the talk. At the start of the talk we threw 5 Mesh Potatoes into the crowd and told the audience to set up a Village Telco for me. Meanwhile I continued the talk. About 10 minutes later “ring-ring” goes the phone next to me – our little Village Telco was alive! Amazing! To cap it off we called Elektra, half way around the world in Berlin, who was also using a Mesh Potato. I was impressed this all worked, as the LCA Wifi was very busy with 500 people using laptops in a small area.

I had a lot of help from Elektra, Steve, Edwin and the Atcom guys, Joel, Paul, and Mike in setting up the conference bling and these demos – thanks everyone.

We also had a Village Telco booth at the open day where I must have talked to several hundred people over 4 hours. We set up a bunch of Mesh Potatoes in other booths so we could demo the system. I had a lot of very encouraging comments and could have sold a box of Mesh Potatoes – everyone wants them for first world applications!

Several people are interested in slight variations of the Linux plus microcontroller idea that we use for the Mesh Potato. Think of an Arduino with a Linux/Wifi back end, or a Wifi router with serious analog and digital I/O of a microcontroller for interfacing to the physical world.

Other

I enjoyed the annual (unofficial) LCA Hadley-David session. Hadley runs Nicegear, and distributes IP04s in New Zealand. Like last year we hooked up for an enjoyable couple of hours chatting about a variety of topics, for example geeky cell phones (Hadley has a N900), solar power, IP04 GUIs and the laid back, hacker lifestyle we both share.

I attended a Hacker Space BOF in a kebab shop (I survived on a diet of chicken kebabs at LCA this year). I really like the idea of a physical space where I can go to work and interact with other hackers. Especially as I work at home and only interact virtually most of the time. Especially if it eventually has machine tools. So now I am talking to a bunch of people in Adelaide about setting one up. The key issue is how to bootstrap: physical spaces cost money.

It was also nice catching up with Jason White who demoed the latest svox-pico TTS software. Good open source speech synthesis software is really important for the Blind Linux community.

Conclusions

I think I am getting more out of LCA each year as I develop as a hacker and become more a part of the open source scene. However fatigue is a serious problem for many of us. Think I need to “taper” next time, no more hacking other projects right up until I get on the plane to LCA.

Transmitting Continuous Wifi Signals

To measure the Mesh Potato transmit power under Linux I needed to generate some continuous Wifi signals at a fixed bit rate. This is not as easy as it sounds so I’m writing a short post as an addendum to the Measuring Wifi Transmit Power post as it might be useful for someone else.

Wifi transmit signals are pulsed with a low duty cycle. For example a short packet like a beacon might only transmit for a few hundred microseconds every second. This makes life difficult for low cost test equipment like my Tek 492 analog spectrum analyser or Wifi antenna test kit, which prefer continuous signals.
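A back-of-envelope calculation shows just how low the duty cycle can be (this sketch ignores the PHY preamble, MAC headers, and ACKs, so real airtime is a little longer):

```python
def airtime_us(packet_bytes, rate_mbps):
    """Rough on-air time of one packet in microseconds.

    bits / (Mbit/s) gives microseconds directly; ignores
    preamble and protocol overhead (back-of-envelope only).
    """
    return packet_bytes * 8 / rate_mbps

# A 1400 byte packet at 54 Mbit/s is on air for only ~207 us,
# so one such packet per second is a duty cycle around 0.02%.
t = airtime_us(1400, 54)
duty_cycle = t / 1e6
```

At 1 Mbit/s the same packet takes 11.2 ms, which is why the low rate signals look almost continuous on the spec-an while the high rate ones need tricks like max-hold and a pulse stretcher.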

All I wanted was a (nearly) continuous signal at a fixed rate and power level so I could check the power level on the spec-an.

However Wifi signals are usually transmitted as part of an exchange with other Wifi devices and the 802.11bg protocols themselves require an exchange of ACKs and occasional beacons that may be transmitted at different rates. On top of that there are automated algorithms that shift the channel bit rate based on packet loss statistics.

Set up

  1. Boot the Mesh Potato, connect via Ethernet, and kill batmand to prevent any spurious 1 Mbit HNA packets while we are measuring at other rates. We are using ad-hoc mode; AP mode may need some other foo to stop any automated packet transmission.
  2. Set spec-an for pulsed Wifi measurements (pulse stretcher on my spec-an). I used 1MHz resolution BW, narrow Video BW, min-noise and max-hold functions but this will be spec-an specific.

Generating Continuous Wifi Signals

Couple of ways to do it:

  1. ping broadcasts:
    # iwpriv ath0 mcast_rate 54000
    # ./ping 10.130.1.255 -fqb -s 1400

    Note: Use a long packet (-s 1400) to get a decent packet length and hence transmission time.

  2. regular pings:
    # iwconfig ath0 rate 54M
    # iwconfig ath0 txpower 19
    # ping 10.130.1.1 -fq -s 1400

    Note: we can use the regular iwconfig interface. This also builds up a signal much faster than (1); it seems to send packets at a higher rate.

  3. netcat:
    # iwconfig ath0 rate 54M
    # iwconfig ath0 txpower 19
    # cat /dev/zero | ./netcat -u 10.130.1.1 7777

    Note: this is much faster again than (2), max-hold function on spec-an not really needed. Must be sending much more data (higher packet rate) than ping.

Notes

  1. I installed the full versions of ping and netcat, the busybox ping didn’t do ping floods (-f).
  2. To get accurate power results I needed to enable the “pulse stretcher” function on my spec-an to cope with the non-continuous Wifi energy. Especially at the higher rates this had a big effect on measured power (4dB at 54M, 0dB at 1M). More modern FFT based spec-ans are better at pulsed signals.
  3. The Tx-power level on iwconfig is not accurate for all rates (e.g. it reports/allows 19dBm at 54M when it’s really 14dBm). It’s closer at rates beneath 36Mbit/s which have 19-20dBm target calibration power levels. The calibration procedure has different target power levels for each bit rate which aren’t reflected in the iwconfig information.
  4. With these spec-an settings and command lines I managed to get measured power outputs under Linux consistent with the calibration test reports of the Mesh Potato. At the lower bit rates the waveform was almost continuous so the max-hold and pulse-stretcher weren’t really required.