Tag Archives: locative media

Sonic Bike Hacklab Part 3: The anti-cloud – towards bike to bike mesh networking


[Continued from part 2] One of the philosophies that pre-dates my involvement with the sonic bikes project is a refusal of cloud technologies – avoiding the use of a central server and providing everything required (map, sounds and computation) on board the bikes. As the problems with cloud technology become better known, art projects like this are a good way to creatively prototype alternatives.

The need to abstractly “get the bikes to talk to one another” beyond our earlier FM transmission experiments implies some kind of networking, and mesh networking provides non-hierarchical peer to peer communication – appropriate if you want to form networks between bikes on the street over wifi, which may cluster at times, then break up and reform as people ride off in different directions. After discussing this a bit with hacklab participant and fellow Beagleboard enthusiast Adam Parkinson, I thought this would be a good thing to spend some time researching.

The most basic networking information we can detect with wifi is the presence of a particular bike, and we decided to prototype this first. I’d previously got hold of a low power wifi usb module compatible with a Raspberry Pi (which I found I could run with power from the bike’s beagleboard usb!), and we could use an Android phone on another bike, running fluxa to plug the signal strength into a synth parameter.


It’s fairly simple to make an ad-hoc network on the Raspberry Pi via command line:

# take the interface down before reconfiguring it
ifconfig wlan0 down
# set the channel, switch to ad-hoc mode, name the network and set a key
iwconfig wlan0 channel 4
iwconfig wlan0 mode ad-hoc
iwconfig wlan0 essid 'bikemesh'
iwconfig wlan0 key password
# assign a static address, which also brings the interface back up
ifconfig wlan0 192.168.2.1

On the Android side, the proximity synth software continuously measures the strength of the wifi network from the other bike, using a WifiScanReceiver we set up like so:

// grab the wifi service, kick off a scan and register a receiver
// to be notified whenever fresh scan results are available
wifi = (WifiManager) getSystemService(Context.WIFI_SERVICE);
wifi.startScan();
registerReceiver(new WiFiScanReceiver(), 
                 new IntentFilter(
                 WifiManager.SCAN_RESULTS_AVAILABLE_ACTION));

The WiFiScanReceiver is a subclass of BroadcastReceiver that re-triggers the scan process each time it receives results, giving reasonably high frequency scanning – a couple of scans a second or so. We also check the SSID names of the networks around the bike for the correct “bikemesh” node:

import java.util.List;

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.net.wifi.ScanResult;
import android.net.wifi.WifiManager;
import android.util.Log;

public class WiFiScanReceiver extends BroadcastReceiver {
    private static final String TAG = "WiFiScanReceiver";

    // latest signal strength of the "bikemesh" node, read by the synth
    static public int Level;

    public WiFiScanReceiver() {
        super();
        Level=0;
    }

    @Override
    public void onReceive(Context c, Intent intent) {
        // the receiver is registered by the main activity,
        // which holds the WifiManager
        List<ScanResult> results = ((Earlobes)c).wifi.getScanResults();

        for (ScanResult result : results) {
            if (result.SSID.equals("bikemesh")) {
                Log.i(TAG, String.format("bikemesh located: strength %d",
                                         result.level));
                Level = result.level;
            }
        }
        // immediately re-trigger the scan for high frequency updates
        ((Earlobes)c).wifi.startScan();
    }

}

The synth was also using the accelerometers, and when you were close to the other bike it ramped up the cutoff frequency of a low pass filter on some white noise and increased the modulation on the accelerometer driven sine waves. The result was quite surprising with such a simple setup, as it immediately turned into a game playing situation, bike “hide and seek” – as rider of the proximity synth bike you wanted to hunt out where the wifi bike was, whose rider would be trying to escape. The range was surprisingly long, about halfway across London Fields park. Here is an initial test of the setup (we had the sounds a bit more obvious than this in later tests).
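As a rough illustration of that mapping, here’s a sketch of squashing the scan level into a normalised synth parameter – the dBm range is an assumption to tune per device, and fluxa’s actual parameter handling isn’t shown:

// map a wifi scan level (dBm, roughly -90 weak to -40 strong)
// to a 0..1 value for driving a filter cutoff or mod depth
public static float proximityParam(int levelDbm) {
    final float MIN_DBM = -90f; // assumed weakest useful signal
    final float MAX_DBM = -40f; // assumed "right next to the bike"
    float t = (levelDbm - MIN_DBM) / (MAX_DBM - MIN_DBM);
    return Math.max(0f, Math.min(1f, t));
}

Feeding WiFiScanReceiver.Level through something like this a few times a second is all the proximity control described above needs.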

With the hardware and some simple software tested, the next stage would be to run multiple wifi nodes and get them to connect and form a mesh network. I got some way into using Babel for this (see the sketch below), which is very self contained and compiles and runs on Beagleboard and Raspberry Pi. The other side to this is what kind of things we want to do with this kind of “on the road” system – how do we notate and artistically control what happens over a sonic bike mesh network?
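Getting Babel running on top of the ad-hoc setup above looks roughly like this – a sketch assuming the babeld daemon is built for the board and each bike is given its own static address:

# each bike joins the same ad-hoc network with a unique address
ifconfig wlan0 192.168.2.2   # .1, .2, .3 and so on, one per bike

# run the Babel routing daemon on the wifi interface - routes to the
# other bikes should appear and vanish as they move in and out of range
babeld wlan0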

Some ideas we had included recording sounds and passing them between bikes, or each bike forming a synth node, so you create and change audio depending on who is around you and what the configuration is. I made a few more notes on the technical stuff here.

Sonic Bike Hacklab: Part 1


Time to report on the sonic bike hacklab Kaffe Matthews and I put on at AudRey HQ in Hackney. We had a sunny and stormy week of investigation into sonic bike technology. After producing three installations with sonic bikes, we wanted the lab to open the project up to more people with fresh ideas, as well as being a chance to engage with the bikes in a more playful, research oriented manner without the pressure of an upcoming production.


For each of the three previous installations – in Ghent, on Hailuoto island in Finland, and in Porto – we’ve used the same technology: a Beagleboard using a GPS module to trigger samples to play back over speakers mounted on the handlebars. The musical score is a map created using Ushahidi, consisting of zones tagged with sample names and playback parameters that the bikes carry around with them.
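Under the hood the zone triggering boils down to a point-in-polygon test against each GPS fix – something like this sketch, where the Zone structure and field names are illustrative rather than the actual bike code:

class Zone {
    String sampleName;  // sample tagged on this zone in the map
    double[] lat, lon;  // polygon vertices from the Ushahidi map

    // standard ray casting test: count how many polygon edges a ray
    // from the GPS position crosses - an odd count means we're inside
    boolean contains(double plat, double plon) {
        boolean inside = false;
        for (int i = 0, j = lat.length - 1; i < lat.length; j = i++) {
            if ((lat[i] > plat) != (lat[j] > plat) &&
                plon < (lon[j] - lon[i]) * (plat - lat[i]) /
                       (lat[j] - lat[i]) + lon[i]) {
                inside = !inside;
            }
        }
        return inside;
    }
}

When a fix lands inside a zone, the tagged sample is started with its playback parameters.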


We decided to concentrate on two areas of investigation: using the bike as a musical instrument, and finding ways to get the bikes to talk to each other (rather than being identical independent clones). We had a bunch of different components to play with, donated by the participants, Kaffe and me, while the bikes already provided power via 12v batteries, amplification and speakers. We focused on tech we could rapid prototype with minimal fuss.


The next few posts will describe the different experiments we carried out using these components.

‘The Marja trio’ – Sonic Bike Experience for Marjaniemi

I’ve been doing more remote install work on the latest piece Kaffe has been building while resident at Hai Art in Hailuoto, an island in the north of Finland. The zone building, site specific sample composing and microscopic Beagleboard log debugging are over, and two new GPS Opera bikes are born! Go to Hai Art or Kaffe’s site for more details.

[Image: “The Marja trio” score, Kaffe Matthews, 2013]

Bike Opera – layering sounds in space

New advancements on the bike opera project with Kaffe Matthews include a brand new mapping tool based on – yes, you guessed it – Ushahidi, which I’ve been using for a lot of wildly different projects recently. This time the work has been mainly focused on improving the area mapping, adding features for editing polygons so Kaffe can layer her sounds in space.


This work is fairly reusable, as it only concerns changes to the submit_edit_js.php file in the standard Ushahidi install. In the meantime, Kaffe has been collecting sounds from musicians in Porto and building up a work of truly operatic proportions. We keep our fingers crossed that the bike mounted BeagleBoards can cope with all this material!


New Portuguese Bicycle Operatics

Prepare your bicycle clips! Kaffe Matthews and I are starting work on a new Bicycle Opera piece for the city of Porto. I’m working on a new mapping tool and adding some new zone types to the audio system.

While working on a BeagleBoard from one of the bikes used in the Ghent installation of ‘The swamp that was…’, I found (in true Apple/Google style) 4Mb of GPS logs, taken every 10 seconds during the 2 month festival by logging I’d forgotten to turn off. Being part of a public installation (and therefore reasonably anonymised :) – this is the first 5th of the data, and about all it was possible to plot in high resolution on an online map.

It’s interesting to see the variability of the precision, as well as being able to identify locations and structures that break up the signal (such as the part underneath a large road bridge).

DORIS on the high seas

Yesterday was the first test of the full DORIS marine mapping system I’m developing with Amber Teacher and David Hodgson at Exeter University. We went out on a fishing boat from Mylor harbour for a 5 hour trip along the Cornish coast. It’s a quiet season for lobsters at the moment, so this was an opportunity to practise the sampling without too much pressure. Researcher Charlie Ellis was working with Hannah Knott; they work with the National Lobster Hatchery and need to take photos of hundreds of lobsters and combine them with samples of their genetic material.


By going out on the boats they get accurate GPS positioning in order to determine detailed population structures, and they can sample lobsters that are small or carrying eggs and need to be returned to the sea, as well as the ones the fishermen take back to shore to be sold. Each photograph uses a cunning visual information system: objects positioned to indicate sex and whether the animal is for return or removal, plus a ruler for scale.


Android Camera Problems

The DORIS marine mapping platform is taking shape. For this project, touch screens are not great for people wearing gloves in small fishing boats – so one of the things the android app needs to do is make use of physical keys. In order to do that for taking pictures, I’ve had to write my own camera android activity.

It seems that there are differences in underlying camera behaviour across devices – specifically on the Acer E330 model we have to take out on the boats, nearly every time takePicture() is called the supplied callback functions fail to fire. I’ve tested callbacks for the shutter, raw, jpeg and error events, and also tried turning off the preview callback beforehand as suggested elsewhere – no luck on the Acer, while everything always works fine on HTC.

The only solution I have so far is to close and reopen the camera just before takePicture() is called, which seems to work as intended. As this takes some seconds to complete, it’s also important (since it’s bound to a key up event) to prevent the camera starting to take a picture before it has finished processing the previous one, as that causes further callback confusion.

import android.view.SurfaceView;
import android.hardware.Camera;
import android.hardware.Camera.PictureCallback;
import android.util.Log;

class PictureTaker 
{
    private Camera mCam;
    private boolean mTakingPicture;

    public PictureTaker() {
        mTakingPicture=false;
    }

    public void Startup(SurfaceView view) {
        mTakingPicture=false;
        OpenCamera(view);
    }

    private void OpenCamera(SurfaceView view) {
        try {
            mCam = Camera.open();
            if (mCam == null) {
                Log.i("DORIS","Camera is null!");
                return;
            }
            mCam.setPreviewDisplay(view.getHolder());
            mCam.startPreview();
        }
        catch (Exception e) {
            Log.i("DORIS","Problem opening camera! " + e);
        }
    }

    private void CloseCamera() {
        mCam.stopPreview();
        mCam.release();
        mCam = null;
    }

    // to be called from the supplied PictureCallback once the jpeg
    // has been dealt with, so the next key press can take a picture
    public void FinishedPicture() {
        mTakingPicture=false;
    }

    public void TakePicture(SurfaceView view, PictureCallback picture)
    {
        if (!mTakingPicture) {
            mTakingPicture=true;
            // the Acer workaround: close and reopen the camera
            // before every shot, or the callbacks never fire
            CloseCamera();
            OpenCamera(view);

            try {
                mCam.takePicture(null, null, picture);
            }
            catch (Exception e) {
                Log.i("DORIS","Problem taking picture: " + e);
                mTakingPicture=false; // don't stay locked on failure
            }
        }
        else {
            Log.i("DORIS","Picture already being taken");
        }   
    }
}
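To complete the picture, this is roughly how it gets driven from a physical key inside the activity – a sketch, with the key codes and callback body as assumptions rather than the actual DORIS source (android.view.KeyEvent is needed alongside the imports above):

// trigger a picture from a physical key rather than the touch screen
@Override
public boolean onKeyUp(int keyCode, KeyEvent event) {
    if (keyCode == KeyEvent.KEYCODE_CAMERA ||
        keyCode == KeyEvent.KEYCODE_DPAD_CENTER) {
        mPictureTaker.TakePicture(mSurfaceView, mJpegCallback);
        return true;
    }
    return super.onKeyUp(keyCode, event);
}

// the jpeg callback stores the data, then frees the PictureTaker
// for the next shot by calling FinishedPicture()
private final Camera.PictureCallback mJpegCallback =
    new Camera.PictureCallback() {
        public void onPictureTaken(byte[] data, Camera camera) {
            // ... write the jpeg data to storage here ...
            mPictureTaker.FinishedPicture();
        }
    };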

Doris: Lobster mapping

A new project, coming from Borrowed Scenery’s Zizim project, converted into a scientific research tool in collaboration with the College of Life and Environmental Sciences at Exeter University and Helsinki University. Doris is named after the sea nymph from Greek mythology, and will be used for mapping lobster catches on fishing boats so researchers working at the National Lobster Hatchery in Padstow can easily build up a picture of how the animals’ condition relates to location, sea conditions and tide.

Here is an initial plan for how the thing will work.

The main complexities include locating open data sources for sea states and tides, and creating an interface that works easily enough on a small fishing boat in various weather conditions – for example, touch screens aren’t much use if you’re wearing gloves. Approaches to try include using the physical buttons, shaking, or voice input (see the sketch below). As with previous FoAM projects Boskoi and Zizim, this will be built on the Ushahidi platform. Source repo location to follow…
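As an example of the gloves-friendly input ideas, here’s a minimal shake detector sketch using the accelerometer – the threshold and names are assumptions to tune on deck, not settled DORIS code:

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class ShakeDetector implements SensorEventListener {
    // how far (m/s^2) the magnitude must stray from gravity to count
    private static final float SHAKE_THRESHOLD = 3f;
    private final Runnable mOnShake;

    public ShakeDetector(SensorManager sm, Runnable onShake) {
        mOnShake = onShake;
        sm.registerListener(this,
            sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
            SensorManager.SENSOR_DELAY_UI);
    }

    public void onSensorChanged(SensorEvent e) {
        float x = e.values[0], y = e.values[1], z = e.values[2];
        float magnitude = (float)Math.sqrt(x*x + y*y + z*z);
        if (Math.abs(magnitude - SensorManager.GRAVITY_EARTH)
                > SHAKE_THRESHOLD) {
            mOnShake.run(); // e.g. trigger the same path as a key press
        }
    }

    public void onAccuracyChanged(Sensor sensor, int accuracy) {}
}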

Aniziz and Zizim

The online part of the borrowed scenery project is an experiment in geotagging plants and plant related locations via a website/app called Zizim (the compass), combined with a multiplayer online game called Aniziz (the soil) where you can interact with the plants people have found. After the last couple of months of development they are now ready for more of an open beta phase. Another part of the project is the forum here, for collecting any feedback and thoughts.

Your role is to strengthen the connection between the world of Aniziz and the plants of Ghent. The plants are broadcasting messages which can only be correctly tuned into by energising them with fungi; the more plants you energise, the higher your score will be.

The latest additions are specially tagged items called “pataportals” you can create with the android app, which make “wormholes” in the Aniziz world. Stepping into one causes you to get sent to another one – which could be thousands of miles away. Right now Ghent is connected with the Cornish town of Penryn via a wormhole on the sea shore.

Swamp bike opera impressions…


Photo thanks to zzkt

As the coder for “The swamp that was…” bike opera, my view of things was from “inside” the bikes – listening to the GPS data and playing samples. So it was super (and somewhat surreal) to finally become a rider and take one of the bikes (called Nancy) for a spin through the streets of Ghent to experience it like everyone else at the Electrified festival.

I followed the different routes, tried some out backwards, and got lost in the “garden” – the zone of mysterious ghost butterflies and wandering sounds. At the end of the final route I had to take shelter in Julius de Vigneplein during a gigantic thunderstorm, to the sound of looping saxophones, before retreating back to the Vooruit.

It didn’t crash (always my main preoccupation when testing something I’ve helped write the software for), and there seemed to be continuous audio along the routes. Once I’d ascertained that the software was working properly I could actually start to pay attention to the sounds, which were a very fluid mix, interspersed with sudden bursts of Flemish – recordings of local people.

The sounds are a widely varied mix ranging from digital glitch to ethereal sounds and processed ducks that accompany you as you cycle along the canals. The “garden” is not a route as such but occupies a maze of small streets in the Ledeberg area and populates the streets with many insects, birds and other surprises.

The custom bike/speaker arrangement designed and built by Timelab was satisfyingly loud – pulling up next to other innocent cyclists at junctions with blaring jazz is quite an intriguing social experience. It makes you want to say “I can’t turn it off” or “I am an art installation!” The beagleboards also seem fairly durable, as the bikes have been running for a month now, and the cobbled streets and some areas with bumpy roadworks give them a lot of shocks to cope with.

The “click click” of car indicator relays tells you when you’ve reached junctions where you have to turn, and while our method of calculating direction (by comparing positions every 10 seconds) doesn’t really work well enough, it still had a useful role, saying “pay attention, you need to turn here!”. This installation, and the rest of the festival, will be running for another month, until the 4th of November.
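For the record, the direction estimate is just the compass bearing between consecutive GPS fixes – roughly the standard forward azimuth formula sketched below (not the installation’s actual code). At cycling speeds a 10 second gap makes consecutive fixes a poor proxy for which way the bike currently points, which is why it struggles.

// bearing in degrees (0 = north) from fix (lat1,lon1) to (lat2,lon2)
public static double bearing(double lat1, double lon1,
                             double lat2, double lon2) {
    double phi1 = Math.toRadians(lat1);
    double phi2 = Math.toRadians(lat2);
    double dLon = Math.toRadians(lon2 - lon1);
    double y = Math.sin(dLon) * Math.cos(phi2);
    double x = Math.cos(phi1) * Math.sin(phi2)
             - Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLon);
    return (Math.toDegrees(Math.atan2(y, x)) + 360.0) % 360.0;
}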