Category Archives: Accidental art

Red King progress, and a sonification voting system

We have now launched the Red King simulation website. The fundamental idea of the project is to use music and online participation to help understand a complex natural process. Dealing with a mathematical model is more challenging than a lot of our citizen science work, where the connection to organisms and their environments is clearer. The basic problem here is how to explore a vast parameter space in order to find patterns in co-evolution.

After some initial experiments we developed a simple prototype native application (OSX/Ubuntu builds) to check we understood the model properly by running and tweaking it.


The next step was to convert this into a library we could bind to Python. With this done we can run the model on a server and have it autonomously update its own website via Django. This way we can continuously run the simulation, storing randomly chosen parameters to build up a database and display the results. I also set up a simple filter that runs the simulation for 100 timesteps and discards parameters that don’t look so interesting (the ones that go extinct or don’t result in multiple host or virus strains).
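
As a rough sketch of that filter, the server-side loop amounts to something like the following – run_model, store and the parameter ranges here are stand-ins rather than the real binding’s names:

import random

# Illustrative parameter ranges - the actual model has its own set.
PARAM_RANGES = {"infection_rate": (0.0, 1.0),
                "mutation_rate": (0.0, 0.1)}

def random_parameters():
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in PARAM_RANGES.items()}

def interesting(result):
    # Discard runs that went extinct or only produced a single
    # host or virus strain.
    return result["host_strains"] > 1 and result["virus_strains"] > 1

def search_forever(run_model, store):
    # run_model: stand-in for the compiled model's Python binding
    # store: stand-in for the helper that writes to the django database
    while True:
        params = random_parameters()
        result = run_model(params, timesteps=100)
        if interesting(result):
            store(params, result)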

There is also now a Twitter bot that posts new simulations/sonifications as they appear. One nice thing I’ve found with this is that I can use the bot’s timeline to make notes on changes by tweeting them. It also gives interested people an easy way to approach the project, and people are already starting discussions with the researchers on Twitter.


Up to now, this has simply been a presentation of a simulation – how can we involve people so they can help? This is a small project, so we have to be realistic about what is possible, but eventually we need a simple way to test how the perception of a sonification compares with a visual display. Amber’s been doing research into the sonification side of the project here. More on that soon.

For now I’ve added a voting system, where anyone can up- or down-vote the simulation music. This is used as a way to tag patterns for further exploration. Parameter sets are ranked using the votes – the higher the votes, the higher the likelihood of a set being picked as the basis for new simulations. When we pick one, we randomise one of its parameters to generate new audio. Simulations store their parents, so you can explore the hierarchy and see which changes cause different patterns. An obvious addition would be to hook up Twitter retweets and favourites for the same purpose.
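
Here is a minimal sketch of how the vote-weighted selection and single-parameter mutation could look – the field names and ranges are made up rather than the site’s actual schema:

import random

def pick_parent(simulations):
    # Higher voted parameter sets are more likely to be chosen; a
    # floor of 1 stops down-voted sets vanishing entirely.
    weights = [max(1, s["votes"]) for s in simulations]
    return random.choices(simulations, weights=weights, k=1)[0]

def spawn_child(parent, param_ranges):
    # Copy the parent's parameters and randomise just one of them,
    # recording the parent so the hierarchy can be explored later.
    child = dict(parent["params"])
    name = random.choice(list(child))
    lo, hi = param_ranges[name]
    child[name] = random.uniform(lo, hi)
    return {"params": child, "parent": parent["id"]}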


A cryptoweaving experiment

Archaeologists can read a woven artifact created thousands of years ago, and from its structure determine the actions performed in the right order by the weaver who created it. They can then recreate the weaving, following in their ancestor’s ‘footsteps’ exactly.

This is possible because a woven artifact encodes time digitally, weft by weft. In most other forms of human endeavour, reverse engineering is still possible (e.g. with a car or a cake) but the instructions are not encoded in the object’s fundamental structure – they need to be inferred by experiment or indirect means. Similarly, a text does not quite represent its writing process in a time-encoded manner, only the end result. Interestingly, one possible self-describing artifact could be a musical performance.

Looked at this way, any woven pattern can be seen as a digital record of movement performed by the weaver. We can create the pattern from a notation that describes this series of actions (a handweaver following a lift plan), or move in the other direction like the archaeologist, recovering the notation from an existing weave.

A weaving and its executable code equivalent.

One of the potentials of weaving I’m most interested in is being able to demonstrate the fundamentals of software in threads – partly to make the physical nature of computation self-evident, but also as a way of designing new ways of learning and understanding what computers are.

If we take the code required to make the pattern in the weaving above:

(twist 3 4 5 14 15 16)
(weave-forward 3)
(twist 4 15)
(weave-forward 1)
(twist 4 8 11 15)

(repeat 2
 (weave-back 4)
 (twist 8 11)
 (weave-forward 2)
 (twist 9 10)
 (weave-forward 2)
 (twist 9 10)
 (weave-back 2)
 (twist 9 10)
 (weave-back 2)
 (twist 8 11)
 (weave-forward 4))

We can “compile” it into a binary form which describes each instruction – the exact process for this is irrelevant, but here it is anyway – an 8-bit encoding, packing instructions and data together:

8-bit instruction encoding:

Bits:    0 1     2          3 4 5 6 7
Field:   Action  Direction  Count/Tablet ID (5-bit number)

Action types:
weave:    01 (1)
rotate:   10 (2)
twist:    11 (3)

Direction:
forward:  0
backward: 1
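
As a quick check of the encoding, here is a small Python sketch of how one instruction packs into a byte, treating bit 0 as the most significant bit – packing the first instruction of the program (a twist of tablet 3) gives the card positions that start the first weft in the listing below:

ACTIONS = {"weave": 0b01, "rotate": 0b10, "twist": 0b11}

def pack(action, forward, value):
    # 2-bit action, 1-bit direction (0 = forward), 5-bit count/tablet ID
    assert 0 <= value < 32
    direction = 0 if forward else 1
    return (ACTIONS[action] << 6) | (direction << 5) | value

print(format(pack("twist", True, 3), "08b"))   # 11000011
# Read as tablet positions to flip, that is 0 1 6 7 - the start of
# the first weft below.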

If we compile the code notation above with this binary system, we can then read the binary as a series of tablet weaving card flip rotations (I’m using 20 tablets, so we can fit two instructions per weft):

0 1 6 7 10 11 15
0 1 5 7 10 11 14 15 16
0 1 4 5 6 7 10 11 13
1 6 7 10 11 15
0 1 5 7 11 17
0 1 5 10 11 14
0 1 4 6 7 10 11 14 15 16 17
0 1 2 3 4 5 6 7 11 12 15
0 1 4 10 11 14 16
1 6 10 11 14 17
0 1 4 6 11 16
0 1 4 7 10 11 14 16
1 2 6 10 11 14 17
0 1 4 6 11 12 16
0 1 4 7 10 11 14 16
1 5

If we actually try weaving this (by advancing two turns forward/backward at a time) we get this mess:


The point is that (assuming we’ve made no mistakes) this weave represents *exactly* the same information as the pattern does – you could extract the program from the messy encoded weave, follow it and recreate the original pattern exactly.

The messy pattern is both an executable and a compressed form of the weave – it takes up less space than the original pattern, but looks a lot worse. Possibly this is a clue too, as it contains a higher density of information – higher entropy, and therefore closer to randomness than the pattern.

Pixel Quipu

The graphviz visualisations we’ve been using for quipu have quite a few limitations: they tend to make very large images, and there is limited control over how they are drawn. It would be better to have more of an overview of the data, rendering the knots in their correct positions and the pendants at their correct lengths.

Meet the pixelquipu!


These are drawn using a python script which reads the Harvard Quipu Database and renders the quipu structure using the correct colours. The knots are shown as a single pixel attached to the pendant, colour coded red for a single knot, green for a long knot and blue for a figure-of-eight knot (yellow for unknown or missing). The value of the knot sets the brightness of the pixel. The colour variations for the pendants are working, but there is no distinction yet between twisted and alternating colours, and twist direction is not visualised yet.
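
A minimal sketch of the knot colour mapping described above – the field names are invented, the actual script reads the Harvard database’s own format:

KNOT_COLOURS = {
    "single":          (255, 0, 0),    # red
    "long":            (0, 255, 0),    # green
    "figure_of_eight": (0, 0, 255),    # blue
}

def knot_pixel(knot_type, value, max_value=9):
    # Unknown or missing knot types are drawn yellow.
    r, g, b = KNOT_COLOURS.get(knot_type, (255, 255, 0))
    # The knot's value sets the brightness of the pixel.
    level = max(1, min(value, max_value)) / max_value
    return (int(r * level), int(g * level), int(b * level))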


Another advantage of this form of rendering is that we can draw data entropy within the quipu to provide a different view of how the data is structured, as an attempt to uncover hidden complexity. This is done hierarchically, so a pendant’s entropy is that of its data plus all its sub-pendants – which seemed most appropriate given the non-linear form the data takes.
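
One way to read that hierarchical rule – a pendant’s entropy calculated over its own knot values together with those of all its sub-pendants – is sketched below; the data structure is invented for illustration:

import math
from collections import Counter

def shannon_entropy(values):
    # Standard Shannon entropy in bits over a list of symbols.
    counts = Counter(values)
    total = len(values)
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

def pendant_entropy(pendant):
    # Gather this pendant's knot values plus those of every
    # sub-pendant, recursively, then measure the combined entropy.
    def gather(p):
        vals = list(p["knot_values"])
        for child in p.get("sub_pendants", []):
            vals += gather(child)
        return vals
    return shannon_entropy(gather(pendant))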



We can now look at some quipus in more detail – what was the purpose of the red and grey striped pendants in the quipu below? They contain no knots – are they markers of some kind? This also seems to be a quipu where the knots do not follow the decimal coding pattern that we understand; they are mostly long knots of various values.


There also seems to be data stored in different kinds of structure within the same quipu – the collection of sub-pendants on the left side presumably groups data in a more hierarchical manner than the right side, which seems much more linear – and a colour change emphasises this too.


Read left to right, this long quipu below looks very much like you’d expect binary data to – some kind of header information or preamble, followed by a repeating structure with local variation. The twelve groups of eight grey pendants seem redundant – were they meant to be filled in later? Did they represent something important without containing any knots? We will probably never know.


The original idea behind the pixelquipu was to fit all the quipus on a single page for viewing, as it represents them with the absolute minimum of pixels required. Here are both the pendant colour and entropy renderings for all 247 quipus we have data for:



The UAV toolkit & appropriate technology

The UAV toolkit’s second project phase is now complete. The first development sprint at the start of the year was research into what we could use an average phone’s sensors for, resulting in a proof-of-concept remote sensing android app that allowed you to visually program different scripts, which we then tested on some drones, a radio controlled plane and a kite.


This time we had a specific focus on environmental agencies. Working with Katie Threadgill at the Westcountry Rivers Trust has meant we’ve had to think about how this could be used by real people in an actual setting (farm advisors working with local farmers). Making something cheap, open source and easy to use, yet open ended, has been the focus – and we are now looking at providing WRT with a complete toolkit comprising a drone (for good weather), a kite (for bad weather/no flight licence required) and an android phone, so they don’t need to worry about destroying their own if something goes wrong. Katie has produced this excellent guide on how the app works.

The idea of appropriate technology has become an important philosophy for the projects we are developing at Foam Kernow, in conjunction with unlikely connections in livecoding and our wider arts practice. For example, the Sonic Bike project – where from the start we restricted the technology so that no ‘cloud’ network connections are required, and all the data and hardware has to fit on the bike, with no data “leaking” out.


With the UAV toolkit, the open-endedness of providing a visual programming system that works on a touchscreen results in an application flexible enough to be used in ways and places we can’t predict – for example in crisis situations, where power, networking or hardware is not available to set up remote sensing devices when you need them most. We are working towards a self-contained system, and what I’ve found interesting is how many interface and programming ‘guidelines’ I have to bend to make this possible – open-endedness is very much against the grain of contemporary software design philosophy.

The “app ecosystem” is ultimately concerned with elevator pitches – do one thing, and boil it down to the fewest actions possible to achieve it. This is not a problem in itself, but the assumption that this is the only philosophy worth considering is wrong. One experience that comes to mind recently is having to make and upload banner images of an exact size to the Play Store before it would allow me to release an important fix for Mongoose 2000 – an app only ever intended to have 5 or 6 users.


For the UAV toolkit, our future plans include stitching together photos captured on the phone to produce a single large map, without the need to use any other software on a laptop. There are also interesting possibilities for distributed networking with bluetooth and similar radio systems – sending code to different phones is needed, for example, as currently there is no way to distribute scripts amongst users. This could also be a way of creating distributed processing – controlling one phone in a remote location from another, via code sent by ad hoc wifi or SMS for example.

AI as puppetry, and rediscovering a long-forgotten game

AI in games is a hot topic at the moment, but most examples are attempts to create human-like behaviour in characters – a kind of advanced puppetry. These characters are also generally designed beforehand rather than reacting to and learning from player behaviour, let alone being allowed to adapt in an open-ended manner.

Rocketing around the gravitational wells.

Geo was a free software game I wrote around 10 years ago which I’ve recently rediscovered. I made it a couple of years after working for William Latham’s Computer Artworks, and it was obviously influenced by that experience. At the time it was a little demanding for graphics hardware, but it turns out that in the intervening years processing power has caught up with it.

This is a game set in a large section of space inhabited by lifeforms comprised of triangles, squares and pentagons. Each lifeform exerts a gravitational pull and has the ability to reproduce. Its structure is defined by a simple genetic code which is copied to its descendants with small errors, giving rise to evolution. Your role is to collect keys, which orbit around gravitational wells, in order to progress to the next level, which is repopulated by copies of the most successful individuals from the previous level.
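
The game itself is native code (it builds against GLUT and ALUT), but copying a genetic code to descendants with small errors boils down to something like this illustrative Python fragment – the gene alphabet here is made up:

import random

SHAPES = ["triangle", "square", "pentagon", "dormant"]

def copy_with_errors(genome, error_rate=0.05):
    # Each gene is usually copied faithfully, but occasionally a
    # random error creeps in - which is all evolution needs.
    return [random.choice(SHAPES) if random.random() < error_rate
            else gene for gene in genome]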

A simple first generation lifeform.

Each game starts with a random population, so the first couple of levels are generally quite simple, mostly populated by dormant or self-destructive species – but after 4 or 5 generations the lifeforms start to reproduce, and by level 10 a phenotype (or species) will generally have emerged to become a highly invasive conqueror of space. It becomes a race against the clock to find all the keys before the gravitational effects become too much for your ship’s engines to escape, or for your weapons to ‘prune’ the structure’s growth.

I’ve used similar evolutionary strategies in much more recent games, but they’ve required much more effort to get the evolution working (49,000 players have now contributed to egglab’s camouflage evolution for example).

A well defended 'globular' colony – a common phenotype to evolve.

What I like about this form of more humble AI (or artificial life) is that instead of a program trying to imitate another lifeform, it really just represents itself – challenging you to exist in its consistent but utterly alien world. I’ve always wondered why the dominant post-human theme of sentient AI is a supercomputer deliberately designed, usually by a millionaire or a huge company. It seems to me far more likely that some form of life will arise – perhaps it even already exists – in the wild variety of online spambots and malware mainly talking to themselves, and will go unnoticed by us, probably forever. We had a little indication of this when the Facebook bots in the Naked on Pluto game started having autonomous conversations with other online spambots on their blog.

A densely packed 'crystalline' colony structure.

Less speculatively, what I’ve enjoyed most about playing this simple game is exploring and attempting to shape the possibilities of the artificial life, while observing and categorising the common solutions that emerge across separate games – cases of parallel evolution. I’ve tweaked the between-levels fitness function a little, but most of the evolution tends to occur ‘darwinistically’ while you are playing – simply, the lifeforms that reproduce most effectively survive.

An efficient and highly structured 'solar array' phenotype which I’ve seen emerge twice with different genotypes.

You can get the updated source here; it only requires GLUT and ALUT (a cross-platform audio API). At one time it compiled on Windows, and it should build on OSX quite easily – I may distribute binaries at some point if I get time.

A ‘block grid’ phenotype which is also common.

Easter Python/Minecraft programming day at dbsCode

Thursday saw our second dbsCode Easter programming taster, and like last year we focused on Minecraft programming with our procedural architecture API.

The main change this time was that for the 20 participants (aged 11-16) we doubled our teachers to 4 (Glen Pike, Francesca Sargent, Matthew Dodkins and me), plus a couple of interested parents helped us out too. This meant that the day was much more relaxed, and we noticed they were engaged with the programming for a much higher proportion of the time. Another factor was that we went straight into coding, as none of them needed an introduction to Minecraft this year. I think one of the biggest strengths of this kind of learning is that they are able to easily switch between playful interaction (jumping into each other’s worlds, building stuff the normal way) and programming. This keeps the pressure low, which I think makes it more of a self-driven activity, as well as making a long (4 hour) workshop possible.

Here are some screenshots of their creations – this was a melon palace created in a world that had somehow become sliced apart:


Inside the melon palace, the waterfall pulsed with a while loop and sleeps that altered the water source blocks.
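
The workshop used our procedural architecture API, but the pulsing waterfall boils down to a loop like this sketch against the standard mcpi library – the coordinates are invented:

import time
from mcpi.minecraft import Minecraft
from mcpi import block

mc = Minecraft.create()
X, Y, Z = 10, 70, 10   # made-up position for the top of the waterfall

while True:
    # Place a water source block, let it flow for a moment...
    mc.setBlock(X, Y, Z, block.WATER_STATIONARY.id)
    time.sleep(2)
    # ...then remove it so the waterfall pulses on and off.
    mc.setBlock(X, Y, Z, block.AIR.id)
    time.sleep(2)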


As last year, there was a lot of mixing of activities – using code to create big shapes and then editing them manually for the finer details:



Here, a huge block of water inexplicably cuts through the scenery:


And two houses that became merged together, then filled with bookshelves and other homely items:


Neural Network livecoding and retrofitting ZX Spectrum hardware

An experimental, and quite angry, neural network livecoding synth (with an audio ‘weave’ visualisation) for the ZX Spectrum: source code and TZX file (for emulators). It’s a bit hard to make out in the video, but you can move around the 48 neurons and modify their synapses and trigger levels. There are two clock inputs, and the audio output is the purple neuron at the bottom right. It allows recurrent loops as a form of memory, and some quite strange things are possible. The keys are:

  • w,d: move diagonally north west <-> south east
  • s,e: move diagonally south west <-> north east
  • t,y,g,h: toggle incoming synapse connections for the current neuron
  • space: change the ‘threshold’ of the current neuron (bit shifts left)
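
The synth itself runs on the Spectrum, but the neuron update it performs each clock tick is roughly the following – a Python fragment with an invented data layout, purely to illustrate the threshold-and-fire idea with recurrent connections:

def step(neurons):
    # neurons: list of dicts with 'inputs' (indices of connected
    # neurons), 'threshold' and 'fired' (output of the last step).
    next_fired = []
    for n in neurons:
        total = sum(1 for i in n["inputs"] if neurons[i]["fired"])
        next_fired.append(total >= n["threshold"])
    # Update every neuron at once so recurrent loops act as a
    # one-step memory rather than depending on update order.
    for n, fired in zip(neurons, next_fired):
        n["fired"] = fired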

This audio should load up on a real ZX Spectrum:

One of the nice things about tech like this is that it’s easily hackable – this is a modification to the video output, better explained here: you can get a standard analogue video signal by connecting the internal feed directly to the plug and detaching the TV signal de-modulator with a tiny bit of soldering. Look at all those discrete components!


Raspberry Pi: Built for graphics livecoding

I’m working on a top secret project for Sam Aaron of Meta-eX fame involving the Raspberry Pi, and at the same time thinking about my upcoming CodeClub lessons this term – we have a bunch of new Raspberry Pis to use, and the kids are at the point where they want to move on from Scratch.

This is a screenshot of the same procedural landscape demo that previously ran on Android/OUYA, now running on the Raspberry Pi, with mangled texture colours and a cube added via a new livecoding repl:


Based on my previous experiments, this program uses the Raspberry Pi’s GPU (the VideoCore IV part of the BCM2835). It’s fast, it allows compositing on top of whatever else you are running at the time, and you can run it without X windows for more CPU and memory – sounds like a great graphics livecoding GPU to me!

Here’s a close-up of the nice dithering on the texture – I’m not sure yet why the colours are so different from the OUYA version; perhaps a dodgy blend mode or a PNG format reading difference:


The code is here (it’s a bit of a mess – I’m in the process of cleaning it all up). You can build it in the jni folder by calling "scons TARGET=RPI". This is another attempt – it looks like my objects are inside out:


Project Nightjar: Camouflage data visualisation and possible internet robot predators

We’ve had tens of thousands of people spotting nightjars and donating a bit of their time to sensory ecology research. The result of this (it’s still ongoing, of course, along with the new nest spotting game) is a 20Mb database with hundreds of thousands of clicks recorded. One of the things we were interested in was seeing what people were mistaking for the birds – so I had a go at visualising all the clicks over the images (these are all crops of the full image, as it really doesn’t compress well):
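
For anyone curious, a sketch of this kind of click overlay with PIL might look like the following – the function and field names are just for illustration:

from PIL import Image, ImageDraw

def plot_clicks(image_path, clicks, out_path):
    # clicks: list of (x, y) pixel coordinates from the game database.
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for x, y in clicks:
        # A small cross per click builds up a density picture where
        # many players clicked the same spot.
        draw.line([(x - 2, y), (x + 2, y)], fill=(255, 0, 0))
        draw.line([(x, y - 2), (x, y + 2)], fill=(255, 0, 0))
    img.save(out_path)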



Then, looking through the results, I saw a strange artefact:


Uncompressed high res version

My first thought was that someone had tried cheating with a script, but I can hardly imagine that anyone would go to the bother, and it’s only in one image. Perhaps it was some form of bot or scraping software agent – I thought browser click automation was done by directly interpreting the web page? Perhaps it’s a fallback for HTML5 canvas elements?

It turns out it’s a single player (playing as a monkey, aged 16 to 35, who had played before) – so easy enough to filter out. But in doing that I noticed the click order was not as regular as it looked, and it goes a bit wobbly in the middle:

Someone with very precise mouse skills then? :)