Category Archives: Accidental art

AI as puppetry, and rediscovering a long-forgotten game.

AI in games is a hot topic at the moment, but most examples are attempts to create human-like behaviour in characters – a kind of advanced puppetry. These characters are also generally designed beforehand rather than reacting to and learning from player behaviour, let alone being allowed to adapt in an open-ended manner.

Rocketing around the gravitational wells.

Geo was a free software game I wrote around 10 years ago which I’ve recently rediscovered. I made it a couple of years after working for William Latham’s Computer Artworks, and it was obviously influenced by that experience. At the time it was a little demanding for graphics hardware, but it turns out that in the intervening years processing power has caught up with it.

This is a game set in a large section of space inhabited by lifeforms comprised of triangles, squares and pentagons. Each lifeform exerts a gravitational pull and has the ability to reproduce. Its structure is defined by a simple genetic code which is copied to its descendants with small errors, giving rise to evolution. Your role is to collect keys which orbit around gravitational wells in order to progress to the next level, which is repopulated by copies of the most successful individuals from the previous level.
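Mechanically the copy-with-errors step is very simple – here’s a minimal sketch of the idea in Python (the gene layout and mutation ranges are invented for illustration, not taken from the actual game code):

import random

# a hypothetical Geo-style genome: a list of (shape, angle, scale) genes
def copy_with_errors(genome, mutation_rate=0.05):
    """Copy a parent genome to a descendant, occasionally mutating a gene."""
    child = []
    for shape, angle, scale in genome:
        if random.random() < mutation_rate:
            # a small copying error tweaks one property of the gene
            which = random.choice(("shape", "angle", "scale"))
            if which == "shape":
                shape = random.choice(("triangle", "square", "pentagon"))
            elif which == "angle":
                angle += random.uniform(-10, 10)
            else:
                scale *= random.uniform(0.9, 1.1)
        child.append((shape, angle, scale))
    return child

parent = [("triangle", 0.0, 1.0), ("square", 45.0, 0.5)]
print(copy_with_errors(parent))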

A simple first generation lifeform.

Each game starts with a random population, so the first couple of levels are generally quite simple, mostly populated by dormant or self destructive species – but after 4 or 5 generations the lifeforms start to reproduce, and by level 10 a phenotype (or species) will generally have emerged to become a highly invasive conqueror of space. It becomes a race against the clock to find all the keys before the gravitational effects become too much for your ship’s engines to escape, or for your weapons to ‘prune’ the structure’s growth.

I’ve used similar evolutionary strategies in much more recent games, but they’ve required much more effort to get the evolution working (49,000 players have now contributed to egglab’s camouflage evolution for example).

A well defended 'globular' colony – a common phenotype to evolve.

What I like about this form of more humble AI (or artificial life) is that instead of a program trying to imitate another lifeform, it really just represents itself – challenging you to exist in its consistent but utterly alien world. I’ve always wondered why the dominant post-human theme of sentient AI is a supercomputer deliberately designed, usually by a millionaire or a huge company. It seems to me far more likely that some form of life will arise – perhaps it even already exists – in the wild variety of online spambots and malware mainly talking to themselves, and that it will go unnoticed by us, probably forever. We had a little indication of this when the Facebook bots in the Naked on Pluto game started having autonomous conversations with other online spambots on their blog.

A densely packed 'crystalline' colony structure.

Less speculatively, what I’ve enjoyed most about playing this simple game is exploring and attempting to shape the possibilities of the artificial life, while observing and categorising the common solutions that emerge during separate games – cases of parallel evolution. I’ve tweaked the between-levels fitness function a little, but most of the evolution tends to occur ‘darwinistically’ while you are playing – simply, the lifeforms that reproduce most effectively survive.
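The between-levels step is then essentially truncation selection. A rough sketch of the idea in Python (the data layout and elite fraction here are assumptions, not the game’s actual fitness function):

def repopulate(population, level_size, elite_fraction=0.25):
    # rank individuals by how effectively they reproduced during the level
    ranked = sorted(population, key=lambda ind: ind["offspring"], reverse=True)
    survivors = ranked[:max(1, int(len(ranked) * elite_fraction))]
    # the next level starts with copies of the survivors; mutation then
    # happens 'darwinistically' as they reproduce during play
    return [{"genome": list(survivors[i % len(survivors)]["genome"]),
             "offspring": 0}
            for i in range(level_size)]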

An efficient and highly structured 'solar array' phenotype which I’ve seen emerge twice with different genotypes.

You can get the updated source here – it only requires GLUT and ALUT (a cross-platform audio API). At one time it compiled on Windows, and it should build on OSX quite easily – I may distribute binaries at some point if I get time.

A ‘block grid’ phenotype which is also common.

Easter Python/Minecraft programming day at dbsCode

Thursday saw our second dbsCode Easter programming taster, and like last year we focused on Minecraft programming with our procedural architecture API.

The main change this time was that for the 20 participants (aged 11–16) we doubled our teachers to four (Glen Pike, Francesca Sargent, Matthew Dodkins and me), plus a couple of interested parents helped us out too. This meant that the day was much more relaxed, and we noticed the participants were engaged with the programming for a much higher proportion of the time. Another factor was that we went straight into coding, as none of them needed an introduction to Minecraft this year. I think one of the biggest strengths of this kind of learning is that they are able to switch easily between playful interaction (jumping into each other’s worlds, building stuff the normal way) and programming. This keeps the pressure low, which I think makes it more of a self-driven activity, as well as making a long (4 hour) workshop possible.

Here are some screenshots of their creations – this was a melon palace created in a world that had somehow become sliced apart:


Inside the melon palace, the waterfall pulsed with a while loop and sleeps that altered the water source blocks.
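It was roughly along these lines – a sketch using the standard mcpi Python API rather than our procedural architecture library, with made-up coordinates:

from mcpi.minecraft import Minecraft
from mcpi import block
import time

mc = Minecraft.create()

# made-up position for the top of the waterfall inside the melon palace
x, y, z = 10, 30, 5

# pulse the waterfall by repeatedly placing and removing the source block
while True:
    mc.setBlock(x, y, z, block.WATER_FLOWING.id)
    time.sleep(2)
    mc.setBlock(x, y, z, block.AIR.id)
    time.sleep(2)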


As last year, there was a lot of mixing of activities – using code to create big shapes and then editing them manually for the finer details:
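Roughing out a big shape with code is a one-liner with the mcpi API – for example a large stone cuboid (coordinates invented), which can then be finished off by hand in the game:

from mcpi.minecraft import Minecraft
from mcpi import block

mc = Minecraft.create()
# fill a 20x10x20 volume with stone in a single call
mc.setBlocks(0, 0, 0, 19, 9, 19, block.STONE.id)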



Here, a huge block of water inexplicably cuts through the scenery:


And two houses that became merged together and were then filled with bookshelves and other homely items:


Neural Network livecoding and retrofitting ZX Spectrum hardware

An experimental, and quite angry, neural network livecoding synth (with an audio ‘weave’ visualisation) for the ZX Spectrum: source code and TZX file (for emulators). It’s a bit hard to make out in the video, but you can move around the 48 neurons and modify their synapses and trigger levels. There are two clock inputs, and the audio output is the purple neuron at the bottom left. It allows recurrent loops as a form of memory, and some quite strange things are possible. The keys are:

  • w,d: move diagonally north west <-> south east
  • s,e: move diagonally south west <-> north east
  • t,y,g,h: toggle incoming synapse connections for the current neuron
  • space: change the ‘threshold’ of the current neuron (bit shifts left)
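As a rough model of what the network is doing (this is not the Z80 code – the layout, thresholds and update rule here are assumptions), each neuron sums its active incoming synapses and fires when it reaches its threshold:

import random

NUM_NEURONS = 48

# each neuron has a firing threshold and a set of incoming connections
neurons = [{"threshold": 2,
            "inputs": random.sample(range(NUM_NEURONS), 3),
            "fired": 0} for _ in range(NUM_NEURONS)]

def step(clock_a, clock_b):
    # neurons 0 and 1 stand in for the two clock inputs
    neurons[0]["fired"] = clock_a
    neurons[1]["fired"] = clock_b
    previous = [n["fired"] for n in neurons]
    for n in neurons[2:]:
        total = sum(previous[j] for j in n["inputs"])
        n["fired"] = 1 if total >= n["threshold"] else 0
    # the audio output is read from a single designated neuron
    return neurons[-1]["fired"]

print([step(t % 2, (t // 2) % 2) for t in range(16)])

Because connections can point anywhere in the grid, recurrent loops fall out of this naturally – which is where the memory-like behaviour comes from.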

This audio should load up on a real ZX Spectrum:

One of the nice things about tech like this is that it’s easily hackable – this is a modification to the video output (better explained here): you can get a standard analogue video signal by connecting the internal feed directly to the plug and detaching the TV signal de-modulator with a tiny bit of soldering. Look at all those discrete components!


Raspberry Pi: Built for graphics livecoding

I’m working on a top secret project for Sam Aaron of Meta-eX fame involving the Raspberry Pi, and at the same time thinking of my upcoming CodeClub lessons this term – we have a bunch of new Raspberry Pis to use, and the kids are at the point where they want to move on from Scratch.

This is a screenshot of the same procedural landscape demo that previously ran on Android/OUYA, now running on the Raspberry Pi, with mangled texture colours and a cube added via a new livecoding repl:


Based on my previous experiments, this program uses the Raspberry Pi’s GPU (the VideoCore IV part of the BCM2835). It’s fast, it allows compositing on top of whatever else you are running at the time, and you can run it without X Windows to free up CPU and memory – sounds like a great graphics livecoding GPU to me!

Here’s a close-up of the nice dithering on the texture – I’m not sure yet why the colours are so different from the OUYA version, perhaps a dodgy blend mode or a PNG format reading difference:


The code is here (a bit of a mess – I’m in the process of cleaning it all up). You can build it in the jni folder by calling “scons TARGET=RPI”. This is another attempt – looks like my objects are inside out:


Project Nightjar: Camouflage data visualisation and possible internet robot predators

We’ve had tens of thousands of people spotting nightjars and donating a bit of their time to sensory ecology research. The result of this (it’s still ongoing of course, along with the new nest spotting game) is a 20MB database with hundreds of thousands of clicks recorded. One of the things we were interested in was seeing what people were mistaking for the birds, so I had a go at visualising all the clicks over the images (these are all cropped parts of the image, as it really doesn’t compress well):
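The overlay itself is simple to do – a sketch of the idea in Python with Pillow (the filenames and click coordinates here are invented; the real ones come from the game database):

from PIL import Image, ImageDraw

image = Image.open("nightjar_scene.png").convert("RGB")
clicks = [(312, 208), (305, 214), (620, 101)]  # (x, y) positions from players

draw = ImageDraw.Draw(image)
for x, y in clicks:
    # draw a small marker for every recorded click
    draw.ellipse((x - 3, y - 3, x + 3, y + 3), outline=(255, 0, 0))

image.save("nightjar_clicks.png")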



Then, looking through the results, I saw a strange artefact:


Uncompressed high res version

My first thought was that someone had tried cheating with a script, but I can hardly imagine that anyone would go to the bother, and it’s only in one image. Perhaps it was some form of bot or scraping software agent – though I thought browser click automation was done by directly interpreting the web page? Perhaps it’s a fallback for HTML5 canvas elements?

It turns out it’s a single player (playing as a monkey, age 16 to 35, who had played before) – so easy enough to filter away. But in doing that I noticed the click order was not as regular as it looked, and it goes a bit wobbly in the middle:

Someone with very precise mouse skills then? :)

Open Sauces

Open Sauces is a FoAM project to investigate the sharing of food, food culture and food systems. Last week in Brussels we started experimenting with different ways to store, display and reason about recipes. Taking the recipes from the Open Sauces book, we’re representing them as Petri nets, which means we can feed them into various visualisations, from Scheme Bricks – taken from Naked on Pluto’s gallery installation projection:


To a brand new circular representation:


These structures are filtered somewhat to be more readable than the raw Petri nets, which can be rendered via graphviz for debugging:
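As an illustration of that debugging route (this is not FoAM’s actual tooling – the recipe fragment and node names are invented), a tiny Petri net can be emitted with the Python graphviz bindings, with places for ingredients and transitions for cooking steps:

from graphviz import Digraph

g = Digraph("recipe", format="png")

# places are ingredients or intermediate states, transitions are steps
for place in ("onion", "butter", "softened onion"):
    g.node(place, shape="circle")
g.node("fry gently", shape="box")

# arcs: ingredients flow into the step, the result flows out of it
g.edge("onion", "fry gently")
g.edge("butter", "fry gently")
g.edge("fry gently", "softened onion")

g.render("recipe_petri_net")  # writes recipe_petri_net.png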


Visual livecoding environments: big screenshots

Some decent-sized screenshots of Al Jazari and Scheme Bricks rendered with fluxus’s tiled frame dump command. This set includes some satisfyingly glitchy Al Jazari shots – I’m not sure what was causing this; I initially assumed it was the orthographic projection, but the same artefacts occurred in the perspective first-person robot views, so it needs further investigation.








New Portuguese Bicycle Operatics

Prepare your bicycle clips! Kaffe Matthews and I are starting work on a new Bicycle Opera piece for the city of Porto. I’m working on a new mapping tool and adding some new zone types to the audio system.

While working on a BeagleBoard from one of the bikes used in the Ghent installation of ‘The swamp that was…’, I found (in true Apple/Google style) 4MB of GPS logs, taken every 10 seconds during the two month festival, which I had forgotten to turn off. Being part of a public installation they are reasonably anonymised :) – this is the first fifth of the data, and about all it was possible to plot in high resolution on an online map:

It’s interesting to see the variability of the precision, as well as being able to identify locations and structures that break up the signal (such as the part underneath a large road bridge).
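Plotting a log like this takes only a few lines – a sketch in Python (the filename and the one-pair-per-line format are assumptions about the log; the plot above was actually done on an online map):

import matplotlib.pyplot as plt

lats, lons = [], []
with open("swamp_gps.log") as f:
    for line in f:
        lat, lon = line.strip().split(",")
        lats.append(float(lat))
        lons.append(float(lon))

# a quick scatter shows the tracks and where the signal breaks up
plt.scatter(lons, lats, s=1)
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.savefig("swamp_tracks.png", dpi=300)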

Al Jazari – scooping out the insides of planets with scheme

Optimisation is a game where you write more code in order to do less. In Al Jazari 2 doing less means drawing fewer blocks. Contiguous blocks of the same type are already automatically collapsed into single larger ones with the octree – but if we can figure out which blocks are completely surrounded by other blocks, we can save more time by not building or drawing them either.

Here is a large sphere – clipped by the world volume, showing a slice through the internal block structure:


The next version has all internal blocks removed, in this case shaving 10% off the total primitives built and drawn:


The gaps in the sphere from the clipping allow us to look inside at how the octree has optimised the structure. The gain is higher in a more normal Minecraft-style setup with a reasonably flat floor covering a large number of blocks. Here is the code involved, built on top of a functional library I’m putting together to manipulate this kind of data. It maps over each octree leaf, checking all the blocks it touches on each of its six sides, taking into account that the size of the leaf block may be bigger than one.

;; returns #t if any block touching one face of the leaf is empty -
;; f converts a 2D position on the face into a 3D offset vector
(define (octree-check-edge f o pos size)
  (define (do-x x y)
    (cond
      ((eq? x -1) #f)
      ((octree-leaf-empty? (octree-ref o (vadd pos (f x y)))) #t)
      (else (do-x (- x 1) y))))
  (define (do-y y)
    (cond
      ((eq? y -1) #f)
      ((do-x size y) #t)
      (else (do-y (- y 1)))))
  (do-y size))

;; a leaf is visible if any of its six faces touches an empty block
(define (octree-is-visible? o pos size)
  (or
    (octree-check-edge (lambda (x y) (vector size x y)) o pos size)
    (octree-check-edge (lambda (x y) (vector -1 x y)) o pos size)
    (octree-check-edge (lambda (x y) (vector x size y)) o pos size)
    (octree-check-edge (lambda (x y) (vector x -1 y)) o pos size)
    (octree-check-edge (lambda (x y) (vector x y size)) o pos size)
    (octree-check-edge (lambda (x y) (vector x y -1)) o pos size)))

;; map over every leaf, rebuilding it with a flag recording whether
;; it can be seen, so hidden leaves need never be built or drawn
(define (octree-calc-viz o)
  (octree-map
    (lambda (v pos size depth)
      (octree-leaf
        (octree-is-visible? o pos size)
        (octree-leaf-value v)))
    o))