Category Archives: Accidental art

Open Sauces

Open Sauces is a FoAM project to investigate the sharing of food, food culture and food systems. Last week in Brussels we started experimenting with different ways to store, display and reason about recipes. Taking the recipes from the Open Sauces book, we’re representing them as Petri nets, which means we can feed them into various visualisations, from Scheme Bricks – taken from the Naked on Pluto gallery installation projection:


To a brand new circular representation:


These structures are filtered somewhat to be more readable than the raw Petri nets, which can be rendered via Graphviz for debugging:


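For the curious, here’s a rough sketch of the kind of structure involved: a recipe written down as a Petri net (places for ingredients and intermediate states, transitions for actions), plus a helper that dumps it to Graphviz dot format for debugging. The names and recipe details are made up for illustration rather than being the actual Open Sauces format:

;; a recipe as a petri net: places hold ingredients or intermediate
;; states, transitions are cooking actions with input and output arcs
;; (this layout is a sketch, not the real storage format)
(define bechamel-net
  '((places (butter flour milk roux sauce))
    (transitions
     ((melt-and-stir (in butter flour) (out roux))
      (whisk-in (in roux milk) (out sauce))))))

;; write the net out as a graphviz dot file - boxes for transitions,
;; the default ellipses for places
;; usage: (petri->dot bechamel-net (current-output-port))
(define (petri->dot net port)
  (fprintf port "digraph recipe {~n")
  (for-each
   (lambda (t)
     (let ((name (car t))
           (ins (cdr (assq 'in (cdr t))))
           (outs (cdr (assq 'out (cdr t)))))
       (fprintf port "  ~a [shape=box];~n" name)
       (for-each (lambda (p) (fprintf port "  ~a -> ~a;~n" p name)) ins)
       (for-each (lambda (p) (fprintf port "  ~a -> ~a;~n" name p)) outs)))
   (cadr (assq 'transitions net)))
  (fprintf port "}~n"))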
Visual livecoding environments: big screenshots

Some decent-sized screenshots of Al Jazari and Scheme Bricks rendered with fluxus’s tiled frame dump command. This set includes some satisfyingly glitchy Al Jazari shots – I’m not sure what was causing this; I initially assumed it was the orthographic projection, but the same artefacts occurred on the perspective first-person robot views, so it needs further investigation.








New Portuguese Bicycle Operatics

Prepare your bicycle clips! Kaffe Matthews and I are starting work on a new Bicycle Opera piece for the city of Porto. I’m working on a new mapping tool and adding some new zone types to the audio system.

While working on a BeagleBoard from one of the bikes used in the Ghent installation of ‘The swamp that was…’, I found (in true Apple/Google style) 4Mb of GPS logs, taken every 10 seconds during the two-month festival – logging I’d forgotten to turn off. Being part of a public installation, the data is reasonably anonymised :) – this is the first fifth of it, and about all it was possible to plot in high resolution on an online map:

It’s interesting to see the variability of the precision, as well as being able to identify locations and structures that break up the signal (such as the part underneath a large road bridge).

Al Jazari – scooping out the insides of planets with scheme

Optimisation is a game where you write more code in order to do less. In Al Jazari 2, doing less means drawing fewer blocks. Contiguous blocks of the same type are already automatically collapsed into single larger ones by the octree – but if we can figure out which blocks are completely surrounded by other blocks, we can save more time by not building or drawing them either.

Here is a large sphere – clipped by the world volume, showing a slice through the internal block structure:


The next version has all internal blocks removed, in this case shaving 10% off the total primitives built and drawn:


The gaps in the sphere from the clipping allow us to look inside at how the octree has optimised the structure. The gain is higher in a more normal Minecraft-style setup, with a reasonably flat floor covering a large number of blocks. Here is the code involved, built on top of a functional library I’m building up to manipulate this kind of data. It maps over each octree leaf, checking all the blocks it touches on each of its six sides and taking into account that the size of a leaf block may be bigger than one.

(define (octree-check-edge f o pos size)
  ;; walk one face of the leaf, returning #t as soon as an empty
  ;; neighbouring block is found there
  (define (do-x x y)
    (cond
      ((eq? x -1) #f)
      ((octree-leaf-empty? (octree-ref o (vadd pos (f x y)))) #t)
      (else (do-x (- x 1) y))))
  (define (do-y y)
    (cond
      ((eq? y -1) #f)
      ((do-x size y) #t)
      (else (do-y (- y 1)))))
  (do-y size))

(define (octree-is-visible? o pos size)
  ;; a leaf is visible if any of its six faces touches an empty block
  (or
    (octree-check-edge (lambda (x y) (vector size x y)) o pos size)
    (octree-check-edge (lambda (x y) (vector -1 x y)) o pos size)
    (octree-check-edge (lambda (x y) (vector x size y)) o pos size)
    (octree-check-edge (lambda (x y) (vector x -1 y)) o pos size)
    (octree-check-edge (lambda (x y) (vector x y size)) o pos size)
    (octree-check-edge (lambda (x y) (vector x y -1)) o pos size)))

(define (octree-calc-viz o)
  ;; hidden leaves become 0 (assumed here to be the empty block value)
  (octree-map
    (lambda (v pos size depth)
      (if (octree-is-visible? o pos size)
          (octree-leaf-value v)
          0))
    o))

Spork Factory: Amorphous Computing

Departing in a new direction after the evolved light follower robots: take 500 processor cores spread out in space. Give them a simple instruction set which includes an instruction to copy (DMA) 8 bytes of their code/data to nearby cores (with an error rate of 0.5%). Fill the cores with random junk and set them running. If we graph the bandwidth used (the amount of data transmitted per cycle by the whole system) we get a plot like this:


This explosion in bandwidth use is due to the implicit emergence of programs which replicate themselves. There is no fitness function, it’s not a genetic algorithm and there is no guided convergence to a single solution – no ‘telling it what to do’. Programs may arise that cooperate with each other or may exhibit parasitic behaviour as in alife experiments like Tierra, and this could be seen as a kind of self-modifying, emergent amorphous computing – and eventually, perhaps a way of evolving programs in parallel on cheap ATtiny processor hardware.
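The dma broadcast is what provides both the reproduction and the mutation. A rough sketch of what a single broadcast might do, copying a block of bytes from one core’s memory to a neighbour’s and occasionally corrupting a byte (the names, memory layout and use of plain vectors are illustrative, not the actual Spork Factory internals):

(define dma-block-size 8)      ; bytes copied per broadcast
(define copy-error-rate 0.005) ; each byte has a 0.5% chance of corruption

;; copy dma-block-size bytes starting at src-addr in the source core's
;; memory into the destination core, wrapping addresses and flipping
;; the occasional byte to random junk
(define (dma-copy src src-addr dst dst-addr)
  (let loop ((i 0))
    (when (< i dma-block-size)
      (let ((byte (vector-ref src (modulo (+ src-addr i) (vector-length src)))))
        (vector-set! dst (modulo (+ dst-addr i) (vector-length dst))
                     (if (< (random) copy-error-rate)
                         (random 256) ; corrupted byte
                         byte)))
      (loop (+ i 1)))))

In the real system each core would broadcast like this to whichever neighbours are within range whenever it executes a dma instruction.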

In order to replicate, the code needs to push a value onto the stack as the address to start the copy from, and then call the dma instruction to broadcast it. This is one of the 500 cores visualised using the Betablocker DS emulator display – the new “up” arrow icon is the dma instruction.

Thinking more physically, the arrangement of the processors in space is a key factor – here are some visualisations. The size of each core increases when it transmits, and its colour is set by a hash of the last data transmission – so we can see the rise of communication and the propagation of different strains of successful code.

The source is here and includes the visualisation software which requires fluxus.

Spork Factory: evolving a light follower robot

Continuing with the structured procrastination R&D project on evolvable hardware, I’m proud to report a pretty decent light-following robot – this is a video of the first real-world test, with a program grown from primordial soup chasing me around:

After creating a software simulation of the robot in the last post, I added some new bytecode instructions to the virtual machine: LEYE and REYE push 1 onto the stack if light is detected on the left or right photoresistor, zero if it’s dark, and LMOT and RMOT pop the top byte of the stack to turn the motors on and off. The strategy for the genetic algorithm’s fitness function is to run each generated 16-byte program on the simulated robot for 1000 cycles, moving the robot to a new random location and facing direction 10 times without stopping the program. At the end of each run the position of the robot is compared to the light position, and the distances are averaged as the fitness. Note that we’re not assigning fitness to how fast we get to the light.
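As a sketch of that fitness strategy (the simulator constructor and accessors here are hypothetical names, and the real implementation may well differ), the evaluation could look something like this, where a lower average distance means a fitter program:

(define cycles-per-run 1000) ; how long each placement runs for
(define runs-per-program 10) ; random placements per candidate program

;; run one candidate program from several random positions/directions
;; and return the average final distance to the light - lower is fitter
(define (evaluate-fitness program light-pos)
  (define robot (make-sim-robot program)) ; hypothetical simulator constructor
  (define (run-once)
    (sim-robot-randomise! robot) ; new random position and facing direction
    (let loop ((i 0))
      (when (< i cycles-per-run)
        (sim-robot-step! robot) ; one VM instruction plus physics update
        (loop (+ i 1))))
    (vdist (sim-robot-pos robot) light-pos))
  (/ (let loop ((run 0) (total 0))
       (if (< run runs-per-program)
           (loop (+ run 1) (+ total (run-once)))
           total))
     runs-per-program))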

This is pretty simple stuff, but it’s still interesting to look at what happens over time in the genetic algorithm. Both motors are running at startup by default, so the first successful programs learn how to turn one motor off – otherwise the robot just shoots off and scores really low. So the first generations tend to just go round in circles. Then they start to learn how to plug the eyes in, one by one edging them closer to the goal – then it’s a case of improving the sample rate to improve accuracy, usually by using jmps and optimising the loops.

This is an example of a fairly simple and effective solution – the final generation shown in the animation above:

  nop nop nop nop nop 
  nop nop nop nop
  jmpz loop

Some explanation: the right and left eyes are plugged into the left and right motors, which is the essential bit that makes it work, and the ‘nop’s are all values that are not executable. The ‘rmot’ before the ‘jmpz’ makes the robot scan around in circles if no light is detected (strangely, a case which doesn’t happen in the simulation). The argument to ‘jmpz’ is 0 (loop), which is actually the 17th byte – so it’s cheekily using memory which has been initialised to zero as part of its program.

This is a more complicated and stranger program which evolved after 70 generations with a high fitness – I haven’t worked out what it’s up to yet:

  pshl 171 
  pip 111 
  pip 30 
  pshl 214 
  jmp loop

Re-interpreting history

A script for sniffing bits of SuperCollider code being broadcast as livecoding history over a network and re-interpreting it as objects in fluxus, written during an excellent workshop by Alberto de Campo and Julian Rohrhuber at /*VIVO*/ in Mexico City.

(osc-source "57120")

(define (stringle str)
    ;; convert a string into a list of its character codes
    (map char->integer (string->list str)))

;;(osc-destination "osc.udp:")
;;(osc-send "/vivo" "s" '("fluxus:hola"))

(define (safe l n)
    (list-ref l (modulo n (length l))))

;; turn a string of code into a torus, using the character values
;; to drive its scale, rotation and colour
(define (render arg)
    (let ((l (map (lambda (t) (/ t 255)) (stringle arg))))
        (with-state
            (scale (vector (safe l 0)
                    (safe l 1)
                    (safe l 2)))
            (rotate (vmul (vector (safe l 32)
                    (safe l 12)
                    (safe l 30)) 360))
            (colour (vector (safe l 0)
                    (safe l 3)
                    (safe l 4)))

            (build-torus 0.1 1 4 20))))

(scale 2)
;;(render "hello 343 323")

;; every frame, listen for livecoding history arriving over OSC and
;; turn each code string into geometry
(every-frame
    (begin
        (when (osc-msg "/hist")
            (printf "~a~n" (osc 1))
            (when (osc 1)
                (render (osc 1))))))

Hapstar graphs in the wild

Some examples of graphs that scientists have created and published using Hapstar. All these images were taken from the papers that cite the Hapstar publication, with links to them below. I think the range of representations of this genetic information indicates some exciting new directions we can take the software in. There are also some possibilities regarding the minimum spanning tree: finding ways to visualise and explore the range of possible MSTs for a given graph.

Ivens, A. B. F., et al. (2012). Reproduction and dispersal in an ant-associated root aphid community. Molecular Ecology.

Wielstra, B., and Arntzen, J. (2012). Postglacial species displacement in Triturus newts deduced from asymmetrically introgressed mitochondrial DNA and ecological niche models. BMC Evolutionary Biology 12(1): 161.

Kesäniemi, J. E., Rawson, P. D., Lindsay, S. M., and Knott, K. E. (2012). Phylogenetic analysis of cryptic speciation in the polychaete Pygospio elegans. Ecology and Evolution 2: 994–1007. doi:10.1002/ece3.226

Vos, M., Quince, C., Pijl, A. S., de Hollander, M., and Kowalchuk, G. A. (2012). A comparison of rpoB and 16S rRNA as markers in pyrosequencing studies of bacterial diversity. PLoS ONE 7(2): e30600. doi:10.1371/journal.pone.0030600

Evolvable hardware

I’m modding a robot toy for the next Spork Factory experiment. The chassis provides twin motor-driven wheels, and I’m replacing its brains with a circuit based on the ATtiny85 for running the results of the genetic algorithm, plus a pair of light-dependent resistors for ‘seeing’ with.

Here’s the circuit (made in about 20 minutes after installing Fritzing for the first time). It’s quite simple – two LDRs provide input, and some transistors are needed to provide enough power to the robot’s motors (it’s using PNP transistors as they were the first matching pair I could find, which means logical 0 is ‘on’ and 1 is ‘off’ in the code).

The robot needs to be emulated in software so the genetic algorithm can measure the fitness of hundreds of thousands of candidate programs. We need to simulate the effect of motor speeds on its position and velocity – here is a test running the right motor at a constant rate, and gradually increasing the speed of the left motor.

This is the code snippet that calculates the velocity from the two motor speeds – more work is needed to plug in the correct values for the distance between the wheels and the actual rotational speeds of the motor drives.

// update dir and pos from current motor speed
float relative_speed_diff=m_left_motor-m_right_motor;
float angle=atan(relative_speed_diff);
// rotate the direction by the angle
float nx=(m_dir.x*cos(angle))-(m_dir.y*sin(angle));
float ny=(m_dir.x*sin(angle))+(m_dir.y*cos(angle));
m_dir.x=nx;
m_dir.y=ny;
// update the position: move along the new direction at the average
// of the two motor speeds (assuming m_pos holds the robot's position)
float speed=(m_left_motor+m_right_motor)*0.5f;
m_pos.x+=m_dir.x*speed;
m_pos.y+=m_dir.y*speed;

Temporal recursion

Slub have a number of important livecoding transmissions coming up (including a performance at the Mozilla Festival!), so it’s time to work on fluxus/fluxa/scheme bricks. Here are some recording tests of a feature I’ve been wanting to use for a long time – temporal recursion.

These recordings were not changed by hand as they played, but were started and left running in ‘generative audio’ mode in order to try and understand the technique. This method of sequencing is inspired by Impromptu, which uses a similar idea. In fluxa it’s all based around a single new function, “in”, which schedules a call to a function – which can be the current function (this is different from the existing ‘timed tasks’ in fluxus, which are less precise for this kind of sequencing).

(define (tick time a)
    (play (+ time 3) (sample "ga.wav" (note (pick '(40 42 45) a))))
    (in time 0.22 tick (+ a 1)))

(in (time-now) 1 tick 0)

The “in” function takes the current time, the time to wait before the call, the function to call and its parameters. In the example above the argument “a” gets incremented each time, resulting in a sequence of notes being played. Recursion generally brings up thoughts of self-similarity and fractal patterns – as in the graphical use of recursion in fluxus – but here it’s better to imagine a graph of function calls. Each function can branch to an arbitrary number of others, so limitations have to be put in place to stop the thing exploding with too many concurrent calls. What seems to happen (even with small function call graphs) is the appearance of high-level time structures – state changes and shifts into different modes where different patterns of calls lock into sequence. You can hear this clearly in the second recording above, which alters itself halfway through.
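One simple way to keep the call graph from exploding is to pass a counter along and only schedule further branches while it stays under a limit. A sketch along those lines (the limit and the branching pattern are just for illustration, this isn’t built into fluxa):

(define max-depth 4) ; how far the branching is allowed to go

;; play a note, then branch into two more scheduled calls until the
;; depth limit is reached
(define (branching time depth)
    (play (+ time 3) (sample "ga.wav" (note (+ 40 depth))))
    (when (< depth max-depth)
        (in time 0.5 branching (+ depth 1))
        (in time 0.75 branching (+ depth 1))))

(in (time-now) 1 branching 0)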

I’ve also experimented with visualising the call graph, with limited success so far, using this more complex example – the round nodes are functions, the boxes are parameter changes, and the labels on the connections are the branch conditions:

(require fluxus-018/fluxa)

(searchpath "/home/dave/noiz/nm/")
(define n '(23 30))
(set-scale '(1 1 2 1))

(define (za time a b)
    (play (+ time 3) (mul (mul (adsr 0 0.1 1 1) (pick '(0.4 0.1 1 0.4 1) a)) 
        (sample "ga.wav" (note (- (modulo (- b a) 17) (* (pick n a) 2))))) -0.5)
    (when (zero? (modulo a 16)) (in time 0.3 zm a b))
    (if (eq? (modulo a 14) 12)
        (in time 1.0 zoom a (- b 1))
        (in time (- 0.5 (* (modulo a 13) 0.03)) za (+ a 1) (+ b 2))))

(define (zm time a b)
    (play (+ time 3) (mul (mul (adsr 0 0.1 1 1) (pick '(0.1 0.4 1) a))
        (sample "ga.wav" (note (+ (modulo b 5) (/ (pick n a) 2))))) 0.5)
    (if (> a 12)
        (in time 1.0 za b a)
        (in time (pick '(1.3 1.5 0.5) a) zm (+ a 1) b)))

(define (zoom time a b)
    (play (+ time 3) (mul (mul (adsr 0 0.1 1 1) (pick '(0.1 0.2 0.3) a)) 
        (sample "ga.wav" (note (+ (pick n a) (* 3 (modulo a (+ 1 (modulo b 5)))))))))
    (if (> a 16) 
        (in time 0.3 zm 0 (+ b 1))
        (in time (pick '(1.3 0.12) a) zoom (+ a 1) b)))

(in (time-now) 1 zoom 0 1)