Departing in a new direction after the evolved light follower robots: take 500 processor cores spread out in space. Give them a simple instruction set which includes an instruction to copy (DMA) 8 bytes of their code/data to nearby cores (with an error rate of 0.5%). Fill the cores with random junk and set them running. If we graph the bandwidth used (the amount of data transmitted per cycle by the whole system) we get a plot like this:
This explosion in bandwidth use is due to the implicit emergence of programs which replicate themselves. There is no fitness function, it’s not a genetic algorithm, and there is no guided convergence to a single solution – no ‘telling it what to do’. Programs may arise that cooperate with each other, or may exhibit parasitic behaviour as in alife experiments like Tierra. This could be seen as a kind of self-modifying, emergent amorphous computing – and eventually, perhaps, a way of evolving programs in parallel on cheap ATtiny processor hardware.
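To make the setup concrete, here is a minimal sketch of the same shape of experiment in Python – the two opcodes, the 64-byte core size and the copy-destination rule are my assumptions, stripped right down from the actual Betablocker-style instruction set:

import random

NUM_CORES = 500
CORE_SIZE = 64        # bytes per core - an assumption, not the original size
DMA_LEN = 8           # bytes copied per dma
ERR_RATE = 0.005      # 0.5% chance each copied byte is corrupted

PSH, DMA = 1, 2       # toy opcodes; every other byte value acts as a nop

cores = [[random.randrange(256) for _ in range(CORE_SIZE)]
         for _ in range(NUM_CORES)]
pcs = [0] * NUM_CORES
stacks = [[] for _ in range(NUM_CORES)]

def cycle():
    """Run one instruction on every core, return total bytes transmitted."""
    bandwidth = 0
    for i in range(NUM_CORES):
        op = cores[i][pcs[i]]
        if op == PSH:
            # push the next byte as a literal and skip over it
            stacks[i].append(cores[i][(pcs[i] + 1) % CORE_SIZE])
            pcs[i] = (pcs[i] + 2) % CORE_SIZE
        elif op == DMA and stacks[i]:
            # copy DMA_LEN bytes, starting at the popped address,
            # into a neighbouring core at the same offsets
            src = stacks[i].pop() % CORE_SIZE
            dst = random.choice((i - 1, i + 1)) % NUM_CORES
            for j in range(DMA_LEN):
                byte = cores[i][(src + j) % CORE_SIZE]
                if random.random() < ERR_RATE:
                    byte = random.randrange(256)   # copy error = mutation
                cores[dst][(src + j) % CORE_SIZE] = byte
            bandwidth += DMA_LEN
            pcs[i] = (pcs[i] + 1) % CORE_SIZE
        else:
            pcs[i] = (pcs[i] + 1) % CORE_SIZE      # treat junk as a nop
    return bandwidth

for t in range(20000):
    print(t, cycle())   # plot this column to see the bandwidth curve

Plotting the per-cycle totals from a run like this is what produces the bandwidth explosion above.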
In order to replicate, the code needs to push a value onto the stack as the address to start the copy from, and then call the dma instruction to broadcast it. This is one of the 500 cores visualised using the Betablocker DS emulator display; the new “up” arrow icon is the dma instruction.
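In the toy encoding of the sketch above (again my own illustrative encoding, not the actual Betablocker DS bytecode), a hand-written replicator is just that pair of instructions, re-executed forever as the program counter wraps around the core:

# psh 0 ; dma - broadcast 8 bytes starting at address 0, which
# contain this very program; the zero padding bytes act as nops,
# so the pc wraps around the core and runs it again
replicator = [PSH, 0, DMA] + [0] * (CORE_SIZE - 3)
cores[0] = replicator[:]   # seed one core by hand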
Thinking more physically, the arrangement of the processors in space is a key factor – here are some visualisations. The size of each core increases when it transmits, and its colour is set by a hash of the last data transmission – so we can see the rise of communication and the propagation of different strains of successful code.
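The colour mapping can be as simple as hashing the last transmission down to an RGB triple, so that cores carrying the same strain of code light up in the same colour – a sketch:

import hashlib

def strain_colour(data: bytes):
    # identical transmissions hash to identical colours, so each
    # successful strain shows up as its own hue as it spreads
    h = hashlib.md5(data).digest()
    return (h[0] / 255, h[1] / 255, h[2] / 255)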
The source is here and includes the visualisation software which requires fluxus.
Continuing with the structured procrastination R&D project on evolvable hardware, I’m proud to report a pretty decent light-following robot – this is a video of the first real-world test, with a program grown from primordial soup chasing me around:
After creating a software model simulation of the robot in the last post, I added some new bytecode instructions to the virtual machine: LEYE and REYE push 1 onto the stack if we are detecting light from the left or right photoresistor, and zero if it’s dark. LMOT and RMOT pop the top byte of the stack to turn the motors on and off. The genetic algorithm’s fitness function runs each generated 16-byte program on the robot for 1000 cycles, moving the robot to a new random location and facing direction 10 times without stopping the program. At the end of each run the position of the robot is compared to the light position, and the distances are averaged to give the fitness. Note that we’re not assigning fitness to how fast we get to the light.
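In rough Python the fitness evaluation looks something like this – the simulator interface (reset, place_robot, step, distance_to_light) is invented here for illustration, not the original code:

import math, random

PLACEMENTS = 10   # random repositionings per program
CYCLES = 1000     # VM cycles per placement

def fitness(program, sim):
    """Lower is better: mean distance to the light across placements."""
    sim.reset(program)   # load the 16-byte program once
    distances = []
    for _ in range(PLACEMENTS):
        # new random position and facing, without restarting the program
        sim.place_robot(random.uniform(0, sim.width),
                        random.uniform(0, sim.height),
                        random.uniform(0, 2 * math.pi))
        for _ in range(CYCLES):
            sim.step()   # one VM instruction plus robot physics
        distances.append(sim.distance_to_light())
    return sum(distances) / PLACEMENTS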
This is pretty simple stuff, but it’s still interesting to look at what happens over time in the genetic algorithm. Both motors are running at startup by default, so the first successful programs learn how to turn one motor off – otherwise the robot just shoots off and scores really low. So the first generations tend to just go round in circles. Then they start to learn how to plug the eyes in, one by one, edging closer to the goal – then it’s a case of improving the sample rate to improve accuracy, usually by using jmps and optimising the loops.
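The surrounding loop, continuing the sketch above, is a bog-standard generate/score/mutate cycle over 16-byte programs – the population size and mutation scheme here are guesses rather than the original parameters:

import random

POP_SIZE = 32
PROG_LEN = 16

def evolve(sim, generations):
    pop = [[random.randrange(256) for _ in range(PROG_LEN)]
           for _ in range(POP_SIZE)]
    for g in range(generations):
        pop.sort(key=lambda p: fitness(p, sim))   # best (lowest) first
        survivors = pop[:POP_SIZE // 2]           # keep the better half
        children = []
        for p in survivors:
            child = p[:]
            child[random.randrange(PROG_LEN)] = random.randrange(256)
            children.append(child)                # one point mutation each
        pop = survivors + children
    return pop[0]   # best of the last scored generation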
This is an example of a fairly simple and effective solution, the final generation shown in the animation above:
Some explanation: the right and left eyes are plugged into the left and right motors, which is the essential bit making it work; the ‘nop’s are all values that are not executable. The ‘rmot’ before the ‘jmpz’ makes the robot scan around in circles if there is no light detected (strangely, a case which doesn’t happen in the simulation). The argument to ‘jmpz’ is 0 (loop), which is actually the 17th byte – so it’s cheekily using memory which has been initialised to zero as part of its program.
This is a more complicated and stranger program which evolved after 70 generations with a high fitness – I haven’t worked out what it’s up to yet:
A script for sniffing bits of SuperCollider code being broadcast as livecoding history over a network and re-interpreting them as objects in fluxus, written during an excellent workshop by Alberto de Campo and Julian Rohrhuber at /*VIVO*/ Mexico City.
(define (string->l str)
  (map char->integer (string->list str)))

;;(osc-destination "osc.udp:255.255.255.255:57120")
;;(osc-send "/vivo" "s" '("fluxus:hola"))

;; index into a list with wraparound
(define (safe l n)
  (list-ref l (modulo n (length l))))

;; turn an incoming string of code into a torus, using its character
;; values (scaled to 0..1) to drive scale, rotation and colour
(define (render arg)
  (let ((l (map (lambda (t) (/ t 255)) (string->l arg))))
    (with-state
      (scale (vector (safe l 0) (safe l 1) (safe l 2)))
      (rotate (vmul (vector (safe l 32) (safe l 12) (safe l 30)) 360))
      (colour (vector (safe l 0) (safe l 3) (safe l 4)))
      (build-torus 0.1 1 4 20))))

(clear)
(scale 2)
;;(render "hello 343 323")

(every-frame
  (begin
    (when (osc-msg "/hist")
      (printf "~a~n" (osc 1))
      (when (osc 1)
        (render (osc 1))))))
Here is an initial plan for how the thing will work:
The main complexities include locating open data sources for sea states and tides, and creating an interface that works easily enough on a small fishing boat under various weather conditions – for example, touch screens aren’t much use if you’re wearing gloves. Approaches to try include physical buttons, shaking, or voice input. As with previous FoAM projects Boskoi and Zizim, this will be built on the Ushahidi platform. Source repo location to follow…