Monthly Archives: November 2012

slub at /* vivo */

My last /* vivo */ Mexico post, with some data from our livecoding performance on the final day. This was one of those performances where we had a rough plan and got a bit too carried away by the crowd to follow it (I guess one of the great things about improvisation!). The music was also influenced to a large degree by mezcal, the distilled spirit from the maguey plant, which I can report is the secret ingredient of Mexican livecoding. My edit history and a screenshot of the final state of the program are online here. The new temporal recursion system was actually pretty damn challenging (hence the serious face), but in combination with Alex’s pattern generation it seemed to get people moving pretty well…

/* vivo */ musings

So much to think about after the /* vivo */ festival: how livecoding is moving on, becoming more self-critical as well as gender-balanced. The first sign of this was the focus of the festival being almost entirely philosophical rather than technical. Previous meetings of this nature have involved a fair dose of tech minutiae – here these things hardly entered the conversations.

Show us your screens

One of the significant topics for discussion was put under the spotlight by IOhannes Zmölnig – who are the livecoding audience, what do they expect, and how far do we need to go in order to be understood by them? Do we consider the act of code projection as a spectacle (as in VJing) or is it – as Alex McLean asserts – more about authenticity, showing people what you are doing, what you are interacting with, and an honest invitation? Julian Rohrhuber and Alberto de Campo discussed how livecoding interacts with our school education conditioning, with audiences thinking they are expected to understand what is projected in a particular didactic, limited manner (code projection as blackboard). Livecoding could be used to explore creative ways of confounding these expectations, inviting changes to the many anti-intellectual biases in our society.

Luis Navarro Del Angel presented another important way of thinking about the potential of livecoding – as a new kind of mass creativity and participation, providing artistic methods to wider groups than can be reached by traditional means. This is quite close to my own experience with livecoding music, yet I’m much more used to thinking about what programming offers those who are already artists in some form and familiar with other material. Luis’s approach was more focused on livecoding’s potential for people who haven’t yet found a form of expression, and on making new languages aimed at this user group.

After some introductory workshops, the later ones followed this philosophical thread by considering livecoding approaches rather than tools. Alex and I provided a kind of slub workshop, with examples of the small experimental languages we’ve made – like texture, scheme bricks and lazybots – encouraging participants to consider how their ideal personal creative programming language would work. This provides interesting possibilities and is, I think, a more promising direction than convergence on one or two monolithic systems.

This festival was also a reminder of the importance of free software and its role in providing opportunities in places where, for whatever reasons, education has not provided the tools to work with software. Access to source code – and, in the particular case of livecoding, its celebration and use as material – breeds independence, and helps in the formation of groups such as the scene in Mexico City.

Mexican livecoding style

At only around two years old, the Mexican livecoding scene is pretty advanced. Here are images of (I think) all of the performances at /* vivo */ (Simposio Internacional de Música y Código 2012) in Mexico City, which included lots of processing, fluxus, pure data and Atmel processor bithop along with supercollider and plenty of non-digital techniques too. The from-scratch technique is considered important in Mexico, with most performances using this creative restriction to great effect. My comments below are firmly biased in favour of fluxus, as I don’t consider myself knowledgeable enough for a thorough examination of supercollider usage. There are also probably mistakes and misappropriations – let me know!

Hernani Villaseñor, Julio Zaldívar (M0M0) – A performance of contrasts between Julio’s C-coded 8bit-shifting Atmel sounds and Hernani’s from-scratch supercollider scripts, both building up in intensity through the performance; a great opener. A side effect of Julio using avrdude to upload code was the periodic sonification of bytecode as it spilled into the digital-to-analogue converter during uploads. He was also using an oscilloscope to visualise the sound output, with some of the code clearly designed for the visuals as well as the crunchy sounds.

Mitzi Olvera and Alejandro Franco – I’d been aware of Mitzi’s work for a while from her fluxus videos online, so it was great to see this performance. She made good use of the fluxus immediate mode primitives, starting off by restricting them to points mode only while building up a complex set of recursive patterns, and switching render hints to break the performance down into distinct sections. She neatly transitioned from the initial hard lines and shapes all the way to softened transparent clouds. Meanwhile Alejandro built up the mix and blasted us with Karplus-Strong synthesis, eventually forcing scserver to its knees by flooding it with silent events.

Julian Rohrhuber, Alberto de Campo – A good chunk of Powerbooks Unplugged (plugged in) from Julian and Alberto, starting with a short improvisation before switching to a full composition explored within the Republic framework, sharing code and blending their identities.

Martín Zumaya (Stereo Vision), José Carlos Hasbun (joseCaos) – It was good to see Processing in use for livecoding. Martín improvised a broad range of material before concentrating on iconic minimal constructions that matched well with José’s sounds – a steady build-up of dark polyrhythmic beats with some crazy feedback filtering mapped to the mouse coordinates to keep things fluid and unpredictable.

IOhannes Zmölnig – pure data morse code livecoded in Braille. This was an experiment based on his talk earlier that day, a study in making the code as hard to read for the performer as for the audience. In fact the resulting effect was beautiful, ending with the self-modification of position and structure that IOhannes is famous for – leaving a very consistent audio/visual link to the driving monotonic morse bass, bleeps and white noise.

Radiad3or (Jaime Lobato, Alberto Cerro, Fernando Lomelí, Iván Esquinca and Mauro Herrera) – part 1 was human instruction, an analogue performance as well as a comment on the inadequacy of livecoding for a computer, with commands like “changeTimbre” for the performers to interpret using their voices, a drumkit, flutes and a didgeridoo. Part 2 was then about driving the computer with these sounds, inverting it into a position alongside or following the performers rather than mediating them – being reprogrammed by the music. This performance pushed the concept of livecoding to new levels, leaving us in the dust, still coming to terms with what we were trying to do in the first place!

Benoît and the Mandelbrots (live from Karlsruhe) – a remote performance from Germany. The Mandelbrots dispatched layer upon layer of synthesised texture, along with their trademark in-performance text chat, a kind of code unto itself and a view into their collective mind. The time-lag issues involved with remote streaming – not knowing what/when they could see of us – added an element to this performance all of its own. As did the surprise appearance of various troublemakers in the live video stream…

Jorge Ramírez – another remote performance, this time from Beijing, China. Part grimy glitch and part sonification of firewalls and the effects of imagined or real monitoring and censorship algorithms, this was powerful, and included more temporal disparity – this time caused by the sound arriving some time before the code that described it.

Si, si, si (Ernesto Romero Mariscal Guasp and Luciana Renner Maceralli) – a narrative combination of Luciana’s performance art, tiny webcam-augmented theatre sets, and Ernesto’s supercollider soundtrack. Livecoding hasn’t ventured much into storytelling yet, and this performance indicated that it should. Luciana’s inventive use of projection with liquids and transparent fibres reminded me of the early days of film effects, and was a counterpoint to Ernesto’s synthesised ambience and storytelling audio.

Luis Navarro, Emilio Ocelotl – ambitious stuff, this – dark dubsteppy sounds from Emilio, driving the parameters of a from-scratch fluxus Sierpinski fractal exploration from Luis. Similar to Mitzi’s performance, Luis limited his scene to immediate mode primitives, with a ternary tree recursion forming the basis for constantly morphing structures.

Alexandra Cárdenas, Eduardo Obieta – Something very exciting I noticed with sound/visual pairs such as Alexandra and Eduardo was a tendency for the sounds to be designed with the visuals in mind – e.g. the use of contrasting frequencies that could be picked out well by FFT algorithms. This demonstrated a good mutual understanding, as well as a challenge to the normal DJ/VJ hierarchy. Eduardo fully exercised the NURBS primitive (I remember it would hardly render at 10fps when I first added it to fluxus!), exploding it in response to the sound input before unleashing the self-test script to end the performance in style!

Eduardo Meléndez – one of the original Mexican livecoders, programming audio and visuals at the same time! Not only that, but text (supercollider) and visual programming (vvvv) in one performance too. I would have liked to have paid closer attention to this one, but I was a bit nervous.

Slub finished off the performances, but I’ll write more about that soon as material comes in (I didn’t have time to take any photos!).

Hapstar graphs in the wild

Some examples of graphs that scientists have created and published using Hapstar. All these images were taken from the papers that cite the Hapstar publication, listed below. I think the range of representations of this genetic information indicates some exciting new directions we can take the software in. There are also some possibilities regarding the minimum spanning tree – finding ways to visualise and explore the range of possible MSTs for a given graph (there’s a quick sketch of why more than one MST can exist after the reference list below).

Ivens, A. B. F., et al. (2012). Reproduction and dispersal in an ant-associated root aphid community. Molecular Ecology.

Wielstra, B., and Arntzen, J. (2012). Postglacial species displacement in Triturus newts deduced from asymmetrically introgressed mitochondrial DNA and ecological niche models. BMC Evolutionary Biology, 12(1): 161.

Kesäniemi, J. E., Rawson, P. D., Lindsay, S. M., and Knott, K. E. (2012). Phylogenetic analysis of cryptic speciation in the polychaete Pygospio elegans. Ecology and Evolution, 2: 994–1007. doi:10.1002/ece3.226

Vos, M., Quince, C., Pijl, A. S., de Hollander, M., and Kowalchuk, G. A. (2012). A comparison of rpoB and 16S rRNA as markers in pyrosequencing studies of bacterial diversity. PLoS ONE, 7(2): e30600. doi:10.1371/journal.pone.0030600
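
On the MST point above – the reason there can be a ‘range’ of minimum spanning trees at all is that with integer genetic distances many edges share the same weight, so different tie-breaks while building the tree give different but equally minimal results. Here’s a small Kruskal-style sketch of where those choices happen (this has nothing to do with Hapstar’s actual internals, it’s just an illustration):

#include <algorithm>
#include <numeric>
#include <vector>

struct Edge { int a, b, w; };

// plain union-find for tracking connected components
struct DSU
{
    std::vector<int> parent;
    DSU(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x]==x ? x : parent[x]=find(parent[x]); }
    bool unite(int x, int y)
    {
        x=find(x); y=find(y);
        if (x==y) return false;
        parent[x]=y; return true;
    }
};

// Kruskal's algorithm: wherever several remaining edges share the same
// weight, picking a different one of them here can yield a different
// (but equally minimal) spanning tree - the source of alternative MSTs
std::vector<Edge> minimum_spanning_tree(int nodes, std::vector<Edge> edges)
{
    std::sort(edges.begin(), edges.end(),
              [](const Edge &x, const Edge &y) { return x.w < y.w; });
    DSU dsu(nodes);
    std::vector<Edge> tree;
    for (const Edge &e : edges)
        if (dsu.unite(e.a, e.b)) tree.push_back(e);
    return tree;
}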

Evolvable hardware

I’m modding a robot toy for the next Spork Factory experiment. The chassis provides twin motor-driven wheels, and I’m replacing its brains with a circuit based on the ATtiny85 for running the results of the genetic algorithm, plus a pair of light-dependent resistors for ‘seeing’ with.

Here’s the circuit (made in about 20 minutes after installing Fritzing for the first time). It’s quite simple – two LDRs provide input, and some transistors are needed to provide enough power to the robot’s motors (it uses PNP transistors as they were the first matching pair I could find, which means logical 0 is ‘on’ and 1 is ‘off’ in the code).
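
To show what that inverted logic looks like in practice, here’s a minimal sketch for avr-gcc – the pin assignments are made up for the example, not the ones on my board:

#include <avr/io.h>

// hypothetical pin assignments for the two motor transistors
#define LEFT_MOTOR  PB0
#define RIGHT_MOTOR PB1

int main(void)
{
    // set both motor pins as outputs
    DDRB |= _BV(LEFT_MOTOR) | _BV(RIGHT_MOTOR);

    // the PNP transistors switch on when the pin is pulled low,
    // so logical 0 = motor on, logical 1 = motor off
    PORTB &= ~_BV(LEFT_MOTOR);  // left motor on
    PORTB |= _BV(RIGHT_MOTOR);  // right motor off

    while (1) {}
    return 0;
}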

The robot needs to be emulated in software so the genetic algorithm can measure the fitness of hundreds of thousands of candidate programs. We need to simulate the effect of the motor speeds on its position and velocity – here is a test running the right motor at a constant rate while gradually increasing the speed of the left motor.

This is the code snippet that calculates the velocity from the two motor speeds – more work is needed to plug in the correct values for the distance between the wheels and the actual rotational speeds of the motor drives.

// update dir and pos from current motor speed
float relative_speed_diff=m_left_motor-m_right_motor;
float angle=atan(relative_speed_diff);
    
// rotate the direction by the angle
float nx=(m_dir.x*cos(angle))-(m_dir.y*sin(angle));
float ny=(m_dir.x*sin(angle))+(m_dir.y*cos(angle));
m_dir.x=nx;
m_dir.y=ny;
    
// update the position
m_pos=m_pos.add(m_dir);
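
For comparison, here’s what the standard differential drive update looks like once the wheel separation is plugged in – a self-contained sketch where the wheel separation, speeds and timestep are placeholder numbers rather than measurements from the actual robot:

#include <math.h>
#include <stdio.h>

// differential drive: linear velocity is the average of the wheel speeds,
// angular velocity is their difference divided by the wheel separation
struct Robot
{
    float x=0, y=0;   // position
    float heading=0;  // angle in radians

    void update(float left_speed, float right_speed,
                float wheel_sep, float dt)
    {
        float forward=(left_speed+right_speed)*0.5f;
        float turn=(right_speed-left_speed)/wheel_sep;
        heading+=turn*dt;
        x+=forward*cos(heading)*dt;
        y+=forward*sin(heading)*dt;
    }
};

int main()
{
    Robot r;
    // right motor constant, left motor gradually speeding up,
    // as in the test described above
    for (int i=0; i<100; i++)
    {
        r.update(0.01f*i, 1.0f, 0.1f, 0.1f);
        printf("%f %f\n", r.x, r.y);
    }
    return 0;
}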

Spork factory

A system for creating an abundance of useless software for tiny devices. Spork Factory evolves programs that run on Atmel processors – the same make as found on the Arduino, in this case the ATtiny85, a £2.50 8-pin 8bit CPU. I’m currently using just a piezo speaker as an output, and evolving programs based on the frequency content of the sound produced by flipping the pins up and down – so creating 2bit synths using the Fourier transform as the fitness function. With more hardware (input as well as output) perhaps we could evolve small robots, or even cheap claytronics or programmable matter experiments.
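
As a rough sketch of how that kind of fitness measure can work (this isn’t the project’s actual code, just an illustration – the naive DFT and the energy threshold are placeholder choices):

#include <math.h>
#include <vector>

// score a recorded pin waveform by how many frequency bins contain
// significant energy - richer spectra score higher
float spectral_richness(const std::vector<float> &samples)
{
    const float PI=3.14159265f;
    const float threshold=0.01f;     // placeholder energy threshold
    const size_t n=samples.size();
    size_t rich_bins=0;
    for (size_t k=1; k<n/2; k++)     // skip DC, use half the spectrum
    {
        float re=0, im=0;
        for (size_t t=0; t<n; t++)   // naive DFT of bin k
        {
            float angle=-2.0f*PI*k*t/n;
            re+=samples[t]*cosf(angle);
            im+=samples[t]*sinf(angle);
        }
        if (sqrtf(re*re+im*im)/n > threshold) rich_bins++;
    }
    return (float)rich_bins;
}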

This project reuses the previous genetic programming experiments (including jgap as its genetic algorithm framework), and is also inspired by Till Bovermann’s recent work with Betablocker in Supercollider for bytecode synthesis.

The generated programs don’t use the Atmel instruction set directly; instead they run on an interpreter for a custom instruction set derived from Betablocker, for two reasons. Firstly, Atmel processors separate their instruction memory from data (the Harvard architecture), which makes it difficult to modify code as it’s running – whether uploading newly evolved code or executing self-modifying instructions. Secondly, a simplified custom instruction set makes it easier for genetic algorithms to create all kinds of strange programs that will always run.

I’ve added an ‘OUT’ instruction, which pops the top of the stack and writes it to the pins on the ATtiny, so the first thing a program needs to do is generate and output some data. The second thing it needs to do is create an oscillator to produce a tone; after that, the fitness function grades the program on the number of frequencies present in the sound, encouraging it to make richer noises.
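
To make the listings below a bit easier to follow, here’s a very rough sketch of the kind of interpreter loop involved – the opcodes and their exact semantics are my assumptions read off the evolved programs, not the real Betablocker-derived implementation:

#include <stdint.h>
#include <stdio.h>
#include <vector>

// stand-in for writing the popped value to the ATtiny output pins
void write_pins(uint8_t v) { printf("OUT %u\n", (unsigned)v); }

// a few guessed opcodes based on the listings below
enum { NOP, OUT, DEC, DUP, PSHL, JMPZ, JMP };

void run(const std::vector<uint8_t> &prog, int max_steps)
{
    std::vector<uint8_t> stack;
    size_t pc=0;

    // popping an empty stack reads as 0, so evolved programs always run
    auto pop=[&]() -> uint8_t {
        if (stack.empty()) return 0;
        uint8_t v=stack.back(); stack.pop_back(); return v;
    };

    for (int step=0; step<max_steps; step++)
    {
        uint8_t instr=prog[pc%prog.size()];
        uint8_t arg=prog[(pc+1)%prog.size()];
        switch (instr)
        {
        case OUT:  write_pins(pop()); pc++; break;
        case DEC:  stack.push_back(pop()-1); pc++; break;  // wraps 0 -> 255
        case DUP:  { uint8_t v=pop(); stack.push_back(v);
                     stack.push_back(v); pc++; break; }
        case PSHL: stack.push_back(arg); pc+=2; break;     // push literal
        case JMPZ: pc = pop()==0 ? arg : pc+2; break;      // jump if zero
        case JMP:  pc=arg; break;
        default:   pc++; break;                            // NOP etc.
        }
    }
}

Feeding the first listing below through something like this makes the 0/255 oscillation described underneath it easy to see.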

Here are two example programs from a single run, first the ancestor, a simple oscillator which evolved after 4 or 5 generations:

out
out
nop
nop
dec
nop
nop
nop
out
nop
jmpz 254
nop
nop
nop
dup

It’s simply outputting 0s, then using the ‘dec’ to decrement the top of the stack to make a 255, which sets the rightmost bit (the one the speaker is attached to) to 1, and then loops with the ‘jmpz’, causing it to oscillate. This program produces this FFT plot:

After 100 or so further generations, this descendant program emerges. The ‘dec’ is replaced by ‘pshl 81’, which does the same job (pushing the literal value 81 onto the stack, setting our speaker bit to 1) but also uses a ‘dup’ (duplicate the top of the stack) to shuffle the values around, making a more complex output signal with more frequencies present:

out
out
not
nop
pshl 81
pshi 149
out
nop
out
nop
dup
psh 170
jmp 0

Some further experiments, and perhaps even sound samples soon…