Category Archives: slub

MS Stubnitz Algorave #2

Our second offshore Algorave on the MS Stubnitz, during the ship’s final night in London. The crowd was pleasingly diverse, with lots of people new to algorave and livecoding, and although behind the scenes we had some hitches due to the ship’s impending departure for France, the event was relaxed and went smoothly. Our performance was honoured by a guest appearance from Elvi$ Ca$h, as well as a re-compile of the Al Jazari rave bots. As one of those spending the night on the ship afterwards, I had to be careful not to have too much of a lie-in!

Alexandra Cárdenas live coded dark textures and sharply angled beats.
Shelly and some Mandelbrots transmitting multi-layered synthetic tones, with Davide Della Casa in the foreground operating livecodelab; he and Guy John live coded inspiring minimalist geometric expressions to match the music throughout the night.

Better and considerably wider-angle photos by Yoshizen can be found here.

Algorave practice

It’s been a long time since I recorded anything, but I thought I would a) get some actual livecoding practice in for the upcoming algorave on Thursday and b) record everything. As usual I’m following my foolhardy approach of improvising both musical structure and sound material by livecoding synth graphs from scratch. Sometimes it takes a while longer than I would like to reach a suitable musical complexity (this is faster in a real live situation, with increased adrenaline), and for some fiddly things there never seems to be time, such as stereo! For these recordings, and live on stage with slub, I use the scheme bricks visual programming language. Here are some of my favourites; the complete set is here.
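
For a flavour of what livecoding a synth graph from scratch involves, here is a minimal sketch in the fluxa style that scheme bricks generates. The operator names (play, mul, adsr, sine, saw, mooglp, time-now) are assumptions from memory rather than the exact API:

;; a minimal from-scratch synth graph sketch (operator names assumed)
;; a percussive sine blip on the next beat...
(play (time-now)
      (mul (adsr 0 0.1 0 0)      ; fast attack/decay envelope
           (sine 440)))

;; ...layered with a low filtered saw half a beat later
(play (+ (time-now) 0.5)
      (mul (adsr 0 0.2 0.1 0.5)
           (mooglp (saw 55) 0.4 0.3))) ; lowpass cutoff and resonance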

Life on an Algorave Tour

Some pictures taken during the recent Algorave Tour, making people dance to algorithms in Brighton, London, Karlsruhe, Cologne and Düsseldorf.

The MS Stubnitz moored in Canary Wharf surrounded by financial architecture:
IMG_20130418_135658

IMG_20130418_140810

Wandering around during soundcheck, a heavy-duty workshop on the Stubnitz:
IMG_20130418_143850

Sound checking with Andrew Sorensen:
IMG_20130418_162540

A speaker close-up, one of many:
IMG_20130418_162740

Norah Lorway shaking the boat’s superstructure with sub bass:
IMG_20130418_200317

MYK livecoded acid squelch:
IMG_20130418_204823

Sick Lincoln deploying highly danceable, crowd-pleasing algorithms:
IMG_20130418_213634

Mico Rex, Mexico’s finest algorave pop duo, closing the night:
IMG_20130418_222153

Onward to Karlsruhe, a random photo of the slub soundcheck:
IMG_20130420_080039

Hernani Villaseñor’s 8-bit house with visible parentheses:
IMG_20130420_082809

Fredrik Olofsson livecoding five Arduinos simultaneously for 2-bit grindcore:
IMG_20130420_111707

algoravin in the UK

A short trip around the UK for slub over the last couple of days, starting with a livecoding gig at Bartlett Nexus in UCL, London, at an event concerning architecture, games and handmade technology. A full video of the event (with us at the end) is here.

bartletts1

bartletts2

Also collected along the way, a photo of the xname manufacturing lab:
xname

Then up to Birmingham to attend the Network Music Festival and do another performance late on Saturday night. While there we had a chance to meet the members of The Hub, pioneers of networked music and livecoding. It was inspiring to chat with such experienced musicians in this field. The NMF included a huge range of performances, for example Melatab, who used Kinect cameras for networked performance in a shared virtual space. I’m planning some Kinect hacking soon, so I took some photos:

nmf1

nmf2

slub at /* vivo */

My last /* vivo */ Mexico post, some data from our livecoding performance on the final day. This was one of those performances where we had a rough plan and got a bit too carried away by the crowd to follow it (I guess that’s one of the great things about improvisation!). The music was also influenced to a great degree by mezcal, the distilled spirit from the maguey plant, which I can report is the secret ingredient of Mexican livecoding. My edit history and a screenshot of the final state of the program are online here. The new temporal recursion system was actually pretty damn challenging (hence the serious face) but in combination with Alex’s pattern generation seemed to get people moving pretty well…

/* vivo */ musings

So much to think about after the /* vivo */ festival: how livecoding is moving on, becoming more self-critical as well as gender balanced. The first sign of this was the focus of the festival being almost entirely philosophical rather than technical. Previous meetings of this nature have involved a fair dose of tech minutiae – here these things hardly entered the conversations.

Show us your screens

One of the significant topics for discussion was put under the spotlight by IOhannes Zmölnig – who are the livecoding audience, what do they expect, and how far do we need to go in order to be understood by them? Do we consider the act of code projection as a spectacle (as in VJing) or is it – as Alex McLean asserts – more about authenticity, showing people what you are doing, what you are interacting with, and an honest invitation? Julian Rohrhuber and Alberto de Campo discussed how livecoding impacts on our school education conditioning, with audiences thinking they are expected to understand what is projected in a particular didactic, limited manner (code projection as blackboard). Livecoding could be used to explore creative ways of confounding these expectations, to invite changes to the many anti-intellectual biases in our society.

Luis Navarro Del Angel presented another important way of thinking about the potential of livecoding – as a new kind of mass creativity and participation, providing artistic methods to wider groups than can be reached by traditional means. This is quite close to my own experience with livecoding music, and yet I’m much more used to thinking about what programming offers those who are already artists in some form, and familiar with other material. Luis’s approach was more focused on livecoding’s potential for people who haven’t yet found a form of expression, and on making new languages aimed at this user group.

After some introductory workshops, the later ones followed this philosophical thread by considering livecoding approaches rather than tools. Alex and I provided a kind of slub workshop, with examples of the small experimental languages we’ve made, like texture, scheme bricks and lazybots, encouraging participants to consider how their ideal personal creative programming language would work. This provides interesting possibilities and, I think, a more promising direction than convergence on one or two monolithic systems.

This festival was also a reminder of the importance of free software, and its role in providing opportunities in places where, for whatever reason, education has not provided the tools to work with software. Access to source code – and in the particular case of livecoding, its celebration and use as material – breeds independence, and helps in the formation of groups such as the scene in Mexico City.

Mexican livecoding style

At only around 2 years old, the Mexican livecoding scene is pretty advanced. Here are images of (I think) all of the performances at /* vivo */ (Simposio Internacional de Música y Código 2012) in Mexico City, which included lots of processing, fluxus, pure data and ATMEL processor bithop, along with supercollider and plenty of non-digital techniques too. The from-scratch technique is considered important in Mexico, with most performances using this creative restriction to great effect. My comments below are firmly biased in favour of fluxus, as I don’t consider myself knowledgeable enough for a thorough examination of the supercollider usage. Also there are probably mistakes and misappropriations – let me know!

Hernani Villaseñor, Julio Zaldívar (M0M0) – a performance of contrasts between Julio’s C-coded, bit-shifting 8-bit ATMEL sounds and Hernani’s from-scratch supercollider scripts, both building up in intensity through the performance – a great opener. A side effect of Julio using avrdude to upload code was the periodic sonification of bytecode as it spilled into the digital-to-analogue converter during uploads. He was also using an oscilloscope to visualise the sound output, with some of the code clearly designed for its visuals as well as its crunchy sounds.

Mitzi Olvera and Alejandro Franco – I’d been aware of Mitzi’s work for a while from her fluxus videos online, so it was great to see this performance. She made good use of the fluxus immediate mode primitives, starting off by restricting them to points mode only while building up a complex set of recursive patterns, and switching render hints to break the performance down into distinct sections. She neatly transitioned from the initial hard lines and shapes all the way to softened transparent clouds. Meanwhile Alejandro built up the mix and blasted us with Karplus-Strong synthesis, eventually forcing scserver to its knees by flooding it with silent events.

Julian Rohrhuber, Alberto de Campo – A good chunk of powerbooks unplugged (plugged in) from Julian and Alberto, starting with a short improvisation before switching to a full composition explored within the republic framework, sharing code and blending their identities.

Martín Zumaya (Stereo Vision), José Carlos Hasbun (joseCaos) – It was good to see Processing in use for livecoding, and Martín improvised a broad range of material before concentrating on iconic minimal constructions that matched well with José’s sounds – a steady build-up of dark polyrhythmic beats with some crazy feedback filtering mapped to the mouse coordinates to keep things fluid and unpredictable.

IOhannes Zmölnig – pure data morse code livecoded in Braille. This was an experiment based on his talk earlier that day: a study in making the code as hard to read for the performer as it is for the audience. In fact the resulting effect was beautiful, ending with the self-modification of position and structure that IOhannes is famous for – leaving a very consistent audio/visual link to the driving monotonic morse bass, bleeps and white noise.

Radiad3or (Jaime Lobato, Alberto Cerro, Fernando Lomelí, Iván Esquinca and Mauro Herrera) – part 1 was human instruction: analogue performance, as well as a comment on the inadequacy of livecoding for a computer, with commands like “changeTimbre” for the performers to interpret using their voices, a drumkit, flutes and a didgeridoo. Following this, part 2 was about driving the computer with these sounds, inverting it into a position alongside or following the performers rather than acting as a mediator – being reprogrammed by the music. This performance pushed the concept of livecoding to new levels, leaving us in the dust, still coming to terms with what we were trying to do in the first place!

Benoît and the Mandelbrots (live from Karlsruhe) – a remote performance from Germany. The Mandelbrots dispatched layers upon layers of synthesised texture, along with their trademark in-performance text chat, a kind of code unto itself and a view into their collective mind. The time lag issues involved with remote streaming – not knowing what/when they could see of us – added an element to this performance all of its own. As did the surprise appearance of various troublemakers in the live video stream…

Jorge Ramírez – another remote performance, this time from Beijing, China. Part grimy glitch, part sonification of firewalls and the effects of imagined or real monitoring and censorship algorithms, this was powerful, and included more temporal disparity – this time caused by the sound arriving some time before the code that described it.

Si, si, si (Ernesto Romero Mariscal Guasp and Luciana Renner Maceralli) – a narrative combination of Luciana’s performance art, tiny webcam-augmented theatre sets, and Ernesto’s supercollider soundtrack. Livecoding hasn’t ventured much into storytelling yet, and this performance indicated that it should. Luciana’s inventive use of projection with liquids and transparent fibres reminded me of the early days of film effects, and was a counterpoint to Ernesto’s synthesised ambience and storytelling audio.

Luis Navarro, Emilio Ocelotl – ambitious stuff, this – dark dubsteppy sounds from Emilio driving the parameters of a from-scratch fluxus Sierpinski fractal exploration from Luis. As in Mitzi’s performance, Luis limited his scene to immediate mode primitives, a ternary tree recursion forming the basis for constantly morphing structures.

Alexandra Cárdenas, Eduardo Obieta – Something very exciting I noticed was a tendency, when working in sound/visual pairs such as Alexandra and Eduardo, for the sounds to be designed with the visuals in mind – e.g. the use of contrasting frequencies that can be picked out well by FFT algorithms. This demonstrated a good mutual understanding, as well as a challenge to the normal DJ/VJ hierarchy. Eduardo fully exercised the NURBS primitive (I remember it would hardly render at 10fps when I first added it to fluxus!), exploding it to the sound input before unleashing the self-test script to end the performance in style!

Eduardo Meléndez – one of the original Mexican livecoders, programming audio and visuals at the same time! Not only that, but text (supercollider) and visual programming (vvvv) in one performance too. I would have liked to have paid closer attention to this one, but I was a bit nervous.

Slub finished off the performances, but I’ll write more about that soon, as material comes in (I didn’t have time to take any photos!).

Making time

Time, the ever-baffling one-directional mystery. A lot of it has been spent by the members of slub on ways to synchronise multiple machines to share a simple beat, sometimes attempting industrial-strength solutions, but the longest-standing approach we always come back to for our various ad-hoc software remains a single OSC message. This is the kind of thing that normally seems to involve stressed pre-performance hacking, so after having to rewrite it for temporal recursion I thought I should get it down here for future reference!

The message is called “/sync” and contains two floating point values: the first is the number of beats in a “bar” (which is legacy – we don’t use it now), and the second is the current beats per minute. The time the message is sent is considered to be the start of the beat. A sync message comes into my system via a daemon called syncup. All this really does is attach a timestamp to the sync message, recording the local time on my machine when it arrived, and send it on to fluxus. Shared timestamps would be better, but they don’t make any sense without a shared clock, and shared clocks seem fragile for our demands. The daemon polls on a fairly tight loop (100ms), and the resulting timestamp seems accurate enough for our ears (fluxus runs at the frame refresh rate, which is too variable for this job).
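
As a rough illustration, the daemon boils down to something like this sketch – receive-osc, send-osc, time-now and sleep-ms are hypothetical stand-ins for the actual syncup internals:

;; sketch of the syncup daemon: poll for /sync, attach the local
;; arrival time and forward it on (helper names are hypothetical)
(define (syncup-loop)
  (let ((msg (receive-osc "/sync")))
    (when msg
      (let ((beats-per-bar (car msg))  ; legacy, unused
            (bpm (cadr msg)))
        ;; forward with the local arrival timestamp attached
        (send-osc "/sync-stamped" (list (time-now) beats-per-bar bpm)))))
  (sleep-ms 100)                       ; the fairly tight 100ms poll
  (syncup-loop))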

So now we have a new sync message which includes a timestamp for the beat start. The first thing the system does is assume this timestamp is in the past, and that the current time has already moved ahead. There are three points of time involved: the sync time, the current system time, and the “logical time” used for scheduling.

From the sync time (in the past) and the bpm we can calculate the beat times into the future. The “logical time” is initialised with the current time from the system clock, has a safety margin added, and then gets “snapped” to the nearest beat. The safety margin is needed as the synth graph build and play messages coming from fluxus need to arrive early enough to be scheduled by fluxa’s synth engine and play with sample accuracy.

The beat snapping has to be able to move back in time as well as forwards, to absorb tiny adjustments from the sync messages (as they never come in exactly when they are expected) – otherwise we skip beats. The algorithm to do this is as follows:

(define (snap-time-to-sync time)
  (+ time (calc-offset time last-sync-time (* (/ 1 bpm) 60)))) 

(define (calc-offset time-now sync-time beat-dur)
  ;; find the difference in terms of tempo
  (let* ((diff (/ (- sync-time time-now) beat-dur))
         ;; get the fractional remainder (doesn't matter how
         ;; far in the past or future the synctime is)
         (fract (- diff (floor diff))))
    ;; do the snapping
    (if (< fract 0.5) 
        ;; need to jump forwards - convert back into seconds
        (* fract beat-dur) 
        ;; the beat is behind us, so go backwards
        (- (* (- 1 fract) beat-dur)))))
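
To make the flow concrete, here is how the snapping might be used to initialise and advance the logical time – a sketch, with time-now standing in for a system clock read and the safety margin value picked arbitrarily:

;; sketch: logical time starts at the system clock plus a safety
;; margin, snapped onto the beat grid (time-now is hypothetical)
(define safety-margin 0.2) ; seconds - early enough for fluxa to schedule
(define logical-time (snap-time-to-sync (+ (time-now) safety-margin)))

;; each beat: step forward one beat duration and re-snap, so tiny
;; sync adjustments are absorbed rather than accumulating
(define (next-beat!)
  (set! logical-time
        (snap-time-to-sync (+ logical-time (* (/ 1 bpm) 60)))))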

The last thing that is needed is a global sync offset, which I add to the incoming message timestamps at the start of the process. This has to be tuned by ear, and accounts for the fact that the latency between the synth playing a note and the speakers moving air seems to vary between machines depending on many uncertain factors – sound card parameters, battery vs AC power, sound system setup, colour of your backdrop etc.
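
In code this is no more than an addition as each stamped message arrives – a sketch, with the offset value being whatever your ears settle on for a given machine:

;; sketch: per-machine offset applied to incoming sync timestamps
(define global-sync-offset 0.02) ; seconds, tuned by ear
(define (on-sync-stamped timestamp)
  (set! last-sync-time (+ timestamp global-sync-offset)))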

Other than this we tend to keep the networking tech to a minimum, and use our ears and scribbled scores (sometimes made from stones) to share any other musical data.

scheme bricks 2

A new version of scheme bricks is under way, planned to be tested out with slub at the Mozilla Fest Party, then taken across the Atlantic for some more livecoding action in Mexico City! New things include blocks with depth – cosmetic for the moment, but I plan to prototype some new ideas based on this – separately zoomable code blocks, and most importantly a complete rewrite into functional R5RS Scheme for portability. It should now be relatively simple to get it on Android via Nomadic, which uses TinyScheme.

Using the new temporal recursion, the code produced is much less monolithic. Massively tall structures resulting from plugging together sequences during long performances were a bit of an issue before, but splitting the code into a multitude of functions (which can be shrunk and put in the “background”) seems a far easier way of working so far.
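
For reference, the temporal recursion style looks roughly like the sketch below: each sequence is a small function that plays something and schedules a future call to itself, so a performance becomes many little functions rather than one tall structure. The in-time scheduler and synth operators are assumed names rather than the exact fluxa API:

;; sketch of temporal recursion: small self-rescheduling functions
;; (in-time, play, mul, adsr, sine are assumed names)
(define (kick time)
  (play time (mul (adsr 0 0.1 0 0) (sine 50)))
  (in-time 0.5 kick)) ; schedule ourselves again half a beat on

(define (bleeps time n)
  (play time (mul (adsr 0 0.05 0 0) (sine (* 110 n))))
  (in-time 0.25 (lambda (time)
                  (bleeps time (if (> n 8) 1 (+ n 1))))))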