Category Archives: Flotsam

Tangible livecoding tests in the wild, and material as type in programming

Last week I took the flotsam tangible livecoding system to my programming tutoring lesson for some first tests with the real experts. To provide some background, we started a while back with the Raspberry Pi, messing around with the Minecraft API and Python, and we've recently moved on to laptops and pygame. I arrived with the system set up for the Minecraft building language, and we gave it 10 minutes or so before resuming normal activities – although it did get a bit more use during natural breaks in the lesson. Here's a recent pic of the weaving l-system setup:

tangibleweaves

First impressions were that it was immediately playful: most of the blocks were eagerly removed and examined before we'd even turned the thing on. One problem with this was that the chalk symbols rubbed off very quickly! A similarity with the Scratch programming language was also picked up on fairly soon.

A big issue was getting the connectors the right way round – this is not easy as the blocks are circular with little indication of which way is ‘up’. This could be fixed by altering the shape or cutting a little notch – learning how to manipulate them is part of the point, but right now it’s too difficult. This will also be a big problem in livecoding performance situations.

Using Minecraft, the screen was still a distraction. The connection between what's happening in the 3D world and with the physical blocks is not obvious enough – even with the LED indicators. Also, the mouse and keyboard need to be plugged in so we can move the camera around and see things, leading to a few too many things going on. I'm not sure how this can be solved with Minecraft, but it indicates that a musical approach (with no screen or any other peripherals needed other than speakers) is the way to go.

Overall there is huge potential to think much more about the touch/feel of the blocks. Lots of these problems can be solved by using different shapes for different symbols (removing the need for chalk) with a clear orientation. Using different materials to provide textures and ‘feel’ in order to represent different sounds, or in terms of weaving, using the actual yarn material to represent itself is a huge area to explore.

materials

In the photo above, I'm using thick and thin blue yarn wrapped around the blocks that represent them, and tinfoil for the l-system rule symbols (X, Y and Z) – a kind of material-based type indication. Interestingly, the two yarns feel very different even though they look the same in the photo. In use this results in much less checking of the blocks when you pick them up, as your fingers tell you what they are.

dremelling

Learning about thinking and weaving in Leeds and Sheffield

The second week of intense work on weavingcodes/codingweaves took place in the north of England, and began with a talk at the New Mechanics Institute show and tell meeting in Leeds. This was a good place to pick up from the previous week in Denmark, as it included a talk on pixel art, from the early days to contemporary forms, that had many similarities with our slides.


The first half of the week was spent at Leeds University, at the Interdisciplinary Centre for Scientific Research in Music, where we met Tim Ingold, whose books (e.g. Making: Anthropology, Archaeology, Art and Architecture) have been a big influence on the project. One of the things we discussed with Tim was the cyberspace myth – the idea that computing is intangible and exists in some “other world” (or in a “cloud” somewhere) – which is increasingly problematic. The need to highlight the physical basis of computation is one this project is well placed to address – and if we consider weaving to be a form of computation, then it may be able to inspire approaches to programming that have at least three thousand years of history behind them. Tim also discussed knowledge as movement, and his understanding of tacit knowledge – that which is otherwise unwritten or unspoken. Tacit knowledge is fluid and exists underneath a counterpart, called ‘articulated knowledge’, which is fixed by definitions. Culturally we have a bias towards articulated knowledge, and the idea of an unspoken knowledge is something I realise I am trying to deal with in my teaching activities, but have never been able to suitably describe.

On day 2 we met with Leslie Downes, a retired professional weaver whose woven structures have gone into space and been used in jet engines and bulletproof vests.

Alex and Ellen talking with Leslie Downes

There are many places where woven structures are superior to other kinds of manufacturing, but at the same time there is a general lack of knowledge about weaving technology in engineering. Industrial fabric weaving has focused on increasing speed and production output, while the kinds of looms needed for materials like carbon fibre need to focus on precision. Leslie talked of occasions where he had to return to hand looms to try out prototypes, and of how simulations of fibre interactions in weaving exist but are unnecessary and deficient compared with prototyping in the actual materials. In weaving, as in computing, older tends to mean less restricted, as well as slower. New developments add to, rather than eclipse, older ones.

The main feeling I had from Leslie was that his approach to technology came from a combination of an ability to reason about material – based on a deep understanding of the capabilities of the looms themselves and how they can be adapted for each job (e.g. a project where he had to cut down the middle of a huge loom to remove a metre or so) – and experience in working with people to understand their needs. This generalist approach to knowledge is so lacking in society that it becomes highly regarded.

For the second part of the week we moved to Sheffield, and had breakfast with Luigina Ciolfi, a researcher in Human-Centred Computing. Lui’s input was both valuable and timely, as she pointed out that one of the key elements of interest is the nature of our collaboration, rather than any products we may or may not come up with. As such we need to be careful to take the time to document our process and decision making.

After breakfast we headed into the woods – to a temporary base in The J G Graves Woodland Discovery Centre to shift into plotting and planning mode.

In the woods looking at the kinds of weaves used in antiquity

Emma Cocker joined us again and helped to restrengthen the kairotic and poetic roots of our thinking, and we were visited by the lovebytes crew to discuss the use of weaving and livecoding for teaching children. I set up the flotsam tangible programming prototype for its first ever use by anyone other than me. This involved a bit of code review/pair programming with Ellen in order to fix bugs in the weave generating algorithm.

Ellen programming tangibly

We face challenges with our collaboration in both directions between our practices. On the one hand we have the normal software decisions: is what we make going to be online or offline? Do we prioritise flexibility or accessibility in our choices of platforms and languages? How does what we do connect best with Ellen’s existing programming experience?

What we have decided is to work on a series of prototypes rather than focus on a monolithic development. The task then is to make sure we use a common language for concepts across the project (perhaps using the concepts in the ancient Greek as a grounding influence). Similarly, we should use protocols and formats to share common things between experiments – and fit with existing programs and archives already in use where practical.

In the other direction, much of our discussion revolves around clarification of weaving practicalities and explanations – both of weaving in general and of concepts and styles in use in antiquity. Both Alex and I need to start weaving if we are going to have a subtle enough understanding of the material we are dealing with. To this end Alex already has a commission for a warp-weighted loom in the works, and I need to warp up the Harris loom.

We also have some fairly concrete projects to think about over the coming months, for example providing a good live-codable explanation of weaving to replace Ellen’s previous Windows program, as well as looking into the possibilities of using tangible programming in an art installation.

Future plans

Flotsam: A prototype screenless livecoding language

Two languages are now working with Flotsam, the new name for the prototype screenless tangible programming language I’ve been building (the name comes from the fact that it’s largely made from driftwood). It’s somehow already been featured on the Adafruit blog!

The circuit seems to be fully debugged now, with the short circuits fixed – which took a little while and more than a little frustration :) The Raspberry Pi Python code is currently on the weavingcodes repository (more on this project on the kairotic site), and the first language is a declarative-style L-system for describing weave structure and pattern with yarn width and colour. The LEDs indicate that the evaluation happens simultaneously, as this is a functional language. The blocks represent blue and pink yarn in two widths, with rules to produce the warp/weft sequence based on the rows the blocks are positioned on:
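To give a flavour of how this side might hang together, here is a minimal sketch of an L-system expansion producing a warp/weft yarn sequence. The rules and yarn symbols below are invented for illustration – the real ones come from wherever the blocks are placed on the rows:

```python
# Minimal L-system sketch: rule symbols expand in parallel, terminal
# symbols stand for yarns. Symbols and rules here are placeholders.

# terminal symbols: thick/thin blue and thick/thin pink yarn
YARNS = {"b": ("blue", 4), "B": ("blue", 2),
         "p": ("pink", 4), "P": ("pink", 2)}

# rule symbols (X, Y, Z) expand into mixtures of yarns and other rules
RULES = {"X": "bYp", "Y": "BXP", "Z": "XY"}

def expand(axiom, generations):
    """Rewrite every rule symbol in parallel, leaving yarns untouched."""
    for _ in range(generations):
        axiom = "".join(RULES.get(symbol, symbol) for symbol in axiom)
    return axiom

def yarn_sequence(axiom, generations):
    """Return the (colour, width) sequence for the warp or weft."""
    return [YARNS[s] for s in expand(axiom, generations) if s in YARNS]

if __name__ == "__main__":
    print(yarn_sequence("Z", 4))
```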

The weaving simulation is written in pygame (which I’ve been using lately for teaching), and is deliberately designed to allow alternative weave structures beyond those possible with Jacquard looms, by including yarn properties. The version in the video is plain weave, but more complex structures can be defined, as below – in the same way as in Alex’s gibber software:

star
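As a rough illustration of the rendering side (not the project’s actual code), here is a minimal pygame sketch that draws a plain weave from warp and weft colour lists – at each crossing either the warp or the weft thread ends up on top:

```python
# Rough pygame sketch of drawing a plain weave; colours and sizes
# are placeholders, not the project's real renderer.
import pygame

warp = [(50, 50, 200), (200, 50, 150)] * 8   # alternating blue/pink warp
weft = [(50, 50, 200), (200, 50, 150)] * 8   # and weft colours
CELL = 24

pygame.init()
screen = pygame.display.set_mode((len(warp) * CELL, len(weft) * CELL))
screen.fill((230, 230, 230))

for y, weft_colour in enumerate(weft):
    for x, warp_colour in enumerate(warp):
        # plain weave: the warp is lifted on alternate crossings
        warp_up = (x + y) % 2 == 0
        under = weft_colour if warp_up else warp_colour
        over = warp_colour if warp_up else weft_colour
        cell = pygame.Rect(x * CELL, y * CELL, CELL, CELL)
        pygame.draw.rect(screen, under, cell)
        # draw the 'over' thread as a narrower band across the cell
        if warp_up:
            band = pygame.Rect(x * CELL + 4, y * CELL, CELL - 8, CELL)
        else:
            band = pygame.Rect(x * CELL, y * CELL + 4, CELL, CELL - 8)
        pygame.draw.rect(screen, over, band)

pygame.display.flip()
# keep the window open until closed
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
pygame.quit()
```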

The second language is completely different – an imperative, stack-based language for driving a turtle in 3D space to build shapes in Minecraft. Eventually (when I’ve manufactured a few more programming blocks) it will be possible to change Minecraft block materials and react to player actions. Here the LEDs indicate the more sequential evaluation of this Forth-like language:
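For a sense of how an evaluator for this kind of language might look, here is a tiny sketch of a stack machine driving a turtle through the Minecraft Pi API – the word set and the program at the bottom are invented for illustration, since the real instruction set is whatever the wooden blocks define:

```python
# Sketch of a Forth-like turtle language on top of the mcpi API.
from mcpi import minecraft

mc = minecraft.Minecraft.create()

class Turtle:
    def __init__(self, x, y, z):
        self.pos = [x, y, z]
        self.heading = [1, 0, 0]   # start off walking along x
        self.stack = []            # the language's data stack

    def forward(self):
        # take one step and leave a stone block (id 1) behind
        self.pos = [p + h for p, h in zip(self.pos, self.heading)]
        mc.setBlock(self.pos[0], self.pos[1], self.pos[2], 1)

    def up(self):
        self.heading = [0, 1, 0]

    def turn(self):
        # 90 degree turn around the y axis (only meaningful while
        # the turtle is heading horizontally)
        x, _, z = self.heading
        self.heading = [-z, 0, x]

WORDS = {"forward": Turtle.forward, "up": Turtle.up, "turn": Turtle.turn}

def run(program, turtle):
    """Evaluate the program one word at a time, left to right."""
    for word in program.split():
        if word.isdigit():
            turtle.stack.append(int(word))        # literals are pushed
        else:
            # each word pops an optional repeat count from the stack
            times = turtle.stack.pop() if turtle.stack else 1
            for _ in range(times):
                WORDS[word](turtle)

pos = mc.player.getTilePos()
run("5 forward turn 5 forward up 3 forward", Turtle(pos.x, pos.y, pos.z))
```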

All that’s needed to switch languages is to redraw the symbols with chalk and run a different script. It won’t be truly screenless until I write a musical language for it, which is obviously coming soon…


A screenless programming language: putting it all together

More background on this project here. This is the finished Raspberry Pi interface board; most of the circuitry is an LED driver to light up one of the 32 programming blocks as they are being scanned, or in any pattern required depending on the needs of the particular programming language in use. The small IC is an old 74LS02 NOR gate that I’m misusing as a NOT gate (which I couldn’t find in my supplies). I hard-wired one of the gate’s inputs to ground so the other one gets inverted. I need this in order to switch between the two 16-way multiplexers to get 32 outputs, by alternating their enable pins.
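On the software side, here is a guess at how the Pi might address those 32 LED outputs (pin numbers are placeholders, not the real wiring): four shared address lines pick one of 16 outputs on each multiplexer, and a single bank line – inverted in hardware by the spare NOR gate – decides which mux is enabled.

```python
# Sketch of driving the LED scan indicator across 32 blocks.
# Pin numbers are placeholders, not the real wiring.
import time
import RPi.GPIO as GPIO

ADDRESS_PINS = [17, 27, 22, 23]   # 4-bit LED address, shared by both muxes
BANK_PIN = 24                     # enables mux A directly, mux B via the NOT

GPIO.setmode(GPIO.BCM)
for pin in ADDRESS_PINS + [BANK_PIN]:
    GPIO.setup(pin, GPIO.OUT)

def light_led(block):
    """Route the LED drive to one of the 32 programming blocks."""
    GPIO.output(BANK_PIN, block < 16)            # which mux is enabled
    for bit, pin in enumerate(ADDRESS_PINS):     # which of its 16 outputs
        GPIO.output(pin, bool((block % 16) >> bit & 1))

# sweep the scan indicator across all the blocks
for block in range(32):
    light_led(block)
    time.sleep(0.05)
```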


This time I remembered to include decoupling capacitors (thanks to Julian). Here is the completed wiring for the block input plugs, with everything connected up. I’m new to this scale of building, so really pretty happy it seems to be working:


The same thing, the right way up with a single programming block plugged in, and the Raspberry Pi in the foreground. The first time it’s starting to look like something:


I’m also starting to document the project, as it will be released as open source hardware – here’s an unfinished schematic. It’s attempting to be complete, but it’s not really as complex as it looks; I’ll work on a block diagram that makes it easier to describe:

unfinished-circuit

Screenless programming language: assembling the plug panel

Never seen so many VGA ports – ironic, perhaps. It’s great that it’s transparent, but I’m regretting the material choice for the panel: acrylic is horrible to work with and leaves behind lots of microplastic pollution that I have to carefully collect up. Lesson learnt for next time. Lots more soldering to do now, then more testing with the Raspberry Pi. Following on from the previous post, the instruction blocks will plug into these 9 pin plugs.


Building a screenless programming language for the Raspberry Pi #6

Following on from the last in this mini series, the giant 16 byte multiplexer is finished and tested – lots of head scratching due to shorts and wonky solder joints (it’s amazing how sensitive CMOS logic is: interference on accidentally floating pins causes strange effects) – and I’ve replaced the breadboard with a new interface board between the Pi and the multiplexer.


Now it’s time to think more about the ‘front end’ – here are some prototype programming blocks made from the ‘holes’ from a hole saw (I’m chopping up bits of driftwood). These include a central drill hole, which we can use for an indicator LED.


This is the minimal circuitry needed for each programming block – just a 4-bit identification code (reconfigurable via jumpers) and the LED, mounted around the 7-pin VGA-style plug.


Here it’s mounted in the block after a bit of hollowing out. To be fully kid-proof I’ll probably eventually set the internals in epoxy glue.


Testing the plug/socket wiring on the breadboard. The plan is to paint the top surface of each programming block with blackboard paint so different instructions and languages can be tested and built with the same blocks. Intriguingly, it also makes abstraction possible, which is usually a problem with non-text-based approaches, but this needs more thinking about.


Here it is with another, unenclosed block attached to the main board, connected via the new interface to the Raspberry Pi. The interface will need a bit more circuitry to drive the LEDs, which will be able to be lit in different ways depending on the needs of the language – perhaps sequential in some cases, more dependent on logic in others. Also, differently shaped blocks or different coloured LEDs may carry different meanings in a language too.

Screenless programming language #5

Update after the last post: after many hours of soldering, 75% of the board is complete! The Pi can now address a whopping 12 bytes of data.


After thinking about it for a while, and checking against the languages I’m planning to use, I’ve decided to double the number of instruction ‘slots’ by halving the bits to 4. This doesn’t actually change the core hardware at all – I’ll just scan two at a time and split them in software (part of this project has been to see how much easier it is to change software than hardware). Some instructions can span two addresses if needed, but this makes more efficient use of the resources. The ports on the left of the board are where the plugs for the programming ‘interface’ will come in – each one will now split to two locations.
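The split itself is trivial in software – a sketch, assuming the low nibble is the first of each pair of slots:

```python
# Each 8-bit scan carries two 4-bit block IDs; masking and shifting
# pulls them apart (low nibble assumed to be the first slot).
def split_nibbles(byte):
    return byte & 0x0f, (byte >> 4) & 0x0f

raw_scan = [0x3a, 0x07, 0xf1]          # example reads, one per plug
slots = [nibble for byte in raw_scan for nibble in split_nibbles(byte)]
print(slots)                            # [10, 3, 7, 0, 1, 15]
```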

Here’s my test GPIO software running on the Pi – constant testing has been the only way to stay sane on this project. Eventually I could replace the Pi with a microcontroller for some applications, but the Pi has been a great way to ease the prototyping process.


Screen-less programming language (#3)


Since the last update I’ve connected the Raspberry Pi to the board, and after a bit of debugging I have a Python script that polls bytes (more accurately, 4-bit ‘nibbles’) via the GPIO ports from 16 addresses. I didn’t blow up the Pi, although I did cause hard resets a couple of times with wandering connections. Now comes the mega wiring phase.
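For anyone curious, the polling amounts to walking the address lines through all 16 values and reading back a nibble from the data pins each time – a sketch only, with placeholder pin numbers rather than the real wiring:

```python
# Sketch of polling 4-bit 'nibbles' from 16 addresses over GPIO.
import RPi.GPIO as GPIO

ADDRESS_PINS = [17, 27, 22, 23]   # 4 address lines into the multiplexer
DATA_PINS = [5, 6, 13, 19]        # 4 data lines read back from the board

GPIO.setmode(GPIO.BCM)
for pin in ADDRESS_PINS:
    GPIO.setup(pin, GPIO.OUT)
for pin in DATA_PINS:
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

def poll(address):
    """Select an address and read the 4-bit value sitting there."""
    for bit, pin in enumerate(ADDRESS_PINS):
        GPIO.output(pin, bool(address >> bit & 1))
    value = 0
    for bit, pin in enumerate(DATA_PINS):
        value |= GPIO.input(pin) << bit
    return value

print([poll(address) for address in range(16)])
```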

Some tangible programming language research (#2)

Following on from yesterday’s post – here’s a random selection of some (publicly accessible) tangible programming research.

A braille tangible programming instruction

The Algobrick language in use