<?xml version='1.0' encoding='UTF-8'?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/" xmlns:blogger="http://schemas.google.com/blogger/2008" xmlns:georss="http://www.georss.org/georss" xmlns:gd="http://schemas.google.com/g/2005" xmlns:thr="http://purl.org/syndication/thread/1.0" version="2.0"><channel><atom:id>tag:blogger.com,1999:blog-7686436</atom:id><lastBuildDate>Thu, 26 Mar 2026 00:17:14 +0000</lastBuildDate><category>generative art</category><category>theory</category><category>visualisation</category><category>dataesthetics</category><category>data</category><category>materiality</category><category>audiovisual</category><category>canberra</category><category>processing</category><category>exhibition</category><category>projects</category><category>performance</category><category>opensource</category><category>music</category><category>transmateriality</category><category>aesthetics</category><category>arrays</category><category>australia</category><category>code</category><category>fabrication</category><category>readings</category><category>advertising</category><category>art</category><category>artificial life</category><category>conference</category><category>critique</category><category>hardware</category><category>philosophy</category><category>review</category><category>synaesthesia</category><category>3d</category><category>architecture</category><category>digital design</category><category>education</category><category>emergence</category><category>inframedia</category><category>interview</category><category>livecoding</category><category>models</category><category>motion graphics</category><category>multiplicity</category><category>perception</category><category>presence</category><category>science</category><category>sound</category><category>video</category><category>artist</category><category>biology</category><category>cellular 
automata</category><category>china</category><category>climatechange</category><category>computation</category><category>dvd</category><category>gaming</category><category>generative design</category><category>landscape</category><category>morphogenesis</category><category>neuroaesthetics</category><category>photography</category><category>social software</category><category>specificity</category><category>systems art</category><category>urban</category><category>AI</category><category>art history</category><category>avantgarde</category><category>books</category><category>census</category><category>cinema</category><category>cybernetics</category><category>embodiment</category><category>environment</category><category>glitch</category><category>growth</category><category>heritage</category><category>information</category><category>jewelry</category><category>kids</category><category>mdd</category><category>meta</category><category>nature</category><category>neo-baroque</category><category>networks</category><category>projectionmapping</category><category>protocomputing</category><category>research</category><category>screen</category><category>screensaver</category><category>software art</category><category>space</category><category>symbiosis</category><category>synchresis</category><category>virtuosity</category><category>voronoi</category><title>(the teeming void)</title><description>generative &amp; data aesthetics</description><link>http://teemingvoid.blogspot.com/</link><managingEditor>noreply@blogger.com (Mitchell)</managingEditor><generator>Blogger</generator><openSearch:totalResults>76</openSearch:totalResults><openSearch:startIndex>1</openSearch:startIndex><openSearch:itemsPerPage>25</openSearch:itemsPerPage><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-6699463592504741981</guid><pubDate>Thu, 04 Dec 2014 02:57:00 +0000</pubDate><atom:updated>2014-12-04T14:01:34.471+11:00</atom:updated><category 
domain="http://www.blogger.com/atom/ns#">generative art</category><category domain="http://www.blogger.com/atom/ns#">heritage</category><category domain="http://www.blogger.com/atom/ns#">projects</category><title>Generative Heritage - notes on Succession</title><description>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMkY3BICLbN9Zu9Jwhzem5O6rtqzyTME-MbUHIzhZW06GZfCECmhyDbq1SI21ITEyhRWlqcI30JKYICzKHb6T9p_S6Pxo1ds-KazQQCk3r2Fkm632ITc8D_i5MrZO6c5aloRzuQQ/s1600/succession-bird-2.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMkY3BICLbN9Zu9Jwhzem5O6rtqzyTME-MbUHIzhZW06GZfCECmhyDbq1SI21ITEyhRWlqcI30JKYICzKHb6T9p_S6Pxo1ds-KazQQCk3r2Fkm632ITc8D_i5MrZO6c5aloRzuQQ/s1600/succession-bird-2.jpg&quot; width=&quot;100%&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
I spent three months this year on sabbatical at Culture Lab, Newcastle University (UK). It was a privilege to spend time in such a vibrant research lab, as well as to get to know the city of Newcastle. One of the projects to come out of my visit is &lt;i&gt;&lt;a href=&quot;http://mtchl.net/succession&quot; target=&quot;_blank&quot;&gt;Succession&lt;/a&gt;&lt;/i&gt;, an experiment in generative digital heritage that uses Newcastle and its history to think about industrialisation, global capital, our shared pasts and potential futures. Personally, it brings together two strands of my work that have been separate until now - on generative systems and digital cultural collections. Hence you&#39;ll also find this cross-posted over on &lt;a href=&quot;http://visiblearchive.blogspot.com/&quot; target=&quot;_blank&quot;&gt;The Visible Archive&lt;/a&gt;. Here are some notes and documentation on the work, and some musings on generative and computational heritage.&lt;br /&gt;
&lt;br /&gt;
Much of my recent work with digital cultural collections has focused on creating rich representations of these ever-expanding datasets. A key thread has been an interest in the complexity of these collections; the multitudes they contain, their wealth of potential meaning as complex, interrelated wholes, rather than simply repositories of individual resources. Visualisation can provide a macroscopic view of this complexity, but it can be just as vivid when sampled at a micro scale. &lt;a href=&quot;http://discontents.com.au/&quot; target=&quot;_blank&quot;&gt;Tim Sherratt&#39;s&lt;/a&gt; &lt;a href=&quot;http://twitter.com/trovenewsbot&quot; target=&quot;_blank&quot;&gt;Trove News Bot&lt;/a&gt; tweets digitised newspaper articles in response to the day&#39;s news headlines, creating little juxtapositions, timely sparks of meaning that can be pithy, funny, or provocative. Trove News Bot appropriates the twitter bot - the joking-but-deadly-serious computational voice of our age - and adapts it to work with the digital archive. We could call this generative heritage: using computational processes to create new artefacts (and meanings) from historical material.&lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;Succession&lt;/i&gt; applies this generative approach to the digital heritage of Newcastle Upon Tyne. Newcastle has a rich industrial heritage; it played a major role in the Industrial Revolution that began in Britain and went on to remake global civilisation. Today Newcastle is a post-industrial or &lt;a href=&quot;http://www.theguardian.com/business/2011/nov/16/why-britain-doesnt-make-things-manufacturing&quot; target=&quot;_blank&quot;&gt;de-industrialised&lt;/a&gt; city: coal, steel and shipbuilding have given way to service industries: education, retail, entertainment and tourism. As an outsider exploring the city I was struck by the mixture of pre-modern, industrial and post-industrial eras in the fabric of the city. Different (often inconsistent) patterns of life, work and economy are accreted in layers as the city continues the everyday process of adaptation, experimentation with the possible; working out what comes next.&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;http://mtchl.net/succession/#/saved/1414563648337&quot; target=&quot;_blank&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAwof68U36yKFadd0xzCVJ9FhAJQxkocyjVdV7u7IACXl7RiayWue2szwtV7wDWH_S6Hpw71TSHB1U6967WV7GFMAsNyC3EotoGg3y2NlHLt7wwfqTwAefRXtuoZ9jUsG8Mrrn6g/s1600/sponsor-tunnel.jpg&quot; width=&quot;100%&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
The city, like the digital archive, is a multitude; an unthinkably complex matrix of people, things, systems, narratives. Newcastle - more than many other cities - also speaks to the expansive dynamics of industrialisation, globalisation, extractive industry, fossil fuels; the whole modern trajectory that has brought us to our current predicament. This seems to be both urgent and unthinkable - or perhaps, unsayable. How can we speak back to this complexity; how can we make in a way that responds to this tangled, expansive mess? Here generative techniques offer a way to synthesise complexity and create multitudes, formations that might portray the city as it was, or hint at what it could be. Automatic juxtaposition and remix create nonsense but also, occasionally, glimmers of a new sense, or at least a texture or sensation that emerges from a random constellation of images, sources and contexts. &lt;i&gt;Succession&lt;/i&gt; requires us to piece together fragments of history; and this is a work of imagination, as &lt;a href=&quot;http://www.rossgibson.com.au/&quot; target=&quot;_blank&quot;&gt;Ross Gibson&lt;/a&gt; &lt;a href=&quot;http://www.transformationsjournal.org/journal/issue_13/article_01.shtml&quot; target=&quot;_blank&quot;&gt;writes&lt;/a&gt;, framing his own work of generative heritage (with Kate Richards, &lt;a href=&quot;http://www.lifeafterwartime.com/&quot; target=&quot;_blank&quot;&gt;Life After Wartime&lt;/a&gt;):&lt;br /&gt;
&lt;br /&gt;
&lt;blockquote class=&quot;tr_bq&quot;&gt;
Our parlous states need imagination. We need to propose “what if” scenarios that help us account for what has happened in our habitat so that we can then better envisage what might happen. We need to apprehend the past. Otherwise, we won’t be able to align ourselves to historical momentum. Without doing this we won’t be able to divine the continuous tendencies that are making us as they persist out of the past into the present.
&lt;/blockquote&gt;
&lt;br /&gt;
In practical terms, the work is based on a corpus of around two thousand images sourced from the &lt;a href=&quot;http://www.flickr.com/commons&quot; target=&quot;_blank&quot;&gt;Flickr Commons&lt;/a&gt;. Most come from the (wonderful) &lt;a href=&quot;https://www.flickr.com/photos/twm_news/&quot; target=&quot;_blank&quot;&gt;Tyne and Wear Archives and Museums&lt;/a&gt; collection; many more from the &lt;a href=&quot;https://www.flickr.com/photos/internetarchivebookimages/&quot; target=&quot;_blank&quot;&gt;Internet Archive Books&lt;/a&gt; collection, with a smattering of others from UK and international institutions. &lt;i&gt;Succession&lt;/i&gt; uses these ingredients to generate new digital &quot;fossils&quot;: composite images assembled in the browser using HTML Canvas. This generative process is extremely simple: pick five sources at random, and place them in the frame using some semi-random rules for positioning, compositing and repetition. Opacity is kept low, so that the sources blend and merge. The visual process often obscures the source images - they end up buried, cropped or indistinguishable, squashed like fossil strata. But at the same time the source items are preserved and presented in context, so each composite retains references to its sources and their attendant contexts. Composites can be saved, acquiring an ID and permalink; the images in this post show some of my favourites, but there are over a hundred to sift through already.&lt;br /&gt;
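The layering process described above is simple enough to sketch. This is an illustration only, not the actual &lt;i&gt;Succession&lt;/i&gt; code - the parameter ranges, blend modes and layer structure here are all assumptions:

```javascript
// A minimal sketch of a Succession-style composite (illustrative only, not the
// project's actual code; parameter ranges and blend modes are assumptions).
// Each "fossil" is five randomly chosen sources, layered at low opacity.
function pickLayers(sources, count) {
  const blends = ["multiply", "screen", "overlay"];
  const layers = [];
  for (let i = 0; i !== count; i += 1) {
    layers.push({
      src: sources[Math.floor(Math.random() * sources.length)],
      x: Math.random(),                      // position, as a fraction of canvas size
      y: Math.random(),
      scale: 0.5 + Math.random() * 1.5,      // semi-random sizing
      opacity: 0.15 + Math.random() * 0.25,  // kept low so sources blend and merge
      blend: blends[Math.floor(Math.random() * blends.length)],
      repeats: 1 + Math.floor(Math.random() * 3), // optional repetition/tiling
    });
  }
  return layers;
}

// In the browser, each layer would then be drawn with HTML Canvas, e.g.:
//   ctx.globalAlpha = layer.opacity;
//   ctx.globalCompositeOperation = layer.blend;
//   ctx.drawImage(img, layer.x * w, layer.y * h, iw * layer.scale, ih * layer.scale);
```

Note that each layer keeps a reference (src) back to its source item - which is how a composite can retain links to the original collection records and their contexts.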
&lt;br /&gt;
&lt;a href=&quot;http://mtchl.net/succession/#/saved/1416262913843&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVoRgAfWdOyPdeZGIoGhdADUKBArYi7DZejzV08EePU20TRtMH_4ciqn12snfFKOhg9eF713n5ZC_Ng8JpXyhElI0qCpBFj_MzLvMZJGEZQeFu7rfB495J2GlvTwi2R84yholdCw/s1600/rhododendron.jpg&quot; width=&quot;100%&quot; /&gt;&lt;/a&gt;&lt;br /&gt;
&lt;br /&gt;
As a generative system this is, in formal terms, incredibly simple. It&#39;s essentially a combinatorial process, in that each composite consists of five elements from a set of around two thousand. Yet already this adds up to around 2.7 x 10^14 unique combinations - it would take over eight million years to see them all, at one per second. Compositing and layout parameters are random within constraints - so this simple machine can produce an immense variety of unique results; I&#39;m still surprised and delighted by the fossils people discover (or generate). But this computational variety is also strongly shaped by the human creative choices involved in making the work. This is what Bill Seaman (combinatorial media artist par excellence) &lt;a href=&quot;http://projects.visualstudies.duke.edu/billseaman/textsOulipo.php&quot; target=&quot;_blank&quot;&gt;calls&lt;/a&gt; &quot;authored space&quot; - a domain of potential that is expansive but never arbitrary. The corpus reflects a handful of coherent themes, seasoned with generous sprinklings of the lateral and miscellaneous; the aim is, in Seaman&#39;s &lt;a href=&quot;http://www.fondation-langlois.org/html/e/page.php?NumPage=392&quot; target=&quot;_blank&quot;&gt;words&lt;/a&gt;, a kind of &quot;resonant unfixity.&quot; The corpus and the compositing process also work in tandem; for example, the compositor treats the largely monochrome line-art and engravings of the Internet Archive material differently to other (largely photographic) sources. The generative machine is programmed in part by the textures and qualities of its material.&lt;br /&gt;
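The combinatorics are easy to check: the eight-million-year figure corresponds to C(2000, 5), the number of unordered selections of five sources from a corpus of two thousand. A quick sketch (BigInt keeps the arithmetic exact, since the intermediate product overflows ordinary floating-point precision):

```javascript
// Check the combinatorics: C(2000, 5) unordered selections of five sources.
// BigInt keeps the intermediate product (about 3.2e16) exact.
function choose(n, k) {
  let num = 1n;
  let den = 1n;
  for (let i = 0n; i !== BigInt(k); i += 1n) {
    num *= BigInt(n) - i;
    den *= i + 1n;
  }
  return num / den; // exact: k! always divides a product of k consecutive integers
}

const combos = choose(2000, 5);  // 265335665000400, i.e. about 2.7 x 10^14
const years = Number(combos) / (60 * 60 * 24 * 365.25);
// at one composite per second: roughly 8.4 million years to see them all
```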
&lt;br /&gt;
&lt;a href=&quot;http://mtchl.net/succession/#/saved/1417181237433&quot; target=&quot;_blank&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcPzr3q3DmXy2nWqDNe43Wz07j709jq5bDeUDgG5QVY08QTlIUnb8ciIjwHjauj8_d-aP3DvJrTcvrSdiR94G7p4B2VPQ9cvEXbn_CkhLN1Qe9Ni2Xoa3tmuftxj13WqRY4CHAEA/s1600/bede-kelly.jpg&quot; width=&quot;100%&quot; /&gt;&lt;/a&gt;
&lt;br /&gt;
The Internet Archive book images are interesting on several fronts; for one, they are an amazing demonstration of the power of computational processes for generating and describing large collections (like 2.6 million items large). Given the right kind of source material, this computational leverage changes the logic of collections completely. When adding and describing items is expensive, it makes sense to be selective, and publish only what is most &quot;significant&quot;. Automation makes it possible to simply publish everything - for who&#39;s to say (really) what is significant, or how it might one day be significant? In &lt;i&gt;Succession&lt;/i&gt; the Internet Archive material plays a crucial role. The line art and diagrams - many from obscure publications like the &lt;a href=&quot;https://archive.org/stream/transactions21nort/transactions21nort#page/n6/mode/1up&quot; target=&quot;_blank&quot;&gt;Transactions of the North of England Institute of Mining and Mechanical Engineers&lt;/a&gt;&amp;nbsp;- offer evocative fragments of the machinery of mid-nineteenth century industrialisation.&lt;br /&gt;
&lt;br /&gt;
As for generative digital heritage, it&#39;s a fairly open-ended proposal. What happens when we turn algorithms loose on our digital culture with makerly, synthetic, speculative or poetic intention? There are some pretty solid precedents in the digital humanities for these approaches; Schnapp and Presner call for a &quot;generative&quot; DH in their 2009 &lt;a href=&quot;http://jeffreyschnapp.com/wp-content/uploads/2011/10/Manifesto_V2.pdf&quot; target=&quot;_blank&quot;&gt;manifesto&lt;/a&gt;. Before that Drucker and Nowviskie &lt;a href=&quot;http://www2.iath.virginia.edu/time/reports/Speculative_Computing_Fx.doc&quot; target=&quot;_blank&quot;&gt;outlined&lt;/a&gt; a &quot;speculative computing&quot; with a strongly generative flavour. Gibson and Richards&#39; &lt;i&gt;Life After Wartime&lt;/i&gt; is an early exemplar of generative heritage in the digital arts. More recently we&#39;ve seen the rise of massive online collections, web-scale computing, and a proliferation of cultural, critical and creative bots, not to mention projects like &lt;a href=&quot;https://github.com/dariusk/NaNoGenMo-2014&quot; target=&quot;_blank&quot;&gt;#NaNoGenMo&lt;/a&gt;. If there is such a thing as generative digital heritage, then now&#39;s the time.&lt;br /&gt;
&lt;/div&gt;
</description><link>http://teemingvoid.blogspot.com/2014/12/generative-heritage-notes-on-succession.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMkY3BICLbN9Zu9Jwhzem5O6rtqzyTME-MbUHIzhZW06GZfCECmhyDbq1SI21ITEyhRWlqcI30JKYICzKHb6T9p_S6Pxo1ds-KazQQCk3r2Fkm632ITc8D_i5MrZO6c5aloRzuQQ/s72-c/succession-bird-2.jpg" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-2553731241957254295</guid><pubDate>Sat, 08 Jun 2013 01:24:00 +0000</pubDate><atom:updated>2013-06-08T11:24:03.897+10:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">artist</category><category domain="http://www.blogger.com/atom/ns#">computation</category><category domain="http://www.blogger.com/atom/ns#">emergence</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">interview</category><category domain="http://www.blogger.com/atom/ns#">materiality</category><category domain="http://www.blogger.com/atom/ns#">protocomputing</category><title>Proto-Computing - an Interview with Ralf Baecker</title><description>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
&lt;div class=&quot;p1&quot;&gt;
At &lt;a href=&quot;http://code2012.wikidot.com/&quot; target=&quot;_blank&quot;&gt;CODE2012&lt;/a&gt; I presented a paper on &quot;programmable matter&quot; and the proto-computational work of &lt;a href=&quot;http://www.rlfbckr.org/&quot; target=&quot;_blank&quot;&gt;Ralf Baecker&lt;/a&gt; and &lt;a href=&quot;http://1010.co.uk/&quot; target=&quot;_blank&quot;&gt;Martin Howse&lt;/a&gt; - part of a long-running project on digital materiality. My sources included interviews with the artists, which I will be publishing here. Ralf Baecker&#39;s 2009&amp;nbsp;&lt;i&gt;&lt;a href=&quot;http://www.rlfbckr.org/work/the_conversation&quot; target=&quot;_blank&quot;&gt;The Conversation&lt;/a&gt;&lt;/i&gt; is a complex physical network, woven from solenoids - electro-mechanical &quot;bits&quot; or binary switches. It was one of the works that started me thinking about this notion of the proto-computational - where artists seem to be stripping digital computing down to its raw materials, only to rebuild it as something weirder. &lt;i&gt;&lt;a href=&quot;http://www.rlfbckr.org/work/irrational_computing&quot; target=&quot;_blank&quot;&gt;Irrational Computing&lt;/a&gt; &lt;/i&gt;(2012)&amp;nbsp;- which crafts a &quot;computer&quot; more like a modular synth made from crystals and wires - takes this approach further. Here Baecker begins by responding to this notion of proto-computing.&lt;br /&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;iframe allowfullscreen=&quot;&quot; frameborder=&quot;0&quot; height=&quot;310&quot; mozallowfullscreen=&quot;&quot; src=&quot;http://player.vimeo.com/video/37443273?title=0&amp;amp;byline=0&amp;amp;portrait=0&amp;amp;color=ffffff&quot; webkitallowfullscreen=&quot;&quot; width=&quot;550&quot;&gt;&lt;/iframe&gt;

&lt;br /&gt;
&lt;div class=&quot;p1&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div class=&quot;p1&quot;&gt;
MW: &lt;i&gt;In your work, especially Irrational Computing, we seem to see some of the primal, material elements of digital computing. But this &quot;proto&quot; computing is also quite unfamiliar - it is chaotic, complex and emergent, we can&#39;t control or &quot;program&quot; it, and it is hard to identify familiar elements such as memory vs processor. So it seems that your work is not only deconstructing computing - revealing its components - but also reconstructing it in a strange new form. Would you agree?&lt;/i&gt;&lt;/div&gt;
&lt;div class=&quot;p2&quot;&gt;
RB: It took me a long time to adopt the term &quot;proto-computing&quot;. I don&#39;t mean proto in a historical or chronological sense; it is more about its state of development. I imagine a device that refers to the raw material dimension of our everyday digital machinery. Something that suddenly appears due to the interaction of matter. What I had in mind was for instance the &lt;a href=&quot;http://oklo.curtin.edu.au/&quot; target=&quot;_blank&quot;&gt;natural nuclear fission reactor&lt;/a&gt; in Oklo, Gabon that was discovered in 1972. A conglomerate of minerals in a rock formation formed the conditions for a functioning nuclear reactor, all by chance.&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;p1&quot;&gt;
Computation is a cultural and not a natural phenomenon; it includes several hundred years of knowledge and cultural technics, these days all compressed into a microscopic form (the CPU). In the 18th century the mechanical tradition of automata and symbolic/mathematical thinking merged into the first calculating and astronomical devices. Also the combinatoric/hermeneutic tradition (e.g. &lt;a href=&quot;http://en.wikipedia.org/wiki/Athanasius_Kircher&quot; target=&quot;_blank&quot;&gt;Athanasius Kircher&lt;/a&gt; and &lt;a href=&quot;http://en.wikipedia.org/wiki/Ramon_Llull&quot; target=&quot;_blank&quot;&gt;Ramon Llull&lt;/a&gt;) is very influential to me. These automatons/concepts were philosophical and epistemological. They were dialogic devices that let us think further, much against our current utilitarian use of technology. Generative utopia.&lt;br /&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto; text-align: center;&quot;&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://dl.dropboxusercontent.com/u/5622512/irratonal-computing-schematik-011_EN.pdf&quot; style=&quot;margin-left: auto; margin-right: auto;&quot; target=&quot;_blank&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQkIQgbzC39F29eSL1cTEJGyp2GKIYhGcGNTKdq6d5fMLivynKjWA3FbeYlrdwxdt9rtJPWj7xxlys51XRPi8kB7wusCTgbaVZ6kdJjOQkaA0qhQYqfuVRO6glaV0HXBSmtYOLqw/s800/irratonal-computing-schematik-011_EN.png&quot; width=&quot;550&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;Schematic of &lt;i&gt;Irrational Computing&lt;/i&gt;&amp;nbsp;courtesy of the artist - click for PDF&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;div class=&quot;p1&quot;&gt;
&lt;br /&gt;
MW:&amp;nbsp;&lt;i&gt;Your work stages a fusion of sound, light and material. In Irrational Computing for example we both see and hear the activity of the crystals in the SiC module. Similarly in The Conversation, the solenoids act as both mechanical / symbolic components and sound generators. So there is a strong sense of the unity of the audible and the visual - their shared material origins. (This is unlike conventional audiovisual media for example where the relation between sound and image is highly constructed). It seems that there is a sense of a kind of material continuum or spectrum here, binding electricity, light, sound, and matter together?&lt;/i&gt;&lt;/div&gt;
&lt;div class=&quot;p2&quot;&gt;
RB: My first contact with art or media art came through net art, software art and generative art. I was totally fascinated by it. I started programming generative systems for installations and audiovisual performances. I like a lot of the early screen-based computer graphics/animation stuff. The pure reduction to wireframes, simple geometric shapes. I had the feeling that in this case concept and representation almost touch each other. But I got lost working with universal machines (Turing machines). With &lt;i&gt;Rechnender Raum&lt;/i&gt; I started to do some kind of subjective reappropriation of the digital. So I started to build my very own non-universal devices. &lt;i&gt;Rechnender Raum &lt;/i&gt;could also be read as a kinetic interpretation of a cellular automaton algorithm. Even if the Turing machine is a theoretical machine, it feels very plastic to me. It&#39;s a metaphorical machine that shows the conceptual relation of space and time. Computers are basically transposers between space and time, even without seeing the actual outcome of a simulation. I like to expose the hidden structures. They are more appealing to me than the image on the screen.&lt;br /&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;iframe allowfullscreen=&quot;&quot; frameborder=&quot;0&quot; height=&quot;309&quot; mozallowfullscreen=&quot;&quot; src=&quot;http://player.vimeo.com/video/10346429?title=0&amp;amp;byline=0&amp;amp;portrait=0&amp;amp;color=ffffff&quot; webkitallowfullscreen=&quot;&quot; width=&quot;550&quot;&gt;&lt;/iframe&gt;

&lt;br /&gt;
&lt;div class=&quot;p1&quot;&gt;
&lt;br /&gt;
MW:&amp;nbsp;&lt;i&gt;There is a theme of complex but insular networks in your work. In The Conversation this is very clear - a network of internal relationships, seeking a dynamic equilibrium. Similarly in Irrational Computing, modules like the phase locked loop have this insular complexity. Can you discuss this a little bit? This tendency reminds me of notions of self-referentiality, for example in the writing of Hofstadter, where recursion and self-reference are both logical paradoxes (as in Godel&#39;s theorem) and key attributes of consciousness. Your introverted networks have a strong generative character - where complex dynamics emerge from a tightly constrained set of elements and relationships.&lt;/i&gt;&lt;/div&gt;
&lt;div class=&quot;p2&quot;&gt;
RB: Sure, I&#39;m fascinated by these kinds of emergent processes, and how they appear at different scales. But I always find it difficult to use the attribute &quot;consciousness&quot;. I think these kinds of chaotic attractors have a beauty of their own. However closed these systems look, they are always influenced by their environment. The perfect example for me is the flame of a candle: a very dynamic, complex process communicating with the environment that generates its dynamics.&lt;br /&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;iframe allowfullscreen=&quot;&quot; frameborder=&quot;0&quot; height=&quot;399&quot; mozallowfullscreen=&quot;&quot; src=&quot;http://player.vimeo.com/video/10345852?title=0&amp;amp;byline=0&amp;amp;portrait=0&amp;amp;color=ffffff&quot; webkitallowfullscreen=&quot;&quot; width=&quot;550&quot;&gt;&lt;/iframe&gt;

&lt;br /&gt;
&lt;div class=&quot;p1&quot;&gt;
&lt;br /&gt;
MW: &lt;i&gt;You describe The Conversation as &quot;pataphysical&quot;, and mention the &quot;mystic&quot; and &quot;magic&quot; aspects of Irrational Computing. Can you say some more about this aspect of your work? Is there a sort of romantic or poetic idea here, about what is beyond the rational, or is this about a more systematic alternative to how we understand the world?&lt;/i&gt;&lt;/div&gt;
&lt;div class=&quot;p2&quot;&gt;
RB: Yes, it refers to another kind of thinking. A thinking that is anti &quot;cause and reaction&quot;; a thinking of hidden relations, connections and uncertainty. I like Claude Lévi-Strauss&#39; term &quot;&lt;a href=&quot;http://en.wikipedia.org/wiki/The_Savage_Mind&quot; target=&quot;_blank&quot;&gt;The Savage Mind&lt;/a&gt;&quot;.&lt;/div&gt;
&lt;/div&gt;
</description><link>http://teemingvoid.blogspot.com/2013/06/proto-computing-interview-with-ralf.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQkIQgbzC39F29eSL1cTEJGyp2GKIYhGcGNTKdq6d5fMLivynKjWA3FbeYlrdwxdt9rtJPWj7xxlys51XRPi8kB7wusCTgbaVZ6kdJjOQkaA0qhQYqfuVRO6glaV0HXBSmtYOLqw/s72-c/irratonal-computing-schematik-011_EN.png" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-715859732878746717</guid><pubDate>Tue, 04 Jun 2013 06:21:00 +0000</pubDate><atom:updated>2013-06-04T16:21:52.793+10:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">art</category><category domain="http://www.blogger.com/atom/ns#">data</category><category domain="http://www.blogger.com/atom/ns#">dataesthetics</category><category domain="http://www.blogger.com/atom/ns#">exhibition</category><title>Figuring Data (Datascape Catalog Essay)</title><description>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
&lt;i&gt;This essay was commissioned for the exhibition &lt;a href=&quot;http://www.ciprecinct.qut.edu.au/whatson/exhibitions/datascape.jsp&quot; target=&quot;_blank&quot;&gt;Datascape&lt;/a&gt;, at the Cube Gallery, QUT in April 2013. I should mention that since writing it I&#39;ve discovered that &lt;a href=&quot;http://blprnt.com/&quot; target=&quot;_blank&quot;&gt;Jer Thorp&lt;/a&gt; was way ahead of me on the &lt;a href=&quot;http://blogs.hbr.org/cs/2012/11/data_humans_and_the_new_oil.html&quot; target=&quot;_blank&quot;&gt;new oil&lt;/a&gt; thing.&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
“Data is the new oil” - Ann Winblad, Hummer Winblad Venture Partners (&lt;a href=&quot;http://www.forbes.com/sites/perryrotella/2012/04/02/is-data-the-new-oil/&quot; target=&quot;_blank&quot;&gt;source&lt;/a&gt;)&lt;br /&gt;
&lt;br /&gt;
In the swirling chaos of twenty-first century capitalism, everybody wants to know what’s next. “Data is the new oil” is a pithy little announcement. It reminds us how we got here, powered by the long energetic boom of fossil fuels, now entering its closing stages. It announces a successor, a new wealth (and just in time). But in drawing the analogy, it also constructs data in a certain way: as a sort of amorphous but precious &lt;i&gt;stuff&lt;/i&gt;, a resource for exploitation, a promising abundance. Similarly &lt;i&gt;The Economist&lt;/i&gt; trumpeted the “&lt;a href=&quot;http://www.economist.com/node/15579717&quot; target=&quot;_blank&quot;&gt;Data Deluge&lt;/a&gt;” on its February 2010 cover: a businessman catches falling data in an upside-down umbrella, funnelling it to water a growing flower whose leaves are hundred-dollar bills.&lt;br /&gt;
&lt;a href=&quot;http://media.economist.com/images/20100227/201009LDD001.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;225&quot; src=&quot;http://media.economist.com/images/20100227/201009LDD001.jpg&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;br /&gt;
We need not (and should not) accept this analogy; but it demonstrates how data is figured, or constructed, in our culture. Our everyday life and culture are traced, tangled and enabled by digital flows. We produce and consume data as never before. But what exactly &lt;i&gt;is&lt;/i&gt; this data? What can it do, and what can &lt;i&gt;we&lt;/i&gt; do with it? Who owns or controls it? How can we understand, appreciate, or even &lt;i&gt;sense&lt;/i&gt; it? The construction of data as a cultural actor is vital because data itself is so abstract, so hard to pin down. We ought not leave it to the captains of industry, and their upside-down umbrellas. In &lt;i&gt;Datascape&lt;/i&gt; we see artists working with data, applying and diverting it for their own ends, as well as offering their own figurations of its potentials and limits. In a culture increasingly built on data, these works provide moments of cultural introspection, reflections on this abstract stuff that is our new social medium.&lt;br /&gt;&lt;br /&gt;
Google, Facebook, Twitter and the rest make us - their users - into data. This makes us
anxious about privacy and surveillance, but perhaps a more interesting question
is what it’s like to &lt;i&gt;be&lt;/i&gt; data. If we are all data subjects now, then what
is data subjectivity? Jordan Lane’s &lt;i&gt;&lt;a href=&quot;http://dialogue.media-culture.org.au/students/jordan-lane&quot; target=&quot;_blank&quot;&gt;Digital Native Archive&lt;/a&gt; &lt;/i&gt;imagines a
new bureaucratic archive for the data subject, and immediately comes to the
question of mortality. If we are data, and data can be faithfully preserved,
are we now immortal? Or are we, instead, dead forever, entombed in a
rationalised hierarchy of metadata, request protocols and archival record
formats? Christopher Baker’s &lt;i&gt;&lt;a href=&quot;http://christopherbaker.net/projects/mymap/&quot; target=&quot;_blank&quot;&gt;My Map&lt;/a&gt;&amp;nbsp;&lt;/i&gt;(below)&lt;i&gt;&amp;nbsp;&lt;/i&gt;shows us what it might be to take
charge of a personal archive, with a tool that reveals the patterns and
relationships in email correspondence. This self-portrait suggests that one of
the challenges of data subjectivity is simply knowing oneself: the scale of our
personal data exceeds our grasp.&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;http://christopherbaker.net/wp-content/uploads/2008/09/mymap-rotated_featured.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;300&quot; src=&quot;http://christopherbaker.net/wp-content/uploads/2008/09/mymap-rotated_featured.jpg&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;

In two of the most prominent data artworks from the mid-2000s, we mine these personal
archives en masse. &lt;a href=&quot;http://www.flong.com/&quot; target=&quot;_blank&quot;&gt;Golan Levin’s&lt;/a&gt; &lt;i&gt;&lt;a href=&quot;http://artport.whitney.org/commissions/thedumpster/dumpster.shtml&quot; target=&quot;_blank&quot;&gt;The Dumpster&lt;/a&gt;&lt;/i&gt; and Sep Kamvar and Jonathan Harris’ &lt;i&gt;&lt;a href=&quot;http://wefeelfine.org/&quot; target=&quot;_blank&quot;&gt;We Feel Fine&lt;/a&gt;&lt;/i&gt; scour the internet for “feelings” that are compiled into
datasets, and in turn staged as dynamic visualisations. In turning our digital
selves into swarming dots and bouncing balls, the artists animate us as members
of a teeming throng. Data here is in part a new form of social realism, a way
to represent the complex texture of life in the crowd; but these works also ask
us to reflect on the limits of data-subjectivity. Can the intensity of our
inner lives really be represented in cool, abstract data? Are we all so much
alike? &lt;a href=&quot;http://www.aaronkoblin.com/&quot; target=&quot;_blank&quot;&gt;Aaron Koblin’s&lt;/a&gt; &lt;i&gt;&lt;a href=&quot;http://www.aaronkoblin.com/work/thesheepmarket/&quot; target=&quot;_blank&quot;&gt;Sheep Market&lt;/a&gt; &lt;/i&gt;answers both yes and no; for we can see here both the comical diversity of the crowd (and its sheep avatars), and
the uniformity that digital systems encourage.&lt;br /&gt;&lt;br /&gt;
The pathos of this contrast, between the coolness of the digital and the warm,
messy intensity of humankind, emerges again in Luke DuBois’ &lt;i&gt;&lt;a href=&quot;http://turbulence.org/Works/harddata/&quot; target=&quot;_blank&quot;&gt;Hard Data&lt;/a&gt;&lt;/i&gt;,
where the tolls of war unfold as stark lists and map references. DuBois’
soundtrack, generated from the same source data, acts as an emotional mediator,
trying to return some of the tragic weight that the data fails to convey.
DuBois’ work pivots between the data-subject and what we might call the
data-world. For if the world, too, is now data, then what might that feel like?
How do we approach such a world?&lt;br /&gt;
&lt;a href=&quot;http://memory.org/point.b/windmap_may21.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img height=&quot;306&quot; src=&quot;http://memory.org/point.b/windmap_may21.png&quot; style=&quot;border: none;&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;

&lt;br /&gt;
In many works here the weather - a complex (and increasingly uncooperative) material
flux - is a sort of proxy for the data-world: a field that is both easy to
measure, and difficult to grasp. In Nathalie Miebach’s &lt;i&gt;&lt;a href=&quot;http://nathaliemiebach.com/weatherscores.html&quot; target=&quot;_blank&quot;&gt;Weather Scores&lt;/a&gt;&lt;/i&gt;, Viegas and
Wattenberg’s &lt;i&gt;&lt;a href=&quot;http://hint.fm/wind/&quot; target=&quot;_blank&quot;&gt;Wind Map&lt;/a&gt; &lt;/i&gt;(above), and my own &lt;i&gt;&lt;a href=&quot;http://mtchl.net/measuring-cup/&quot; target=&quot;_blank&quot;&gt;Measuring Cup&lt;/a&gt;,&lt;/i&gt; weather data is
a source of aesthetic richness, as well as a pointer to the world beyond, the
world that data traces. The weather - so much part of our everyday sensations -
is abstracted here into numbers and symbols, only to be remade in new sensual
forms. What if we could see the wind across an entire continent? Or hold a
hundred years of temperature? Or hear the tides as music?&lt;br /&gt;&lt;br /&gt;
Here we get a glimpse of an alternative figuration of data itself. Rather than some
kind of precious (but immaterial) stuff, or fuel for market speculation, data
here is a relationship, a link between one part of the world and another, and
a trace that can be endlessly reshaped. Of course, that trace is imperfect; a
mediated pointer, not a pure reproduction. So Viegas and Wattenberg &lt;a href=&quot;http://hint.fm/wind&quot; target=&quot;_blank&quot;&gt;issue a disclaimer&lt;/a&gt; for their &lt;i&gt;Wind Map&lt;/i&gt;: this is just an “art project”, they say;
we &quot;can&#39;t make any guarantees about the correctness of the data or our software.” Yet
that connection remains; and art here plays the role that it always has. It
transforms our understanding of the world, by representing it anew.&lt;/div&gt;</description><link>http://teemingvoid.blogspot.com/2013/06/figuring-data-datascape-catalog-essay.html</link><author>noreply@blogger.com (Mitchell)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-9008702823327585190</guid><pubDate>Tue, 07 Feb 2012 02:47:00 +0000</pubDate><atom:updated>2013-03-01T09:44:44.404+11:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">fabrication</category><category domain="http://www.blogger.com/atom/ns#">generative art</category><category domain="http://www.blogger.com/atom/ns#">models</category><category domain="http://www.blogger.com/atom/ns#">networks</category><category domain="http://www.blogger.com/atom/ns#">processing</category><category domain="http://www.blogger.com/atom/ns#">projects</category><category domain="http://www.blogger.com/atom/ns#">visualisation</category><title>Local Colour: Smaller World Network</title><description>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
Back in September I showed a little work called Local Colour at ISEA 2011. This project continues my thinking about generative systems, materiality and fabrication. It&#39;s a work in two parts: the first is a group of laser-cut cardboard bowls, made from reclaimed produce boxes - you can see more on &lt;a href=&quot;http://www.flickr.com/photos/mtchl/6833205913/in/set-72157626328027536/&quot; target=&quot;_blank&quot;&gt;Flickr&lt;/a&gt;, and read the theoretical back-story in the &lt;a href=&quot;http://isea2011.sabanciuniv.edu/paper/local-colour-and-networked-specificity&quot; target=&quot;_blank&quot;&gt;ISEA paper&lt;/a&gt;. Here I want to briefly document the second element, a sort of network diagram realised as a vinyl-cut transfer. The diagram was created using a simple generative system, initially coded in Processing - it&#39;s embedded below in Processing.js form (reload the page to generate a new diagram).&lt;br /&gt;
&lt;br /&gt;
&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
&lt;a href=&quot;http://www.flickr.com/photos/mtchl/6833205913/&quot; title=&quot;Local Colour at ISEA 2011 by mtchl, on Flickr&quot;&gt;&lt;img alt=&quot;Local Colour at ISEA 2011&quot; height=&quot;500&quot; src=&quot;http://farm8.staticflickr.com/7151/6833205913_f21c9ff2e3.jpg&quot; width=&quot;415&quot; /&gt;&lt;/a&gt;
&lt;br /&gt;
Network diagrams are one of the most powerful visual tropes in contemporary digital culture. Drawing on the credibility of &lt;a href=&quot;http://en.wikipedia.org/wiki/Network_science&quot; target=&quot;_blank&quot;&gt;network science&lt;/a&gt;, they promise a paradigm that can be used to visualise everything from social networks to transport and biological systems. I love how they oscillate between expansive significance and diagrammatic emptiness. In this work I was curious to play with some of the conventions of &lt;a href=&quot;http://en.wikipedia.org/wiki/Small-world_network&quot; target=&quot;_blank&quot;&gt;small world&lt;/a&gt; or scale-free networks. A leading theory about how these networks form involves &lt;a href=&quot;http://arxiv.org/pdf/cond-mat/9910332.pdf&quot; target=&quot;_blank&quot;&gt;preferential attachment&lt;/a&gt;: put simply, it states that nodes entering a network will prefer to connect to those nodes that already have the most connections. In visualising the resulting networks, graph layout processes (such as force direction) use the connectivity between nodes to reposition the nodes themselves; location is determined by the network topology.&lt;br /&gt;
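The preferential attachment rule is simple enough to sketch in a few lines. Here is a minimal, illustrative Python version of the standard growth model (not the Processing code behind this work): each arriving node picks its link target with probability proportional to the target's current degree.

```python
import random

def preferential_attachment(n_nodes, seed=None):
    """Grow a network where each arriving node links to an existing node
    with probability proportional to that node's current degree."""
    rng = random.Random(seed)
    degree = [1, 1]                 # start with two connected nodes
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        # weighted pick: well-connected nodes are the likeliest targets
        target = rng.choices(range(len(degree)), weights=degree)[0]
        edges.append((new, target))
        degree.append(1)
        degree[target] += 1
    return edges, degree

edges, degree = preferential_attachment(200, seed=1)
# a few "hub" nodes accumulate a disproportionate share of the links
```

The rich-get-richer feedback in the weighted pick is all it takes to produce the hubs characteristic of scale-free networks.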
&lt;br /&gt;
&lt;iframe frameborder=&quot;0&quot; height=&quot;400&quot; id=&quot;frame1&quot; name=&quot;frame1&quot; scrolling=&quot;no&quot; src=&quot;http://mtchl.net/localcol_network/index.html&quot; width=&quot;570&quot;&gt;&lt;/iframe&gt;
&lt;br /&gt;
&lt;br /&gt;
This process takes the standard small-world-network model and changes a few basic things. First, it assigns nodes a fixed position in space. Second, it uses that position to shape the connection process: here, as in the standard model, nodes prefer to connect to those with lots of existing connections. But distance also matters: connecting to a close node is &quot;cheaper&quot; than connecting to a distant one. And nodes have a &quot;budget&quot; - an upper limit on how far their connection can reach. These hacks result in a network which has some small world attributes - &quot;hubs&quot; and &quot;clusters&quot; of high connectivity - but where connectivity is moderated by proximity. Finally, this diagram visualises a change in one parameter of the model, as the distance budget decreases steadily from left to right. It could be a utopian progression towards a relocalised future, or the breakdown or dissolution of the networks we inhabit (networks in which distance remains, for the time being, cheap enough to neglect).&lt;br /&gt;
&lt;br /&gt;
The process running here generates the diagram through a gradual process of optimisation. Beginning with 600 nodes placed randomly (but not too close to any other), each node is initially assigned a random partner to link to. Then they begin randomly choosing new partners, looking for one with a lower cost - and cost is a factor of both distance and connectivity. The Processing source code is &lt;a href=&quot;http://creative.canberra.edu.au/mitchell/localcol_network/small_world_network_js.pde&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;.&lt;/div&gt;
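For concreteness, here is a rough Python sketch of the cost-based rewiring described above - an illustrative reconstruction with made-up parameter values, not the actual source linked in this post.

```python
import math
import random

def smaller_world(n=200, budget=0.3, iterations=20000, seed=2):
    """Cost-based rewiring sketch: each node keeps one outgoing link and
    repeatedly trades it for a cheaper partner. Cost grows with distance,
    shrinks with the partner's popularity, and links longer than the
    distance budget are never accepted."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]   # fixed positions
    link = [rng.randrange(n) for _ in range(n)]              # initial random partner
    indeg = [0] * n
    for t in link:
        indeg[t] += 1

    def cost(i, j):
        d = math.dist(pos[i], pos[j])
        if i == j or d > budget:
            return float("inf")          # self-links and over-budget links barred
        return d / (1 + indeg[j])        # well-connected nodes are cheaper

    for _ in range(iterations):
        i = rng.randrange(n)
        candidate = rng.randrange(n)
        if cost(i, candidate) < cost(i, link[i]):   # keep the cheaper partner
            indeg[link[i]] -= 1
            link[i] = candidate
            indeg[candidate] += 1
    return pos, link

pos, link = smaller_world()
# hubs and clusters emerge, but connectivity is moderated by proximity
```

Sweeping the `budget` parameter across space would reproduce the left-to-right gradient of the diagram above.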
&lt;/div&gt;</description><link>http://teemingvoid.blogspot.com/2012/02/local-colour-smaller-world-network.html</link><author>noreply@blogger.com (Mitchell)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-4039204924430166517</guid><pubDate>Mon, 09 Jan 2012 01:24:00 +0000</pubDate><atom:updated>2012-01-09T12:26:15.670+11:00</atom:updated><title>An Interview with Paul Prudence (for Neural 40)</title><description>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
Yet another blog &quot;paste&quot; to provide some semblance of life around here. &lt;a href=&quot;http://dataisnature.com/&quot; target=&quot;_blank&quot;&gt;Paul Prudence&lt;/a&gt; recently interviewed me for &lt;a href=&quot;http://www.neural.it/art/2011/11/neural_40_the_generative_unexp.phtml&quot; target=&quot;_blank&quot;&gt;Neural 40: The Generative Unexpected&lt;/a&gt;, ranging over generative art (utopian and otherwise), cross modal AV and data visualisation among other things. Thanks to Paul for some thoughtful questions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;PP: It might be argued that some of the main themes infused in 
generative art are those to do with a kind of techno-utopianism and 
futurism. Have you come across any generative artworks that deal with 
dystopian themes or have a sense of anachronism about them?
More importantly are the technologies and software used in creating 
these artworks inherently defining their aesthetics?&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
It&#39;s true that there&#39;s a flavour of the techno-utopian to a lot of 
digital generative art, especially in the online digital scene. The 
founding principle of generative art is, inescapably, the generative 
capacity of its own system, so perhaps it is optimistic
by definition? Online culture - or the realtime social media flow of 
projects, memes and links that we tend to bathe in - is also 
techno-utopian at its core, still strongly influenced by the West-Coast 
startup culture of the companies involved. But with a bit
of digging some more diversity emerges; the work of my friend&amp;nbsp;&lt;a href=&quot;http://www.csse.monash.edu.au/%7Ejonmc/main.html&quot; target=&quot;_blank&quot;&gt;Jon McCormack&lt;/a&gt; for
 example, is highly reflective about the
nature / technology relationship - though it sometimes conceals its 
ambivalence under a very beautiful surface. Another Australian artist - &lt;a href=&quot;http://teemingvoid.blogspot.com/2007/11/murray-mckeich-generative-gothic.html&quot; target=&quot;_blank&quot;&gt;Murray McKeich&lt;/a&gt; - makes work that is both anachronistic and dystopian, like 
his pZombies, gruesome avatars for generative agency composited from 
scanned rubbish.&lt;br /&gt;
&lt;br /&gt;
On the other hand the flipside of techno-utopia is real richness and 
generative excess - the ability of formal systems to reveal terrains of 
sublime complexity. At best this &quot;maximalist&quot; strand of generative 
practice can induce a state of wonder, little chinks
of access to the unthinkable complexity of the real material world.&lt;br /&gt;
&lt;br /&gt;
Do the technologies define aesthetics? They certainly shape the 
aesthetics powerfully - but at least now the field of technology is more
 open and malleable for artists than ever before. It might be that the 
most important new works in this field are coding
platforms or communities, rather than art or design projects. Processing
 &lt;a href=&quot;http://processing.org/discourse/yabb2/YaBB.pl?num=1116842288&quot;&gt;won a Golden Nica&lt;/a&gt;, after all. But in this field monolithic 
&quot;technologies&quot; are increasingly breaking down - Processing for example 
is very influential, and there is certainly a Processing
&quot;look&quot;, but with a new framework or library appearing every other week, 
we can&#39;t blame technology for limited diversity in the field.&lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;PP: Much generative art is concerned with certain kinds of abstraction 
and systematised multiplicity of form without a framework of 
proposition, resolution and conclusion. Do you think there is any room 
for a sense of narrative in generative art? Could you
give me examples of generative artworks that deal with narrative 
successfully?&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
I would argue that every generative artwork involves a framework of 
proposition, resolution and conclusion. It is the formal and procedural 
structure of the generative system that creates the work: a set of 
entities, attributes, relationships, processes, rules,
constraints, and visualisations (more &lt;a href=&quot;http://creative.canberra.edu.au/mitchell/papers/SystemStories.pdf&quot;&gt;here&lt;/a&gt;). The problem, for the way generative art
 is both made and received, is that that system is often hard to get at -
 it&#39;s an abstract thing, which the artist may or may not describe or 
publish. A lot of work in the digital generative
scene operates in an image culture where &quot;look&quot; is valued over process 
or concept. So although it&#39;s sometimes hard to access, I would argue 
that there is often a narrative inside even the most &quot;retinal&quot; 
generative art - it&#39;s the narrative of the system. Sometimes
it&#39;s fairly clear - for example &lt;a href=&quot;http://www.coplanar.org/&quot;&gt;Brandon Morse&#39;s&lt;/a&gt; wonderful procedural 
animations of collapsing structures (also another dystopian work!). For 
me Morse&#39;s work is wonderfully poignant because it works by resemblance -
 it reminds us of real things collapsing -
but it also works by metonymy, referring to the idealised world of 
computer graphics and simulation; so it seems like the simulation itself
 is collapsing as well (below: &lt;i&gt;Achilles&lt;/i&gt; (2009) - &lt;a href=&quot;http://www.flickr.com/photos/transphormetic/5234724678/&quot; target=&quot;_blank&quot;&gt;photo&lt;/a&gt; by Paul Prudence).&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZuzaap_b0u6kR4hoYIP38G6O8KoAytrZnkPsFfU6a0rGPG0C70S9jxTh7iF6lJq-ISimlCgOdtItSeuwyTRyBAVQQxpqE4uaSwsdxTm2N-2ZQNhCLSjzAx6iXMmXv6BrYAEzyvQ/s1600/morse_achilles_500.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZuzaap_b0u6kR4hoYIP38G6O8KoAytrZnkPsFfU6a0rGPG0C70S9jxTh7iF6lJq-ISimlCgOdtItSeuwyTRyBAVQQxpqE4uaSwsdxTm2N-2ZQNhCLSjzAx6iXMmXv6BrYAEzyvQ/s1600/morse_achilles_500.jpg&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;i&gt;PP: Each year we see different algorithms come into fashion as tools 
for the generative artist. Perlin Noise, Circle Packing, Voronoi, 
Reaction-Diffusion and Sub-divisioning algorithms are good examples. How
 important is it for an artwork to hide traces of
the software and algorithm that was used to generate it? Can you 
predict what the next big algorithm might be? Or do you see any new 
potential in an old or overlooked algorithm?&lt;/i&gt; &lt;br /&gt;
&lt;br /&gt;
If you need to hide the traces of your algorithm, change your algorithm.
 I too am fascinated by the algo-memetic fashion parade that moves 
through digital design and generative art. This relates to the question 
of look vs system; these systems seem to reproduce
using their appearance as a sort of lure - it&#39;s a bit like sexual 
selection in a memetic ecology, survival of the prettiest. As a result 
people seem to apply them without any understanding or interest in the 
system or process. I&amp;nbsp;&lt;a href=&quot;http://teemingvoid.blogspot.com/2010/08/uniform-diversity-space-filling-and.html&quot; target=&quot;_blank&quot;&gt;wrote&lt;/a&gt;&amp;nbsp;last
year
 about the Voronoi algorithm along these lines. So algo-fashions will 
come and go, but for me the most rewarding work is always a result of 
deep engagement with the generative system - taking a system and hacking
 it into something else entirely, or deriving
new systems.&amp;nbsp;&lt;a href=&quot;http://notnot.home.xs4all.nl/&quot; target=&quot;_blank&quot;&gt;Erwin Driessens and Maria Verstappen&lt;/a&gt;&amp;nbsp;for example have a long track record of inventing algorithms that you can&#39;t just grab off
the shelf - their&amp;nbsp;&lt;a href=&quot;http://notnot.home.xs4all.nl/breed/Breed.html&quot; target=&quot;_blank&quot;&gt;Breed&lt;/a&gt;&amp;nbsp;and&amp;nbsp;&lt;a href=&quot;http://notnot.home.xs4all.nl/ima/IMAtraveller.html&quot; target=&quot;_blank&quot;&gt;Ima Traveller&lt;/a&gt;&amp;nbsp;works are sort of mutant cellular automata - but really they don&#39;t fit any clear template.&amp;nbsp;&lt;a href=&quot;http://n-e-r-v-o-u-s.com/&quot; target=&quot;_blank&quot;&gt;Nervous System&lt;/a&gt;&amp;nbsp;also
 implement new systems:
they go to the scientific literature in biology, or even run their own 
physical trials, and implement models from scratch. There aren&#39;t many 
designers currently with the ability to do that. &amp;nbsp;&lt;a href=&quot;http://www.jonathanmccabe.com/&quot; target=&quot;_blank&quot;&gt;Jonathan McCabe&lt;/a&gt;&amp;nbsp;is another good example of this; his multi-scale Turing 
patterns (below) are a genius hack of a very old algorithm. Jonathan&#39;s Origami 
Butterfly process is completely new (and equally distinctive).&lt;br /&gt;
&lt;br /&gt;
So there isn&#39;t a Platonic shelf somewhere stocked with generative 
algorithms for designers to select from. The space of potential 
generative systems is unimaginably massive. Make one up, or at least 
hack an existing one into something else. Even very simple
changes to existing systems can be very productive. For years I have 
been playing with systems based on Murray Eden&#39;s growth model - perhaps 
the simplest (and first) ever model of biological growth. There&#39;s much 
more to explore.&lt;br /&gt;
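Eden's model is simple enough to sketch in a few lines - this is the textbook version, in Python for illustration, not one of the systems mentioned above: start with one occupied cell, then repeatedly occupy a random empty site adjacent to the cluster.

```python
import random

def eden_growth(steps, seed=0):
    """Eden growth: from a single seed cell, repeatedly occupy a random
    empty site adjacent to the existing cluster."""
    rng = random.Random(seed)
    cluster = {(0, 0)}
    frontier = {(1, 0), (-1, 0), (0, 1), (0, -1)}    # empty neighbours of the cluster
    for _ in range(steps):
        cell = rng.choice(sorted(frontier))          # sorted only for determinism
        frontier.remove(cell)
        cluster.add(cell)
        x, y = cell
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in cluster:
                frontier.add(nb)
    return cluster

blob = eden_growth(500)
# the cluster grows into a roughly circular blob with an irregular edge
```

Even this tiny system rewards hacking: biasing the choice of frontier cell (by direction, by age, by neighbour count) yields very different growth forms.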
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;http://www.flickr.com/photos/jonathanmccabe/5732971115/&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot; title=&quot;20110518c by jonathanmccabe, on Flickr&quot;&gt;&lt;img alt=&quot;20110518c&quot; border=&quot;0&quot; height=&quot;500&quot; src=&quot;http://farm6.staticflickr.com/5230/5732971115_945d31df55.jpg&quot; width=&quot;500&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;i&gt;PP: What is the role of serendipity and non-determinism in the formulation of a successful generative artwork?&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
When teaching generative art my colleague &lt;a href=&quot;http://hingstonbrook.com/tim/&quot;&gt;Tim Brook&lt;/a&gt; initially bans his 
students from using randomness. I don&#39;t do the same, but I can see the 
logic of it: randomness adds meaningless variation. Used directly, it&#39;s 
just that - meaningless variation that can
give a false impression of richness. But it can be very handy - for 
example when exploring the range of outcomes of a complex system, 
randomising its parameters can throw up useful samples of the generative
 space of that system. Again it&#39;s about understanding
the system. Serendipity is another thing; I think most generative 
artists work hard to cultivate serendipity, to entice systems into a 
state where pleasant surprises emerge. Many artists hand-pick 
&quot;candidates&quot; from large populations of generated works - seeking
out those serendipitous moments. Although variation is fundamental to 
generative work, it&#39;s interesting to observe reactions to &lt;a href=&quot;http://writtenimages.net/&quot; target=&quot;_blank&quot;&gt;Written Images&lt;/a&gt;,
where each volume is a unique variant of the collected works, with no 
opportunity for artists to pick favourites. Not having final control 
over each artefact is still a bit scary (for me at least).&lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;PP: In your &lt;/i&gt;&lt;a href=&quot;http://www.flickr.com/photos/mtchl/sets/72157604494499057/&quot; target=&quot;_blank&quot;&gt;Watching The Sky&lt;/a&gt;&lt;i&gt; piece there is almost a tendency to study 
the image in a forensic manner, to try and decode the work, and to find 
environmental patterns in relation to patterns in the work. This method 
of analysis is in almost direct contrast to
the usual manner in which a data visualisation might be constructed,
where an artist decides on a specific representational system beforehand to create clarity and make a point. Perhaps you could 
comment a bit more on how data visualisation might move
forward in this respect.&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
I am drawing on other work here - especially the early work of&amp;nbsp;&lt;a href=&quot;http://jevbratt.com/&quot; target=&quot;_blank&quot;&gt;Lisa Jevbratt&lt;/a&gt;, like her classic&amp;nbsp;&lt;a href=&quot;http://jevbratt.com/1_to_1/&quot; target=&quot;_blank&quot;&gt;1:1&lt;/a&gt;.
Jevbratt outlines a sort of data-mysticism, a view of data as a 
reservoir of unknown potential, and shows fine-grained patterns without 
concern for &quot;readability&quot;. In Watching the Sky (and related work) I just
 use images as a data source; this is a simple ploy
to introduce richness by working with rich, unstructured data - and data
 with a complex (but legible) relationship to the world. That work has 
certainly shaped my thinking on visualisation. Maintaining the 
&quot;unstructured&quot; complexity of the image as a data source
- rather than reducing it to statistical features - is a great way to 
provide contextual cues. The&amp;nbsp;&lt;a href=&quot;http://creative.canberra.edu.au/cex/&quot; target=&quot;_blank&quot;&gt;commonsExplorer&lt;/a&gt;&amp;nbsp;project
 I did with
Sam Hinton - a visual explorer for Flickr Commons streams - uses tiny 
cropped &quot;core samples&quot; that offer telltale clues about the source 
images.&lt;br /&gt;
&lt;br /&gt;
The other idea at work here (and in Jevbratt&#39;s work) is a sense of data 
as (a) material; as something with texture or grain that can be felt as 
much as analysed. I have experimented with making these ideas literal in
 data-form projects like&amp;nbsp;&lt;a href=&quot;http://teemingvoid.blogspot.com/2009/10/weather-bracelet-3d-printed-data.html&quot; target=&quot;_blank&quot;&gt;Weather Bracelet&lt;/a&gt;&amp;nbsp;and&amp;nbsp;&lt;a href=&quot;http://teemingvoid.blogspot.com/2010/06/measuring-cup.html&quot; target=&quot;_blank&quot;&gt;Measuring Cup&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimcfrslUwnLSoVK0YTLTIdNtqfxnafMESpllw7fzbBIS7385KWa9Y9z3bNbYffwUwji4t1Mj37m30QSsYyDI_d-91T5BerFkmtVi7K2Cn3sboxeXIZ-bSSR0FTrKD3smHJxs2mlw/s1600/robin_fox_-_backscatter_volta1_-_image_courtesy_the_artist2.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;400&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimcfrslUwnLSoVK0YTLTIdNtqfxnafMESpllw7fzbBIS7385KWa9Y9z3bNbYffwUwji4t1Mj37m30QSsYyDI_d-91T5BerFkmtVi7K2Cn3sboxeXIZ-bSSR0FTrKD3smHJxs2mlw/s400/robin_fox_-_backscatter_volta1_-_image_courtesy_the_artist2.jpg&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;i&gt;PP: In one of your papers you discuss &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/10/synesthesia-and-cross-modality-in.html&quot;&gt;synaesthesia and cross-modality&lt;/a&gt; in
 contemporary audio visuals. It seems that an important criterion for a 
successful synaesthetic artwork is meaningful, metaphorical or 
conceptual cross-wiring of sound and video - and
not just a mechanical translation between the two. What other criteria 
are important in a successful cross-modal artwork?
&lt;/i&gt;
&lt;br /&gt;
&lt;div&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div&gt;
Cross-modal or &quot;coupled&quot; audiovisuals exemplify one of the key 
questions of digital media - we could call it the mapping problem. If 
the basic materials of the work are digital - that is, abstract patterns
 that can travel through any number of different
substrates - then how do we make them perceivable? Or, how do we choose a
 mapping, a way of making data available to perception? Manovich&amp;nbsp;&lt;a href=&quot;http://www.manovich.net/DOCS/data_art_2.doc&quot; target=&quot;_blank&quot;&gt;calls&lt;/a&gt;&amp;nbsp;this
the
 &quot;built-in existential angst&quot; of digital media. So of course there is 
an infinity of possible ways to connect sound and image - either mapping
 one into the other, or generating both from some common data source. I 
actually like mechanical or automatic mappings.
Because they are stable and consistent they let us soak in the 
relationship, the map itself; and these automatic maps are often quite 
subtle and fine-grained, compared to more composed or intentional 
relationships. In&amp;nbsp;&lt;a href=&quot;http://www.myspace.com/fox_robin&quot; target=&quot;_blank&quot;&gt;Robin Fox&lt;/a&gt;&#39;s work for example a simple (polar) oscilloscope display creates
 images from audio signals - but Fox explores the mapping in depth, 
working out how to &quot;play&quot; it, reverse-engineering the audio signal to 
create images and revealing surprising correspondences (above: image via &lt;a href=&quot;http://notquitecritics.com/2010/09/22/oscilloscopes-and-laser-beams/&quot; target=&quot;_blank&quot;&gt;Not-Quite-Critics&lt;/a&gt;).
Of course automatic mappings can be incredibly boring - how many 
modified graphic equaliser visualisations do we need to see - but I 
think this is often because the mapping is filtered through too many 
abstractions and interventions; it becomes a set of parameters.&lt;/div&gt;
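To make the idea of a stable, automatic mapping concrete, here is one simple reading of a polar oscilloscope display - an assumption for illustration, not a description of Fox's actual signal chain: the sample index sweeps the angle, and amplitude modulates the radius.

```python
import math

def polar_trace(samples, base_radius=1.0):
    """Map an audio buffer to screen points: the sample index sweeps the
    angle through one revolution, amplitude modulates the radius."""
    n = len(samples)
    points = []
    for i, s in enumerate(samples):
        theta = 2 * math.pi * i / n
        r = base_radius + s          # amplitude pushes the trace in or out
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# a sine at five cycles per buffer draws a five-lobed rosette
sine = [0.4 * math.sin(2 * math.pi * 5 * i / 1024) for i in range(1024)]
trace = polar_trace(sine)
```

Because the map is fixed and fine-grained, changing the signal (frequency, phase, harmonics) changes the figure in legible ways - which is exactly what makes such mappings worth learning to "play".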
&lt;div&gt;
&lt;br /&gt;
&lt;i&gt;PP: There has been a huge influence of generative art in recent years on
 traditional drawing techniques such as painting and sculpture. In 
reverse direction, what ways, if any, can generative artists learn from 
traditional plastic arts?&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
The link there for me is a sense of &quot;procedurality&quot; or &quot;processuality&quot;. 
In Casey Reas&#39; work we can see a strong relationship between 
computational and non-computational procedures such as those of Sol 
LeWitt. In teaching programming to designers, I have students
write and execute a LeWitt-style procedure, with pencil and paper. 
Digital generative systems are just formal procedures, executed by 
machines. Treating processes as human-executable helps unpack the black 
boxes of generative systems mentioned earlier, and
hopefully reveal them as contingent and hackable. Otherwise: the joy of 
materiality. Generative art and design covets the lush tangibility of 
traditional media; and with the wave of interest in fabrication we are 
seeing ever more generative work realised in
&quot;off-screen&quot; forms. The challenge then, for pasty code-artist types, is 
to match the craft skills of hands-on makers in realising the work.&lt;br /&gt;
&lt;i&gt;&lt;br /&gt;

PP: What early interests did you have that might have led you to your current path as an artist and academic in this field?&amp;nbsp;&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
Music - which I don&#39;t do much of any more, but it was a big part of my 
world for a long time. Music (or Western music anyway) is systematised 
and symbolic, but also immediate and affective. That combination has 
always interested me. Reading &lt;a href=&quot;http://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach&quot;&gt;Gödel, Escher, Bach&lt;/a&gt;
- as well as lots of popular science stuff on complex systems - was 
influential. I was playing around with computers from around the time of
 the Apple II; later I convinced my father to buy an Amiga 1000, 
ostensibly to be used in his architecture business.
It didn&#39;t ever do much architecture but I used it to make lots of bad 
graphics and music. Also I grew up in an outer suburb, surrounded by 
wild bushland; I&#39;m a romantic nature boy at heart.&lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;PP: Can you tell me a bit about how the dual role of essayist/writer and
 artist works in your situation. The dialectical relationship must 
create a certain amount of self-reflexivity on both sides?&lt;/i&gt;&lt;/div&gt;
&lt;div&gt;
&lt;br /&gt;
Writing is fundamentally another kind of making - when it works, text 
and ideas are a pretty heady medium. So to some extent it&#39;s all 
practice, or at least speculation, experimentation, thinking of various 
sorts. When it works best, the practical work can trial
or extend the writing, and the writing can contextualise, interpret and 
unpack the art work. &quot;Practice-led research&quot; works for me as an approach - especially if you don&#39;t split art-making and writing along neat practice/theory lines.&lt;br /&gt;
&lt;br /&gt;
&lt;i&gt;PP: Can you tell me about any projects you have planned for the future,
 any new books in the pipeline or art projects in progress?&lt;/i&gt;&lt;br /&gt;
&lt;br /&gt;
Since 2008 I&#39;ve been researching and developing interactive 
visualisations of cultural collections datasets, working with partners 
including the National Archives of Australia and most recently the 
National Gallery of Australia. The work is challenging and
rewarding; I enjoy the way data vis can span the poetic and the prosaic,
 and the immersive richness of large data sets. That line of work has 
been pulling me away from &quot;art&quot;, which is fine with me - I generally 
find the edges and interfaces around creative
digital culture and practice more interesting than the portion of it 
inside gallery walls. But the writing is also ticking over, mostly on 
digital materiality (or&amp;nbsp;&lt;a href=&quot;http://teemingvoid.blogspot.com/search/label/transmateriality&quot; target=&quot;_blank&quot;&gt;transmateriality&lt;/a&gt;)
and the aesthetics of computational art and design. There&#39;s a new book in there somewhere, I hope.
&lt;/div&gt;
&lt;/div&gt;</description><link>http://teemingvoid.blogspot.com/2012/01/interview-with-paul-prudence-for-neural.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZuzaap_b0u6kR4hoYIP38G6O8KoAytrZnkPsFfU6a0rGPG0C70S9jxTh7iF6lJq-ISimlCgOdtItSeuwyTRyBAVQQxpqE4uaSwsdxTm2N-2ZQNhCLSjzAx6iXMmXv6BrYAEzyvQ/s72-c/morse_achilles_500.jpg" height="72" width="72"/><thr:total>1</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-8535565814381236885</guid><pubDate>Sun, 10 Apr 2011 03:31:00 +0000</pubDate><atom:updated>2011-04-10T15:00:31.282+10:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">arrays</category><category domain="http://www.blogger.com/atom/ns#">materiality</category><category domain="http://www.blogger.com/atom/ns#">projectionmapping</category><category domain="http://www.blogger.com/atom/ns#">screen</category><category domain="http://www.blogger.com/atom/ns#">theory</category><category domain="http://www.blogger.com/atom/ns#">transmateriality</category><title>After the Screen: Array Aesthetics and Transmateriality</title><description>&lt;span style=&quot;font-style: italic;&quot;&gt;At the risk of some sort of blog-will-eat-itself situation, I&#39;m posting this paper, presented at &lt;a href=&quot;http://blogs.unsw.edu.au/tiic/&quot;&gt;TIIC&lt;/a&gt; last November&lt;/span&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;, which includes several threads developed here previously - arrays, transmateriality, and the work of HC Gilje. There are some new bits too however, on screens, projection mapping, and lots of tasty examples of a putative &quot;post-screen&quot; practice.&lt;/span&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;&lt;br /&gt;&lt;br /&gt;1. 
Glowing Rectangles&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;For all the diversity of the contemporary media ecology - network, broadcast, games, mobile - one technical form is entirely dominant. Screens are everywhere, at every scale, in every context. As well as the archetypal &quot;big&quot; and &quot;small&quot; screens of cinema and television we are now familiar with pocket- and book-sized screens, public screens as advertising or signage, urban screens at architectural scales. As satirical news site The Onion &lt;a href=&quot;http://www.theonion.com/articles/report-90-of-waking-hours-spent-staring-at-glowing,2747/&quot;&gt;observes&lt;/a&gt;, we &quot;spend the vast majority of each day staring at, interacting with, and deriving satisfaction from glowing rectangles.&quot;&lt;br /&gt;&lt;br /&gt;Formally and technically these screens vary - in size and aspect ratio, display technology, spatiotemporal limits, and so on. They are united however in two basic attributes, which are something like the contract of the screen. First, the screen operates as a mediating substrate for its content - the screen itself recedes in favor of its hosted image. The screen is self-effacing (though never of course absent or invisible). This tendency is clearly evident in screen design and technology; we prize screens that are slight and bright - those that best make themselves disappear. 
Apple&#39;s &quot;Retina&quot; display technology &lt;a href=&quot;http://www.apple.com/iphone/features/retina-display.html&quot;&gt;claims&lt;/a&gt; to have passed an important perceptual threshold of self-effacement, attaining a spatial density so high that individual pixels are indistinguishable to the naked eye (below - image &lt;a href=&quot;http://prometheus.med.utah.edu/%7Ebwjones/2010/06/apple-retina-display/&quot;&gt;Bryan Jones&lt;/a&gt;).&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgumB2vpnIBnwQXcYCQb1SAijkkyJSo91CYe_qkajexxp1alPb9F6yn00uFG9h8sCfCKOlVqLc9Bzj1YMBf4f8VbdIFnMHmtde9yX3nr1-U8mT7OtaJPr7ZnrYyKmR03DueEJLDFQ/s1600/Retina-Display-stack-Display.jpg&quot;&gt;&lt;img style=&quot;display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 267px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgumB2vpnIBnwQXcYCQb1SAijkkyJSo91CYe_qkajexxp1alPb9F6yn00uFG9h8sCfCKOlVqLc9Bzj1YMBf4f8VbdIFnMHmtde9yX3nr1-U8mT7OtaJPr7ZnrYyKmR03DueEJLDFQ/s400/Retina-Display-stack-Display.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5593794058247474722&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;The second key attribute of contemporary digital screens is their tendency to generality. The self-effacing substrate of the screen is increasingly a general-purpose substrate - unlinked to any specific content type; equally capable of displaying anything - text, image, web site, video, or word-processor. This attribute is coupled of course to the generality of networked computing; since the era of multimedia the computer screen has led the way in modeling itself as a container for anything (just as the computer models itself as a &quot;machine for anything&quot;). 
The past decade has simply seen this general-purpose container proliferate across scales and contexts, ushering us into the era of glowing rectangles.&lt;br /&gt;&lt;br /&gt;However, over the past decade in design and the media arts, a wave of practice has appeared which, as this paper will argue, resists the dominance of the glowing rectangle. Given the near-total cultural saturation of the screen, this is unsurprising; it continues the ongoing cultural dance of fringe and mainstream in which this practice participates. This is not simply a story of resistance, however. In proposing and describing two particular strains of &quot;post-screen&quot; practice, this paper aims firstly to outline the shared terms of their relationship with the screen, and in the process develop a more detailed sense of the conceptual device of generality, outlined above, and its opposite, specificity. Secondly, and more briefly, it outlines a theorisation of this practice, invoking transmateriality, an account of the paradoxical materiality of (especially digital) media, and Gumbrecht&#39;s notion of presence.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;2. Arrays&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;During the opening ceremony of the 2008 Beijing Olympics, a huge grid of drummers assembled in the stadium, each standing before a large square &lt;i&gt;fou&lt;/i&gt; drum, a traditional Chinese instrument. Each drum was augmented with white LEDs mounted on its surface, triggered with each drum stroke. The drummers formed a vast array of discrete audiovisual elements, precisely choreographed in the style of these spectaculars. Human pixels, but coarse and resolutely human; at one point the drummers desynchronised entirely, forming a thunderous grid of flickering light.  
In a ceremony created for the (broadcast) screen - to the infamous  extent of splicing computer-animated fireworks into its telecast in  place of real ones - the drummers were a moment of involution. Their  array echoed all the other, more conventionally self-effacing screens  threaded through the event; but it also inverted some of their key  attributes. Firstly its substrate, instead of receding behind &quot;content&quot;,  came forward; if anything substrate and content were one and the same.  Secondly, while this array nods towards the generality of the screen in  its choreographed patterns - which like the patterns on a screen could  be &quot;anything at all&quot; - it veers strongly in the opposite direction,  towards the here and now, what I will call &lt;a style=&quot;font-family: georgia;&quot; href=&quot;http://teemingvoid.blogspot.com/2009/10/right-here-right-now-hc-giljes-networks.html&quot;&gt;&lt;i&gt;specificity&lt;/i&gt;&lt;/a&gt;. As I &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/08/array-aesthetics-olympic-edition.html&quot;&gt;argued&lt;/a&gt; at the time, the  poetics of this array rely on the specificity of its elements - the  drummers, drums, and their solid-state illumination - rather than the  patterns that play across it.&lt;br /&gt;&lt;br /&gt;&lt;a style=&quot;font-family: georgia;&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgW7LYGVkW6enwkB4nu6WC4CNKP0as92XDK_thjzRKqjiFQexdrlSW1ItrV1Q-y73NcuJRVCExSuKKVKgAkhgL28t0sMydw5Bo2P3zWMqNyH9zml_7_CB9wp2uwDhERtJ0iTeJF9A/s400/oly7.jpg&quot;&gt;&lt;img style=&quot;display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 230px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgW7LYGVkW6enwkB4nu6WC4CNKP0as92XDK_thjzRKqjiFQexdrlSW1ItrV1Q-y73NcuJRVCExSuKKVKgAkhgL28t0sMydw5Bo2P3zWMqNyH9zml_7_CB9wp2uwDhERtJ0iTeJF9A/s400/oly7.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br 
/&gt;The  drummers are one popular example of a formal trope we can find  throughout media arts and design practice over the past decade. Daniel Rozin&#39;s 1999&lt;i&gt; &lt;a href=&quot;http://www.smoothware.com/danny/woodenmirror.html&quot;&gt;Wooden Mirror&lt;/a&gt;&lt;/i&gt; is one of the earlier examples. &lt;i&gt;Wooden Mirror&lt;/i&gt;  is an array of square wooden tiles embedded in a large octagonal frame,  along with a bundle of custom electronics. The tiles are fitted with  servomotors, so that each one can tilt up and down on its horizontal  axis. As its angle to the light changes, each tile appears brighter or  darker. Rozin wires up the array to a videocamera, to complete the  mirror circuit: the brightness of pixels in the incoming image drives  the angle of the tiles. Given the overtly visual logic of the work, it&#39;s  interesting that its sound is equally striking: the wooden tiles  clatter like mechanical rainfall, sonifying the rate of change of the  image; as the image becomes still, the clatter dies off to a low  twitching. Again, this array emphasises the material presence of its  substrate. The tonal &quot;generality&quot; of the wooden mirror is functional  enough to be familiar, but the coarse mechanical clattering of these  pixels makes them inescapably specific.&lt;br /&gt;&lt;br /&gt;&lt;div style=&quot;text-align:center&quot;&gt;&lt;iframe title=&quot;YouTube video player&quot; src=&quot;http://www.youtube.com/embed/i-G54kVrhbE&quot;  allowfullscreen=&quot;&quot; frameborder=&quot;0&quot; height=&quot;390&quot; width=&quot;480&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;Rozin has made many similar mirrors; notable is &lt;i&gt;&lt;a href=&quot;http://www.smoothware.com/danny/newtrashmirror.html&quot;&gt;Trash Mirror&lt;/a&gt; &lt;/i&gt;(2001)  where the individual elements - irregularly shaped pieces of rubbish -  are packed into a freeform mosaic. 
This array moves one more step away from the homogeneous generality of the digital screen. Here the elements are irregular in size and shape, but also carry their own specific textures and colours. In &lt;a href=&quot;http://www.smoothware.com/danny/mirrorsmirror.html&quot;&gt;&lt;i&gt;Mirrors Mirror&lt;/i&gt;&lt;/a&gt; (2008) the regular grid returns, but the array elements are themselves replaced by mirrors; as these tilt they reflect different parts of the environment. Here the location of the tonal &quot;content&quot; in the array is, like the image source, deferred to the environment. In a familiar digital screen, image elements are luminous modules whose colour value is independent and absolute. In Rozin&#39;s &lt;i&gt;Wooden Mirror&lt;/i&gt; that value becomes relative - tonality is based on self-shading, which depends on the lighting of the work. In &lt;i&gt;Mirrors Mirror&lt;/i&gt; this relativity is multiplied; each element will reflect a different portion of the environment, depending on both its angle and the viewpoint of the observer.&lt;br /&gt;&lt;br /&gt;&lt;div style=&quot;text-align:center&quot;&gt;&lt;iframe title=&quot;YouTube video player&quot; src=&quot;http://www.youtube.com/embed/zEepuIzOjXc&quot; allowfullscreen=&quot;&quot; frameborder=&quot;0&quot; height=&quot;311&quot; width=&quot;500&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;In many cases these media art arrays depart from the two-dimensional grid entirely. Robert Henke and Christopher Bauder&#39;s &lt;a href=&quot;http://www.monolake.de/concerts/atom.html&quot;&gt;&lt;i&gt;ATOM&lt;/i&gt;&lt;/a&gt; (2007-8) (above) is an eight-by-eight grid of white helium balloons, each one fitted with LED illumination and tethered to a computer-controlled winch. The grid becomes a mobile, configurable light-form, tightly coupled with Henke&#39;s electronic soundtrack in live performance. 
This  array lowers its resolution drastically, and limits its generality in  one dimension (monochrome elements), but extends its reach (literally)  into a third axis. ART+COM&#39;s 2008 &lt;a href=&quot;http://www.artcom.de/en/projects/project/detail/kinetic-sculpture/&quot;&gt;kinetic sculpture&lt;/a&gt; at the BMW museum  uses a similar configuration, but a higher &quot;resolution&quot; - in this case 714 metal spheres are suspended from motorised cables, forming a smoothly undulating matrix - a sort of programmed corporate ballet. &lt;a href=&quot;http://troika.uk.com/cloud&quot;&gt;&lt;i&gt;Cloud&lt;/i&gt;&lt;/a&gt; (2008), a sculpture in Heathrow airport by London art and design firm  Troika, illustrates another permutation: here a 2d array forms the skin  of a large three-dimensional sculptural form. In this case the elements  are electromagnetic flip-dots - components often used in airport signage  before it was overtaken by glowing rectangles. As in Rozin&#39;s &lt;i&gt;Mirrors&lt;/i&gt;, Troika consciously exploit the materiality, gestural character and the  sound of these retro-pixels. rAndom International&#39;s 2010 &lt;a href=&quot;http://www.random-international.com/designer-of-the-future-2010/&quot;&gt;&lt;i&gt;Swarm Light&lt;/i&gt;&lt;/a&gt; demonstrates a &quot;saturated&quot; 3d array. 
The work consists of three  cubic arrays of white LED lights, each ten elements per side; these  cubic volumes host a flowing, flickering &quot;swarm&quot; of sound-responsive  agents which traverse the space, brightening or dimming the array as they move.&lt;br /&gt;&lt;br /&gt;&lt;div style=&quot;text-align:center&quot;&gt;&lt;iframe src=&quot;http://player.vimeo.com/video/12525044&quot; frameborder=&quot;0&quot; height=&quot;225&quot; width=&quot;400&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;The  work of British designers &lt;a href=&quot;http://www.uva.co.uk/&quot;&gt;United Visual Artists&lt;/a&gt; offers a useful  longitudinal study in post-screen imaging; in particular their work  addresses one of the central technical players in this field, LED  lighting. UVA&#39;s first &lt;a href=&quot;http://www.uva.co.uk/work/massive-attack#/1&quot;&gt;project&lt;/a&gt; involved a huge LED array that formed the  stage set of Massive Attack&#39;s 100th Window tour. Unlike more  screenful video backdrops, this low-res grid had an inescapable  presence, hung directly behind the band and looming over the stage.  Rather than an image machine, UVA treat the grid as a luminous  dot-matrix for the twitching alphanumeric characters of real-time data.  In subsequent work UVA develop this approach in a number of directions,  but digitally articulated light - enabled by the LED - is a recurring  theme. In &lt;a href=&quot;http://www.uva.co.uk/work/monolith#/0&quot;&gt;&lt;i&gt;Monolith&lt;/i&gt;&lt;/a&gt; (2006) UVA use a pair of large, full-colour  LED screens, but treat them as a dynamic light source rather than a  substrate for images; subtle gradients and washes of colour spill over  the audience and into the installation environment, coupled with  generated sound. 
In &lt;a href=&quot;http://www.uva.co.uk/work/volume#/0&quot;&gt;&lt;i&gt;Volume&lt;/i&gt;&lt;/a&gt; (2006), another installation  piece, the array elements are long vertical LED strips, again treated as  generators of pattern, colour and sound; the work forms an interactive  field as each element responds to nearby activity. In the context  of this steady dismemberment of the screen, UVA&#39;s later work &lt;i&gt;&lt;a href=&quot;http://www.uva.co.uk/work/speed-of-light#/0&quot;&gt;The Speed of Light&lt;/a&gt; &lt;/i&gt;is  notable in that it leaves LED arrays aside entirely. Instead it  uses installed lasers manipulated into dynamic, walk-in calligraphy, as  if light had been finally prised away from its digital substrate, and  turned loose in the environment.&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjetgFTulEr5Vd5ItG-yy0bPzGYEgW84TlODfFs4-b9-YQhacwrRSVb3V-m9VddxjF7k3cWLwiT4SQ1_45RcZTCMn4j0WryYLq0cc6DFg6zeCMSU8DcXD_T5buWOt-iBbnhLt8X9g/s1600/uva_sol.jpg&quot;&gt;&lt;img style=&quot;display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 500px; height: 333px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjetgFTulEr5Vd5ItG-yy0bPzGYEgW84TlODfFs4-b9-YQhacwrRSVb3V-m9VddxjF7k3cWLwiT4SQ1_45RcZTCMn4j0WryYLq0cc6DFg6zeCMSU8DcXD_T5buWOt-iBbnhLt8X9g/s400/uva_sol.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5593803301867811858&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;Beyond  their formal similarities, these arrays share some core approaches and  contexts which provide a coherent portrait of a sort of post-screen  practice. These works adopt one key feature of the screen - the  &quot;generality&quot; of an articulated substrate - but trade it off to varying  extents for more &quot;specificity&quot; - exploiting the local, particular  materiality of the work and its environment. 
This specificity is also  technological, reflecting a practice that crafts hard- and software into  idiosyncratic configurations, rather than using off-the-shelf  infrastructure. Light is a strong theme, in particular the solid-state,  digitally addressable light of the LED (essentially a free-floating  pixel). However the optical in these arrays is always tightly coupled  with other modalities, especially sound, which is either a cherished  byproduct of the array mechanism (as in Rozin&#39;s &lt;i&gt;Mirrors&lt;/i&gt; and Troika&#39;s &lt;i&gt;Cloud&lt;/i&gt;) or generated by the array elements themselves (as in the drummers and UVA&#39;s &lt;i&gt;Volume&lt;/i&gt;).  A quality of liveness is linked with the turn to specificity and  being-in-the-environment; from the &quot;live data&quot; of UVA&#39;s Massive Attack  show, to the live interaction and generation of their later  installations, to the live video driving Rozin&#39;s &lt;i&gt;Mirrors&lt;/i&gt;.  Performance and temporary installation are the dominant forms here -  emphasising the intensified moment, rather than the any-time of static  content.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;3. Projection Mapping and Extruded Light&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;In one sense these arrays present a disintegration of the screen - they pull its elements apart and embed them in the environment. In another strain of media arts practice, something like the converse occurs, though with what I will argue are similar interests and agendas. In this approach screen-like technologies are used intact, rather than decomposed; but their function and their relationship to the environment is transformed. 
These works reverse-engineer the digital image, exploiting its digital (general) malleability in order to fit it to a specific environment.&lt;br /&gt;&lt;br /&gt;The work of Norwegian artist &lt;a href=&quot;http://hcgilje.wordpress.com/&quot;&gt;HC Gilje&lt;/a&gt; illustrates one trajectory of this second post-screen approach. Gilje&#39;s work from the late 90s was in live digital video, with his ensemble &lt;a href=&quot;http://www.retnull.com/242pilots/&quot;&gt;242.pilots&lt;/a&gt;. This practice was linked to the burgeoning activity in experimental electronic music at the time; here again, performance, improvisation and the intensified moment - what Gilje &lt;a href=&quot;http://www.bek.no/%7Ehc/text_html/getreal_txt.htm&quot;&gt;calls&lt;/a&gt; an &quot;extended now&quot; - are central concerns, though the work is strongly screen-focused in its results. Gilje&#39;s work over the following decade demonstrates another path towards the post-screen. His &lt;i&gt;nodio&lt;/i&gt; (2005-) is a custom software system for distributing video content across collections of linked &quot;nodes&quot;. In &lt;a href=&quot;https://hcgilje.wordpress.com/2007/04/29/drifter/&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;drifter&lt;/span&gt;&lt;/a&gt; (2006) these nodes are manifest as a ring of twelve screens which form a linked audiovisual interspace. With &lt;a href=&quot;https://hcgilje.wordpress.com/2007/04/29/dense/&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;dense&lt;/span&gt;&lt;/a&gt; (2007) these nodes take on a more sculptural presence - hanging strips of fabric illuminated from both sides with a tailored video-projection. Here Gilje adapts the screen technology of the video projector to a sculptural environment, pushing it one step away from image and towards illumination. 
The work also depends on a specific material surface - the translucent weave of the fabric enables the double-sided layering of pattern.&lt;br /&gt;&lt;br /&gt;&lt;div style=&quot;text-align:center&quot;&gt;&lt;iframe src=&quot;http://player.vimeo.com/video/2048269&quot; frameborder=&quot;0&quot; height=&quot;321&quot; width=&quot;400&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;a href=&quot;https://hcgilje.wordpress.com/2008/10/31/shift-v2-relief-projection-installation/&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;shift&lt;/span&gt;&lt;/a&gt; (2008) (above) develops this approach: a technique known as projection mapping, in which the projected image is reverse-engineered to fit a specific surface. In shift Gilje&#39;s nodes are simple rectangular boxes, constructed from plywood. Using more custom software, the artist illuminates a cluster of these boxes with precisely mapped projected images. The coupled sound emanates from speakers housed in each box, so the objects are again audiovisual (and acoustically distinct) nodes; Gilje composes material for this environment in search of what he &lt;a href=&quot;https://hcgilje.wordpress.com/2007/04/29/nodio-1st-generation/&quot;&gt;terms&lt;/a&gt; &quot;audiovisual powerchords&quot; - moments of intense juxtaposition and interplay.  In &lt;a href=&quot;http://hcgilje.wordpress.com/2009/10/14/blink/&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;blink&lt;/span&gt;&lt;/a&gt; (2009) Gilje dispenses with the boxes, instead treating the bare installation space.  Simple, geometric elements - angular lines and bands of tone and colour - are reflected and modulated by the space itself, diffusing from irregular polished floorboards and painted walls. 
The work plays the room with articulated light, carefully matched to its geometry in a way that heightens our awareness of the interplay of space, light and materials.&lt;br /&gt;&lt;br /&gt;Projection mapping has recently flourished in &quot;visualist&quot; practice across art, design and performance contexts; trompe-l&#39;oeil architectural facades are one popular genre, manipulating the built environment by rendering it with a tailored skin of articulated light (see for example Urbanscreen&#39;s &lt;a href=&quot;http://vimeo.com/5595869&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;Kubik 555&lt;/span&gt;&lt;/a&gt;). German designers Grosse 8 and Lichtfront &lt;a href=&quot;http://vimeo.com/9697015&quot;&gt;demonstrate&lt;/a&gt; a logical extension of the technique, using multiple projectors to create an &quot;augmented sculpture&quot; in the round.&lt;br /&gt;&lt;br /&gt;&lt;div style=&quot;text-align:center&quot;&gt;&lt;iframe src=&quot;http://player.vimeo.com/video/3114617?title=0&amp;amp;byline=0&amp;amp;portrait=0&amp;amp;color=ecf000&quot; frameborder=&quot;0&quot; height=&quot;170&quot; width=&quot;400&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;Another notable example is &lt;span style=&quot;font-style: italic;&quot;&gt;Scintillation&lt;/span&gt; (2009) (above) by Xavier Chassaing, a digital stop-motion film in which projection mapping is used to layer a domestic environment with luminous swirls of particles, igniting the petals of an orchid and tracing the curves of a moulded plaster cornice. As in Gilje&#39;s &lt;span style=&quot;font-style: italic;&quot;&gt;blink&lt;/span&gt;, &lt;span style=&quot;font-style: italic;&quot;&gt;Scintillation&lt;/span&gt; emphasises the ambience of the projected light - reflections and diffusions are heightened by hand-held macro cinematography, artfully producing an impression of material texture. 
But in the process it raises some interesting problems for our analytical premise - a shift from the screenful image to something more live and specific. For &lt;span style=&quot;font-style: italic;&quot;&gt;Scintillation&lt;/span&gt; is absolutely a work of filmmaking; here projection mapping - the tailored materialisation of the image - is deployed as a technique for producing generalisable, substrate-independent image content.&lt;br /&gt;&lt;br /&gt;&lt;div style=&quot;text-align:center&quot;&gt;&lt;iframe src=&quot;http://player.vimeo.com/video/14958082?portrait=0&quot; frameborder=&quot;0&quot; height=&quot;225&quot; width=&quot;400&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;The final example in this survey addresses the same tension. In their recent short film &lt;a href=&quot;http://berglondon.com/blog/2010/09/14/magic-ipad-light-painting/&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;Making Future Magic&lt;/span&gt;&lt;/a&gt; (above), London design agency Berg give an ingenious demonstration of both the material turn of post-screen imaging, and its recuperation as image content. Berg developed an animation technique combining multiple-exposure stop-motion with a hand-held source of articulated light - specifically the glowing rectangle of the moment, Apple&#39;s iPad. 3d forms are digitally modelled and animated, then decomposed into sequences of 2d slices. These slices are then replayed into the environment, and thus recomposed into 3d forms, by moving an iPad screen over successive still frame exposures. As Berg term it, this is &quot;extruded light&quot; - as in UVA&#39;s latest work, it&#39;s as if light itself has been unpinned from its substrate. The results are a beguiling combination of loose, organic light painting with simple 3d geometry and DSLR imaging. 
As Berg &lt;a href=&quot;http://berglondon.com/blog/2010/09/14/magic-ipad-light-painting/&quot;&gt;frame&lt;/a&gt; the work, it fits entirely within the post-screen turn proposed here. Responding to a brief around &quot;a magical version of future media&quot;, Berg are &quot;exploring how surfaces and screens look and work in the world ... finding playful uses for the increasingly ubiquitous ‘glowing rectangles’ ...&quot;.  Again the material embeddedness of this articulated light is emphasised - the way it reflects from puddles and diffuses through foliage. Screen as object in the world, rather than window to somewhere else. As in &lt;span style=&quot;font-style: italic;&quot;&gt;Scintillation&lt;/span&gt; however the inescapable irony is that the outcomes of this work are entirely bound up with screenful images - with the generalising infrastructures and distribution pipelines of social image sharing, print-on-demand and networked video.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;4. Transmateriality and Presence Culture&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;To recap briefly: the ubiquitous digital screen is characterised by both generality - an ability to display any content at all - and self-effacing slightness - it tries to make itself disappear as a neutral substrate for content. In contrast to these tendencies this paper describes two distinct but parallel strains of &quot;post-screen&quot; practice in the media arts and design. Arrays mimic the grid configuration of the screen, but lower its resolution and emphasise the material presence of the array elements - their local and individual specificity is balanced with their malleable generality (their ability to carry anything-at-all). 
Projection mapping and &quot;extruded light&quot; practices also emphasise specificity, materiality and a local, performative being-in-the-world, but they do so by different means - exploiting the malleability of the digital screen (and the computational representations it hosts) in order to make it intensely site-specific. To the extent that they both adapt and resist the attributes of our familiar glowing rectangles, we could describe these practices as post-screen, but this &quot;post&quot; is nothing like a conscious critique, let alone a revolutionary break. However hard they may pull towards specificity and local materiality, they are readily - by design or necessity - recaptured as screen fodder.&lt;br /&gt;&lt;br /&gt;Both these post-screen tendencies and their screenful recuperation can be usefully framed through the notion of &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/03/notes-on-transmateriality.html&quot;&gt;transmateriality&lt;/a&gt;, a concept that attempts to capture a fundamental duality in digital (and other) media: they are everywhere and always material, yet often function as if they are immaterial. In a transmaterial view media always operate as local material instances (this is their aspect of specificity) yet retain the ability to hold specificity at bay - resisting the contingencies of flux - to create a functional generalisation in which this pixel is the same as that one, the email I send is the same as the one you receive, and one node on the network is much the same as any other.&lt;br /&gt;&lt;br /&gt;In the glowing rectangle paradigm functional generality is entirely dominant. The work considered here, on the other hand, revels more in the pleasures and practices of specificity - the clatter of servo-actuated wood or the play of light on this particular wall. 
In their push towards liveness (of interaction or data), performativity, their integration of sound, and their emphasis on evanescent materiality, these works evoke what Hans Ulrich Gumbrecht would &lt;a href=&quot;http://teemingvoid.blogspot.com/2007/10/notes-on-gumbrechts-production-of.html&quot;&gt;call&lt;/a&gt; &quot;presence culture&quot; - that mode of apprehending the world which is characterised by fleeting but intense moments of being, and a sense of being part of the world of things, rather than outside it, looking in. Gumbrecht constructs presence in opposition to a dominant &quot;meaning culture&quot;, in which the essence of material things can be obtained only through interpretation. Gumbrecht describes the relationship between these poles as one of dynamic oscillation. &quot;Presence phenomena&quot; become &quot;effects of&quot; presence, &quot;because we can only encounter them within a culture that is predominantly a meaning culture. ... [T]hey are necessarily surrounded by, wrapped into, and perhaps even mediated by clouds and cushions of meaning&quot;.&lt;br /&gt;&lt;br /&gt;In exactly the same way we find an inevitable oscillation here between screen and post-screen. We can align the screen with generality and meaning culture, and the post-screen with specificity and presence culture; but here too the post-screen is evanescent and elusive, existing largely within the dominant screen culture. However, this is not to discount the utopian aspirations of a post-screen practice, which might instead be located through the perspective of transmateriality. For in echoing the screen, or in literally bending it to the local, present and specific, these works operate as reminders of the ubiquitous and everyday materiality of our media, of the fact that despite appearances, every glowing rectangle is already local and specific. 
If that specificity is latent, then these works demonstrate practical strategies for making it explicit; from hardware hacking to modular LEDs and custom software, they participate in what might be called &quot;&lt;a href=&quot;http://teemingvoid.blogspot.com/2009/01/transduction-transmateriality-and.html&quot;&gt;expanded computing&lt;/a&gt;&quot;, using the malleability of digital media to reactivate its presence - and thus our presence, too - in the world of things.</description><link>http://teemingvoid.blogspot.com/2011/04/after-screen-array-aesthetics-and.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgumB2vpnIBnwQXcYCQb1SAijkkyJSo91CYe_qkajexxp1alPb9F6yn00uFG9h8sCfCKOlVqLc9Bzj1YMBf4f8VbdIFnMHmtde9yX3nr1-U8mT7OtaJPr7ZnrYyKmR03DueEJLDFQ/s72-c/Retina-Display-stack-Display.jpg" height="72" width="72"/><thr:total>3</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-5374254058967075280</guid><pubDate>Sat, 12 Mar 2011 05:37:00 +0000</pubDate><atom:updated>2011-03-12T20:59:10.029+11:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">digital design</category><category domain="http://www.blogger.com/atom/ns#">generative design</category><title>Dynamic Design - Three Systems</title><description>&lt;div style=&quot;text-align: left;&quot;&gt;After far too long, some vaguely formed thoughts on dynamic design, after some converging links and conversations in the last few days. One of these is the new MIT Media Lab &lt;a href=&quot;http://www.thegreeneyl.com/mit-media-lab-identity-1&quot;&gt;identity&lt;/a&gt; from &lt;a href=&quot;http://www.thegreeneyl.com/&quot;&gt;The Green Eyl&lt;/a&gt;. 
It&#39;s nice work, but also seems like a new high-water mark for generative or dynamic graphic design.&lt;/div&gt;&lt;br /&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;iframe src=&quot;http://player.vimeo.com/video/20488585?portrait=0&quot; width=&quot;500&quot; height=&quot;281&quot; frameborder=&quot;0&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;br /&gt;&lt;div&gt;In this approach graphic design goes &quot;meta&quot;: from controlling a set of visual relationships, to controlling a &lt;i&gt;system for generating&lt;/i&gt; visual relationships. As in other generative forms, there&#39;s a payoff in the multiplicity of the results - one logo? try 40,000 variants! But more interesting I think is a change in the locus of design, where design happens. To see one of these new logos is to appreciate its colour, form and typography; to see a dozen is to begin to appreciate the variety and coherence of relationships the designers have created. But to engage with the work fully - for example, if you&#39;re a Media Lab person, to generate your own personal variant - is to understand that it&#39;s not a logo, or even a family of logos, but a dynamic &quot;identity system&quot;. And because this is a logo, any instance of it comes to signify not only the client, but the dynamic system, or to be more specific, a quality of &quot;dynamic systemness.&quot; What better brand value for the Media Lab? &lt;/div&gt;&lt;div&gt;&lt;a href=&quot;http://farm6.static.flickr.com/5057/5514561530_818985677a.jpg&quot;&gt;&lt;img src=&quot;http://farm6.static.flickr.com/5057/5514561530_818985677a.jpg&quot; border=&quot;0&quot; alt=&quot;&quot; style=&quot;display: block; margin-top: 0px; margin-right: auto; margin-bottom: 10px; margin-left: auto; border:none; text-align: center; cursor: pointer; width: 321px; height: 500px; &quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div&gt;There is also an aspect of something like performance here. 
Instead of an imprint or copy, the logo becomes a performance of its system (signifying that system in the process). In discussing this with my friend &lt;a href=&quot;http://twitter.com/gravitron&quot;&gt;Geoff Hinchcliffe&lt;/a&gt; the other day, he pointed out that this is really nothing new for graphic design. Any book jacket design is inevitably a performance of the genre (or system) that is &quot;book jacket&quot;. Graphic forms like book covers are often highly constrained and rule-driven, just like this new-fangled dynamic design. Geoff&#39;s own &lt;a href=&quot;http://www.flickr.com/photos/twittermodernclassics/&quot;&gt;Twitter Modern Classics&lt;/a&gt; demonstrates this beautifully, rendering tweets through the design templates of Penguin&#39;s iconic paperbacks. If cover design is a set of rules, it&#39;s no surprise a computer can execute them so effectively. Here dynamic design is a poetic strategy, a way to strike sparks of joy and surprise from the collision of form and content.&lt;br/&gt;&lt;/div&gt;&lt;br /&gt;&lt;div&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-yOyCg9mzdltQG_rkTaeBOhA2owTMBwYmbuKJST24PUlOKUeztu_0RFyOLvrBC3TtVxFvaMaH5QI16Ja3aF2v_vJwaTbm4SuWRTM16n1y3UgItaNFP9hV7gfOCqkQvLUx-rcWNg/s1600/namegen.png&quot;&gt;&lt;img src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-yOyCg9mzdltQG_rkTaeBOhA2owTMBwYmbuKJST24PUlOKUeztu_0RFyOLvrBC3TtVxFvaMaH5QI16Ja3aF2v_vJwaTbm4SuWRTM16n1y3UgItaNFP9hV7gfOCqkQvLUx-rcWNg/s400/namegen.png&quot; border=&quot;0&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5583084588904005938&quot; style=&quot;display: block; margin-top: 0px; margin-right: auto; margin-bottom: 10px; margin-left: auto; text-align: center; cursor: pointer; width: 400px; height: 345px; &quot; /&gt;&lt;/a&gt;&lt;br/&gt;&lt;/div&gt;&lt;div&gt;The final example comes by way of &lt;a href=&quot;http://nevolution.typepad.com/&quot;&gt;Daniel Neville&lt;/a&gt;, another designer with an 
interest in dynamic identity systems (or &lt;a href=&quot;http://nevolution.typepad.com/theories/2010/06/thesis_introduction.html&quot;&gt;relational design&lt;/a&gt;). In fact, the &lt;a href=&quot;http://www.lastappetite.com/melbourne-restaurant-names/&quot;&gt;Melbourne Restaurant Name Generator&lt;/a&gt; is not really design at all. If anything, it&#39;s something like generative satire, in the same genre that can turn out &lt;a href=&quot;http://www.elsewhere.org/journal/bandname/&quot;&gt;band names&lt;/a&gt; or even whole &lt;a href=&quot;http://pdos.csail.mit.edu/scigen/&quot;&gt;computer science papers&lt;/a&gt;. The Melbourne Restaurant thing works for me because it is such acute satire: from the recycled decor to the uber-limited menu and the obsession with bicycles, it just nails a whole urban scene. As a piece of generative satire it works by portraying its target as formulaic - as &lt;i&gt;nothing but a system&lt;/i&gt; - while also milking the absurd juxtapositions that its own system generates. It seems to cleave a complex thing at its joints, revealing underlying elements and relationships. Maybe there&#39;s something here for dynamic graphic design? 
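To make the &quot;system for generating visual relationships&quot; idea concrete, here is a hypothetical sketch - not the actual generator behind the Media Lab identity - of how a dynamic identity system can hand each person a personal variant: hash a name into a seed, and let the seed deterministically fix the parameters of that variant. Same name, same logo; different name, (almost certainly) different logo.

```python
import hashlib
import random

# Hypothetical dynamic-identity sketch: a person's name seeds a random
# generator, and that generator fixes the parameters of their variant.
# The palette and shape parameters here are invented placeholders.
PALETTE = ["red", "yellow", "blue", "green", "magenta"]

def personal_variant(name, n_shapes=3):
    """Deterministic logo parameters for one person."""
    # Hash the name so the same name always yields the same seed.
    seed = int(hashlib.sha256(name.encode("utf-8")).hexdigest(), 16)
    rng = random.Random(seed)
    return [{"colour": rng.choice(PALETTE),
             "x": rng.random(), "y": rng.random(),
             "angle": rng.uniform(0, 360)}
            for _ in range(n_shapes)]
```

The point of the sketch is the locus-of-design shift discussed above: the designer authors the parameter ranges and palette, while each concrete logo is a performance of that system.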
&lt;/div&gt;</description><link>http://teemingvoid.blogspot.com/2011/03/dynamic-design-three-systems.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://farm6.static.flickr.com/5057/5514561530_818985677a_t.jpg" height="72" width="72"/><thr:total>1</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-3596722647927229062</guid><pubDate>Fri, 13 Aug 2010 01:04:00 +0000</pubDate><atom:updated>2010-08-16T14:13:56.976+10:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">architecture</category><category domain="http://www.blogger.com/atom/ns#">critique</category><category domain="http://www.blogger.com/atom/ns#">generative design</category><category domain="http://www.blogger.com/atom/ns#">models</category><category domain="http://www.blogger.com/atom/ns#">space</category><category domain="http://www.blogger.com/atom/ns#">voronoi</category><title>Uniform Diversity: Space-Filling and the Voronoi diagram</title><description>&lt;span style=&quot;font-style: italic;&quot;&gt;This post is a short excerpt from a paper recently published in &lt;/span&gt;&lt;a href=&quot;http://www.informaworld.com/smpp/title%7Edb=all%7Econtent=g925294108&quot;&gt;Architectural Theory Review&lt;/a&gt;&lt;span style=&quot;font-style: italic;&quot;&gt; 15(2) - a special issue on architecture and geometry with lots of good (Australian) stuff. My paper (&lt;a href=&quot;http://dl.dropbox.com/u/5622512/spacefilling.pdf&quot;&gt;pdf&lt;/a&gt;) is a critical look at space-filling geometry in generative design. 
It touches on several things already blogged - the Water Cube and &lt;/span&gt;&lt;a style=&quot;font-style: italic;&quot; href=&quot;http://teemingvoid.blogspot.com/2008/08/links-tangents-grids-and-foam-olympic.html&quot;&gt;ideal foams&lt;/a&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;, and some  &lt;/span&gt;&lt;a style=&quot;font-style: italic;&quot; href=&quot;http://teemingvoid.blogspot.com/2008/09/limits-to-growth.html&quot;&gt;generative&lt;/a&gt;&lt;span style=&quot;font-style: italic;&quot;&gt; &lt;/span&gt;&lt;a style=&quot;font-style: italic;&quot; href=&quot;http://teemingvoid.blogspot.com/2009/01/jcsmr-curls.html&quot;&gt;projects&lt;/a&gt;&lt;span style=&quot;font-style: italic;&quot;&gt; that use self-limiting growth. This excerpt looks at the Voronoi diagram as a space-filling process.&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://upload.wikimedia.org/wikipedia/commons/thumb/2/20/Coloured_Voronoi_2D.svg/500px-Coloured_Voronoi_2D.svg.png&quot;&gt;&lt;img style=&quot;display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 450px; height: 450px;&quot; src=&quot;http://upload.wikimedia.org/wikipedia/commons/thumb/2/20/Coloured_Voronoi_2D.svg/500px-Coloured_Voronoi_2D.svg.png&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;&lt;p&gt;The &lt;a href=&quot;http://en.wikipedia.org/wiki/Voronoi_diagram&quot;&gt;Voronoi diagram&lt;/a&gt; has become a ubiquitous motif in recent generative architecture and design. It, too, can be usefully read as a space-filling model. In formal terms, a Voronoi diagram is a way of dividing up space into regions so that, for a given set of sites within that space, each region contains all points in the space that are closer to one site than any other. 
The result is also foam-like, but as a model the Voronoi diagram has attributes quite different to the ideal Kelvin or &lt;a href=&quot;http://en.wikipedia.org/wiki/Weaire%E2%80%93Phelan_structure&quot;&gt;Weaire Phelan&lt;/a&gt; foams.&lt;/p&gt;  &lt;p&gt;Firstly, while the formal model is again based on a strict set of conditions (in this case, proximity), it works with an arbitrary input - the given sites - rather than defining a regular structure. The Voronoi is thus a procedural geometric structure in a way that the ideal foams are not: its structure emerges through the application of a specific process or algorithm to a given set of inputs. In this way, the specific spatial relations between neighbouring cells depend on, and emerge locally from, the given spatial relations of the specified sites. This trait also gives the Voronoi model a kind of malleability; sites can be added, removed, or moved, and the spatial structure readily adapts.&lt;/p&gt;  &lt;p&gt;Again we can read off the attributes of the Voronoi as a model in this way. It is multiplicitous, but in a different way to the grid-like uniformity of the foam models. In this case, the multiplicity can, in fact, be irregular: the sites can be positioned anywhere within a given space. However, this does not amount to much in terms of heterogeneity: while the sites can be positioned arbitrarily, the procedure, and the relation between sites that it encodes, is entirely uniform. Each site, taken as a formal entity, is identical to every other; this is a kind of uniform diversity. Like the foam models, the Voronoi diagram treats space as indefinite and extensive: it can go on forever, its only practical limit being the computational resources required to calculate the diagram. The model itself has no way of defining an edge or bound. Finally, the variability of the Voronoi can be phrased another way, as arbitrariness; in other words, there is no inherent reason for a given site to be where it is. 
There is nothing internal to the model that can generate that differentiation.&lt;/p&gt;&lt;p&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg3maldJJjoCQu8m8CpNS7d3PMVXH4al6j6tVPmx63CV3NJl8lp9TIdsdIPdIQHUQprDD8B1sSPk3_3SmwuLO4dncdeRy6qpwBhBgU3Qch82qzPDJmgKRYRPS3Hfs6QCTgQEoyIdw/s1600/newson_voronoi_shelf.jpg&quot;&gt;&lt;img style=&quot;display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 400px; height: 297px; border: medium none;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg3maldJJjoCQu8m8CpNS7d3PMVXH4al6j6tVPmx63CV3NJl8lp9TIdsdIPdIQHUQprDD8B1sSPk3_3SmwuLO4dncdeRy6qpwBhBgU3Qch82qzPDJmgKRYRPS3Hfs6QCTgQEoyIdw/s400/newson_voronoi_shelf.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5504703862521181010&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;/p&gt;&lt;br /&gt;In Marc Newson&#39;s &lt;i&gt;Voronoi Shelf,&lt;/i&gt; for example (above), we see a characteristically organic variety: a range of cell sizes and shapes, different wall thicknesses, all in an agreeable state of harmony. The form gives an impression of inherent logic. It is as if the harmony of the relationships between the cell sites assures us that there must be a reason for them to be as they are. This is unsurprising, given our familiarity with, and aesthetic attunement to, naturally occurring structures that resemble these cells. The visual signature carries an association of organic logic: but in formal fact the cell sites are arbitrary, that is to say, designed. 
There is no necessary relation of one to another, only (we can but assume) a designer&#39;s choice, which is concealed by an appearance, much as the surface of the Water Cube &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/08/links-tangents-grids-and-foam-olympic.html&quot;&gt;conceals&lt;/a&gt; the regularity of its foam model.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs8Rd8tH8NiMq5yflTsxvdBtb2CkbUX4inVtZyIfIKC-MCMVMnme4yEWd6CLzDgXdkUwdIHpgUfSyyfS-wBxwTdH-uslrbm7bQJUAXZezIpOrh73rtUSU95aIHo6ePqYl6rg9q4g/s1600/gourdakis_algo_body.jpg&quot;&gt;&lt;img style=&quot;display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 400px; height: 300px; border: medium none;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs8Rd8tH8NiMq5yflTsxvdBtb2CkbUX4inVtZyIfIKC-MCMVMnme4yEWd6CLzDgXdkUwdIHpgUfSyyfS-wBxwTdH-uslrbm7bQJUAXZezIpOrh73rtUSU95aIHo6ePqYl6rg9q4g/s400/gourdakis_algo_body.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5504706036921313426&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;Conversely, some designers directly address the arbitrary input to the Voronoi diagram, treating it as an opportunity and exploiting the malleability of the model. As Dimitris Gourdoukis &lt;a href=&quot;http://object-e.blogspot.com/2007/08/voronoi-study-part-3-algorithmic-body.html&quot;&gt;writes&lt;/a&gt;, &quot;the problem of deciding on the initial set of points is, I think, one of the most interesting in relation to voronoi diagrams.&quot; In Gourdoukis&#39; &lt;i&gt;Algorithmic Body&lt;/i&gt; project (above), the locations of the Voronoi sites are specified by a second generative system, a cellular automaton; here the Voronoi acts as a geometric filter, interpreting and interpolating one set of spatial data into another. 
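The partition rule defined above - every point belongs to the region of its nearest site - is simple enough to sketch directly. This is an illustrative fragment, not code from the paper; the site coordinates are arbitrary placeholders, which is exactly the point:

```python
# Minimal Voronoi sketch: assign each cell of a grid to its nearest site.
# The sites themselves are arbitrary input, as the post argues.

def nearest_site(point, sites):
    """Index of the site closest to `point`, by squared Euclidean distance."""
    px, py = point
    return min(range(len(sites)),
               key=lambda i: (sites[i][0] - px) ** 2 + (sites[i][1] - py) ** 2)

def voronoi_regions(width, height, sites):
    """Map each (x, y) grid cell to the index of its nearest site."""
    return {(x, y): nearest_site((x, y), sites)
            for x in range(width) for y in range(height)}

sites = [(2, 2), (7, 3), (4, 8)]
regions = voronoi_regions(10, 10, sites)

# Malleability: move a site and recompute - the partition simply adapts.
sites[0] = (0, 9)
regions2 = voronoi_regions(10, 10, sites)
```

Note that the procedure is entirely uniform - every site is treated identically - which is the &quot;uniform diversity&quot; the excerpt describes: all the apparent variety comes from where the sites happen to be placed.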
In Marc Fornes&#39; &lt;a href=&quot;http://tvmny.blogspot.com/2007/01/070122polytop.html&quot;&gt;&lt;i&gt;POLYTOP&lt;/i&gt;&lt;/a&gt;, the designer proposes a mass-customised product in which customers can design the point cloud that drives the Voronoi geometry; here a problem of arbitrary choice is turned into a feature, towards uniqueness and specificity.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOFEQz8JkP89usND7kdUT00uYdFmlc56uT4hiTi5-HJm5gOONUl1CMnVRydJ4r76ImUtjQi_Wmd35uwjJNzivIq-KXmkxmD1pPrnsbBno4LROvIF0RbeezQLJOwbMnUyXRsXwM9Q/s1600/fornes_polytop.jpeg&quot;&gt;&lt;img style=&quot;display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 400px; height: 332px; border: medium none;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOFEQz8JkP89usND7kdUT00uYdFmlc56uT4hiTi5-HJm5gOONUl1CMnVRydJ4r76ImUtjQi_Wmd35uwjJNzivIq-KXmkxmD1pPrnsbBno4LROvIF0RbeezQLJOwbMnUyXRsXwM9Q/s400/fornes_polytop.jpeg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5504706929359537874&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;</description><link>http://teemingvoid.blogspot.com/2010/08/uniform-diversity-space-filling-and.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg3maldJJjoCQu8m8CpNS7d3PMVXH4al6j6tVPmx63CV3NJl8lp9TIdsdIPdIQHUQprDD8B1sSPk3_3SmwuLO4dncdeRy6qpwBhBgU3Qch82qzPDJmgKRYRPS3Hfs6QCTgQEoyIdw/s72-c/newson_voronoi_shelf.jpg" height="72" width="72"/><thr:total>1</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-4591874358219036010</guid><pubDate>Sun, 06 Jun 2010 07:31:00 +0000</pubDate><atom:updated>2010-06-07T16:44:50.880+10:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">3d</category><category 
domain="http://www.blogger.com/atom/ns#">climatechange</category><category domain="http://www.blogger.com/atom/ns#">data</category><category domain="http://www.blogger.com/atom/ns#">dataesthetics</category><category domain="http://www.blogger.com/atom/ns#">exhibition</category><category domain="http://www.blogger.com/atom/ns#">fabrication</category><category domain="http://www.blogger.com/atom/ns#">processing</category><category domain="http://www.blogger.com/atom/ns#">projects</category><title>Measuring Cup</title><description>&lt;span style=&quot;font-style: italic;&quot;&gt;Measuring Cup &lt;/span&gt;is a little dataform project I&#39;ve been working on this year. It&#39;s currently showing in &lt;a href=&quot;http://www.arttech.com.au/insideout&quot;&gt;Inside Out&lt;/a&gt;, an exhibition of rapid-prototyped miniatures at &lt;a href=&quot;http://www.object.com.au/&quot;&gt;Object&lt;/a&gt; gallery, Sydney.&lt;br /&gt;&lt;br /&gt;This form presents 150 years of Sydney temperature data in a little cup-shaped object about 6cm high. The data comes from the UK Met Office&#39;s HadCRUT &lt;a href=&quot;http://www.metoffice.gov.uk/climatechange/science/monitoring/subsets.html&quot;&gt;subset&lt;/a&gt;, released earlier this year; for Sydney it contains monthly average temperatures back to 1859.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://farm3.static.flickr.com/2725/4333605279_bb43711531.jpg&quot;&gt;&lt;img style=&quot;display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 375px; height: 500px;&quot; src=&quot;http://farm3.static.flickr.com/2725/4333605279_bb43711531.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;The structure of the form is pretty straightforward. Each horizontal layer of the form is a single year of data; these layers are stacked chronologically bottom to top - so 1859 is at the base, 2009 at the lip. 
The profile of each layer is basically a radial line graph of the monthly data for that year. Months are ordered clockwise around a full circle, and the data controls the radius of the form at each month. The result is a sort of squashed ovoid, with a flat spot where winter is (July, here in the South).&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://farm5.static.flickr.com/4039/4334340750_8c78742eef.jpg&quot;&gt;&lt;img style=&quot;display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 500px; height: 375px;&quot; src=&quot;http://farm5.static.flickr.com/4039/4334340750_8c78742eef.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;The data is smoothed using a moving average - each data point is the average of the past five years&#39; data for that month. I did this mainly for aesthetic reasons, because the raw year-to-year variations made the form angular and jittery. While I was reluctant to do anything to the raw values, moving average smoothing is often applied to this sort of data (though as always the devil is in the &lt;a href=&quot;http://www.climate4you.com/DataSmoothing.htm&quot;&gt;detail&lt;/a&gt;).&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://farm5.static.flickr.com/4033/4334347218_b493878496.jpg&quot;&gt;&lt;img style=&quot;display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 500px; height: 375px;&quot; src=&quot;http://farm5.static.flickr.com/4033/4334347218_b493878496.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;The punchline really only works when you hold it in your hand. The cup has a lip - like any good cup, it expands slightly towards the rim. It fits nicely in the hand. But this lip is, of course, the product of the warming trend of recent decades. 
So there&#39;s a moment of haptic tension there, between ergonomic (human-centred) pleasure and the evidence of how our human-centredness is playing out for the planet as a whole.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://farm3.static.flickr.com/2735/4307860580_30c42044b3.jpg&quot;&gt;&lt;img style=&quot;display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 500px; height: 489px;&quot; src=&quot;http://farm3.static.flickr.com/2735/4307860580_30c42044b3.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;The form was generated using &lt;a href=&quot;http://processing.org/&quot;&gt;Processing&lt;/a&gt;, exported to STL via &lt;a href=&quot;http://labelle.spacekit.ca/supercad/&quot;&gt;superCAD&lt;/a&gt;, then cleaned up in &lt;a href=&quot;http://meshlab.sourceforge.net/&quot;&gt;Meshlab&lt;/a&gt;. The render above was done in Blender - it shows the shallow tick marks on the inside surface that mark out 25-year intervals. Overall the process was pretty similar to that for the &lt;a href=&quot;http://teemingvoid.blogspot.com/2009/10/weather-bracelet-3d-printed-data.html&quot;&gt;Weather Bracelet&lt;/a&gt;. 
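For the curious, the layer construction described above can be sketched roughly as follows. This is my reconstruction, not the original Processing sketch; base_radius and scale are hypothetical placeholders for whatever mapping the actual Cup uses.

```python
import math

# Sketch of the Measuring Cup mapping: smooth each month with a moving
# average over the preceding years, then turn one year's twelve values
# into the radii of one horizontal layer of the form.

def smooth(yearly, year_idx, month, window=5):
    """Average of the past `window` years' values for one month.
    `yearly` is a list of 12-value lists, one per year, oldest first."""
    lo = max(0, year_idx - window + 1)
    vals = [yearly[y][month] for y in range(lo, year_idx + 1)]
    return sum(vals) / len(vals)

def layer_outline(yearly, year_idx, base_radius=20.0, scale=1.0, steps=12):
    """(x, y) vertices of one layer: months spaced evenly around a full
    circle, with the smoothed data setting the radius at each month."""
    pts = []
    for month in range(steps):
        angle = 2 * math.pi * month / steps
        r = base_radius + scale * smooth(yearly, year_idx, month)
        pts.append((r * math.cos(angle), r * math.sin(angle)))
    return pts
```

Stacking one such outline per year, bottom to top, gives the full cup form ready for meshing and export; a cool month shrinks the radius, which is what produces the flat winter spot and, in the last few decades, the warming lip.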
One interesting difference in this case is that consistently formatted global data is readily available, so it should be relatively easy to make a configurator that will let you print a Cup from your local data.</description><link>http://teemingvoid.blogspot.com/2010/06/measuring-cup.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://farm3.static.flickr.com/2725/4333605279_bb43711531_t.jpg" height="72" width="72"/><thr:total>2</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-906141746443216156</guid><pubDate>Wed, 19 May 2010 01:20:00 +0000</pubDate><atom:updated>2016-10-22T14:34:36.503+11:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">advertising</category><category domain="http://www.blogger.com/atom/ns#">critique</category><category domain="http://www.blogger.com/atom/ns#">data</category><category domain="http://www.blogger.com/atom/ns#">dataesthetics</category><category domain="http://www.blogger.com/atom/ns#">materiality</category><category domain="http://www.blogger.com/atom/ns#">motion graphics</category><title>This is Data? Arguing with Data Baby</title><description>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
These IBM commercials are gorgeous, lavish examples of modern motion  graphics from &lt;a href=&quot;http://motiontheory.com/&quot;&gt;Motion Theory&lt;/a&gt;. Like some of the agency&#39;s earlier &lt;a href=&quot;http://dev.motiontheory.com/nikegolf/&quot; id=&quot;ia5-&quot; title=&quot;work&quot;&gt;work&lt;/a&gt;,  and a handful of &lt;a href=&quot;http://teemingvoid.blogspot.com/2007/04/procedurally-hip-generative-motion.html&quot; id=&quot;kh59&quot; title=&quot;other&quot;&gt;other&lt;/a&gt; &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/07/radioheads-data-melancholy.html&quot; id=&quot;jkf4&quot; title=&quot;examples&quot;&gt;examples&lt;/a&gt; noted here, these ads show how  code-literate design (could we call it the &lt;a href=&quot;http://processing.org/&quot; id=&quot;em5v&quot; title=&quot;P&quot;&gt;P&lt;/a&gt; factor?) is  transforming this field. For all those reasons, I love this work; but it  also really bothers me. I&#39;ll try to explain.&lt;br /&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
&lt;iframe src=&quot;https://player.vimeo.com/video/40036840?title=0&amp;byline=0&amp;portrait=0&quot; width=&quot;640&quot; height=&quot;360&quot; frameborder=&quot;0&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;
&lt;p&gt;&lt;a href=&quot;https://vimeo.com/40036840&quot;&gt;IBM - Data Baby&lt;/a&gt; from &lt;a href=&quot;https://vimeo.com/stanchypants&quot;&gt;John Stanch&lt;/a&gt; on &lt;a href=&quot;https://vimeo.com&quot;&gt;Vimeo&lt;/a&gt;.&lt;/p&gt;
&lt;/div&gt;
&lt;br /&gt;
The opening line of this voiceover says it all, really. &lt;i&gt;This is data&lt;/i&gt;. Making that call - defining what data is - is a powerful cultural gesture right now, because as I&#39;ve argued before, data as an idea or a figure is both highly charged and strangely abstract. It makes a lot of sense for a corporation like IBM to stake a claim on data; this stuff is somehow both blessing and curse, precious and ubiquitous, immaterial and material. IBM promises here to help with the wrangling, but also, most powerfully, to show us what data is.&lt;br /&gt;
&lt;br /&gt;
So, what is data here? In  these commercials data is first and foremost &lt;i&gt;material&lt;/i&gt;. It is a  physical stuff. In &lt;i&gt;Data Baby&lt;/i&gt; it wraps a little infant like some  kind of luminescent placenta, drifting away into the air, thrown off in  shimmering waves as the child breathes. In &lt;span style=&quot;font-style: italic;&quot;&gt;Data Energy&lt;/span&gt; it trails like a  cloud behind a tram, and spins with the blades of a wind turbine. A lot  of the (beautiful) animation work here has been devoted to simulating  behaviour, making this colorful, abstract stuff seem to be tightly  embedded in the world with us. What that means is both coupling it  tightly to real objects, and supplying it with immanent dynamics -  making it drift, disperse or twirl.&lt;br /&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
&lt;iframe src=&quot;https://player.vimeo.com/video/11224684&quot; width=&quot;640&quot; height=&quot;360&quot; frameborder=&quot;0&quot; webkitallowfullscreen mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt;
&lt;p&gt;&lt;a href=&quot;https://vimeo.com/11224684&quot;&gt;IBM Data Energy&lt;/a&gt; from &lt;a href=&quot;https://vimeo.com/michaelsuarez&quot;&gt;Michael Suarez&lt;/a&gt; on &lt;a href=&quot;https://vimeo.com&quot;&gt;Vimeo&lt;/a&gt;.&lt;/p&gt;&lt;/div&gt;
&lt;br /&gt;
The second interesting  property of data here - related to the first - is that it just exists.  Look again at &lt;span style=&quot;font-style: italic;&quot;&gt;Data Baby&lt;/span&gt;, and note that there is no visible sign of this  data being gathered (or rather, made). No oxygen saturation meter, no  wires, no tubes, no electrodes. Not a transducer in sight. Not until the  closing wide shot do we even see a computer. (This is fascinating in  itself; IBM (or their ad agency) gets it that the computer is no longer  the right image, or metaphor, for &quot;information technology&quot;. Neither is  the network; now it&#39;s immanent, abundant data.) In other words data here  is not gathered, measured, stored or transmitted - or not that we can  see. It just is, and it seems to be inherent in the objects it refers  to; &lt;span style=&quot;font-style: italic;&quot;&gt;Data Baby&lt;/span&gt; is &quot;generating&quot; data as easily as breathing.&lt;br /&gt;
&lt;br /&gt;
Completing  this visual data-portrait are some other related themes: data is  multiplicitous and plentiful, it&#39;s diverse (many colours and shapes) but  ultimately harmonious and beautiful - in &lt;span style=&quot;font-style: italic;&quot;&gt;Data Transportation&lt;/span&gt; it looks  like an urban-scale 3d Kandinsky painting.&lt;br /&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
&lt;iframe frameborder=&quot;0&quot; height=&quot;270&quot; src=&quot;https://www.dailymotion.com/embed/video/xdao3d?theme=none&amp;amp;wmode=transparent&quot; width=&quot;480&quot;&gt;&lt;/iframe&gt;&lt;br /&gt;&lt;/div&gt;
&lt;br /&gt;
Several things bother me about this portrayal. The first is the same as the reason I love it: it&#39;s powerfully, seductively beautiful, and this amplifies all my other reservations. The vision of data as material, in the world, is also incredibly seductive; my concern is that we get such pleasure from seeing these rich dynamics play out - that the motes wafting from Data Baby&#39;s skin seem so right - that we overlook the gaps in the narrative. This vision of material data is also frustrating because it has all the ingredients of a far more interesting idea: data is material, or at least it depends on material substrates, but the relationship between data and matter is just that, a relationship, not an identity. Data depends on stuff; always in it, and moving &lt;a href=&quot;http://teemingvoid.blogspot.com/search/label/transmateriality&quot;&gt;transmaterially&lt;/a&gt; through it, but it is precisely &lt;span style=&quot;font-weight: bold;&quot;&gt;not&lt;/span&gt; stuff in itself.&lt;br /&gt;
&lt;br /&gt;
You could say that I&#39;m quibbling about metaphors here, and you&#39;d be right, but metaphors are crucially important because they shape what we think data is, and what it does. Related to data as stuff is this second attribute: data that just is, in the same way that matter is neither created nor destroyed, but just exists. This is crucially, maybe dangerously wrong. Data does not just happen; it is created in specific and deliberate ways. It is generated by sensors, not babies; and those sensors are designed to measure specific parameters for specific reasons, at certain rates, with certain resolutions. Or more correctly: it is gathered by people, for specific reasons, with a certain view of the world in mind, a certain concept of what the problem or the subject is. The people use the sensors to gather the data, to measure a certain chosen aspect of the world.&lt;br /&gt;
&lt;br /&gt;
If we come to accept that data just is, it&#39;s too easy to  forget that it reflects a specific set of contexts, contingencies and  choices, and that crucially, these could be (and maybe should be)  different. Accepting data shaped by someone else&#39;s choices is a tacit  acceptance of their view of the world, their notion of what is  interesting or important or valid. Data is not inherent or intrinsic in  anything: it is constructed, and if we are going to work intelligently  with data we must remember that it can always be constructed some other  way.&lt;br /&gt;
&lt;br /&gt;
Collapsing the real, complex, human / social / technological  processes around data into a cloud of wafting particles is a brilliant  piece of visual rhetoric; it&#39;s a powerful and beautiful story, but it&#39;s  full of holes. If IBM is right - and I think they probably are - about  the dawning age of data everywhere, then we need more than a sort of  corporate-sponsored data mythology. We need real, broad-based, practical and critical data  skills and &lt;a href=&quot;http://www.smashingmagazine.com/2010/05/10/imagine-a-pie-chart-stomping-on-an-infographic-forever/&quot;&gt;literacies&lt;/a&gt;, an understanding of how to make data and do  things with it.&lt;/div&gt;
</description><link>http://teemingvoid.blogspot.com/2010/05/this-is-data-arguing-with-data-baby.html</link><author>noreply@blogger.com (Mitchell)</author><thr:total>15</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-1601539905742399851</guid><pubDate>Fri, 26 Mar 2010 21:26:00 +0000</pubDate><atom:updated>2010-05-21T19:29:24.613+10:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">processing</category><category domain="http://www.blogger.com/atom/ns#">social software</category><category domain="http://www.blogger.com/atom/ns#">visualisation</category><title>commonsExplorer</title><description>A quick bit of cross-promotion. The &lt;a href=&quot;http://creative.canberra.edu.au/cex&quot;&gt;commonsExplorer&lt;/a&gt; is an experimental &quot;big picture&quot; browser for &lt;a href=&quot;http://www.flickr.com/commons/&quot;&gt;Flickr Commons&lt;/a&gt; collections - &lt;a href=&quot;http://meetpi.edublogs.org/&quot;&gt;Sam Hinton&lt;/a&gt; and I started working on it for &lt;a href=&quot;http://mashupaustralia.org/&quot;&gt;MashupAustralia&lt;/a&gt; months ago, and it&#39;s finally ready. 
Read some &lt;a href=&quot;http://visiblearchive.blogspot.com/2010/03/commonsexplorer.html&quot;&gt;background&lt;/a&gt; over on the Visible Archive blog, or &lt;a href=&quot;http://creative.canberra.edu.au/cex&quot;&gt;download the app&lt;/a&gt; and try it out.&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.flickr.com/photos/mtchl/4437599492/&quot; title=&quot;commonsExplorer 1.0 by mtchl, on Flickr&quot;&gt;&lt;img src=&quot;http://farm5.static.flickr.com/4065/4437599492_d0915b79a6.jpg&quot; alt=&quot;commonsExplorer 1.0&quot; style=&quot;border: 0px none ; display: block; text-align: center;&quot; height=&quot;374&quot; width=&quot;500&quot; /&gt;&lt;/a&gt;</description><link>http://teemingvoid.blogspot.com/2010/03/commonsexplorer.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://farm5.static.flickr.com/4065/4437599492_d0915b79a6_t.jpg" height="72" width="72"/><thr:total>1</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-463386188942952402</guid><pubDate>Fri, 11 Dec 2009 22:51:00 +0000</pubDate><atom:updated>2009-12-12T12:42:32.760+11:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">canberra</category><category domain="http://www.blogger.com/atom/ns#">climatechange</category><category domain="http://www.blogger.com/atom/ns#">data</category><category domain="http://www.blogger.com/atom/ns#">materiality</category><category domain="http://www.blogger.com/atom/ns#">opensource</category><title>Data Walks - a #climatedata proposal</title><description>In response to the UK Met Office&#39;s recent data &lt;a href=&quot;http://www.metoffice.gov.uk/climatechange/science/monitoring/subsets.html&quot;&gt;release&lt;/a&gt; and Manuel Lima&#39;s &lt;a href=&quot;http://www.visualcomplexity.com/vc/blog/?p=706&quot;&gt;call&lt;/a&gt; for visualisations, there&#39;s been a flurry of &lt;a 
href=&quot;http://search.twitter.com/search?q=%23climatedata&quot;&gt;#climatedata&lt;/a&gt; activity in the last couple of days, including some revealing &lt;a href=&quot;http://eagereyes.org/data/interactively-explore-climate-data&quot;&gt;visualisations&lt;/a&gt;. Though I&#39;m looking forward to playing with the data myself, this isn&#39;t a post about visualisation. It&#39;s a simpler proposal for a way to make the data tangible.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFQp-Ds9LL2a_n1rLey1RYVqO2Cre5lsGnk1MvJK7Zff_PEkB5W7BwKP1zxIepx7UqA8Wr3VkTlSWgbyKGplRSPQW0nFzmkQVNOQ-SA6sCNxUccQmtmZ87nH6cjgcWTYE5Ee_I9Q/s1600-h/climate_data_walk.jpg&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 450px; height: 266px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFQp-Ds9LL2a_n1rLey1RYVqO2Cre5lsGnk1MvJK7Zff_PEkB5W7BwKP1zxIepx7UqA8Wr3VkTlSWgbyKGplRSPQW0nFzmkQVNOQ-SA6sCNxUccQmtmZ87nH6cjgcWTYE5Ee_I9Q/s400/climate_data_walk.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5414128793597115122&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;Global warming is ultimately a question about change in a single measurement - temperature - over time. One way or another, it can be boiled down to a line graph. How best to make that line tangible? Visualisation is great, but how else could we feel those changes, especially over time? One way would be to walk the data. We could make a kind of giant line graph, in the form of a path or road, then walk from 1850 to 2009. 
According to the Met Office&#39;s &lt;a href=&quot;http://www.metoffice.gov.uk/climatechange/science/monitoring/data-graphic.GIF&quot;&gt;graph&lt;/a&gt; - remixed above with a &lt;a href=&quot;http://www.flickr.com/photos/mrsteven/2589069782/&quot;&gt;picture&lt;/a&gt; of my local landscape - this would be a fairly undulating journey, but the last half especially would be a distinct and noticeable climb. Building this path at a walkable scale seems like hard work though. It would be much easier to use the paths we already have. So, here&#39;s a recipe for a #climatedata walk:&lt;br /&gt;&lt;ol&gt;&lt;li&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Make a graph.&lt;/span&gt; There are all kinds of options here. The Met Office graph shows global difference from a long-term (1961-1990) average. You could for example use local data only, or use raw average temperatures rather than difference from average. You would also need to select a year range from the data - want to walk the whole century or just post-WW2? All the data choices should be made clear to any walkers.&lt;/li&gt;&lt;li&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Fit to landscape.&lt;/span&gt; This is the tricky part. The idea would be to find a walkable route with changes in elevation that fit your line graph well. Finding a perfect fit will be very difficult, but finding an OK fit should be possible. This will involve some scaling questions: how long will the walk be, and how much elevation will it cover? Accessibility, ergonomics, experience design, affect - lots of juicy design decisions here. One crude but easy fitting procedure would be to begin with a route, find its elevation profile, then scale the graph so that its start and end points match the route&#39;s start and end, then note the points where the path and the graph intersect. Maybe some GIS / maps people could help with software tools here for route finding and fitting? 
&lt;/li&gt;&lt;li&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Tick marks.&lt;/span&gt; Walk the route and mark it out in order to make the whole thing legible. Mark out years or decades, as well as temperature variation (elevation). One option for paths with an imperfect fit would be to notate the &lt;span style=&quot;font-style: italic;&quot;&gt;difference&lt;/span&gt; between the path and the graph at certain points, as well as points where the path and the graph intersect. &lt;/li&gt;&lt;li&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Walk.&lt;/span&gt; Again you can imagine many ways to do this, ranging from big organised public walks to smaller private ones. Of course walking often leads to talking - and in a different way to, say, looking at a graph. &lt;/li&gt;&lt;/ol&gt;I should emphasise that I haven&#39;t even tried this yet, but I hope to - Canberrans, if you&#39;re interested in helping organise a walk here, let me know. Wherever you are, if you do try it, let me know - also feel free to adapt / refine / repurpose the procedure. 
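For the curious, the crude endpoint-scaling fit from step 2 could be sketched in a few lines of Python - purely a hypothetical illustration, with a made-up function name and sample numbers, not a finished tool:

```python
def fit_graph_to_route(anomalies, elevations):
    """Linearly rescale a temperature series (one value per year) so its
    first and last values line up with the start and end elevations of a
    route (one value per sample; both lists resampled to equal length)."""
    a0, a1 = anomalies[0], anomalies[-1]
    e0, e1 = elevations[0], elevations[-1]
    # Map the anomaly endpoints onto the route's start and end elevations.
    scale = (e1 - e0) / (a1 - a0) if a1 != a0 else 1.0
    fitted = [e0 + (a - a0) * scale for a in anomalies]
    # Residuals: how far the path sits above/below the scaled graph at
    # each point - the differences you would notate as tick marks.
    residuals = [e - f for e, f in zip(elevations, fitted)]
    return fitted, residuals

fitted, residuals = fit_graph_to_route(
    anomalies=[-0.4, -0.2, 0.0, 0.3, 0.8],  # degrees C vs 1961-90 average
    elevations=[600, 610, 605, 625, 660],   # metres along the route
)
```

Points where a residual is near zero are where the path and the graph intersect; large residuals mark spots where the walk misrepresents the data and a notated correction would help.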
Could be fun, even informative - at the very least, you&#39;ll walk up a hill.</description><link>http://teemingvoid.blogspot.com/2009/12/data-walks-climatedata-proposal.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFQp-Ds9LL2a_n1rLey1RYVqO2Cre5lsGnk1MvJK7Zff_PEkB5W7BwKP1zxIepx7UqA8Wr3VkTlSWgbyKGplRSPQW0nFzmkQVNOQ-SA6sCNxUccQmtmZ87nH6cjgcWTYE5Ee_I9Q/s72-c/climate_data_walk.jpg" height="72" width="72"/><thr:total>5</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-8676875442389288220</guid><pubDate>Sat, 05 Dec 2009 05:28:00 +0000</pubDate><atom:updated>2009-12-06T14:24:55.355+11:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">digital design</category><category domain="http://www.blogger.com/atom/ns#">education</category><category domain="http://www.blogger.com/atom/ns#">mdd</category><category domain="http://www.blogger.com/atom/ns#">opensource</category><title>Readings in Digital Design</title><description>&lt;span&gt;The &lt;a href=&quot;http://www.canberra.edu.au/faculties/arts-design/digital-design&quot;&gt;Master of Digital Design&lt;/a&gt; launched this year with an introductory unit which featured UC alumni &lt;a href=&quot;http://supermanoeuvre.com/&quot;&gt;Supermanoeuvre&lt;/a&gt;, and turned out some &lt;a href=&quot;http://www.flickr.com/photos/mdigitaldesign/sets/72157622759333277/&quot;&gt;great work&lt;/a&gt;. Next year it ramps up, with more units and more students - very exciting. I&#39;m currently preparing &quot;Readings in Digital Design&quot;, a history and theory unit that presents some key concepts in this nascent, multidisciplinary field (or meta-field). While developing the unit I&#39;ve also been thinking about how to make the whole course &quot;open&quot; in the broadest sense - accessible, transparent, connective, collaborative. 
There&#39;s a tangle of technical and institutional issues here which I have no single solution to, so in the meantime I&#39;ll take a &quot;small pieces loosely joined&quot; approach&lt;/span&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt; - &lt;/span&gt;&lt;span&gt;this post is the first of those small pieces - the draft reading list at the core of the new unit.&lt;/span&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;span&gt;&lt;br /&gt;&lt;/span&gt;&lt;span&gt;The list attempts to sample the breadth of digital design practices and approaches - so it spans cyberculture, architecture, product design, interaction design, and media art. It also mixes historical sources, academic&lt;/span&gt;&lt;span&gt; articles, blog posts and web video, for the same reason, to give a sense of the range of contexts and discourses at work here.&lt;/span&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt; &lt;/span&gt;&lt;span&gt;With the exception of a couple of firewalled papers (thanks Wiley and ACM), all the sources are freely available online.&lt;br /&gt;&lt;br /&gt;Feedback very welcome, as well as additions or gap-plugging - especially on open source in digital design, and tangible / physical computing. Please reuse / remix also, and let me know if you do - call it Creative Commons &lt;a href=&quot;http://creativecommons.org/licenses/by-nc-sa/2.0/&quot;&gt;by-nc-sa&lt;/a&gt;.&lt;br /&gt;&lt;/span&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;&lt;br /&gt;Readings in Digital Design - &lt;a href=&quot;http://www.canberra.edu.au/faculties/arts-design/digital-design&quot;&gt;Master of Digital Design&lt;/a&gt; 2010&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;&lt;/span&gt;&lt;br /&gt;Being Digital&lt;/span&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Horswill, Ian. 
“&lt;a href=&quot;http://www.cs.northwestern.edu/%7Eian/What%20is%20computation.pdf&quot;&gt;What is Computation?&lt;/a&gt;,” 2007.&lt;/li&gt;&lt;li&gt;&lt;span&gt;&lt;span&gt;Palfreman, Jon. “&lt;a href=&quot;http://video.google.com/videoplay?docid=-7927021653651541860#&quot;&gt;Giant Brains.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;The Machine that Changed the World.&lt;/span&gt; WGBH Boston, 1992. &lt;/span&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span&gt;&lt;span&gt;Rheingold, Howard. “&lt;a href=&quot;http://www.rheingold.com/texts/tft/2.html&quot;&gt;The First Programmer Was a Lady.&lt;/a&gt;” In &lt;span style=&quot;font-style: italic;&quot;&gt;Tools for thought.&lt;/span&gt; Cambridge, Mass.: MIT Press, 2000.&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;&lt;br /&gt;Pre/Histories of Digital Design&lt;/span&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;&lt;span&gt;&lt;span&gt;Kay, Alan. “&lt;a href=&quot;http://www.newmediareader.com/book_samples/nmr-26-kay.pdf&quot;&gt;Personal Dynamic Media.&lt;/a&gt;” In &lt;span style=&quot;font-style: italic;&quot;&gt;The New Media Reader&lt;/span&gt;, edited by Noah Wardrip-Fruin and Nick Montfort. Cambridge, Mass.: MIT Press, 2003.&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span&gt;&lt;span&gt;  &lt;a href=&quot;http://www.youtube.com/watch?v=USyoT_Ha_bA&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;Ivan Sutherland : Sketchpad Demo (1/2)&lt;/span&gt;&lt;/a&gt;, 2007.&lt;br /&gt;&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span&gt;&lt;span&gt;Mark, Earl, Mark Gross, and Gabriela Goldschmidt. “&lt;a href=&quot;http://code.arc.cmu.edu/lab/upload/ecaade2008_069.content.0.pdf&quot;&gt;A Perspective on Computer Aided Design after Four Decades.&lt;/a&gt;” In &lt;span style=&quot;font-style: italic;&quot;&gt;eCAADe 2008: education in computer aided architectural design in europe annual conference&lt;/span&gt;, 2008.   
&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Networks&lt;/span&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Rheingold, Howard. “&lt;a href=&quot;http://www.rheingold.com/texts/tft/14.html&quot;&gt;Xanadu, Network Culture, and Beyond.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;In Tools for thought.&lt;/span&gt; Cambridge, Mass.: MIT Press, 2000.&lt;/li&gt;&lt;li&gt;O&#39;Reilly, Tim. “&lt;a href=&quot;http://oreilly.com/web2/archive/what-is-web-20.html&quot;&gt;What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;O&#39;Reilly Media&lt;/span&gt;, September 30, 2005.&lt;/li&gt;&lt;li&gt;Burke, Anthony. “&lt;a href=&quot;http://www.offshorestudio.net/protocology/networkparadigms.pdf&quot;&gt;Network Paradigms.&lt;/a&gt;” In &lt;span style=&quot;font-style: italic;&quot;&gt;Network Practices,&lt;/span&gt; edited by Anthony Burke and Therese Tierney. Princeton Architectural Press, 2007.&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Open Source&lt;/span&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Raymond, Eric Steven. “&lt;a href=&quot;http://www.catb.org/%7Eesr/writings/cathedral-bazaar/cathedral-bazaar/&quot;&gt;The Cathedral and the Bazaar,&lt;/a&gt;” 2000.&lt;/li&gt;&lt;li&gt;Lessig, Lawrence. 
“&lt;a href=&quot;http://video.google.com/videoplay?docid=7661663613180520595#.&quot;&gt;On Free, and the Differences between Culture and Code&lt;/a&gt;” presented at the 23rd Chaos Communication Congress, Berlin, 2006.&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;http://www.three.org/openart&quot;&gt;The Open Art Network&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;&lt;br /&gt;Designing with Data&lt;/span&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;“&lt;a href=&quot;http://www.wired.com/science/discoveries/magazine/16-07/pb_intro&quot;&gt;The Petabyte Age: Because More Isn&#39;t Just More - More is Different.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;Wired&lt;/span&gt;, June 23, 2008.&lt;/li&gt;&lt;li&gt;Jones, Matt. “&lt;a href=&quot;http://www.slideshare.net/blackbeltjones/data-as-seductive-material-spring-summit-ume-march09&quot;&gt;Data as Seductive Material&lt;/a&gt;” presented at the Umeå Institute of Design Spring Summit, March 2009.&lt;/li&gt;&lt;li&gt;Armitage, Tom. “&lt;a href=&quot;http://berglondon.com/blog/2009/10/23/toiling-in-the-data-mines-what-data-exploration-feels-like/&quot;&gt;Toiling in the data-mines: what data exploration feels like.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;BERG Blog&lt;/span&gt;, October 23, 2009.&lt;/li&gt;&lt;li&gt;Whitelaw, Mitchell. “&lt;a href=&quot;http://journal.fibreculture.org/issue11/issue11_whitelaw.html&quot;&gt;Art Against Information: Case Studies in Data Practice.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;Fibreculture&lt;/span&gt; 11 (2009).&lt;/li&gt;&lt;/ul&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;&lt;br /&gt;Fab!&lt;/span&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Gershenfeld, Neil. 
&quot;&lt;a href=&quot;http://www.youtube.com/watch?v=5n-APFrlXDs&quot;&gt;The beckoning promise of personal fabrication,&lt;/a&gt;&quot; presented at &lt;span style=&quot;font-style: italic;&quot;&gt;TED&lt;/span&gt;, 2007.&lt;/li&gt;&lt;li&gt;Menges, Achim. “&lt;a href=&quot;http://www3.interscience.wiley.com/cgi-bin/fulltext/112653755/PDFSTART&quot;&gt;Manufacturing diversity.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;Architectural Design&lt;/span&gt; 76, no. 2 (2006): 70-77.&lt;/li&gt;&lt;li&gt;Smith, Greg J. “&lt;a href=&quot;http://rhizome.org/editorial/2400&quot;&gt;Means of Production: Fabbing and Digital Art.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;Rhizome&lt;/span&gt;, March 4, 2009.&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Ubiquitous Computing and Urban Informatics&lt;/span&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Weiser, Mark. “&lt;a href=&quot;http://cim.mcgill.ca/%7Ejer/courses/hci/ref/weiser_reprint.pdf&quot;&gt;The computer for the 21st century.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;Scientific American&lt;/span&gt; 256, no. 3 (1991): 66–75. Reprinted in  &lt;span style=&quot;font-style: italic;&quot;&gt;IEEE Pervasive Computing,&lt;/span&gt; January 2002. &lt;/li&gt;&lt;li&gt;Hill, Dan. “&lt;a href=&quot;http://www.cityofsound.com/blog/2008/02/the-street-as-p.html&quot;&gt;The street as platform.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;City of Sound&lt;/span&gt;, February 11, 2008.&lt;/li&gt;&lt;li&gt;Galloway, Anne. “&lt;a href=&quot;http://purselipsquarejaw.org/papers/galloway_culturalstudies_draft.pdf&quot;&gt;Resonances and Everyday Life: Ubiquitous Computing and the City,&lt;/a&gt;” 2003.&lt;/li&gt;&lt;li&gt;Greenfield, Adam. 
“&lt;a href=&quot;http://www.boxesandarrows.com/view/all_watched_over_by_machines_of_loving_grace_some_ethical_guidelines_for_user_experience_in_ubiquitous_computing_settings_1_&quot;&gt;All watched over by machines of loving grace: Some ethical guidelines for user experience in ubiquitous-computing settings.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;Boxes and Arrows&lt;/span&gt;, December 1, 2004.&lt;/li&gt;&lt;li&gt;Haque, Usman. “&lt;a href=&quot;http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/&quot;&gt;Pachube, Patching the Planet: Interview with Usman Haque.&lt;/a&gt;”  Interview by Tish Shute, January 28, 2009.&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Parametricism and its Discontents&lt;/span&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Schumacher, Patrik. “&lt;a href=&quot;http://www.patrikschumacher.com/Texts/Parametricism%20-%20A%20New%20Global%20Style%20for%20Architecture%20and%20Urban%20Design.html&quot;&gt;Parametricism  -  A New Global Style for Architecture and Urban Design,&lt;/a&gt;” 2008.&lt;/li&gt;&lt;li&gt;Jacob, Sam. “&lt;a href=&quot;http://www.strangeharvest.com/2008/12/the-ruins-of-the-future.php&quot;&gt;The Ruins of the Future.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;Strange Harvest,&lt;/span&gt; December 5, 2008.&lt;/li&gt;&lt;li&gt;Love, Tim. “&lt;a href=&quot;http://places.designobserver.com/entry.html?entry=10757&quot;&gt;Between Mission Statement and Parametric Model.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;Design Observer&lt;/span&gt;, May 11, 2009.&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Tangible and Physical Computing&lt;/span&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Moggridge, Bill. 
“&lt;a href=&quot;http://www.designinginteractions.com/downloads/DesigningInteractions_8.pdf&quot;&gt;Multisensory and Multimedia.&lt;/a&gt;” In &lt;span style=&quot;font-style: italic;&quot;&gt;Designing Interactions&lt;/span&gt;. The MIT Press, 2007.&lt;/li&gt;&lt;li&gt;Igoe, Tom. “&lt;a href=&quot;http://www.tigoe.net/blog/category/physicalcomputing/176/&quot;&gt;Physical Computing’s Greatest Hits (and misses).&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;hello.&lt;/span&gt;, July 27, 2008.&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Biomimicry, Complexity and Self-Organisation&lt;/span&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Weinstock, Michael. “&lt;a href=&quot;http://www3.interscience.wiley.com/cgi-bin/fulltext/112653729/PDFSTART&quot;&gt;Self-organisation and material constructions.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;Architectural Design&lt;/span&gt; 76, no. 2 (2006): 34-41.&lt;/li&gt;&lt;li&gt;Bentley, Peter. “&lt;a href=&quot;http://www-misa.cs.ucl.ac.uk/staff/P.Bentley/BES6.pdf&quot;&gt;Climbing Through Complexity Ceilings.&lt;/a&gt;” In &lt;span style=&quot;font-style: italic;&quot;&gt;Network Practices&lt;/span&gt;, edited by Anthony Burke and Therese Tierney. Princeton Architectural Press, 2007.&lt;/li&gt;&lt;li&gt;Kaplinsky, J. “&lt;a href=&quot;http://www3.interscience.wiley.com/cgi-bin/fulltext/112637176/PDFSTART&quot;&gt;Biomimicry versus humanism.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;Architectural Design&lt;/span&gt; 76, no. 1 (2006): 66-71.&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Redesigning Design&lt;/span&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Sanders, Elizabeth, and Pieter Jan Stappers. 
“&lt;a href=&quot;http://www.maketools.com/pdfs/CoCreation_Sanders_Stappers_08_preprint.pdf&quot;&gt;Co-creation and the new landscapes of design.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;CoDesign&lt;/span&gt; 4 (March 2008): 5-18.&lt;/li&gt;&lt;li&gt;Howe, Jeff P. “&lt;a href=&quot;http://www.wired.com/epicenter/2009/03/is-crowdsourcin/&quot;&gt;Is Crowdsourcing Evil? The Design Community Weighs In.&lt;/a&gt;” &lt;span style=&quot;font-style: italic;&quot;&gt;Epicenter | Wired.com&lt;/span&gt;, March 10, 2009.&lt;/li&gt;&lt;/ul&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;&lt;br /&gt;Sustainable Digital?&lt;/span&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Bonanni, Leonardo, Amanda Parkes, and Hiroshi Ishii. “&lt;a href=&quot;http://www.chi2008.org/altchisystem/submissions/submission_leonardo.bonanni_2.pdf&quot;&gt;Future craft: how digital media is transforming product design.&lt;/a&gt;” In&lt;span style=&quot;font-style: italic;&quot;&gt; CHI &#39;08 extended abstracts on Human factors in computing systems&lt;/span&gt;, 2553-2564. Florence, Italy: ACM, 2008.&lt;/li&gt;&lt;li&gt;DiSalvo, Carl, Kirsten Boehner, Nicholas A. Knouf, and Phoebe Sengers. “&lt;a href=&quot;http://portal.acm.org/citation.cfm?id=1518763&quot;&gt;Nourishing the ground for sustainable HCI: considerations from ecologically engaged art.&lt;/a&gt;” In &lt;span style=&quot;font-style: italic;&quot;&gt;Proceedings of the 27th international conference on Human factors in computing systems,&lt;/span&gt; 385-394. Boston, MA, USA: ACM, 2009.&lt;/li&gt;&lt;li&gt;Karsten Schmidt. “&lt;a href=&quot;http://toxi.co.uk/blog/2007/07/sustainablity-and-generative-design.htm&quot;&gt;Sustainablity and generative design.&lt;/a&gt;” toxi.in.process 22 Jul 2007.    
&lt;/li&gt;&lt;/ul&gt;</description><link>http://teemingvoid.blogspot.com/2009/12/readings-in-digital-design.html</link><author>noreply@blogger.com (Mitchell)</author><thr:total>7</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-604064829759593954</guid><pubDate>Tue, 27 Oct 2009 05:09:00 +0000</pubDate><atom:updated>2009-11-26T07:17:52.422+11:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">audiovisual</category><category domain="http://www.blogger.com/atom/ns#">exhibition</category><category domain="http://www.blogger.com/atom/ns#">materiality</category><category domain="http://www.blogger.com/atom/ns#">specificity</category><category domain="http://www.blogger.com/atom/ns#">transmateriality</category><title>Right Here, Right Now - HC Gilje&#39;s Networks of Specificity</title><description>&lt;span style=&quot;font-style: italic;&quot;&gt;This essay was commissioned by &lt;/span&gt;&lt;a style=&quot;font-style: italic;&quot; href=&quot;http://www.kunstsenter.no/&quot;&gt;Hordaland Kunstsenter&lt;/a&gt;&lt;span style=&quot;font-style: italic;&quot;&gt; in Bergen, Norway, to coincide with HC Gilje&#39;s solo exhibition &lt;/span&gt;&lt;a style=&quot;font-style: italic;&quot; href=&quot;http://www.kunstsenter.no/en/hc-gilje-blink/&quot;&gt;blink&lt;/a&gt;&lt;span style=&quot;font-style: italic;&quot;&gt; (video below). 
It looks at Gilje&#39;s recent work - which spans audiovisual installation, performance, hardware, and networked forms - through the notion of specificity (developed earlier &lt;/span&gt;&lt;a style=&quot;font-style: italic;&quot; href=&quot;http://teemingvoid.blogspot.com/2008/08/aspects-of-transmateriality-specificity.html&quot;&gt;here&lt;/a&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;).&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;div style=&quot;margin: 0px auto 10px; text-align: center;&quot;&gt;&lt;object height=&quot;400&quot; width=&quot;500&quot;&gt;&lt;param name=&quot;allowfullscreen&quot; value=&quot;true&quot;&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot;&gt;&lt;param name=&quot;movie&quot; value=&quot;http://vimeo.com/moogaloop.swf?clip_id=7066012&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=00ADEF&amp;amp;fullscreen=1&quot;&gt;&lt;embed src=&quot;http://vimeo.com/moogaloop.swf?clip_id=7066012&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=00ADEF&amp;amp;fullscreen=1&quot; type=&quot;application/x-shockwave-flash&quot; allowfullscreen=&quot;true&quot; allowscriptaccess=&quot;always&quot; height=&quot;400&quot; width=&quot;500&quot;&gt;&lt;/embed&gt;&lt;/object&gt;&lt;/div&gt;&lt;br /&gt;The digital network, where we all spend ever more of our time, is a vast infrastructure of &lt;em&gt;generality&lt;/em&gt;. It deploys a system which is standardised, formally defined, highly structured, and internally consistent. If I send you an email, I do it trusting that the interlinked systems of hard- and software, the protocols for data encoding and transmission, the network switches and servers, will all hold together so that the email you receive is the same as the one I sent. 
Perhaps I&#39;m in Australia, and you are in Norway: we could say that the network &lt;em&gt;generalises&lt;/em&gt; our two points in space - for the network, they are the same. As I draft my email it exists as a pattern of voltages and magnetic flux inside my computer. To transmit that pattern effectively, the digital network must  erase or resist any local errors or inconsistencies that it might encounter along the way, so that it &lt;em&gt;does not matter &lt;/em&gt;if the pattern travels by optical fibre or copper, or in radio waves, or if a boat anchor cut through a cable near Indonesia. It does not matter that your computer is made of different atoms to mine. Those are &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/08/aspects-of-transmateriality-specificity.html&quot;&gt;&lt;em&gt;specificities&lt;/em&gt;&lt;/a&gt; - local, material events and instances. Digital culture, and networked space, absorbs specificities, compensates for them, rectifies them into generality. Wireless broadband and mobile computing make us into human nodes, bathing in shared connective protocols.&lt;br /&gt;&lt;br /&gt;The aesthetics of digital media flow from a related generality, where sound and image are encoded as fields of data. If a pixel is a number, an image is a grid of pixels, video a stream of images, and each of these numbers can take any value at all, then formally, an aesthetics of digital video is only a matter of finding the right values - fishing around in a space containing &lt;em&gt;all possible&lt;/em&gt; digital video. 
If digital media creates this generalised space, &lt;em&gt;anything at all&lt;/em&gt;, the media arts are faced with unavoidable questions: not only what to make - which values to choose, but how to choose them, and why?&lt;br /&gt;&lt;br /&gt;&lt;div style=&quot;margin: 0px auto 10px; text-align: center;&quot;&gt;&lt;object height=&quot;401&quot; width=&quot;500&quot;&gt;&lt;param name=&quot;allowfullscreen&quot; value=&quot;true&quot;&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot;&gt;&lt;param name=&quot;movie&quot; value=&quot;http://vimeo.com/moogaloop.swf?clip_id=3333080&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=00ADEF&amp;amp;fullscreen=1&quot;&gt;&lt;embed src=&quot;http://vimeo.com/moogaloop.swf?clip_id=3333080&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=00ADEF&amp;amp;fullscreen=1&quot; type=&quot;application/x-shockwave-flash&quot; allowfullscreen=&quot;true&quot; allowscriptaccess=&quot;always&quot; height=&quot;401&quot; width=&quot;500&quot;&gt;&lt;/embed&gt;&lt;/object&gt;&lt;/div&gt;&lt;br /&gt;HC Gilje&#39;s work arises from a moment when the anything-at-all of digital video was just opening up, thanks to a combination of new real-time tools, cheap computing power, and some key interdisciplinary influences. &lt;span&gt;Drawing on experimental sound and music, improvisation and performance became important solutions; working live in a specific situation, artists would gather, process, generate, and recombine material. In work from the late 1990s and early 2000s, from Gilje and his collaborators in &lt;a href=&quot;http://retnull.com/242pilots/&quot;&gt;242.pilots&lt;/a&gt;, as well as video ensembles such as &lt;a href=&quot;http://www.granularsynthesis.info/ns/index.php&quot;&gt;Granular Synthesis&lt;/a&gt; and Skot, the result is abstract and intense, a flow of layered digital texture. 
In performance it saturates the body and senses; big screens, big speakers. Instead of the narrative transport of cinema, which takes us somewhere else, this work creates - and is created in - an intensified sense of presence, what Gilje &lt;a href=&quot;http://www.bek.no/%7Ehc/text_html/getreal_txt.htm&quot;&gt;calls&lt;/a&gt; an &quot;extended now&quot;. This methodology is vital; it focuses the open-ended generality of digital media into a point: on &lt;em&gt;this&lt;/em&gt;, rather than &lt;em&gt;anything-at-all&lt;/em&gt;.&lt;br /&gt;&lt;br /&gt;This moment relies on a circuit, a close coupling between artist and media; data flows become experienced events - sounds and images - which in turn inform new data flows, and so on. Audience and performers share a digital-material situation. The &lt;em&gt;specificity&lt;/em&gt; of digital media comes forward; for of course these media are always specific, always local, always embodied; but that specificity is usually suppressed by the functional logic of generality. At the same time though, the processes underway here depend on exactly that generality, on the machine&#39;s ability to rapidly transform data and shift it between instantiations - from the voltages in video memory to the patterns of projected light.&lt;br /&gt;&lt;br /&gt;In &lt;a href=&quot;http://hcgilje.wordpress.com/2007/04/29/nodio-1st-generation/&quot;&gt;&lt;em&gt;nodio&lt;/em&gt;&lt;/a&gt; (2005-) Gilje creates a system of networked audiovisual nodes that process and share image material. Each node generates sound derived from its image, in a process of automatic translation. On one hand this translation is another demonstration of the abstract pliability of the digital - its ability to transform anything into anything (&lt;em&gt;generality&lt;/em&gt;); on the other, its tight audiovisual correspondences generate sparks of material intensity - real events, rather than digital effects (&lt;em&gt;specificity&lt;/em&gt;). 
With these distributed nodes Gilje deploys audiovisual materials in space, creating flows and juxtapositions that function as dynamic sculpture. Of course the formal model of &lt;em&gt;nodio&lt;/em&gt; echoes our most ubiquitous generalising paradigm: the network. Once again, the artist applies this digital tendency for generalisation in order to cultivate instances of specificity - the texture and sensation of the here and now.&lt;br /&gt;&lt;br /&gt;&lt;div style=&quot;margin: 0px auto 10px; text-align: center;&quot;&gt;&lt;object height=&quot;375&quot; width=&quot;500&quot;&gt;&lt;param name=&quot;allowfullscreen&quot; value=&quot;true&quot;&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot;&gt;&lt;param name=&quot;movie&quot; value=&quot;http://vimeo.com/moogaloop.swf?clip_id=3575068&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=00ADEF&amp;amp;fullscreen=1&quot;&gt;&lt;embed src=&quot;http://vimeo.com/moogaloop.swf?clip_id=3575068&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=00ADEF&amp;amp;fullscreen=1&quot; type=&quot;application/x-shockwave-flash&quot; allowfullscreen=&quot;true&quot; allowscriptaccess=&quot;always&quot; height=&quot;375&quot; width=&quot;500&quot;&gt;&lt;/embed&gt;&lt;/object&gt;&lt;/div&gt;&lt;br /&gt;From &lt;a href=&quot;http://hcgilje.wordpress.com/2007/04/29/drifter/&quot;&gt;&lt;em&gt;drifter&lt;/em&gt;&lt;/a&gt; (2006) (above) to &lt;a href=&quot;http://hcgilje.wordpress.com/2007/04/29/dense/&quot;&gt;&lt;em&gt;dense&lt;/em&gt;&lt;/a&gt; (2006) and &lt;a href=&quot;http://hcgilje.wordpress.com/2008/10/31/shift-v2-relief-projection-installation/&quot;&gt;&lt;em&gt;shift&lt;/em&gt;&lt;/a&gt; (2008) (below), Gilje&#39;s audiovisual nodes map out a developing exploration of specificity. 
&lt;em&gt;drifter&lt;/em&gt; deploys standard computer hardware, formed into sculptural modules; in passing material between nodes Gilje begins to break the frame of the screen, creating an implicit inter-space. In &lt;em&gt;dense&lt;/em&gt;, the hardware moves out of the sculptural field, and the screen is further deconstructed. Instead of the frontal configuration of the cinema / computer, these suspended fabric strips are illuminated from both sides with a video &quot;weave&quot;. The familiar architecture of the screen as a blank (general-purpose) substrate containing or supporting image content is reconfigured here; the specific materialities of screen and content overlap. Even more so in &lt;em&gt;shift&lt;/em&gt;, where the nodes are now wooden boxes, illuminated with precisely controlled video projections. As in earlier &lt;em&gt;nodio&lt;/em&gt; works, sound and image are directly related. Here Gilje extends this fusion to the sculptural objects; each node is also its own speaker-box, so that the digital articulation of sound and image is realised, and grounded materially, in the nodes themselves.&lt;br /&gt;&lt;br /&gt;&lt;div style=&quot;margin: 0px auto 10px; text-align: center;&quot;&gt;&lt;object height=&quot;377&quot; width=&quot;500&quot;&gt;&lt;param name=&quot;allowfullscreen&quot; value=&quot;true&quot;&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot;&gt;&lt;param name=&quot;movie&quot; value=&quot;http://vimeo.com/moogaloop.swf?clip_id=1660580&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=00ADEF&amp;amp;fullscreen=1&quot;&gt;&lt;embed src=&quot;http://vimeo.com/moogaloop.swf?clip_id=1660580&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=00ADEF&amp;amp;fullscreen=1&quot; type=&quot;application/x-shockwave-flash&quot; allowfullscreen=&quot;true&quot; allowscriptaccess=&quot;always&quot; 
height=&quot;377&quot; width=&quot;500&quot;&gt;&lt;/embed&gt;&lt;/object&gt;&lt;/div&gt;&lt;br /&gt;These works drive towards a spatial materialisation of audiovisuals: dynamic constellations of AV intensity, fields for what Gilje &lt;a href=&quot;http://hcgilje.wordpress.com/2007/04/29/nodio-1st-generation/&quot;&gt;calls&lt;/a&gt; &quot;audiovisual powerchords&quot;. The projectors, speakers and networks of the &lt;em&gt;nodio&lt;/em&gt; works present one means to this end, deploying existing media technologies. Again we find an interplay of generality and specificity, as Gilje adapts generalising systems - projectors, computers, networks - to realise materialised instances. The &lt;a href=&quot;http://hcgilje.wordpress.com/2008/09/04/wind-up-birds/&quot;&gt;&lt;em&gt;Wind-up Birds&lt;/em&gt;&lt;/a&gt; (2008) (below) represent another angle of approach; Gilje sets video aside, and creates materialised, local, sculpturally autonomous nodes from electronic and mechanical materials. In these robotic woodpeckers digital media and sculptural embodiment are further enmeshed. The birds communicate using digital radio, and their behaviour is programmed in a custom chip; but their sound is simply percussion - a mechanical switch, tapping on a specially constructed wooden slit-drum. Again this is specificity over generality: a loudspeaker is an acoustic shape-shifter, a technology which promises &lt;em&gt;any sound&lt;/em&gt;, in the same way that the screen promises &lt;em&gt;any image&lt;/em&gt;. 
By contrast the &lt;em&gt;Birds&lt;/em&gt; produce only one sound, &lt;em&gt;their sound&lt;/em&gt;, a specific conjunction of solenoid, timber and vibrating air.&lt;br /&gt;&lt;br /&gt;&lt;div style=&quot;margin: 0px auto 10px; text-align: center;&quot;&gt;&lt;object height=&quot;377&quot; width=&quot;500&quot;&gt;&lt;param name=&quot;allowfullscreen&quot; value=&quot;true&quot;&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot;&gt;&lt;param name=&quot;movie&quot; value=&quot;http://vimeo.com/moogaloop.swf?clip_id=1660414&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=00ADEF&amp;amp;fullscreen=1&quot;&gt;&lt;embed src=&quot;http://vimeo.com/moogaloop.swf?clip_id=1660414&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=00ADEF&amp;amp;fullscreen=1&quot; type=&quot;application/x-shockwave-flash&quot; allowfullscreen=&quot;true&quot; allowscriptaccess=&quot;always&quot; height=&quot;377&quot; width=&quot;500&quot;&gt;&lt;/embed&gt;&lt;/object&gt;&lt;/div&gt;&lt;br /&gt;The &lt;em&gt;Birds&lt;/em&gt; will run for a month on their own batteries, strapped to trees, calling to each other and any other creatures nearby. These nodes are unplugged: they begin to come away from the technological support system of mains power and the shelter of the gallery or studio, and move out into the world. As in the artist&#39;s other work the engineering here is inseparable from the artistic agenda; the &lt;em&gt;Birds&lt;/em&gt; are in that sense a realisation of Gilje&#39;s spatial and formal aims, an autonomous constellation of intensities. 
But they also literally expand from there; where the &lt;em&gt;nodio&lt;/em&gt; works explore the composition of spaces within a network of intensities, the &lt;em&gt;Birds&lt;/em&gt; move outwards, creating points of intensity in the wild, and evoking a spatial alertness - a way of being in and listening to the world - that extends beyond the well-marked edges of an artwork. The &lt;em&gt;Birds&lt;/em&gt; are more like an experimental intervention, a digital-material overlay in a complex field of the living and non-living.&lt;br /&gt;&lt;br /&gt;Similarly the &lt;em&gt;Soundpockets&lt;/em&gt; works (both 2007) make small sonic interventions in urban spaces, pursuing local intensification and juxtaposition through directional soundbeams and micro-scale radio transmissions. Once again we find this interplay of the general - the anything-at-all of the digital - and the specific, the here and now. The &quot;extremely local radio stations&quot; of &lt;em&gt;&lt;a href=&quot;http://hcgilje.wordpress.com/2008/03/08/soundpocket-2-extremely-local-radio-stations/&quot;&gt;Soundpockets 2&lt;/a&gt;&lt;/em&gt; form a sort of folded juxtaposition of three layers: globalised network infrastructures and protocols, the traced or mediated locations of field recordings, and the specific time and place of the transmissions. 
Just as &lt;em&gt;&lt;a href=&quot;http://hcgilje.wordpress.com/2008/03/06/soundpocket-1/&quot;&gt;Soundpockets 1&lt;/a&gt;&lt;/em&gt; uses exotic soundbeam acoustics to perturb urban spaces, &lt;em&gt;Soundpockets 2&lt;/em&gt; shows how we can draw in technological infrastructures in order to reconfigure the real environment, creating flows and distributions that form intense moments of difference and specificity.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://www.flickr.com/photos/hcgilje/1467603807/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 375px;&quot; src=&quot;http://farm2.static.flickr.com/1323/1467603807_ffbbbbd471_d.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;In this reading Gilje&#39;s work is partly critical. Pursuing specificity, and an intensified, material experience of the here and now, it pushes against the generalising tendencies of digital media. By the functional logic of the network, each node is formally identical, and must be effectively insulated from its environment. Ubiquitous computing promises us &quot;everyware&quot; - total connectivity, the complete interpenetration of the network and our lived environment [2]. But if the network is a generalising force, if it erases differences between places, what will life in &quot;everyware&quot; be like? Gilje&#39;s work suggests a utopian alternative: networks that are always local in time and space; nodes of right here, right now. 
Gilje&#39;s work strives for what Hans Gumbrecht &lt;a href=&quot;http://teemingvoid.blogspot.com/2007/10/notes-on-gumbrechts-production-of.html&quot;&gt;calls&lt;/a&gt; &quot;presence&quot;; a way of knowing the world that is characterised by intense moments of encounter or revelation - aesthetic experiences that place us in the world, and of it, rather than observing from the intellectual distance of interpretation.&lt;br /&gt;&lt;br /&gt;The beauty of Gilje&#39;s work though is that it not only suggests this prospect, but demonstrates it, makes it happen; and in that sense the work is constructive, rather than critical. In emphasising the specificity of media technologies, Gilje&#39;s work shows us a different way to frame those technologies; as always material, always in the world with us - a view I have called &lt;a href=&quot;http://teemingvoid.blogspot.com/search/label/transmateriality&quot;&gt;transmateriality&lt;/a&gt;.  As Matthew Kirschenbaum &lt;a href=&quot;http://www.otal.umd.edu/%7Emgk/blog/LeavesATrace.pdf&quot;&gt;writes&lt;/a&gt;, &quot;computers ... are material machines dedicated to propagating a behavioral illusion, or call it a working model, of immateriality.&quot; Gilje shows us both sides of this statement, the functional illusion - generality - and its material foundation - specificity. It shows us a way to reframe the network, too; as always local, always specific; a tangle of real flows and propagating patterns; and endless possible ways of reconnecting the world with itself. 
Finally Gilje shows us one crucial role for the artist, in this context: seeking out configurations that intensify, rather than dilute, our sense of being in the world.&lt;/span&gt;</description><link>http://teemingvoid.blogspot.com/2009/10/right-here-right-now-hc-giljes-networks.html</link><author>noreply@blogger.com (Mitchell)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-178513555211698228</guid><pubDate>Wed, 07 Oct 2009 02:00:00 +0000</pubDate><atom:updated>2009-10-09T08:26:46.105+11:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">data</category><category domain="http://www.blogger.com/atom/ns#">fabrication</category><category domain="http://www.blogger.com/atom/ns#">jewelry</category><category domain="http://www.blogger.com/atom/ns#">materiality</category><category domain="http://www.blogger.com/atom/ns#">processing</category><title>Weather Bracelet - 3D Printed Data-Jewelry</title><description>Given my rantings about digital materiality and &lt;a href=&quot;http://teemingvoid.blogspot.com/2009/01/transduction-transmateriality-and.html&quot;&gt;transduction&lt;/a&gt;, fabrication is a fairly obvious topic of interest. I posted &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/12/fabricated-growth-forms-processing-to.html&quot;&gt;earlier&lt;/a&gt; about an experiment with laser-cut generative forms and &lt;a href=&quot;http://ponoko.com/&quot;&gt;Ponoko &lt;/a&gt;- more recently I&#39;ve been playing with 3d-printing via Shapeways, as well as trying out data-driven (or &quot;transduced&quot;) forms. 
This post covers technical documentation as well as some more abstract reflections on this project - creating a wearable data-object, based on 365 days of local (Canberra) weather data.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://farm4.static.flickr.com/3530/3911001063_c8aeec0ab9.jpg&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 375px;&quot; src=&quot;http://farm4.static.flickr.com/3530/3911001063_c8aeec0ab9.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;a href=&quot;http://shapeways.com/&quot;&gt;Shapeways&lt;/a&gt; has good documentation on how to generate models using 3d-modelling software. Here I&#39;ll focus more on creating models using code-based approaches, and Processing specifically. The first challenge is simply building a 3d mesh. I began with this &lt;a href=&quot;http://workshop.evolutionzone.com/2007/04/18/code-3dmeshpde/&quot;&gt;code&lt;/a&gt; from Marius Watz, which introduces a useful process: first, we create a set of 3d points which define the form; then we draw those points using beginShape() and vertex().&lt;br /&gt;&lt;br /&gt;The radial form of the Weather Bracelet model shows how this works. The form consists of a series of house-shaped slices, where the shape of each slice is based on temperature data from a single day. The width is static, the height of the peak is mapped to the daily maximum, and the height of the shoulder (or &quot;eave&quot;) is mapped to the daily minimum. To create the radial form, we simply make one slice per day of data, rotating each slice around a central point. As the diagram below shows, this gets us a ring of slices, but not a 3d-printable form. As in Watz&#39;s sketch, I store each of the vertices in the mesh in an array - in this case I use an array of PVectors, since each PVector conveniently stores x,y and z coordinates. 
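To make the slice-and-rotate construction concrete, here is a minimal sketch in plain Java (the language under Processing). The profile dimensions, scaling constants and temperature-to-height mappings below are illustrative assumptions, not the values or data used in the actual bracelet:

```java
// A minimal sketch of the radial mesh described above, in plain Java
// (the original uses Processing's PVector and beginShape(QUADS)).
// All scaling constants here are illustrative, not the real dataset's.
public class BraceletMesh {
    static final int DAYS = 365;   // one house-shaped slice per day
    static final int POINTS = 5;   // base, eave, peak, eave, base

    // 2D profile of one slice; peak height follows the daily maximum,
    // shoulder ("eave") height follows the daily minimum, width is static.
    static double[][] slice(double maxTemp, double minTemp) {
        double w = 2.0;
        double peak = 5.0 + 0.2 * maxTemp;
        double eave = 5.0 + 0.2 * minTemp;
        return new double[][] {
            {-w / 2, 0}, {-w / 2, eave}, {0, peak}, {w / 2, eave}, {w / 2, 0}
        };
    }

    // Rotate one slice per day around the centre: slice d sits at angle
    // 2*pi*d/365. Result is a [day][point][xyz] grid of vertices, ready
    // to be joined into quads between corresponding points of adjacent
    // slices.
    static double[][][] ring(double[] maxTemp, double[] minTemp, double radius) {
        double[][][] v = new double[DAYS][POINTS][3];
        for (int d = 0; d != DAYS; d++) {
            double a = 2 * Math.PI * d / DAYS;
            double[][] s = slice(maxTemp[d], minTemp[d]);
            for (int p = 0; p != POINTS; p++) {
                double r = radius + s[p][1];  // profile height pushes outward
                v[d][p][0] = r * Math.cos(a);
                v[d][p][1] = r * Math.sin(a);
                v[d][p][2] = s[p][0];         // slice width runs along the axis
            }
        }
        return v;
    }
}
```

In the Processing sketch the points would be PVectors rather than bare arrays, and the surface is then closed inside beginShape(QUADS) by emitting the four corresponding vertices of each pair of neighbouring slices, wrapping from day 364 back to day 0.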
The array has 365 rows (one per day, for each slice) and 5 columns (one for each point in the slice). To make a 3d surface, we just work our way through the array, using beginShape(QUADS) to draw rectangular faces between corresponding points on adjacent slices.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLWUazODPT_Ky_t76q0hPBwmqBRBXGqHu_OEBDHUuZ4JT_BPplJboJQiCxcTKsM5OPkQ_lUsN2WKvUAiWYLTUsXYdztp83NYj5dHWfrAzLBDnq7z3Jm7hytFiYE17F4XznRRhg4w/s1600-h/weather_bracelet_slice_diag.png&quot;&gt;&lt;img style=&quot;border: 0px ; margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 312px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLWUazODPT_Ky_t76q0hPBwmqBRBXGqHu_OEBDHUuZ4JT_BPplJboJQiCxcTKsM5OPkQ_lUsN2WKvUAiWYLTUsXYdztp83NYj5dHWfrAzLBDnq7z3Jm7hytFiYE17F4XznRRhg4w/s400/weather_bracelet_slice_diag.png&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5389696592151446402&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;To save the geometry, I used Guillaume laBelle&#39;s wonderful &lt;a href=&quot;http://labelle.spacekit.ca/supercad/&quot;&gt;SuperCad&lt;/a&gt; library to write an .obj file. I then opened this in &lt;a href=&quot;http://meshlab.sourceforge.net/&quot;&gt;MeshLab&lt;/a&gt;, another excellent open source tool for mesh cleaning and analysis. Because of the way we draw the mesh, it contains lots of duplicate vertex information; in MeshLab we can easily remove duplicate vertices and cut the file size by 50%. MeshLab is also great for showing things like problems with normals - faces that are oriented the wrong way. When generating a mesh with Processing, the &lt;span style=&quot;font-style: italic;&quot;&gt;order&lt;/span&gt; in which vertices are drawn determines which way the face is ... er, facing... 
according to the &lt;a href=&quot;http://www.schorsch.com/kbase/glossary/right_hand_rule.html&quot;&gt;right hand rule&lt;/a&gt;. Curl the fingers of your right hand, and stick up your thumb: if you order the vertices in the direction that your fingers are curling, the face normal will follow the direction of your thumb. Although Processing has a &lt;a href=&quot;http://processing.org/reference/normal_.html&quot;&gt;normal()&lt;/a&gt; function that is supposed to set the face normal, it doesn&#39;t seem to work with exported geometry. Anyhow, the right hand rule works, though it is guaranteed to make you look like a fool as you contort your arm to debug your mesh-building code.&lt;br /&gt;&lt;br /&gt;The next step in this process was integrating rainfall into the form. I experimented with presenting rainfall day-by-day, but the results were difficult to read; I eventually decided to use negative spaces - holes - to present rainfall aggregated into weeks. Because Shapeways charges by printed volume, this had the added attraction of making the model cheaper to print! The process here was to first generate the holes in Processing as cylindrical forms. Unlike the base mesh, each data point (cylinder) is a separate, simple form: this meant I could take a simpler approach to drawing the geometry. I wrote a function that would just generate a single cylinder, then, using rotate() and scale() transformations, made instances of that cylinder at the appropriate spots. Because I wanted the volume of each cylinder to map to rainfall, the radius of each cylinder is proportional to the &lt;span style=&quot;font-style: italic;&quot;&gt;square root&lt;/span&gt; of the aggregated weekly rainfall. As you can see in the grab below, the base mesh and the cylinders are drawn separately, but overlaid; they were also saved out as separate .obj files. 
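The square-root mapping is worth spelling out: at a fixed depth a cylinder's volume grows with its cross-sectional area, that is with the radius squared, so taking the radius proportional to the square root of rainfall makes each hole's volume track the data linearly. A small illustrative sketch in plain Java, with an arbitrary scaling constant k (not the value used in the bracelet):

```java
// Why radius maps to the square root of rainfall: hole volume at a
// fixed depth is pi * r^2 * depth, so r = k * sqrt(rain) makes volume
// grow linearly with the aggregated weekly rainfall. The constant k is
// an arbitrary illustrative choice.
public class RainHoles {
    static double radiusFor(double weeklyRain, double k) {
        return k * Math.sqrt(weeklyRain);
    }

    // Cross-sectional volume of the resulting cylindrical hole.
    static double holeVolume(double weeklyRain, double k, double depth) {
        double r = radiusFor(weeklyRain, k);
        return Math.PI * r * r * depth;
    }
}
```

Doubling a week's rainfall doubles the hole's volume, while the radius grows only by a factor of about 1.41.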
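A side note on the right hand rule above: it can also be checked numerically, since a face's normal is just the cross product of two edge vectors taken in vertex order. A plain-Java sketch, using a triangle for brevity:

```java
// The right hand rule, numerically: for a face whose vertices are
// listed counter-clockwise (seen from outside), the cross product of
// successive edge vectors, (b - a) x (c - a), points along the outward
// normal. Reversing the vertex order flips the normal - producing the
// wrong-facing faces that MeshLab reveals.
public class FaceNormal {
    static double[] normal(double[] a, double[] b, double[] c) {
        double[] u = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
        double[] v = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
        return new double[] {
            u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]
        };
    }
}
```

A triangle in the XY plane wound counter-clockwise yields a normal along +Z; swapping two vertices flips it to -Z, which is exactly what the arm-contorting debugging session diagnoses.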
The final step in the process was to bring both cleaned-up .obj files into &lt;a href=&quot;http://www.blender.org/&quot;&gt;Blender&lt;/a&gt; (more open source goodness) and run a Boolean operation to literally subtract the cylinders from the mesh. This took a while  - Blender was completely unresponsive for a good few minutes - but worked flawlessly.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOHfCSrt3iAw6bWpq0qr4aY7vlWSnuYAe3vJ3-oVKg0MbM2gLMdk_cXfTWklsjyNlMqKLb6zfV3KTbdHOyf8XBfMB0WHZ9e0OMdhLeAukEpLAPAQfyW8anOnsaeHZfs3olLtrouQ/s1600-h/weather_ring_52533.png&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; cursor: pointer; width: 350px; height: 340px;  display: block; &quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOHfCSrt3iAw6bWpq0qr4aY7vlWSnuYAe3vJ3-oVKg0MbM2gLMdk_cXfTWklsjyNlMqKLb6zfV3KTbdHOyf8XBfMB0WHZ9e0OMdhLeAukEpLAPAQfyW8anOnsaeHZfs3olLtrouQ/s400/weather_ring_52533.png&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5390093860409263314&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;   &lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://www.flickr.com/photos/mtchl/3838229165/&quot;&gt;   &lt;img style=&quot;margin: 0px auto 10px; cursor: pointer; width: 349px; height: 313px;  display: block; &quot; src=&quot;http://farm3.static.flickr.com/2427/3838229165_3ce2ca9b03.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Finally, after checking the dimensions, exporting an STL file from MeshLab, and uploading to Shapeways, the waiting; then, the printed form. 
I ordered two prints, one in Shapeways&#39; &lt;a href=&quot;http://www.shapeways.com/materials/white_strong_flexible&quot;&gt;White, Strong and Flexible&lt;/a&gt; material, and the other in &lt;a href=&quot;http://www.shapeways.com/materials/transparent_detail&quot;&gt;Transparent Detail&lt;/a&gt;.  You can clearly see the difference between the materials in these photos. The very small holes tested the printing process in both materials; in the SWF print the smallest holes are completely closed; in the TD material they are open, but sometimes gummed up with residue from the printing process (which comes out readily enough). Overall I think the TD print is much more successful - I like the detail and the translucency of the material, as well as the cross-hatched &quot;grain&quot; that the printing process generates.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://www.flickr.com/photos/mtchl/3911778124/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 375px;&quot; src=&quot;http://farm3.static.flickr.com/2534/3911778124_7f5654997a.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://www.flickr.com/photos/mtchl/3910998129/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 359px;&quot; src=&quot;http://farm3.static.flickr.com/2566/3910998129_fe2f7b0255.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://www.flickr.com/photos/mtchl/3911779102/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 375px; height: 
500px;&quot; src=&quot;http://farm3.static.flickr.com/2544/3911779102_2a181010e9.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;So, a year of weather data, on your wrist - as a proof of concept the object works, but as a wearable and as a data-form it needs some refinement. As a bracelet it&#39;s just functional - the sizing is about right, but the sharp corners of the profile are scratchy against the skin. As a data-form, it could do with some simple reference points to make the data more readable - I&#39;m thinking of small tick-marks on the inner edge to indicate months, and perhaps some embossed text indicating the year and location. More post-processing work in Blender, I think.&lt;br /&gt;&lt;br /&gt;Another line of development is to do versions with other datasets - and hey, if you&#39;d like one for your city, get in touch. But that also raises some tricky questions of scaling and comparability. The data scaling in this form has been adjusted for this dataset; with another year&#39;s data, the same scaling might break the form - rain holes might eat into the temperature peaks, or overlap each other, for example. A single one-size-fits-all scaling would allow comparisons between datasets, but might make for less satisfying individual objects - and, finding that scaling requires more research.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://www.flickr.com/photos/mtchl/3949216997/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 333px;&quot; src=&quot;http://farm4.static.flickr.com/3456/3949216997_5c972c1efa.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;What has been most enjoyable with this project, though, is the immediate reaction the object evokes in people. 
The significance of the data it embodies, and its scale, seem to give it a sense of value - even preciousness - that has nothing to do with the cost of its production or the human effort involved. The bracelet makes weather data tangible, but also invites an intimate, tactile familiarity. People interpret the form with their fingers, recalling as they do the wet Spring, or that cold snap after the extreme heat of February; it mediates between memory and experience, and between public and private - weather data becomes a sort of shared platform on which the personal is overlaid. The form also shows how the generalising infrastructures of computing and fabrication can be brought back to a highly specific, localised point. This for me is the most exciting aspect of digital fabrication and &quot;mass customisation&quot; - not more choice or user-driven design (which are all fine, but essentially more of the same, in terms of the consumer economy) - but the potential for objects that are intensely and specifically local.</description><link>http://teemingvoid.blogspot.com/2009/10/weather-bracelet-3d-printed-data.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://farm4.static.flickr.com/3530/3911001063_c8aeec0ab9_t.jpg" height="72" width="72"/><thr:total>14</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-6069661115808376400</guid><pubDate>Sat, 22 Aug 2009 21:55:00 +0000</pubDate><atom:updated>2009-08-23T09:27:40.336+10:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">code</category><category domain="http://www.blogger.com/atom/ns#">opensource</category><category domain="http://www.blogger.com/atom/ns#">processing</category><category domain="http://www.blogger.com/atom/ns#">projects</category><title>Tiny Sketching</title><description>As a kind of test pattern to fill the current break in transmission, here are my 
contributions to &lt;a href=&quot;http://openprocessing.org/collections/rhizome.php&quot;&gt;Tiny Sketch&lt;/a&gt;, an &lt;a href=&quot;http://openprocessing.org/&quot;&gt;OpenProcessing&lt;/a&gt; / &lt;a href=&quot;http://rhizome.org/&quot;&gt;Rhizome&lt;/a&gt; competition (open until mid September) for Processing sketches under 200 characters.&lt;br /&gt;&lt;br /&gt;In &lt;a href=&quot;http://openprocessing.org/visuals/?visualID=3480&quot;&gt;Bit Sunset&lt;/a&gt; I just load the pixels[] array, pick a random block of pixels, and add a large number to their value. This process throws up some surprising results as the colour values gradually increase, then start pushing into the alpha bits of the ARGB integer; eventually, as it fills the alpha bits, it settles into a palette of pinks and greens that are gradually smashed into pixel-dust.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://openprocessing.org/visuals/?visualID=3480&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 400px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrCcqS8KsFYdyHpHfv5KpDsH9z0Myk1y8hgTJ6nQffYdvzyqNgbdlbsmsknjNzIZl8ptW1P-XwqSKQB0fWcEy-IelT3B4AMlauxAebxTTAc61_bLc3W1LykpZbwfrEZYb3sjxQVg/s400/bitsunset102804.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5372912114497936338&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;a href=&quot;http://openprocessing.org/visuals/?visualID=3496&quot;&gt;Albers Clock &lt;/a&gt;was an attempt to slow the pace of TinySketch even further; it visualises the current time in the form of an &lt;a 
href=&quot;http://images.google.com/images?q=albers+homage+to+the+square&amp;amp;oe=utf-8&amp;amp;rls=org.mozilla:en-US:official&amp;amp;client=firefox-a&amp;amp;um=1&amp;amp;ie=UTF-8&amp;amp;ei=nn6QSoSTBsOSkAW57eW7Cg&amp;amp;sa=X&amp;amp;oi=image_result_group&amp;amp;ct=title&amp;amp;resnum=4&quot;&gt;Albers&lt;/a&gt; square, with three colours, one each for hour, minute and second. I also like that it creates an image that is synchronous (within timezones, at least), unlike the asynchronous, individualised runtimes of most sketches.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://openprocessing.org/visuals/?visualID=3496&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 400px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhirMlRGvxFt8dFppFWaZsAnsjFd7pl47wwztCYVjltAW9IAVQfXiklG5h8EP4Cw-fySLil4YFc9jOIN78FiBZBb1UqEsfDc8kn_I1skmJM4GnMt72UBagdu119Izbyk2q9pqtKYw/s400/albers.png&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5372924746348235186&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;There are dozens of amazing sketches in this collection - it&#39;s a fascinating microcosm (in every sense) of the current Processing / generative / code art scene. Given the tight constraints it&#39;s not surprising to see some demoscene virtuosity in the code - like Martin Schneider&#39;s &lt;a href=&quot;http://openprocessing.org/visuals/?visualID=3659&quot;&gt;Sandbox&lt;/a&gt;, a physical simulation painting app in 200 characters. There is also some classic software art conceptualism and reflexivity - like Jerome St Clair&#39;s Joy Division &lt;a href=&quot;http://openprocessing.org/visuals/?visualID=3767&quot;&gt;cover&lt;/a&gt; and Kyle McDonald&#39;s &lt;a href=&quot;http://openprocessing.org/visuals/?visualID=3508&quot;&gt;Except&lt;/a&gt;. 
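The colour drift in Bit Sunset follows from how Processing packs a pixel: one 32-bit integer, with alpha in the top byte, then red, green and blue. Adding a large number to the raw integer makes the low channels carry over into the higher ones, and finally into alpha. A plain-Java sketch of the channel unpacking (the pixel value in the usage note is arbitrary):

```java
// The ARGB packing behind Bit Sunset's drift: Processing stores a pixel
// as one 32-bit int - alpha in bits 24-31, red in 16-23, green in 8-15,
// blue in 0-7. Adding a big constant to the raw int overflows blue into
// green, green into red, and finally red into alpha.
public class Argb {
    static int alpha(int c) { return (c >>> 24) % 256; }
    static int red(int c)   { return (c >>> 16) % 256; }
    static int green(int c) { return (c >>> 8) % 256; }
    // floorMod keeps the low byte positive even for negative raw ints.
    static int blue(int c)  { return Math.floorMod(c, 256); }
}
```

For example, Argb.red(0xFF123456) is 0x12; keep adding a constant to the raw int and the changes eventually spill into Argb.alpha(), producing the pinks and greens described above.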
Great to see projects like this - and OpenProcessing itself - reviving applet culture in an open source, web2.0-flavoured way.</description><link>http://teemingvoid.blogspot.com/2009/08/tiny-sketching.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrCcqS8KsFYdyHpHfv5KpDsH9z0Myk1y8hgTJ6nQffYdvzyqNgbdlbsmsknjNzIZl8ptW1P-XwqSKQB0fWcEy-IelT3B4AMlauxAebxTTAc61_bLc3W1LykpZbwfrEZYb3sjxQVg/s72-c/bitsunset102804.jpg" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-1057889406219799207</guid><pubDate>Fri, 15 May 2009 04:44:00 +0000</pubDate><atom:updated>2009-05-17T15:23:12.216+10:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">data</category><category domain="http://www.blogger.com/atom/ns#">dataesthetics</category><category domain="http://www.blogger.com/atom/ns#">landscape</category><category domain="http://www.blogger.com/atom/ns#">materiality</category><title>Landscape, Slow Data and Self-Revelation</title><description>&lt;span style=&quot;font-style: italic;&quot;&gt;This text was an invited contribution to &lt;/span&gt;&lt;a href=&quot;http://kerb17.blogspot.com/&quot;&gt;Kerb 17: Is Landscape Architecture Dead?&lt;/a&gt;&lt;span style=&quot;font-style: italic;&quot;&gt; This looks like a rich volume with a sharp critical edge, and a swathe of interesting material spanning architecture, urbanism, art and landscape. Unfortunately my contribution was edited fairly severely; so here&#39;s the unabridged version. 
Redundancy warning for regular readers: there&#39;s a slight rehash of &lt;/span&gt;&lt;a href=&quot;http://teemingvoid.blogspot.com/2008/07/image-data-and-environment-notes-on.html&quot;&gt;Watching the Sky&lt;/a&gt;&lt;span style=&quot;font-style: italic;&quot;&gt; in here; but afterwards there&#39;s fresh material on landscape / data projects by &lt;/span&gt;&lt;a style=&quot;font-style: italic;&quot; href=&quot;http://www.xs4all.nl/%7Enotnot&quot;&gt;Driessens and Verstappen&lt;/a&gt;&lt;span style=&quot;font-style: italic;&quot;&gt; and &lt;/span&gt;&lt;a style=&quot;font-style: italic;&quot; href=&quot;http://www.haque.co.uk/&quot;&gt;Usman Haque.&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;Data is, we imagine, an immaterial thing; or at least ethereal, made of light and electricity, processed at superhuman speed, transmitted in real time. The everyday world we move in seems dense and slow by comparison. The landscape is slower again; thick, heavy and persistent. At the moment however those two domains, the fast lightness of data and the heavy slowness of the landscape, are urgently linked. We are faced with the prospect of momentous change in the landscape that is somehow both slow and fast; too slow for our real-time culture to grasp, and too fast for the living systems of the landscape to adapt to. 
This paper presents a handful of works that dwell in that disjunction, between landscape and data; not solving it at all, but at least forming links, complicating assumptions, and recasting the relationship between two terms that seem to neatly encapsulate our future.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://www.flickr.com/photos/mtchl/2430618126/in/set-72157604494499057/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; border:none; display: block; text-align: center; cursor: pointer; width: 500px; height: 147px;&quot; src=&quot;http://farm4.static.flickr.com/3101/2430618126_95d5298433.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;In &lt;a href=&quot;http://www.flickr.com/photos/mtchl/sets/72157604494499057/&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;Watching the Sky&lt;/span&gt;&lt;/a&gt; a camera looks out my office window, at the sky and the landscape. A banal view over a university campus to a bushy ridge in Belconnen. The camera takes an image every three minutes; four hundred and eighty images in twenty-four hours. Tethered to a computer, the camera records for weeks at a time; the computer accumulates thousands of images. I think of the images as data, traces of change in the world outside the office window. I visualise, or re-visualise, this image data in the simplest possible way; an automated process &quot;cuts&quot; a narrow vertical slit from the same location in each image, and compiles all these slits together (this is a digital imitation of an analog photographic technique known as &quot;slit-scan&quot;). In the rectangular visualisations the slices are tiled from left to right. 
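The rectangular compilation just described can be sketched in miniature. The following is a minimal Python analogue, not the actual processing pipeline: each "frame" stands in for a camera image as a tiny grid of pixel values, a fixed column is cut from every frame, and the slits are tiled left to right.

```python
# Minimal slit-scan sketch: cut the same vertical column from each
# frame, then tile those slits left to right into one composite image.

def slit(image, x):
    """Extract the vertical slit at column x as a list of pixel values."""
    return [row[x] for row in image]

def compile_slits(images, x):
    """Tile one slit per frame, left to right, into a new image."""
    slits = [slit(img, x) for img in images]
    height = len(slits[0])
    # Transpose: slit i becomes column i of the output.
    return [[s[y] for s in slits] for y in range(height)]

# Three tiny 2x2 "frames"; cut column 0 from each.
frames = [
    [[1, 9], [2, 9]],
    [[3, 9], [4, 9]],
    [[5, 9], [6, 9]],
]
print(compile_slits(frames, 0))  # [[1, 3, 5], [2, 4, 6]]
```

Time runs along the horizontal axis of the output: one column per frame, which is why day/night and cloud patterns read so directly in the compiled images.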
In the radial visualisations slices are gradually rotated so that a twenty-four-hour period spans one complete revolution (the &quot;seam&quot; is at midnight).&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://www.flickr.com/photos/mtchl/2495884169/in/set-72157604494499057/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 278px;&quot; src=&quot;http://farm4.static.flickr.com/3282/2495884169_0e58808235.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;In the resulting images the patterns of change within and between days are immediately visible. As I imagined, day and night, cloud and sky are obvious. The brief, delicate colour shifts of dawn and dusk were more surprising.  Below the horizon, though, patterns appeared that complicated the work&#39;s nominal focus on the sky. It became clear that some of the richest and most revealing data here came from the landscape.  In one of the earliest sketches I found small but distinct variations in the horizon line over the course of a day, and recurring on successive days. I eventually realised that this was caused by the afternoon breeze, shifting foliage by a few pixels within the frame. In other words, subtle changes in the material field of the landscape carried through to the image data. Moreover in many ways the landscape visualises its own internal structure: the trees blowing in the breeze are partly instruments, revealing material changes around them (the breeze); but also data, traceable as pixels. In many images the passage of a shadow across the ground appears as a recurring pattern, an enfolded or multiplexed representation of another set of material interactions; the landscape measures and reveals itself, but not as an object, image or view. 
It is a connective, dynamic, material system; what is revealed are the specific interactions of that system with itself. The image data acts as a kind of core sample, drilling through multiple spatial and material systems, but each is connected outwards, beyond the frame. The wind in the trees doesn&#39;t belong to this image, but like the angle of the sun revealed in the shadow, is an index of a wider system.&lt;br /&gt;&lt;br /&gt;It also became clear that the landscape is densely packed with human, social data which is equally apparent in image data. In the rectangular visualisations presented here stripes of colour are visible towards the bottom of the frame. These are caused by cars, parked illegally under the trees; they form another ad-hoc graph that reflects  cultural, institutional calendars and cycles, though again they are intermingled with other scales and structures.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAil2tNTyABDX1kVTnxyF4mZajxN5mU_MumIR2gmi-chnqTNrJCxTzL4MCsMDTmKVuDGwl91nUi9LJ1FbqZbBpHH3TWos_kiMD6uBptYeVTls56xbbXvVdZhVs9VB7TDnT8-CoLw/s1600-h/tschumi_tulips.jpg&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 290px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAil2tNTyABDX1kVTnxyF4mZajxN5mU_MumIR2gmi-chnqTNrJCxTzL4MCsMDTmKVuDGwl91nUi9LJ1FbqZbBpHH3TWos_kiMD6uBptYeVTls56xbbXvVdZhVs9VB7TDnT8-CoLw/s400/tschumi_tulips.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5336650247107372850&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;Landscape is also cast as a self-revealing instrument in &lt;a href=&quot;http://www.xs4all.nl/%7Enotnot/&quot;&gt;Driessens and Verstappen&#39;s&lt;/a&gt; &lt;a href=&quot;http://www.xs4all.nl/%7Enotnot/tschumitulips/tulips.html&quot;&gt;&lt;span style=&quot;font-style: 
italic;&quot;&gt;Tschumi Tulips&lt;/span&gt;&lt;/a&gt; project. This landscape installation occupied the &lt;a href=&quot;http://www.tschumipaviljoen.org/index.php&quot;&gt;Tschumi Pavilion&lt;/a&gt;, in Hereplein, Groningen, during the northern Spring in 2008. The pavilion is a rectilinear glass container, rising at an angle from the surrounding park. In this installation the artists filled the base of this box with soil and planted over ten thousand white tulips. A matching array of tulips was planted outside, extending the line of the pavilion. Like scientists, the artists set up two identical subjects, but vary their environment: ten thousand tulips inside, ten thousand outside. A webcam &lt;a href=&quot;http://www.xs4all.nl/%7Enotnot/tschumitulips/tulipswebcam.html&quot;&gt;reveals&lt;/a&gt; how these variations in environment are slowly materialised in the life of the tulips. The tulips inside grow, bloom and then, wonderfully, decay more rapidly than their twins outside. As in &lt;span style=&quot;font-style: italic;&quot;&gt;Watching the Sky&lt;/span&gt;, long time spans are compressed into human-scale time and space; and here too digital imaging plays a pragmatic role in that revelation. Deployed in rectangular masses we can easily read the flowers as abstract, sculptural materials; organic pattern and variation enframed and aestheticised. 
But at the same time the work has a kind of deadpan resonance, a rendition of life, and death, inside a greenhouse.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiku4kF7Z1EC8ueQaH27E8wpCXX0PaFtCHcMJd68QFBNTEZdufHcQD_WBWzcPhznJstSLoUW8BCUW3CCo3QE8Aq3VKzcCXKA-lox2Ts7NNZ3LGRM7URW32sP80kLQ8kH1dJ0URJLA/s1600-h/climateclock-01-2.jpg&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 222px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiku4kF7Z1EC8ueQaH27E8wpCXX0PaFtCHcMJd68QFBNTEZdufHcQD_WBWzcPhznJstSLoUW8BCUW3CCo3QE8Aq3VKzcCXKA-lox2Ts7NNZ3LGRM7URW32sP80kLQ8kH1dJ0URJLA/s400/climateclock-01-2.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5336651700682113698&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;The &lt;a href=&quot;http://www.haque.co.uk/climateclock.php&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;Huey-Dewey-Louie Climate Clock&lt;/span&gt;&lt;/a&gt;, by &lt;a href=&quot;http://www.haque.co.uk/&quot;&gt;Usman Haque&lt;/a&gt; and &lt;a href=&quot;http://homepages.gold.ac.uk/rdavis/&quot;&gt;Robert Davis&lt;/a&gt;, addresses the long timescales of environmental change head-on in a proposal that further develops this articulation of slow data and landscape. The clock is a multi-layered system of autonomous machines and material processes. The &quot;Huey&quot; agent slowly builds &quot;accretion mounds&quot; using material extracted from the atmosphere and formed into accumulating conical stacks over the course of a year; like tree rings or geological strata these embed environmental materials directly into a designed representation. The &quot;Dewey&quot; element is a circular array of one hundred transparent containers, in which air and biomass samples are preserved year by year. 
Like Driessens and Verstappen&#39;s &lt;span style=&quot;font-style: italic;&quot;&gt;Tulips&lt;/span&gt;, Haque and Davis propose a biological instrument: one hundred genetically identical daffodils, which are sown and harvested each year, then entombed in the plinths - again a simple grid imposes a layer of invariance that allows the landscape to essentially represent itself, materially. Finally Louie, an autonomous solar-powered robot, gathers soil samples and compresses them into cubes, one per day. The surface of each cube is imprinted with some current data point - chosen by daily popular vote; perhaps oil price, or rainfall. So here fast, real-time, socially selected data comes to rest directly on the slow, material medium of the soil.&lt;br /&gt;&lt;br /&gt;At one stage, not long ago, it may have seemed that we were leaving the landscape behind, or drafting it in only as a support or substrate for the flickering patterns of real-time culture. Even now, that seems possible: the monthly figure for new housing construction, a bellwether for economic growth, is imposed on the landscape by earthmovers and roadbuilders, underscored by raw mounds of earth. 
The works presented here suggest an alternative role, perhaps an alternative future for the landscape; as slow data and slow instrument, a complex material system that can be subtly designed into self-revelation.</description><link>http://teemingvoid.blogspot.com/2009/05/landscape-slow-data-and-self-revelation.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://farm4.static.flickr.com/3101/2430618126_95d5298433_t.jpg" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-6551569568942811928</guid><pubDate>Sun, 03 May 2009 06:30:00 +0000</pubDate><atom:updated>2009-05-04T11:35:08.989+10:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">computation</category><category domain="http://www.blogger.com/atom/ns#">fabrication</category><category domain="http://www.blogger.com/atom/ns#">hardware</category><category domain="http://www.blogger.com/atom/ns#">materiality</category><category domain="http://www.blogger.com/atom/ns#">transmateriality</category><category domain="http://www.blogger.com/atom/ns#">visualisation</category><title>Transduction, Transmateriality, and Expanded Computing</title><description>In common usage a transducer is a device that converts one kind of energy to another. Wikipedia &lt;a href=&quot;http://en.wikipedia.org/wiki/Transducer&quot;&gt;lists&lt;/a&gt; a fantastic variety of transducers, mapping out links between thermal, electrical, magnetic, electrochemical, kinetic, optical and acoustic energy. In this form transducers are everywhere: a light bulb transduces electrical energy into visible light (and some heat). A loudspeaker transduces fluctuations in voltage into physical vibrations that we perceive as sound.&lt;br /&gt;&lt;br /&gt;In analog media, transduction is overt (&lt;span style=&quot;font-style: italic;&quot;&gt;put the needle on the record...&lt;/span&gt;). 
But digital media are riddled with it too. Input and output devices all contain transducers: the keyboard transduces motion into voltage; the screen transforms voltage into light; the hard drive mediates between voltage and electromagnetic fields. A printer takes in patterns of voltage and emits patterns of ink on a page. Strictly, transduction refers only to transformations between different energy types; here I want to extend it to talk about all the propagating matter and energy within something like a computer, as well as those between that system and the rest of the world. From this &lt;a href=&quot;http://teemingvoid.blogspot.com/search/label/transmateriality&quot;&gt;transmaterial&lt;/a&gt; perspective a computer is a cluster of linked mechanisms and substrates; a machine for shifting patterns through time and space.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://www.flickr.com/photos/rreis/2299314911/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 450px; height: 337px;&quot; src=&quot;http://farm3.static.flickr.com/2270/2299314911_79e2cf05fd_d.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;If this sounds unfamiliar, it&#39;s only by historical accident. &lt;a href=&quot;http://www.diycalculator.com/sp-mechcomp.shtml&quot;&gt;Mechanical computers&lt;/a&gt;, where these patterns are physically perceptible, predate electrical (let alone digital) ones by centuries (above: a replica of Konrad Zuse&#39;s &lt;a href=&quot;http://en.wikipedia.org/wiki/Z1_%28computer%29&quot;&gt;Z1&lt;/a&gt;, a mechanical computer from 1936. Image by &lt;a href=&quot;http://www.flickr.com/photos/rreis/2299314911/&quot;&gt;rreis&lt;/a&gt;). Materially, our current computers are more or less &lt;a href=&quot;http://en.wikipedia.org/wiki/Black_box&quot;&gt;black box&lt;/a&gt; systems. 
Their transductions come as a sort of preconfigured bundle or network, a set of familiar relations constructed again by mixtures of hard- and software, protocols, standards: generalising frameworks. I press a key, a letter appears; this is all I need to know. Click &quot;OK&quot;. No user-serviceable parts inside.&lt;br /&gt;&lt;br /&gt;Except that currently, across the media arts and a whole slew of other fields, the computer is undergoing a rich and productive decomposition. It&#39;s composting, to borrow a &lt;a href=&quot;http://en.wikiquote.org/wiki/Bruce_Sterling&quot;&gt;Sterlingism&lt;/a&gt;. This goes under all kinds of different names: hardware hacking, device art, homebrew electronics, physical computing. Such practices mount a direct assault on the computer as a material black box, literally and figuratively cracking it open, hooking it up to new inputs and outputs, extending and expanding its connections with the environment. Microcontrollers like the &lt;a href=&quot;http://www.arduino.cc/&quot;&gt;Arduino&lt;/a&gt; present us with nothing but a row of bare I/O pins. Finally we can tackle the question of what should go in, and what should come out: of transduction. A whole generation of artists, designers, nerds and tinkerers are taking up soldering irons and doing just that. 
Below: the &lt;a href=&quot;http://www.openobject.org/opensourceurbanism/Spoke-o-dometer_Overview&quot;&gt;Spoke-o-dometer&lt;/a&gt; from &lt;a href=&quot;http://www.roryhyde.com/&quot;&gt;Rory Hyde&lt;/a&gt; and &lt;a href=&quot;http://www.openobject.org/scottmitchell/&quot;&gt;Scott Mitchell&#39;s&lt;/a&gt; &lt;a href=&quot;http://www.openobject.org/opensourceurbanism/About&quot;&gt;Open Source Urbanism&lt;/a&gt; project.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2OmKnp6Ek8wJLg_3k2qjQJNFL4Osbve1O7z0nbFnHNo5ivyjdTry3GOttntzDaNucdEoF28UllWc-OANnDUDBuGrJIBv1kHe7jqeCI7ZWdyRK36uaOXZU3yMgI4Gy5HaYxgVqVQ/s1600-h/spoke_o_dometer.jpg&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 300px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2OmKnp6Ek8wJLg_3k2qjQJNFL4Osbve1O7z0nbFnHNo5ivyjdTry3GOttntzDaNucdEoF28UllWc-OANnDUDBuGrJIBv1kHe7jqeCI7ZWdyRK36uaOXZU3yMgI4Gy5HaYxgVqVQ/s400/spoke_o_dometer.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5330800566944788578&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;One side-effect of this decomposition of computing is that the ontological status of the digital starts to break down with it. As &lt;a href=&quot;http://www.otal.umd.edu/%7Emgk/blog/&quot;&gt;Kirschenbaum&lt;/a&gt; &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/03/notes-on-transmateriality.html&quot;&gt;shows&lt;/a&gt; brilliantly, the digital is just the analog operating within certain tolerances or thresholds. 
&lt;a href=&quot;http://www.mischertraxler.com/&quot;&gt;Thomas Traxler&#39;s&lt;/a&gt; &lt;a href=&quot;http://www.mischertraxler.com/systems_concepts_the_idea_of_a_tree1.html&quot;&gt;The Idea of a Tree&lt;/a&gt; (below) is a solar-powered system that fabricates objects from epoxy, dye and string, by turning a spindle. Solar energy generates electrical energy, which drives the motor, which draws the string through the dye and onto the spindle: a chain of analog transductions produces an object that manifests specific changes in its local environment. The work is a beautiful demonstration that variability doesn&#39;t have to be worked up with generative code: if the system is open to it, it&#39;s already there in the flux of the material field.&lt;br /&gt;&lt;br /&gt;&lt;div style=&quot;margin: 0px auto 10px; text-align: center;&quot;&gt;&lt;object height=&quot;225&quot; width=&quot;400&quot;&gt;&lt;param name=&quot;allowfullscreen&quot; value=&quot;true&quot;&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot;&gt;&lt;param name=&quot;movie&quot; value=&quot;http://vimeo.com/moogaloop.swf?clip_id=1277316&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=&amp;amp;fullscreen=1&quot;&gt;&lt;embed src=&quot;http://vimeo.com/moogaloop.swf?clip_id=1277316&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=&amp;amp;fullscreen=1&quot; type=&quot;application/x-shockwave-flash&quot; allowfullscreen=&quot;true&quot; allowscriptaccess=&quot;always&quot; height=&quot;225&quot; width=&quot;400&quot;&gt;&lt;/embed&gt;&lt;/object&gt;&lt;/div&gt;&lt;br /&gt;This is not to dismiss computing, only to recast it: an incredibly dynamic, pliable set of techniques for manipulating the material environment. 
Paradoxically the very generalities of computing - the abstractions and protocols that insulate it from local, material conditions - make it a powerful tool for transduction, that is, the propagation of &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/08/aspects-of-transmateriality-specificity.html&quot;&gt;specificities&lt;/a&gt;. &lt;a href=&quot;http://haque.co.uk/&quot;&gt;Usman Haque&#39;s&lt;/a&gt; &lt;a href=&quot;http://www.pachube.com/&quot;&gt;Pachube&lt;/a&gt; is a generalised infrastructure, a set of protocols and standards that rest in turn on wider standards like XML, and which assume a whole stack of functional layers: IP, HTTP, and so on. All in order to propagate material patterns and flows from here to there: this is an architecture of transduction whose utopian aim is to &quot;&lt;a href=&quot;http://www.ugotrade.com/2009/01/28/pachube-patching-the-planet-interview-with-usman-haque/&quot;&gt;patch the planet&lt;/a&gt;&quot; into a translocal ecology of linked environments.&lt;br /&gt;&lt;br /&gt;Digital fabrication is part of the same shift: an expansion and extension of the computer&#39;s range of material transductions. Digital pattern, to lasercutter instructions, to physical form. Fabbing shows how material matters. It&#39;s unsurprising that a piece of laser-cut ply is aesthetically different to a luminous pattern of pixels; more interesting is the way computation reaches out into the substrate&#39;s material properties, and the range of potential applications and domains it opens up. Fabbing has often presented itself with a narrative of materialisation, making the virtual real, translating bits into atoms - &lt;a href=&quot;http://www.generatorx.no/20071130/generatorx-20-call/&quot;&gt;Generator.x 2.0&lt;/a&gt; was subtitled &quot;Beyond the Screen.&quot; Not so: because of course, the &quot;virtual&quot; never was, and the screen is material too. 
Fabbing does get us beyond the screen, but only because its processes and materials have different properties, different specificities, and they hook us up to new contexts, as well as new sensations. (Below: &lt;a href=&quot;http://dasautomat.com/?p=129&quot; rel=&quot;nofollow&quot;&gt;Andreas Nicolas Fischer&lt;/a&gt; &amp;amp; &lt;a href=&quot;http://www.allesblinkt.com/project/reflection&quot; rel=&quot;nofollow&quot;&gt;Benjamin Maus&lt;/a&gt;: &lt;span style=&quot;font-style: italic;&quot;&gt;Reflection&lt;/span&gt; - from &lt;a href=&quot;http://www.flickr.com/photos/watz/sets/72157605938577977/&quot;&gt;5 Days Off: Frozen&lt;/a&gt;)&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://www.flickr.com/photos/watz/2631108862/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 451px; height: 300px;&quot; src=&quot;http://farm4.static.flickr.com/3178/2631108862_d761a4832d_d.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;Transduction suggests a way to link practices like physical computing, fabrication, networked environments, and many more. Data visualisation - in the broadest sense, from poetic to functionalist - is about creating customised transductions, sourcing new inputs and/or manifesting new outputs (even if they don&#39;t reach &quot;beyond the screen&quot;). We could add tangible interfaces, augmented reality, and locative systems. What does all this amount to? In 1970 Gene Youngblood observed a similar moment as the dominant cultural form diversified into a networked, participatory, interdisciplinary field of practices. He called it &lt;a href=&quot;http://www.vasulka.org/Kitchen/PDF_ExpandedCinema/ExpandedCinema.html&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;expanded cinema&lt;/span&gt;&lt;/a&gt;. 
So perhaps we can call this &lt;span style=&quot;font-style: italic;&quot;&gt;expanded computing&lt;/span&gt;: digital media and computation as material flows, turned outwards, transducing anything to anything else.</description><link>http://teemingvoid.blogspot.com/2009/01/transduction-transmateriality-and.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2OmKnp6Ek8wJLg_3k2qjQJNFL4Osbve1O7z0nbFnHNo5ivyjdTry3GOttntzDaNucdEoF28UllWc-OANnDUDBuGrJIBv1kHe7jqeCI7ZWdyRK36uaOXZU3yMgI4Gy5HaYxgVqVQ/s72-c/spoke_o_dometer.jpg" height="72" width="72"/><thr:total>3</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-7571862789614055466</guid><pubDate>Wed, 15 Apr 2009 02:49:00 +0000</pubDate><atom:updated>2009-04-15T14:07:20.249+10:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">advertising</category><category domain="http://www.blogger.com/atom/ns#">canberra</category><category domain="http://www.blogger.com/atom/ns#">digital design</category><category domain="http://www.blogger.com/atom/ns#">education</category><category domain="http://www.blogger.com/atom/ns#">generative art</category><category domain="http://www.blogger.com/atom/ns#">opensource</category><title>Master of Digital Design / Grow Your Own Logotype</title><description>Over the past year or so I&#39;ve been working on a major new offering here at &lt;a href=&quot;http://www.canberra.edu.au/&quot;&gt;UC&lt;/a&gt;. So, I&#39;m delighted to finally launch the new  &lt;a href=&quot;http://www.canberra.edu.au/faculties/arts-design/digital-design&quot;&gt;Master of Digital Design&lt;/a&gt; online. This course will offer something quite unique in the Australian context: a trans-disciplinary coursework Masters focused on digital practice for designers and creative practitioners of all sorts. 
The key practical approaches are generative techniques, data visualisation and design, and physical computing; and we&#39;ll be using  these to address three core themes or questions: the urban, the public, and the sustainable.&lt;br /&gt;&lt;br /&gt;As readers of this blog will know, these themes and approaches are right in line with my own research and creative interests; so frankly, I&#39;m thrilled to be leading this course. Teaching with me will be a crew of talented designers, artists and researchers including &lt;a href=&quot;http://ontherooftopsofparis.wordpress.com/&quot;&gt;Stephen Barrass&lt;/a&gt;, &lt;a href=&quot;http://meetpi.edublogs.org/&quot;&gt;Sam Hinton&lt;/a&gt; and &lt;a href=&quot;http://twitter.com/gravitron&quot;&gt;Geoff Hinchcliffe&lt;/a&gt;. Finally, we&#39;ll be drawing on the wisdom and experience of an international advisory panel whose work exemplifies what we mean by digital design - a practice that engages deeply, and critically, with digital processes, digital materials, and digital contexts: &lt;a href=&quot;http://postspectacular.com/&quot;&gt;Karsten Schmidt&lt;/a&gt;, &lt;a href=&quot;http://roryhyde.com/&quot;&gt;Rory Hyde&lt;/a&gt;, &lt;a href=&quot;http://n-e-r-v-o-u-s.com/&quot;&gt;Nervous System&lt;/a&gt;, &lt;a href=&quot;http://offshorestudio.net/&quot;&gt;Anthony Burke&lt;/a&gt; and &lt;a href=&quot;http://f0.am/&quot;&gt;foAM&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://www.flickr.com/photos/mtchl/3443908840/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 388px; height: 400px; border:0px&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTp3msbSl1MN6K8QJdlHikVBrVTGErxCvvMs8MpM0by51pamrrlnUoX7Qb128ni4_aQODz6CfWSdhYSeZQciGodiw0rjwipw_7JdoMu6n1zztNM2tsi9J7LEy8UM699UB9GCYzyg/s400/DD_logotype_web.jpg&quot; alt=&quot;&quot; 
id=&quot;BLOGGER_PHOTO_ID_5324752087528478914&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;The course launch has also provided a great excuse (er, opportunity) to play with some ideas around generative branding and marketing. I&#39;ve been tinkering with this logotype for &lt;a href=&quot;http://www.flickr.com/photos/mtchl/2741313366/&quot;&gt;ages&lt;/a&gt;; it uses the same basic algorithm as &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/09/limits-to-growth.html&quot;&gt;Limits to Growth&lt;/a&gt; but artificially constrains the growth to a letterform (in the guise of a hidden bitmap image). Lately I&#39;ve extended the logotype into a little generative marketing &lt;a href=&quot;http://creative.canberra.edu.au/mitchell/dd/&quot;&gt;gadget&lt;/a&gt;; a Processing applet that lets you grow endless variations, and receive the results as a PDF file, attached to an email. The aim is to provide a little taste of the power - and pleasure - of generative design.&lt;br /&gt;&lt;br /&gt;Behind the scenes this project was yet another demonstration of the brilliance of &lt;a href=&quot;http://processing.org/&quot;&gt;Processing&lt;/a&gt; and its community. The key technical challenge was the upload-and-email functionality. Seltar&#39;s &quot;save to web&quot; &lt;a href=&quot;http://processing.org/hacks/hacks:savetoweb&quot;&gt;hack&lt;/a&gt; provided the template; upload image data over HTTP, and have a PHP script catch and save the file. From there it was relatively straightforward to have PHP generate the email, with the help of the Pear &lt;a href=&quot;http://pear.php.net/package/Mail_Mime&quot;&gt;MailMime&lt;/a&gt; package. The final step was uploading a PDF, rather than a bitmap. This seemed impossible, because the built-in PDF library needs to write a local file, which means the extra annoyance of a signed applet. 
I posted a query on the Processing forums and within 24 hours &lt;a href=&quot;http://phi.lho.free.fr/index.fr.html&quot;&gt;PhiLho&lt;/a&gt; saved me with a &lt;a href=&quot;http://processing.org/discourse/yabb2/YaBB.pl?board=LibraryProblems;action=display;num=1237962559&quot;&gt;solution&lt;/a&gt; that extends the PDF class to allow access to the PDF data as a Byte array, without first saving the file. Amazing: thank you! Add the super-useful &lt;a href=&quot;http://www.sojamo.de/libraries/controlP5&quot;&gt;ControlP5&lt;/a&gt; for the UI sliders and buttons, and the whole thing is built on, in and with free, open-source software. Again, a demonstration of why digital design is such an exciting field of practice right now.</description><link>http://teemingvoid.blogspot.com/2009/04/master-of-digital-design-grow-your-own.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTp3msbSl1MN6K8QJdlHikVBrVTGErxCvvMs8MpM0by51pamrrlnUoX7Qb128ni4_aQODz6CfWSdhYSeZQciGodiw0rjwipw_7JdoMu6n1zztNM2tsi9J7LEy8UM699UB9GCYzyg/s72-c/DD_logotype_web.jpg" height="72" width="72"/><thr:total>3</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-1797507849395534958</guid><pubDate>Sun, 15 Mar 2009 04:17:00 +0000</pubDate><atom:updated>2009-03-15T15:17:35.812+11:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">canberra</category><category domain="http://www.blogger.com/atom/ns#">processing</category><category domain="http://www.blogger.com/atom/ns#">projects</category><category domain="http://www.blogger.com/atom/ns#">urban</category><category domain="http://www.blogger.com/atom/ns#">visualisation</category><title>Watching the Street (Navigator) / citySCENE</title><description>&lt;a href=&quot;http://vagueterrain.net/journal13/&quot;&gt;Vague Terrain 13: citySCENE&lt;/a&gt; has just launched. 
As editor Greg J. Smith writes:&lt;br /&gt;&lt;blockquote&gt;This issue of Vague Terrain is founded on two notions - that the city is a stage set for intervention and an engine for representation. &lt;/blockquote&gt;The collection expands out from this premise in multiple directions: carto-mashups, projection-bombing, sound walks, psychogeographic imaging and ubicomp experiments. Early highlights for me included &lt;a href=&quot;http://vagueterrain.net/users/crisis-fronts&quot;&gt;Crisis Fronts&lt;/a&gt;&#39; &lt;a href=&quot;http://vagueterrain.net/journal13/crisis-fronts/01&quot;&gt;Cognitive Maps and Database Urbanisms&lt;/a&gt;, which presents some impressive work on data visualisation and generative models as urban mapping strategies (below: &lt;a href=&quot;http://vagueterrain.net/journal13/crisis-fronts/02&quot;&gt;Case Study: Los Angeles&lt;/a&gt;). Overall, on a first look, this collection is incredibly rich. It shows that a creative, wired-up, critical urbanism is not just a wistful aspiration of the technorati, but a real practice.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_hZ7aH3Ksi8z0HKTmHTxSXJurE-LocYNlJJVffkSQc4HqTtHsyKrJYNQ86ZyWvF-QWSzrPEPacxBTtHwn4nawXTGdLRrh9PEtpBenHVBJWHZiqVwLpP_Y3QstNUu-_8zmtNB23g/s1600-h/LAX-01_Bus-FreewayProximity2.png&quot;&gt;&lt;img style=&quot;border: medium none ; margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 450px; height: 275px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_hZ7aH3Ksi8z0HKTmHTxSXJurE-LocYNlJJVffkSQc4HqTtHsyKrJYNQ86ZyWvF-QWSzrPEPacxBTtHwn4nawXTGdLRrh9PEtpBenHVBJWHZiqVwLpP_Y3QstNUu-_8zmtNB23g/s400/LAX-01_Bus-FreewayProximity2.png&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5313259059219533986&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;Having said all that, it&#39;s a privilege to be a part 
of this collection. My contribution is &lt;a href=&quot;http://vagueterrain.net/journal13/mitchell-whitelaw/01&quot;&gt;Watching the Street (Navigator)&lt;/a&gt;, a browsable visualisation of a single day of images from the &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/11/watching-street.html&quot;&gt;Watching the Street&lt;/a&gt; dataset. It tests out the hunch that these time-lapse slit-scans can be used to read real patterns in the urban environment - that they are (or can be) more than just suggestive abstractions. It uses a simple interface to display both a single source frame, and a correlated slit-scan visualisation, with image-space and time-space sharing an axis, a bit like a slide rule. Greg Smith called it an &quot;&lt;a href=&quot;http://twitter.com/serial_consign/statuses/1286982424&quot;&gt;urban viewfinder&lt;/a&gt;&quot;, which sums the intention up nicely.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://vagueterrain.net/journal13/mitchell-whitelaw/02&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 355px; height: 496px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiR2CyidbCTwPXZmtaCCq-TUzqENUxDeNwp67nRuw3kIH36rQ9H4kLQnma-AyUBAoGjNHXXLwDVmMjVyA0RsTfLkVBQmkztLdvlEt9iqbSS0jhbl26MfXqsKVkgFHyYRnsObAs1PA/s400/wts_navigator_grab.png&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5313261959662997586&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;Playing with the navigator for a while seems to confirm that hunch. The composites reveal temporal patterns in the environment, but not the spatial context that allows us to identify their causes; the source frames show that spatial context, but not the change over time. Reading the two against each other involves chains and cycles of discovery, analysis and inference. 
These might be open-ended (spatiotemporal browsing) or more directed. What time do the sandwich-boards go out? How long does the delivery truck stay?&lt;br /&gt;&lt;br /&gt;Building the navigator presented some interesting technical challenges: mainly, how to make a web-friendly interface to 1440 source frames (240 x 320) and 480 slit-scan composites (720 x 320). That adds up to about 75MB of JPEGs. &lt;a href=&quot;http://processing.org/&quot;&gt;Processing 1.0&lt;/a&gt; came to the rescue, with its new built-in dynamic image loader. &lt;a href=&quot;http://processing.org/reference/requestImage_.html&quot;&gt;requestImage()&lt;/a&gt; pulls in an image from a given URL, on cue, without bringing the whole applet to a grinding halt; it provides some basic feedback on the state of that image - whether it&#39;s loading, loaded, or un-loadable. I also blundered into two other useful lessons: how to use the applet &quot;base&quot; parameter, and how to manage Java&#39;s local cache, which kept throwing up earlier versions of the applet during testing.&lt;br /&gt;&lt;br /&gt;Having made a lean, mean, browser-friendly version, I&#39;m now thinking of adapting the navigator into a full-screen, offline app, with the whole eight-day dataset, and perhaps some tools for annotation and intra-day comparison. Best of all would be a long-term installation; a sort of urban space-time observatory, watching the street but also opening it up to ongoing interpretation. 
If you&#39;d like it running in your foyer, let me know.</description><link>http://teemingvoid.blogspot.com/2009/03/watching-street-navigator-cityscene.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_hZ7aH3Ksi8z0HKTmHTxSXJurE-LocYNlJJVffkSQc4HqTtHsyKrJYNQ86ZyWvF-QWSzrPEPacxBTtHwn4nawXTGdLRrh9PEtpBenHVBJWHZiqVwLpP_Y3QstNUu-_8zmtNB23g/s72-c/LAX-01_Bus-FreewayProximity2.png" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-6436949132682734134</guid><pubDate>Thu, 15 Jan 2009 23:45:00 +0000</pubDate><atom:updated>2009-01-22T13:33:56.114+11:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">artificial life</category><category domain="http://www.blogger.com/atom/ns#">canberra</category><category domain="http://www.blogger.com/atom/ns#">generative art</category><category domain="http://www.blogger.com/atom/ns#">processing</category><category domain="http://www.blogger.com/atom/ns#">projects</category><title>JCSMR Curls</title><description>This post is (belated) documentation of a project I worked on in 2007-8, creating an audio-responsive generative system for a permanent installation for the &lt;a href=&quot;http://info.anu.edu.au/ovc/Media/Media_Releases/2008/March/20080309_chan&quot;&gt;Jackie Chan Science Centre&lt;/a&gt; (yes, &lt;span style=&quot;font-style: italic;&quot;&gt;that&lt;/span&gt; Jackie Chan) at the &lt;a href=&quot;http://jcsmr.anu.edu.au/&quot;&gt;John Curtin School of Medical Research&lt;/a&gt;, on the ANU campus. Along with some Processing-related nitty gritty, you&#39;ll find some broader reflections on generative systems and the design process. 
For less process and more product, skip straight to the &lt;a href=&quot;http://creative.canberra.edu.au/mitchell/jcsmr/spectrum_spiralbranch&quot;&gt;generative&lt;/a&gt; &lt;a href=&quot;http://creative.canberra.edu.au/mitchell/jcsmr/s_helix_web&quot;&gt;applets&lt;/a&gt; (and turn on your sound input).&lt;br /&gt;&lt;br /&gt;In mid 2007 my colleague Stephen Barrass and I were approached by &lt;a href=&quot;http://www.thylacine.com.au/&quot;&gt;Thylacine&lt;/a&gt;, a Canberra company specialising in urban art, industrial and exhibition design. Caolan Mitchell and &lt;a href=&quot;http://alexandragillespie.net/&quot;&gt;Alexandra Gillespie&lt;/a&gt; were designing a new permanent exhibition, the first stage of the new Jackie Chan Science Centre, housed in a new building - a razor-sharp piece of contemporary architecture (below) by Melbourne firm &lt;a href=&quot;http://www.lyonsarch.com.au/&quot;&gt;Lyons&lt;/a&gt;. Instead of just bolting a display case and a few plaques to the wall, Mitchell and Gillespie (wonderfully) proposed a design that hinged on a dynamic generative motif - a system that would ebb and flow with its own life cycles, and echo the spiral / helix DNA structures central to the School&#39;s work, and already embedded in the building&#39;s architecture.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://flickr.com/photos/mtchl/3214768448/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 438px; height: 328px;&quot; src=&quot;http://farm4.static.flickr.com/3530/3214768448_eeef0444b0.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;My initial sketches (below) took the spiral motif fairly literally, drawing vertical helices and varying their width with a combination of mouse movement and a simple sin function - the results reminded me of the beautiful spiral egg cases of the &lt;a 
href=&quot;http://www.austmus.gov.au/fishes/students/focus/heter.htm&quot;&gt;Port Jackson Shark&lt;/a&gt;. At that stage we were talking about the possibility of projecting back onto the facade of the building, which has big vertical glass panels; this structure informed the vertical format. I made a quick &lt;a href=&quot;http://creative.canberra.edu.au/mitchell/jcsmr/helix/movs/composite_singleH264.mov&quot;&gt;video&lt;/a&gt; mockup of the form on the facade - which was incredibly easy, thanks to the robust, adaptable, extendable goodness of Processing (a recurring theme in the process to come).&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNr2fnGohC0E8x-rjqj_fux1gZDQcTJCoDls4Yw0CIEm2717dBuumUs2I4GfIJ4OJ7bGmUtzMBjtQlpgVdKPbClRr4dqv8cR4g2y6CKN1EnKSVg5NQPRtTxomGeMuCR4LPPxiewg/s1600-h/sharkegg_facade.jpg&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 240px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNr2fnGohC0E8x-rjqj_fux1gZDQcTJCoDls4Yw0CIEm2717dBuumUs2I4GfIJ4OJ7bGmUtzMBjtQlpgVdKPbClRr4dqv8cR4g2y6CKN1EnKSVg5NQPRtTxomGeMuCR4LPPxiewg/s400/sharkegg_facade.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5291746425217758946&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;These sketches meet the simplest criteria of the brief (spiral forms) but do nothing about the more interesting (and difficult) ones: cycles of birth, growth and death, and dynamics over multiple time scales. Over the next couple of months I developed two or three different approaches to this goal.&lt;br /&gt;&lt;br /&gt;The phyllotaxis model blogged &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/03/self-organised-phyllotaxis.html&quot;&gt;earlier&lt;/a&gt; was one attempt. 
Spurred on by the hardcore a-life skills of &lt;a href=&quot;http://www.csse.monash.edu.au/%7Ejonmc/&quot;&gt;Jon McCormack&lt;/a&gt; and co. at &lt;a href=&quot;http://www.csse.monash.edu.au/%7Ecema/&quot;&gt;CEMA&lt;/a&gt;, I built a system in which phyllotactic spirals self-organised spontaneously. Because in Jon&#39;s words, anyone can draw a spiral, what you really want is a system out of which spirals emerge! The model worked, but I had trouble figuring out how phyllotactic spiral forms might meaningfully die or reproduce. Also, by that stage I had two other systems that seemed more promising.&lt;br /&gt;&lt;br /&gt;From the early stages I wanted to make the system respond to environmental audio. The installation would be in a public foyer with plenty of pedestrian traffic, so audio promised a way to tap in to the building&#39;s rhythms of activity at long time scales, as well as convey an instantaneous sense of live interaction. In the two most developed sketches audio plays a key role in the life cycle of the system.&lt;br /&gt;&lt;br /&gt;One sketch moved into 2d, and started with a pre-existing model for growth, by way of the &lt;a href=&quot;http://algorithmicbotany.org/vmm-deluxe/Section-04.html&quot;&gt;Eden&lt;/a&gt; growth algorithm (this system would later be adapted again into &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/09/limits-to-growth.html&quot;&gt;Limits to Growth&lt;/a&gt;). I had already been playing with an &quot;off-lattice&quot; Eden-like system where circular cells could grow at any angle to their parent (rather than the square grid of the original Eden model). This system also made it easy to vary the radius of those cells individually. The next step was to couple live audio to the system; following a physical metaphor, frequency is mapped to cell size, so that larger cells responded to low frequency bands, and smaller cells to high frequencies. 
Incoming sound  adds to the cell&#39;s energy parameter; this energy gradually decays over time in the absence of sound. Cell reproduction, logically enough, is conditional on energy.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://flickr.com/photos/mtchl/3200946798/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 225px;&quot; src=&quot;http://farm4.static.flickr.com/3392/3200946798_46c5befee4.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;The result is that cells which are best &quot;tuned&quot; for the current audio spectrum will accumulate more energy, and so are more likely to reproduce, spawning a neighbour whose size (and thus tuning) is similar to, but not the same as, their own; so over time the system generates a range of different cell sizes, but only the well-tuned survive. The rest die, which in the best artificial life tradition, means they just go away - no mess, no fuss. In the image below cells are rendered with stroke thickness mapped to energy level. The curves and branches pop out of rules sprinkled lightly with random(), resulting in a loose take on the spiral motif, which is probably the weak point in this sketch. I still think it has potential - nightclub videowall, anyone? 
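&lt;br /&gt;&lt;br /&gt;The energy loop is tiny, as a-life goes. A schematic plain-Java version of the idea - feed, reproduce, cull - with decay rates and thresholds that are my illustrative guesses, not the sketch&#39;s actual values:

```java
import java.util.ArrayList;
import java.util.List;

public class SpectrumCells {

    static class Cell {
        double size;         // maps to a frequency band: big cells hear low bands
        double energy = 1.0;

        Cell(double size) { this.size = size; }

        // Energy arrives from the band this cell is tuned to; decay means
        // silence (or being mistuned) slowly starves it.
        void feed(double bandLevel) {
            energy += bandLevel;
            energy *= 0.9;
        }
    }

    // One time step: feed every cell, let the energetic reproduce, cull the starved.
    static void step(List<Cell> cells, java.util.function.DoubleUnaryOperator spectrum) {
        List<Cell> born = new ArrayList<>();
        for (Cell c : cells) {
            c.feed(spectrum.applyAsDouble(c.size));
            if (c.energy > 2.0) {
                c.energy -= 1.0;  // reproduction costs energy
                double mutated = c.size * (0.9 + Math.random() * 0.2);
                born.add(new Cell(mutated)); // similar, but not identical, tuning
            }
        }
        cells.addAll(born);
        cells.removeIf(c -> c.energy < 0.05); // the ill-tuned just go away
    }
}
```

Run with a spectrum that favours certain sizes, only the well-tuned lineages persist - the selection described above falls out of the loop.&lt;br /&gt;&lt;br /&gt;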
Try the live applet over &lt;a href=&quot;http://creative.canberra.edu.au/mitchell/jcsmr/spectrum_spiralbranch&quot;&gt;here&lt;/a&gt; (adjust your audio input levels to control the growth / death balance).&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://flickr.com/photos/mtchl/3200945538/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 225px;&quot; src=&quot;http://farm4.static.flickr.com/3103/3200945538_52773b532f.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;The third model takes this approach to energy and reproduction - about the simplest possible a-life simulation - and folds it back into the helical structures of the first sketches. In this world an individual is a 3d helix, built from simple line segments. Again each individual is tuned to a frequency band, which supplies energy for growth; but here &quot;growth&quot; means adding segments to the helix, extending its length. Individuals can &quot;reproduce&quot;, given enough energy, but here reproducing means spawning a whole new helix, with a slightly mutated frequency band. 
All the helixes grow from the same origin point - they form a colony, something like a clump of grass.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://flickr.com/photos/mtchl/3207333191/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 200px;&quot; src=&quot;http://farm4.static.flickr.com/3098/3207333191_65335788cf.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;This sketch went through many variants and iterations over the next month or so; in retrospect the process of working to a brief, within a design team, pushed this system further than I ever would have taken it myself. At the same time I was testing the system against my own critical position; I&#39;ve argued &lt;a href=&quot;http://creative.canberra.edu.au/mitchell/papers/SystemStories.pdf&quot;&gt;earlier&lt;/a&gt; that the generative model matters, not just for its generativity but the entities and relations it involves.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://flickr.com/photos/mtchl/3207331163/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 200px;&quot; src=&quot;http://farm4.static.flickr.com/3261/3207331163_dde21d99dd.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;From that perspective this system was full of holes. Death was arbitrary: just a timer measuring a fixed life-span. &quot;Growth&quot; was a misnomer: the number of segments was simply a rolling average of the energy in the curl&#39;s frequency band, so the curls were really no more than slow-motion level meters. Taking the organic / metabolic analogy more seriously, I worked out a better solution. 
An organism needs a certain amount of energy just to function; and the bigger the organism, the more energy it needs. If it gets more than it needs, then it can grow; if it gets less than it needs, for long enough, it will die. So this is a simple metabolic logic that can link growth, energy and death. Translated into the world of the curls: for each time step, every curl has an energy threshold, which is proportional to its size (in line segments); if the spectral energy in its band is far enough over that threshold, it adds a segment - like adding a new cell to its body; if the energy is under that threshold, it doesn&#39;t grow; and if it remains in stasis for too long, it dies. Funnily enough, the behaviour that results is only subtly different to the simple windowed average. Does the model really matter, in that case? It does for me at least; if and how it matters for others is another question.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://flickr.com/photos/mtchl/3211873212/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 156px;&quot; src=&quot;http://farm4.static.flickr.com/3348/3211873212_a6c55d4a65.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;Next, the curls developed a more complex life-cycle - credit to &lt;a href=&quot;http://alexandragillespie.net/&quot;&gt;Alex Gillespie&lt;/a&gt; for urging me in this direction. In line with the grass analogy, curls grow a &quot;seed&quot; at their tip when they are in stasis; when they die, that seed is released into the world. Like real seeds, these can lie dormant indefinitely before being revived - here, by a burst of energy in their specific frequency band. 
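&lt;br /&gt;&lt;br /&gt;Stripped of graphics, the metabolic rule above is only a few lines. A plain-Java sketch of one curl&#39;s life-cycle - the constants here are illustrative, not the installation&#39;s values:

```java
public class CurlMetabolism {
    int segments = 1;       // body size: line segments in the helix
    int stasis = 0;         // steps spent neither growing nor dying
    boolean alive = true;
    boolean seeded = false; // a dormant seed, released on death

    static final double COST_PER_SEGMENT = 0.1; // upkeep grows with the body
    static final double GROWTH_MARGIN = 0.05;   // surplus needed before growing
    static final int STASIS_LIMIT = 50;         // too long in stasis is fatal

    // One time step, fed the spectral energy in this curl's frequency band.
    void step(double bandEnergy) {
        if (!alive) return;
        double threshold = segments * COST_PER_SEGMENT;
        if (bandEnergy > threshold + GROWTH_MARGIN) {
            segments++;       // surplus energy: add a cell to the body
            stasis = 0;
        } else {
            stasis++;         // just ticking over
            seeded = true;    // in stasis, grow a seed at the tip
            if (stasis > STASIS_LIMIT) alive = false; // seed released here
        }
    }
}
```

Because upkeep scales with size, a steady energy supply yields a curl that grows to a matching length and then sits in stasis - growth, energy and death linked by one threshold.&lt;br /&gt;&lt;br /&gt;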
After several iterations, the seed form settled on a circle that gradually grows spikes, all the while being blown back &quot;down&quot; the world (against the direction of growth) by audio energy (below). As well as adding graphic variety, seeds change the system&#39;s overall dynamics. Unlike spawned curls, seeds are genetically identical to their &quot;parent&quot; - attributes such as frequency band are passed on unaltered. Because each individual can make only one seed, that seed is a way for the curl to go dormant in lean times; if it gets another burst of energy, it can be reborn. The curls demo &lt;a href=&quot;http://creative.canberra.edu.au/mitchell/jcsmr/s_helix_web&quot;&gt;applet&lt;/a&gt; demonstrates this best (again, adjust your audio input and make some noise).&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://flickr.com/photos/mtchl/3211873082/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 156px;&quot; src=&quot;http://farm4.static.flickr.com/3520/3211873082_aa6094b027.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;A few technical notes. One big lesson here was the power of transform-based geometry. Each curl is a sequence of line segments whose length relates to frequency band (lower tuned curls have longer segments); each segment is tilted (rotateZ), then translated along the x axis to the correct spot. A sine function is used to modulate the radius of each curl along its length; frequency band factors in here too; this radius is expressed as a y axis translation. Then the segment is rotated around the x axis, to give depth. 
I iterate this a few hundred times to get one curl, and repeat this process up to twenty times to draw the whole world - each curl has its own parameters for tilt, x rotation increment, and frequency band.&lt;br /&gt;&lt;br /&gt;In the &lt;a href=&quot;http://creative.canberra.edu.au/mitchell/jcsmr/s_helix_web&quot;&gt;live applet&lt;/a&gt; audio energy ripples up the curls, from base to tip. This was added to reinforce the liveness of the system and add some rapid, moment-by-moment change. It was implemented very simply. I used a (Java) ArrayList to create a stack of audio level values; at each time step, the current audio level value is added at the head of the list, and the ArrayList politely shuffles all the other values along. So each segment&#39;s length is a combination of three values; the base segment length, a function to taper the curl towards the tip, and the buffered audio level.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://flickr.com/photos/mtchl/3214013887/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 156px;&quot; src=&quot;http://farm4.static.flickr.com/3400/3214013887_010236f954.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;The graphics are all drawn with OpenGL - following &lt;a href=&quot;http://www.flight404.com/blog/?p=71&quot;&gt;flight404&lt;/a&gt; I dabbled with GL blend modes, specifically additive blending, to get that luminous quality. The other key visual device here is the smearing caused by redrawing with a translucent rect(); instead of erasing the previous frame completely this fades it before overlaying the new frame. It&#39;s an easy trick that I&#39;ve used &lt;a href=&quot;http://creative.canberra.edu.au/mitchell/boom/&quot;&gt;before&lt;/a&gt;. 
But as Tom Carden &lt;a href=&quot;http://www.processing.org/hacks/hacks:fading&quot;&gt;explains&lt;/a&gt;, in OpenGL it leaves traces of previous frames. I discovered this firsthand when Alex and Caolan asked whether we could lose the &quot;ghosts.&quot; I was baffled: on my dim old Powerbook screen, I simply hadn&#39;t seen them. Eventually, juggling alpha values I could reduce the &quot;ghosts&quot; to almost black (1) against the completely black (0) background - but no lower. Finally I just set the initial background to (1) instead of (0), and the ghosts were gone.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://flickr.com/photos/mtchl/3214015723/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 156px;&quot; src=&quot;http://farm4.static.flickr.com/3521/3214015723_4321aa5975.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;The adaptability of Processing came through again when it came to realising the installation. The final spec was a single long custom-made display case, with three small, inset LCD panels. These screens would run slide shows expanding on the exhibition content, but also feature the generative graphics when idle; the case itself would also integrate the curls as a graphic motif. For the case graphics, I sent Thylacine an applet that output a PDF snapshot on a key press; they could generate the graphics as required, then import the files directly into their layout.&lt;br /&gt;&lt;br /&gt;The screens posed some extra challenges. The initial idea was to have the screens switch between a Powerpoint slideshow, and the curls applet; but making this happen without window frames and other visual clutter was impossible. 
In the end it was easier to build a simple slide player into the applet: it reads in images from an external folder, allowing JCSMR to author and update the slideshow content independently.&lt;br /&gt;&lt;br /&gt;So to wrap up the Processing rave: it provided a single integrated development and delivery tool for a project spanning print, screen, audio, interaction, animation and even content management. Being able to burrow straight through to Java is powerful. Development was seamlessly cross-platform; the whole thing was developed on a Mac, and now runs happily on a single Windows PC with three (modest) OpenGL video cards. The installation has run daily for over six months, without a hitch (touch wood).&lt;br /&gt;&lt;br /&gt;Some installation shots below, though it&#39;s hard to photograph, being a glass fronted cabinet in a bright foyer - reflection city. I&#39;ll add some better shots when I can get them. If you&#39;re in Canberra, drop in to the JCSMR - worth it for the building alone - and see it in person.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://flickr.com/photos/mtchl/3214481223/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 375px; height: 340px;&quot; src=&quot;http://farm4.static.flickr.com/3494/3214481223_e3338b0249.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://flickr.com/photos/mtchl/3214480979/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 500px; height: 230px;&quot; src=&quot;http://farm4.static.flickr.com/3416/3214480979_1b17b72b78.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; 
href=&quot;http://flickr.com/photos/mtchl/3215331564/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 375px; height: 500px;&quot; src=&quot;http://farm4.static.flickr.com/3483/3215331564_11ed6b3552.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://flickr.com/photos/mtchl/3215331416/&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 430px; height: 500px;&quot; src=&quot;http://farm4.static.flickr.com/3309/3215331416_702df1d0d4.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;And very finally, photographic proof of the Jackie Chan connection - image from &lt;a href=&quot;http://www.theage.com.au/news/national/martial-arts-star-gives-something-back-to-fathers-town/2008/03/09/1204998282864.html&quot;&gt;The Age&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://www.theage.com.au/news/national/martial-arts-star-gives-something-back-to-fathers-town/2008/03/09/1204998282864.html&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 266px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTGaWjcScAhI0oHUe3fVlL93gEhzQjWgjC1g14t6lRNs7lq-83D0OhxqTdrbONG_wGn0f97Tv9UeTAySQDbmWn_49HPCxCBbiPfDZMYDRQyW2KJaUpIc6_vKt5aymSiEI5VdHbWg/s400/Kevin_Jackie.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5293704080686845138&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;</description><link>http://teemingvoid.blogspot.com/2009/01/jcsmr-curls.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://farm4.static.flickr.com/3530/3214768448_eeef0444b0_t.jpg" 
height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-5292472795952437650</guid><pubDate>Fri, 19 Dec 2008 05:49:00 +0000</pubDate><atom:updated>2008-12-19T21:09:29.455+11:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">fabrication</category><category domain="http://www.blogger.com/atom/ns#">generative art</category><category domain="http://www.blogger.com/atom/ns#">materiality</category><category domain="http://www.blogger.com/atom/ns#">projects</category><title>Fabricated Growth Forms (Processing to Ponoko)</title><description>Like &lt;a href=&quot;http://www.flickr.com/groups/digitalfabrication/&quot;&gt;many&lt;/a&gt; &lt;a href=&quot;http://postspectacular.com/work/printmag/start&quot;&gt;others&lt;/a&gt; playing with generative techniques, I&#39;m fascinated by the potential of digital fabrication. Getting &lt;a href=&quot;http://www.generatorx.no/20071130/generatorx-20-call/&quot;&gt;beyond the screen&lt;/a&gt; and into the world of objects is a significant move for a field that has, until the last few years, reveled in its own immateriality. There&#39;s a lot to think about in this material turn, but that&#39;s for another post. 
Here, a quick report on my first experiment with generative fabrication.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://farm4.static.flickr.com/3263/3119187931_7c1aea6851.jpg&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 350px; height: 466px;&quot; src=&quot;http://farm4.static.flickr.com/3263/3119187931_7c1aea6851.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;I don&#39;t have a laser cutter handy at my workplace (though as &lt;a href=&quot;http://digitalhistoryhacks.blogspot.com/2008/11/few-arguments-for-humanistic.html&quot;&gt;William Turkel&lt;/a&gt; writes there are lots of good reasons why I should) so I decided to check out &lt;a href=&quot;http://ponoko.com/&quot;&gt;Ponoko&lt;/a&gt;; I wanted to see what was involved in generating, uploading and fabbing a small project. I started with the Processing sketch from &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/09/limits-to-growth.html&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;Limits to Growth&lt;/span&gt;&lt;/a&gt;, and tweaked it to turn out much smaller forms (a few hundred nodes, rather than tens of thousands). I used the built-in PDF export, then opened the PDFs in Illustrator. (Illustrator is the only commercial/proprietary software step in this process, so I&#39;d be interested to hear of any alternatives). The forms are drawn as linked line segments of varying stroke widths. 
Ponoko needs an EPS with only the outside edge of this form, so I used Illustrator to merge it into a composite path, then set the stroke colour and width as instructed (0,0,255 and 0.001mm).&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://farm4.static.flickr.com/3094/3120263928_b3dc0f5ccb.jpg&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 350px; height: 349px;&quot; src=&quot;http://farm4.static.flickr.com/3094/3120263928_b3dc0f5ccb.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;The upload to Ponoko took a few tries - I was getting some strange errors as their system failed to &quot;see&quot; the cutting paths on the template - but after some swift and cheerful technical assistance it all worked. Pricing was also trial and error; the first design I uploaded was more complex than these, and of course these branching forms pack a long cutting path into a small surface area. I simplified the design, packed four forms onto a sheet, and opted for 4mm ply rather than acrylic. Final cost including (expensive) shipping to Australia was about $A60 (currently around $US40). Not what I&#39;d call cheap, but not prohibitive either. There are intricate discussions of the economics of the business - shipping, exchange rates, local vs global, etc - on the Ponoko &lt;a href=&quot;http://forums.ponoko.com/&quot;&gt;forums.&lt;br /&gt;&lt;/a&gt;&lt;br /&gt;Eighteen days later, they arrived. Novelty counts for a lot here, but still, I&#39;m totally charmed by these objects. A few surprises, but all good: they are smaller and finer than I imagined, and they smell very slightly of charred wood (excellent!). The cut edges are dark with a nice smooth, burnished surface, and the ply surface is clean. The scale and intricacy of the things seems to entice people to touch and handle them. 
I find them far more satisfying than the (much more detailed) laser prints I made with the same system.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://farm4.static.flickr.com/3278/3119188699_bda146a46b.jpg&quot;&gt;&lt;img style=&quot;margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 467px; height: 350px;&quot; src=&quot;http://farm4.static.flickr.com/3278/3119188699_bda146a46b.jpg&quot; alt=&quot;&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;Immediately it&#39;s clear how the fabbing process, and the materials, can reach back up through the production chain and influence the design and the generative system. One flaw in the design is a product of how I&#39;m drawing the shapes: there are small rounded &quot;shoulders&quot; at the joints between line segments, caused by the overlap between one rounded line cap and the next segment - this is obvious in the physical forms. Better to draw the segments as tapered rectangles, and avoid the shoulders. Also, the branching topology is structurally risky; how to introduce more joins without breaking the generative model? This interplay, between computational process, manufacturing process, material and form, seems really promising. Ponoko seems to be an excellent, affordable way to try this out, and the built-in fab-on-demand shopfront is great, if you want to sell your wares. But it&#39;s still, ironically, working with a mass-production paradigm of one design, &lt;span style=&quot;font-style: italic;&quot;&gt;n &lt;/span&gt;copies. With hooks for a more dynamic, generative front end, it could get really interesting: designers like the wonderful &lt;a href=&quot;http://n-e-r-v-o-u-s.com/&quot;&gt;Nervous System&lt;/a&gt; are doing this already. 
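The geometry of that fix can be sketched quickly: instead of stroking each segment with round caps, emit a four-cornered quad whose two ends match the local branch widths, so adjacent segments meet flush. A minimal Python sketch (the function name and conventions are my own, not from the original Processing sketch):

```python
import math

def tapered_quad(p1, p2, w1, w2):
    """Return the four corners of a tapered rectangle joining
    p1 to p2, with end widths w1 and w2 (no round caps, so no
    overlapping 'shoulders' at the joints)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    length = math.hypot(dx, dy)
    # unit normal, perpendicular to the segment direction
    nx, ny = -dy / length, dx / length
    return [
        (p1[0] + nx * w1 / 2, p1[1] + ny * w1 / 2),
        (p2[0] + nx * w2 / 2, p2[1] + ny * w2 / 2),
        (p2[0] - nx * w2 / 2, p2[1] - ny * w2 / 2),
        (p1[0] - nx * w1 / 2, p1[1] - ny * w1 / 2),
    ]
```

For a branch where one end is thicker than the other, e.g. tapered_quad((0, 0), (10, 0), 4, 2), the quad narrows from a 4-unit to a 2-unit width along straight edges, leaving no rounded overlap to show up in the laser cut.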
More documentation of the growth forms over on &lt;a href=&quot;http://www.flickr.com/photos/mtchl/tags/ponoko/&quot;&gt;Flickr&lt;/a&gt;.</description><link>http://teemingvoid.blogspot.com/2008/12/fabricated-growth-forms-processing-to.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://farm4.static.flickr.com/3263/3119187931_7c1aea6851_t.jpg" height="72" width="72"/><thr:total>6</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-6833173834399405030</guid><pubDate>Thu, 27 Nov 2008 10:35:00 +0000</pubDate><atom:updated>2008-11-28T16:38:32.925+11:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">materiality</category><category domain="http://www.blogger.com/atom/ns#">photography</category><category domain="http://www.blogger.com/atom/ns#">processing</category><category domain="http://www.blogger.com/atom/ns#">projects</category><category domain="http://www.blogger.com/atom/ns#">urban</category><category domain="http://www.blogger.com/atom/ns#">visualisation</category><title>Watching the Street</title><description>&lt;a href=&quot;http://www.flickr.com/photos/mtchl/3037900362/&quot; title=&quot;wts_out_1112 by mtchl, on Flickr&quot;&gt;&lt;img src=&quot;http://farm4.static.flickr.com/3177/3037900362_64b4e101bd.jpg&quot; alt=&quot;wts_out_1112&quot; border=&quot;0&quot; height=&quot;222&quot; width=&quot;500&quot; /&gt;&lt;/a&gt;&lt;br /&gt;The recent &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/10/dorkbot-cbr-at-manuka-ccas.html&quot;&gt;Dorkbot show&lt;/a&gt; seemed to go off nicely - it was great to be part of such a strong show of local work (some &lt;a href=&quot;http://www.flickr.com/photos/mtchl/tags/dorkbot/&quot;&gt;documentation&lt;/a&gt;). 
I showed some prints from &lt;a style=&quot;font-style: italic;&quot; href=&quot;http://teemingvoid.blogspot.com/2008/09/limits-to-growth.html&quot;&gt;Limits to Growth&lt;/a&gt;, as well as a more experimental process piece, &lt;span style=&quot;font-style: italic;&quot;&gt;Watching the Street&lt;/span&gt; - a (sub)urban remake of &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/07/image-data-and-environment-notes-on.html&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;Watching the Sky&lt;/span&gt;&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDF5oce8b4znq2DmqyybxNWsUvXzW2WBBRCP71XFz1oMEAXD8Cf8yLCa2quh0ILmo1irVAE44igW1XVxs46PyNB55-v18KT3xrYn0FIZZjfC-2ADW8wiugrSAjUd1P9c_vgbcSvw/s1600-h/manuka_street_composite.jpg&quot;&gt;&lt;img style=&quot;cursor: pointer; width: 400px; height: 321px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDF5oce8b4znq2DmqyybxNWsUvXzW2WBBRCP71XFz1oMEAXD8Cf8yLCa2quh0ILmo1irVAE44igW1XVxs46PyNB55-v18KT3xrYn0FIZZjfC-2ADW8wiugrSAjUd1P9c_vgbcSvw/s400/manuka_street_composite.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5273578165770865186&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;Credit to &lt;a href=&quot;http://njmcgee.tumblr.com/&quot;&gt;Nathan McGinness&lt;/a&gt; for the suggestion: use the same time-lapse / slit-scan technique to image change in an urban environment. Technically, the setup was fairly straightforward. Instead of a digital stills camera I used a webcam (in portrait orientation), and wrote a simple Processing script to save stills at one-minute intervals, while extracting and compiling one-pixel slices into 24-hour composites. The webcam was installed in a window box on the gallery street front, with a view across the road, under a street tree, to one of Manuka&#39;s low-rise shopping arcades (above). 
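The compositing logic described above is simple enough to sketch. A hypothetical Python version (the original was a Processing script; names here are illustrative), where each captured frame contributes a one-pixel-wide column to the growing composite:

```python
def extract_slice(frame, x):
    """One-pixel-wide vertical slice at column x
    (frame is a list of rows, each row a list of pixels)."""
    return [row[x] for row in frame]

def composite(frames, x):
    """Compile one slice per frame into a day-long composite,
    column t holding the slice captured at minute t."""
    slices = [extract_slice(f, x) for f in frames]
    height = len(frames[0])
    # transpose: slice t becomes column t of the output image
    return [[s[y] for s in slices] for y in range(height)]
```

At one frame per minute, a full day yields a 1440-column image, one column per capture.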
I also attached a printer to the installed rig, so that a new composite could be produced and pinned to the wall each day. So here, some of the resulting images, and a bit of commentary.&lt;br /&gt;&lt;br /&gt;The image-gathering process got off to a rocky start. After a few hours, the webcam came unstuck from the side of the window-box, and lay forlornly on its side for the next 48 hours (here&#39;s what that &lt;a href=&quot;http://www.flickr.com/photos/mtchl/3015263866/&quot;&gt;looks like&lt;/a&gt;). I gaffed it back in place just before the opening, and restarted the capture in time to catch some gallery-goers &lt;a href=&quot;http://www.flickr.com/photos/mtchl/3014433585/&quot;&gt;loitering around&lt;/a&gt; out the front.&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.flickr.com/photos/mtchl/3014435619/&quot; title=&quot;wts_out_1107 by mtchl, on Flickr&quot;&gt;&lt;img src=&quot;http://farm4.static.flickr.com/3239/3014435619_97bf81455c.jpg&quot; alt=&quot;wts_out_1107&quot; border=&quot;0&quot; height=&quot;222&quot; width=&quot;500&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;a href=&quot;http://www.flickr.com/photos/mtchl/3014437787/&quot; title=&quot;wts_out_1108 by mtchl, on Flickr&quot;&gt;&lt;img src=&quot;http://farm4.static.flickr.com/3017/3014437787_264779a2f0.jpg&quot; alt=&quot;wts_out_1108&quot; border=&quot;0&quot; height=&quot;222&quot; width=&quot;500&quot; /&gt;&lt;/a&gt;&lt;br /&gt;These two are Friday the 7th and Saturday the 8th of November, the first two full-day composites. Those striped rectangular chunks around mid-frame are cars, parked in the 30-minute loading zone across the road. Some stay for a few minutes, a couple for what looks like an hour. Of course on the Saturday, the loading zone doesn&#39;t operate, and there&#39;s a single car parked in it from mid-morning to mid-afternoon. 
The single-pixel vertical shards give an indication of passing car and pedestrian traffic.&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.flickr.com/photos/mtchl/3037899392/&quot; title=&quot;wts_out_1109 by mtchl, on Flickr&quot;&gt;&lt;img src=&quot;http://farm4.static.flickr.com/3053/3037899392_cf88a3232a.jpg&quot; alt=&quot;wts_out_1109&quot; border=&quot;0&quot; height=&quot;222&quot; width=&quot;500&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;a href=&quot;http://www.flickr.com/photos/mtchl/3037064185/&quot; title=&quot;wts_out_1114 by mtchl, on Flickr&quot;&gt;&lt;img src=&quot;http://farm4.static.flickr.com/3144/3037064185_0d46f426b2.jpg&quot; alt=&quot;wts_out_1114&quot; border=&quot;0&quot; height=&quot;222&quot; width=&quot;500&quot; /&gt;&lt;/a&gt;&lt;br /&gt;A quiet, sunny Sunday the 9th; the form hinted at on the 8th reveals itself as the shadow of the big plane tree, creeping across the footpath. Then the following Friday the 14th. It&#39;s all happening: lots of car and pedestrian traffic, changes in sunlight, and what looks like an afternoon breeze in the foliage as well. The dominant, bluish horizontal stripe in all these images is the neon sign on the shopping centre - which runs all night. The orange rectangle that extends into the evening is the interior light of a shop - which you&#39;ll notice switches off at slightly different times each night.&lt;br /&gt;&lt;br /&gt;As in &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/07/image-data-and-environment-notes-on.html&quot;&gt;Watching the Sky&lt;/a&gt;, I&#39;m persisting in reading these as visualisations of the environment, as well as digital images in themselves. I&#39;m struck by how this simple, indiscriminate process reveals both expected and unexpected patterns, and continues to provoke new questions. This despite, or I would argue because of, its openness to multiple material / temporal systems. 
In an interesting bit of synchronicity, I was teaching in the UTS &lt;a href=&quot;http://streetasplatform.wordpress.com/&quot;&gt;Street as Platform&lt;/a&gt; masterclass with Dan Hill (more on that soon) while this piece was running. Could a simple visualisation process like this function &quot;informationally&quot;, as it were; to help answer real questions about a very specific slice of urban environment, in near-real time? More interesting for me, could it function in that way without prescribing the question in advance - that is, could it support an open-ended process of exploration and interpretation? I&#39;m planning to build an interactive version of this piece, to try out these ideas. In these static visualisations there&#39;s a huge amount of data missing: I set the slice point more-or-less arbitrarily, so there are 479 other potentially interesting slices to browse. It would be nice to be able to change the slice point dynamically, as well as navigating through the source images. I notice that Processing 1.0 (yay!) now supports threaded loading of images: could come in handy. 
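The interactive idea reduces to parameterising that fixed slice position. A rough sketch, again in hypothetical Python rather than Processing: with every source still retained, the day composite for any column can be rebuilt on demand, so a viewer could scrub the slice point across the frame:

```python
def all_composites(frames):
    """Lazily rebuild the day-long composite for every possible
    slice position (one composite per column of the source frame),
    so a viewer could scrub through them."""
    width = len(frames[0][0])
    height = len(frames[0])
    for x in range(width):
        # same transposition as the static piece, just
        # parameterised on the slice position x
        yield [[f[y][x] for f in frames] for y in range(height)]
```

On a 480-pixel-wide portrait frame this yields all 480 candidate composites; an interactive version would presumably build them lazily as the slice point moves, loading source stills in a background thread.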
Meanwhile, the full set of composite images are up on &lt;a href=&quot;http://www.flickr.com/photos/mtchl/tags/watchingthestreet/&quot;&gt;Flickr&lt;/a&gt;.</description><link>http://teemingvoid.blogspot.com/2008/11/watching-street.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="http://farm4.static.flickr.com/3177/3037900362_64b4e101bd_t.jpg" height="72" width="72"/><thr:total>5</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-6972810442441428471</guid><pubDate>Mon, 27 Oct 2008 03:36:00 +0000</pubDate><atom:updated>2008-10-28T07:05:49.218+11:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">advertising</category><category domain="http://www.blogger.com/atom/ns#">canberra</category><category domain="http://www.blogger.com/atom/ns#">exhibition</category><title>Dorkbot CBR at Manuka CCAS</title><description>&lt;div style=&quot;text-align: center;&quot;&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLB2h44SoI1U9iE3YwICRSLDKg305CCx1UVjQfttLi6r1gVrCYBY61eNzVvcjoFUP_Czpzi117c40cldvudduGoiXFbRi4LnQ1iRwHwdL_IEfhZr1367Mh5eLb42qF_dyfeMV0rg/s1600-h/dorkbot_flyer_30.jpg&quot;&gt;&lt;img style=&quot;cursor: pointer; width: 437px; height: 471px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLB2h44SoI1U9iE3YwICRSLDKg305CCx1UVjQfttLi6r1gVrCYBY61eNzVvcjoFUP_Czpzi117c40cldvudduGoiXFbRi4LnQ1iRwHwdL_IEfhZr1367Mh5eLb42qF_dyfeMV0rg/s800/dorkbot_flyer_30.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5261926642733135410&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;/div&gt;&lt;a href=&quot;http://dorkbotcbr.wordpress.com/&quot;&gt;Dorkbot Canberra&lt;/a&gt;&#39;s inaugural group show opens Thursday November 6th at Canberra Contemporary Artspace Manuka. 
It&#39;s a great, super diverse lineup, including wearables, data art, solar power, generative grunge, drawing machines and audiovisuals. I&#39;ll be showing a big crop of prints from &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/09/limits-to-growth.html&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;Limits to Growth&lt;/span&gt;&lt;/a&gt;, as well as doing a kind of urban version of &lt;a href=&quot;http://teemingvoid.blogspot.com/2008/07/image-data-and-environment-notes-on.html&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;Watching the Sky&lt;/span&gt;&lt;/a&gt;, gathering images from the street. Here&#39;s the full &lt;a href=&quot;http://dorkbotcbr.wordpress.com/2008/10/26/dorkbot-cbr-exhibition-opens-nov-6th/&quot;&gt;press release&lt;/a&gt;.</description><link>http://teemingvoid.blogspot.com/2008/10/dorkbot-cbr-at-manuka-ccas.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLB2h44SoI1U9iE3YwICRSLDKg305CCx1UVjQfttLi6r1gVrCYBY61eNzVvcjoFUP_Czpzi117c40cldvudduGoiXFbRi4LnQ1iRwHwdL_IEfhZr1367Mh5eLb42qF_dyfeMV0rg/s72-c/dorkbot_flyer_30.jpg" height="72" width="72"/><thr:total>1</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7686436.post-2541680713933580211</guid><pubDate>Wed, 01 Oct 2008 23:58:00 +0000</pubDate><atom:updated>2008-10-02T13:34:17.610+10:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">audiovisual</category><category domain="http://www.blogger.com/atom/ns#">inframedia</category><category domain="http://www.blogger.com/atom/ns#">neuroaesthetics</category><category domain="http://www.blogger.com/atom/ns#">synaesthesia</category><category domain="http://www.blogger.com/atom/ns#">theory</category><title>Synesthesia and Cross-Modality in Contemporary Audiovisuals</title><description>&lt;span style=&quot;font-style: italic;&quot;&gt;Though 
written about a year ago, this essay has just been published in &lt;/span&gt;&lt;span&gt;Senses and Society&lt;/span&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;. It&#39;s related to the &lt;/span&gt;&lt;span&gt;&lt;span&gt;Synchresis&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;font-style: italic;&quot;&gt; project posted &lt;a href=&quot;http://teemingvoid.blogspot.com/2007/12/synchresis-australian-audiovisuals.html&quot;&gt;earlier&lt;/a&gt;, but makes a more rigorous investigation of synaesthesia, as it is (so often) applied to fused or algorithmic audiovisuals. After a quick tour through the history of synaesthesia in the arts, it uses some nifty perceptual neuroscience to argue for an alternative model, of contemporary audiovisuals as cross-modal objects that reveal the space of relation between modalities - the map. It takes work by Andrew Gadow (below) and Robin Fox as case studies, but also touches on Oskar Fischinger, Robert Hodgin, Norman McLaren and others. The version here has plenty of pics and vids; for a more paper-based experience, grab the &lt;a href=&quot;http://creative.canberra.edu.au/mitchell/papers/Synesthesia_Crossmodality.pdf&quot;&gt;pdf&lt;/a&gt; (and please use the print version for any citations). 
Oh and pardon the American spellings here - journal style.&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiD3RbhWRBupYHSKRnYL76uHda-noDy8IkqwA78U9PhCQmNf05iPNwQu3apfNMYvMc74gEL9DqEduxsCuVc74iD3OAmlFXriVs3OBtLXU5Ua0mWQwBNUM5wLv9xGvJp9yIUzrDIiQ/s1600-h/gadow_4up.jpg&quot;&gt;&lt;img style=&quot;cursor: pointer; width: 450px; height: 338px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiD3RbhWRBupYHSKRnYL76uHda-noDy8IkqwA78U9PhCQmNf05iPNwQu3apfNMYvMc74gEL9DqEduxsCuVc74iD3OAmlFXriVs3OBtLXU5Ua0mWQwBNUM5wLv9xGvJp9yIUzrDIiQ/s400/gadow_4up.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5252381419103318834&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;span id=&quot;fullpost&quot;&gt;&lt;br /&gt;In the age of ubiquitous digital media, synesthesia is everywhere. In human, neurological form, it is rare: for perhaps three in a hundred people, a stimulus in one sensory modality automatically induces a sensation in another. Auditory-to-visual synesthesia, or “colored hearing” is much rarer still. Yet now this phenomenon is realised, apparently, inside every digital music player, on VJ screens in every club, in robot lightshows. On these screens sound is transformed into visual pattern and form instantly and automatically; an exotic perceptual phenomenon becomes a technically mediated commonplace.&lt;br /&gt;&lt;br /&gt;In fact digital synesthesia is a trope that occurs in the production and use of mainstream digital media,  as well as the media arts. Computer users find audio visualisers built in to their music players; as this software shows, audiovisual relations can now be reduced to an algorithm, a formal procedure that interprets (sound) and emits (image) data; though in this case the results are mostly mundane psychedelia. 
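That formal procedure can be made concrete. A toy visualiser in Python (a naive DFT over a window of samples; real players use optimised FFTs, and the mapping to graphics is far more elaborate): compute the magnitude spectrum, then scale it to bar heights for drawing:

```python
import cmath, math

def magnitude_spectrum(samples):
    """Naive DFT: magnitude of each frequency bin for a
    short window of audio samples."""
    n = len(samples)
    out = []
    for k in range(n // 2):
        acc = 0j
        for t in range(n):
            acc += samples[t] * cmath.exp(-2j * math.pi * k * t / n)
        out.append(abs(acc))
    return out

def bar_heights(samples, max_height):
    """Map the spectrum to bar heights for drawing: interpret
    sound, emit image data."""
    spec = magnitude_spectrum(samples)
    peak = max(spec) or 1.0
    return [round(max_height * m / peak) for m in spec]
```

Feeding it a pure sine wave produces a single tall bar at the corresponding frequency bin; the visual vocabulary of most bundled visualisers starts from exactly this kind of mapping.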
In some recent music videos, audio visualisations are integrated into the narrative and performative conventions of the genre; in Justin Timberlake’s &lt;a href=&quot;http://www.youtube.com/watch?v=GIYXHLlxD8U&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;Lovestoned&lt;/span&gt;&lt;/a&gt; video (2007) the singer’s image is constructed from the flickering bands of a visualised audio spectrum. Here a technically guaranteed unity of sound and image is literally reinscribed onto the performing artist. In contemporary media arts practice the same techniques – computational analysis of sound driving generated visual elements – are widespread, and their aesthetics more diverse. In custom-coded audio visualisations artists such as &lt;a href=&quot;http://www.unlekker.net/proj/cronica021/&quot;&gt;Marius Watz&lt;/a&gt; (2005) and &lt;a href=&quot;http://www.flight404.com/blog/?p=52&quot;&gt;Robert Hodgin&lt;/a&gt; (2007) (below) construct visualisations tuned to specific soundtracks; the automatism of digital synesthesia animates specific, constructed worlds of form and image. 
The algorithm becomes an endlessly variable and dynamic intermediary between sound and image.&lt;br /&gt;&lt;br /&gt;&lt;div align=&quot;center&quot;&gt;&lt;object height=&quot;250&quot; width=&quot;400&quot;&gt; &lt;param name=&quot;allowfullscreen&quot; value=&quot;true&quot;&gt; &lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot;&gt; &lt;param name=&quot;movie&quot; value=&quot;http://vimeo.com/moogaloop.swf?clip_id=169308&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=&amp;amp;fullscreen=1&quot;&gt; &lt;embed src=&quot;http://vimeo.com/moogaloop.swf?clip_id=169308&amp;amp;server=vimeo.com&amp;amp;show_title=1&amp;amp;show_byline=1&amp;amp;show_portrait=0&amp;amp;color=&amp;amp;fullscreen=1&quot; type=&quot;application/x-shockwave-flash&quot; allowfullscreen=&quot;true&quot; allowscriptaccess=&quot;always&quot; height=&quot;250&quot; width=&quot;400&quot;&gt;&lt;/embed&gt;&lt;/object&gt;&lt;/div&gt;&lt;br /&gt;In a distinct but related approach, some media artists have opted instead to simplify or reduce that audiovisual relation, often bypassing computation altogether. In Carsten Nicolai’s &lt;span style=&quot;font-style: italic;&quot;&gt;Telefunken&lt;/span&gt; works (2000, 2004) the stereo output of an audio CD player is connected to the audio and video inputs of a television screen; what is heard as synthetic tones and noisy drones is seen on the screen as patterns of monochrome form and line. 
In an approach I will refer to here as transcoding, sound and image are linked through a direct transfer of signal, a simple cross-wiring.&lt;br /&gt;&lt;br /&gt;&lt;div align=&quot;center&quot;&gt;&lt;object height=&quot;344&quot; width=&quot;425&quot;&gt;&lt;param name=&quot;movie&quot; value=&quot;http://www.youtube.com/v/M4pCJ2znTCA&amp;amp;color1=0xb1b1b1&amp;amp;color2=0xcfcfcf&amp;amp;fs=1&quot;&gt;&lt;param name=&quot;allowFullScreen&quot; value=&quot;true&quot;&gt;&lt;embed src=&quot;http://www.youtube.com/v/M4pCJ2znTCA&amp;amp;color1=0xb1b1b1&amp;amp;color2=0xcfcfcf&amp;amp;fs=1&quot; type=&quot;application/x-shockwave-flash&quot; allowfullscreen=&quot;true&quot; height=&quot;344&quot; width=&quot;425&quot;&gt;&lt;/embed&gt;&lt;/object&gt;&lt;/div&gt;&lt;br /&gt;Australian artist Robin Fox plugs audio from a custom-built digital synthesiser into an oscilloscope; in the resulting hybrid instrument, Fox explores a territory in which signal is simultaneously heard and seen; every sound is a form in motion, every form a sound (above). The connection, the cross-wiring of sound to image, literally manifests the sensory cross-over of synesthesia; more, the work itself seems to somehow induce synesthetic experience. The correspondence between sound and image is immediate, agile and intense; the audiovisual relation is completely consistent, somehow self-evident, yet continually surprising. There’s a feeling of something like revelation; one reviewer &lt;a href=&quot;http://www.cyclicdefrost.com/review.php?review=795&quot;&gt;describes&lt;/a&gt; Fox’s &lt;a href=&quot;http://synrecords.blogspot.com/2007/08/syn012-robin-fox-backscatter-dvd.html&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;Backscatter&lt;/span&gt;&lt;/a&gt; DVD (2005) as “mesmerizing” and “overwhelming,” and hints at a sense of “greater significance or higher purpose” (Baker Fish 2005). 
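The oscilloscope setup is the purest case of this transcoding: the left channel drives horizontal deflection and the right channel vertical, so every stereo signal is already an image. A small Python sketch of that mapping (illustrative only; Fox works with analog hardware, not code):

```python
import math

def lissajous_trace(freq_left, freq_right, n_points=1000):
    """Transcode a stereo pair directly to screen coordinates:
    left channel drives x, right channel drives y, exactly as
    an X-Y oscilloscope wired to a synth output would."""
    pts = []
    for i in range(n_points):
        t = i / n_points
        x = math.sin(2 * math.pi * freq_left * t)
        y = math.sin(2 * math.pi * freq_right * t)
        pts.append((x, y))
    return pts
```

Two sine channels at frequencies in a small integer ratio trace the classic Lissajous figures; noisier signals smear into the kinds of unstable forms seen in Backscatter.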
Andrew Gadow’s work approaches the same relationship from the other side; working with an old video synthesiser, he transfers its image signal  directly into audio. The scan-line structure of the video signal becomes audible as modulations of a 50 Hz hum; flickering, disintegrating visual textures become abrasive but intricately detailed buzzsaw audio. Again the subjective experience can be powerful, a visceral sense of force or encounter; the audiovisual coupling is so close that it seems to disappear, distinct modalities fuse into raw sensation.&lt;br /&gt;&lt;br /&gt;Synesthesia is widely used as an analogy around this work. The analogy provides a mapping that aligns subjective sensation with audiovisual signals; it maps perceptual or even neurological structures onto technical structures. The analogy also plays another role, foregrounding sensation in the reception of the artworks; proposing to operate, for the subject, at the level of direct sensation. Finally, synesthesia also connects this contemporary work with a historical artistic tradition. The new automatic or transcoded fusion of sound and image seems to mark the culmination of a practice spanning music, painting, film and electronic media and aspiring, as Jeremy Strick writes, to the ideal of synesthesia as “the unity of the senses, and, by extension, the arts.” (Strick 2005: 15) The 2005 Visual Music exhibition, curated by Strick, documents this tradition in detail, as well as making a bid for its continuation into the present:&lt;br /&gt;&lt;blockquote&gt;In digital media ... music and visual art are ... created out of the same stuff, bits of electronic information ... . [T]he aspiration to  novel experience created by the compounding of sensation and association has never been more possible. 
(ibid.)&lt;/blockquote&gt;This paper’s main aim is to test this analogy, and the related historical drive that Strick suggests; to consider if, and how, such practice can be thought of as synesthetic, and to examine structural parallels between synesthesia as a perceptual and neurological phenomenon, and the automatic or transcoded linking of audio and visual media. Following the tradition of artistic synesthesia that Strick invokes, the approach here is to provisionally ignore the glaring gap in this analogy, between subjective sensation and objective, technical artefact. Scientific work in perception and neuropsychology is drawn in for a more detailed account of synesthesia; but it also offers an alternative model for this practice, based on theories of cross-modal interactions in normal perception. Close correlations between sound and image are, after all, an everyday perceptual occurrence. From this perspective, tightly correlated audiovisuals direct us towards the abstract structures that are their generative materials: signal, as distinct from image or sound; and the map, the pattern of correlation between signals in different domains. Although artists such as Fox and Gadow use obsolete, analog technologies, their work is a sensory manifestation of these characteristically contemporary abstractions.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Synesthesia&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;Scientists studying synesthesia define it as occurring “when stimulation of one sensory modality automatically triggers perception in a second modality” (Harrison and Baron-Cohen 1996: 3). There are many documented forms of synesthesia; stimuli such as numbers, letters, words, days of the week, and musical tones may trigger perceived color and shape; taste, smell and pain can also trigger perceptions of shape. 
Estimates of the prevalence of synesthesia vary widely, between 1 in 20 and 1 in 20,000; one recent study found a prevalence of around 3% and showed that calendar-color and letter- and number-color forms occur most often, while audiovisual synesthesia or “colored hearing” is comparatively rare (Simner et al. 2006).&lt;br /&gt;&lt;br /&gt;After long being debunked or treated as a curiosity, synesthesia has attracted increasing scientific attention, and validation, in recent years. Neuropsychologist Richard Cytowic undertook one of the first modern studies (Cytowic 1989). As well as a basic validation – finding that synesthesia is a real phenomenon with a neurological basis – Cytowic proposed a set of diagnostic and clinical features of the condition (Cytowic 1996: 23-31). He found that synesthesia is “involuntary but elicited,” an automatic perceptual experience that cannot be suppressed or controlled. Synesthetic perceptions are “durable and generic,” meaning that an individual’s cross-sensory connections do not change over their lifetime, and that synesthetic perceptions are elementary and general, rather than “elaborated” – for example colors and simple shapes, rather than a detailed mental picture. Cytowic also points out that synesthetic perceptions are unusually memorable: some synesthetes use their triggered percepts as an index that aids their recall of the evoking stimulus. 
Moreover, Cytowic states, synesthesia is an emotional experience, “accompanied by a sense of certitude (the ‘this is it’ feeling)” that he links to William James’ description of religious ecstasy, and in particular the affect of noesis, “knowledge that is experienced directly, an illumination that is accompanied by a feeling of certitude.” It is in part this affective dimension that leads Cytowic to propose a linkage between synesthesia and the limbic brain – associated with emotion and a sense of “salience.”&lt;br /&gt;&lt;br /&gt;More recent science has continued to investigate the neurology and psychology of synesthesia, using modern imaging techniques to show activation in the anatomy of the synesthetic brain, as well as behavioural experiments that seek out the parameters of synesthetic experience. Recent work has confirmed Cytowic’s finding that synesthesia is involuntary and perceptually real, though the results also suggest that there is significant variation between synesthetic individuals (see for example Hubbard and Ramachandran 2005). Discussion centers on where synesthesia occurs in the notional chain of perceptual processing; for a minority of synesthetes it seems to occur early in the chain, before cognitive processes such as attention; for the rest it seems to occur later; Ramachandran and Hubbard label these forms “lower” and “higher” (2001: 14). There is general consensus that synesthesia has a neural basis in the form of increased connectivity between normally separate neural regions or modules (contrary to Cytowic’s limbic model), though the connective mechanism and architecture are debated. Some, including Ramachandran and Hubbard, propose that this connectivity is a result of defective “pruning” of neural connections, suggesting that synesthetic cross-wiring is a normal early developmental stage. 
The notion of synesthesia as common, underlying or originary is supported by the correlations between synesthetic and “normal” cross-sensory associations; despite individual differences in color-tone mapping, non-synesthetes and those with “colored hearing” make similar mappings between pitch and lightness (Marks 1996: 72). Similarly Ramachandran and Hubbard (2001:19) cite the consistency of an (albeit simple) mapping between shape and sound to support the same point: asked to link two shapes, one round, one spiky, with two names, bouba and kiki, subjects overwhelmingly associate kiki with the angular form and bouba with the rounded one. The authors continue, in one of the more expansive examples of synesthesia science, to propose links between this “normal” synesthesia and the angular gyrus, an anatomical region associated with cross-modal association as well as numeracy, the neurology of metaphor, emotion, art and the evolution of language. In this formulation synesthesia is an extreme case that offers clues to the neurology of normal – and significant – human abilities to associate and synthesise disparate sensations and concepts.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Synesthesia in the Arts: Models and Maps&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;The history of synesthesia in the modern arts is well documented, and will not be recounted in detail here. More important for this argument is a sense of the major strains or variants of the concept, and their creative implications. This lineage has been traced to Romantic and Symbolist interests in the correspondence of the senses; poems of Rimbaud and Baudelaire correlated letters, colors, smells and sounds (see for example Cook 1998:25). In what Judith Zilczer (2005: 26) terms the mystical strain of artistic synesthesia, these sensory correspondences were held to refer to a higher, unitary reality, informed by Theosophy and Romantic philosophy. 
Kandinsky, reputedly a synesthete with “colored hearing,” is the best known of a group whose painterly abstraction was informed by musical and synesthetic analogies. In Kandinsky’s &lt;span style=&quot;font-style: italic;&quot;&gt;Concerning the Spiritual in Art&lt;/span&gt; (1977: 25) sensation, and especially color, can set the soul vibrating like a musical instrument: his aim was a form of absolute or “nonobjective” visual art, comparable to music. Cook (1998: 46) describes this model, informed by Goethe’s philosophy, as triangular. Both sound and color derive from the spiritual, or higher vibration, at the apex; thus sound and color have no inherent correspondence but “correspond to one another in so far as they embody the same ultimate meaning.” In a second wave of visual music between the wars, artists such as Paul Klee, Man Ray, Georgia O&#39;Keeffe and Arthur Dove adopted more concrete and structural models of correspondence, attempting to map harmony, counterpoint and rhythm into the abstract picture plane (see for example Zilczer 2005: 52-67). Synesthesia itself plays a shifting role in this context. Kandinsky and composer Alexander Scriabin seem to have experienced it, while many other artists were inspired by, or in some cases literally borrowed, synesthetic correspondences. Discussing the influence of Kandinsky’s note-color correspondences on Schoenberg, Cook (1998: 49) proposes a “cultural synesthesia” – where the idea of sensory correspondence can carry a cultural value independent of its actual experience.&lt;br /&gt;&lt;br /&gt;In fact cultural synesthesia – evoked, suggested, implied or idealised synesthesia – dominates the visual music tradition; there are very few instances of actual, spontaneous, automatic audiovisual correspondences. In the work of Messiaen, Scriabin and perhaps Schoenberg (via Kandinsky), synesthetic experience formed the basis for a systematised set of pitch-color correspondences, though even these are not straightforward. 
The correspondences are different for each composer, as we would expect based on recent science. Moreover each is conditioned by what Cook (1998:46) argues is a mixture of subjective and cultural factors. Any correspondence between the continuous color spectrum and the discrete values of the Western twelve-tone scale is dubious – though these correspondences flourished in the early twentieth century in the “color organs” of Rimington and others (see for example Cook 1998:37 and &lt;a href=&quot;http://www.cabinetmagazine.org/issues/22/peel.php&quot;&gt;Peel&lt;/a&gt; 2006). Later emblematic practitioners of the visual music tradition, John and James Whitney, used tightly composed but again ultimately arbitrary relations between sound and vision. If, as Strick argues, this creative tradition aspires towards synesthesia, when it comes to practically manifesting that sensory relation it founders on the problem of the map, the pattern of correspondences. Of all possible relations between sound and vision, what is it that makes one different from, or preferable to, another? While recent science suggests some underlying perceptual commonalities, the devil, and the aesthetic, is in the detail.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;http://www.oskarfischinger.org/Sounding.htm&quot;&gt;&lt;img style=&quot;cursor: pointer;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIv5OkAvQ31Kk4NGcHnS3PAcmchmB3EM0rvVgZO6PqE_EaL-KrCr9tb54K9jOlXp7gzXZtIjUu6pkjunFuJY8kET9XTWxi84hyphenhyphenkpmMLS59Te7eyOH40_Q2kaqqJfup5CVoK44hRw/s400/OSsmstripsc_crop.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5252379621168876370&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;Animator Oskar Fischinger provides a near-precedent for transcoded audiovisuals, and demonstrates one possible solution to the question of the map. 
Fischinger’s &lt;a href=&quot;http://www.oskarfischinger.org/Sounding.htm&quot;&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;Ornament Sound&lt;/span&gt;&lt;/a&gt; experiments of 1932 (above) explored the double identity of the optical film soundtrack, printing regular visual patterns into the 3mm-wide sound strip at the edge of the frame, enabling them to be automatically rendered as sound. Fischinger (1932) emphasised the potential of this technique for composers: “control of every fine gradation and nuance is granted to the music-painting artist.” This form also promised a newfound “definitive” control over performance: “his creation, his work, can speak for itself directly through the film projector.” Fischinger also recognises the visual interest in recorded sound waves; and although he anticipated their use in conjunction with animation, he did not envisage the “sounding ornaments” as visual content in themselves. Nonetheless, Fischinger had found a space of audiovisual correspondence that was preexisting and “definitive,” yet seemingly had limitless creative potential. 
The later work of animator Norman McLaren developed Fischinger’s techniques, synchronising hand-drawn optical soundtracks with  animation in &lt;span style=&quot;font-style: italic;&quot;&gt;Dots&lt;/span&gt; (1940), and finally using the synthesised optical soundtrack as synchronised visual source material in &lt;span style=&quot;font-style: italic;&quot;&gt;Synchromy&lt;/span&gt; (1971) (below).&lt;br /&gt;&lt;br /&gt;&lt;div align=&quot;center&quot;&gt;&lt;object height=&quot;344&quot; width=&quot;425&quot;&gt;&lt;param name=&quot;movie&quot; value=&quot;http://www.youtube.com/v/Jqz_tx1-xd4&amp;amp;hl=en&amp;amp;fs=1&quot;&gt;&lt;param name=&quot;allowFullScreen&quot; value=&quot;true&quot;&gt;&lt;embed src=&quot;http://www.youtube.com/v/Jqz_tx1-xd4&amp;amp;hl=en&amp;amp;fs=1&quot; type=&quot;application/x-shockwave-flash&quot; allowfullscreen=&quot;true&quot; height=&quot;344&quot; width=&quot;425&quot;&gt;&lt;/embed&gt;&lt;/object&gt;&lt;/div&gt;&lt;br /&gt;Contemporary transcoded audiovisuals realise Fischinger’s experiments by similarly bypassing, or rather abdicating, the question of the map. This is not to say that the map disappears in an unmediated or inherent audiovisual connection. Instead, for Robin Fox and Andrew Gadow as well as Fischinger, the map is found, rather than constructed; it is embedded in the medium.  Fox plugs his laptop into an oscilloscope, which maps the left and right channels of the audio signal into the x and y axes of its display. This audiovisual relation is in a sense a readymade, an existing cultural/technical artefact. Its process is literally hardwired, embodied in the analog electronics of the scope, just as Fischinger’s was in the optical technology of film.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Fused AV and Synesthesia&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;Do transcoded audiovisuals then realise the synesthetic ideal, or literalise the analogy? We can draw some correlations. 
To recap, the current scientific consensus is that synesthetic perceptions are real, automatic and involuntary, and caused by neural cross-connections at some level of the perceptual system. The cross-mappings of synesthetic perceptions are highly variable from one individual to another, but highly consistent for the individual. The transcoding approach of artists like Fox and Gadow seems fairly close: in Fox’s oscilloscope work for example, images are created “automatically” as Fox feeds audio to the oscilloscope, cross-wiring audio to vision; Fox uses the oscilloscope’s hardwired audiovisual map, which is fixed and consistent; but that mapping is different to, for example, Gadow’s equally automatic sound-to-image mapping, based on the interchange of analog video and audio signals. Even the visual aesthetics of these works could be likened to reports of colored hearing: in Cytowic’s terms these are not “elaborated” percepts, but simple, abstract elements.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEictVEpj-l45R4br-TbFRAmfY1mS4TytwaP0yyAviOzNPGz4EKyEiCaCn9uqwukUOpvxaUi5ssPycrt-P02HooRUpghobBINplXCtowqmWOue8BJSZGT_ZD42-AhNMB7v1QkyYPQA/s1600-h/fox_photosynthesis_4up.jpg&quot;&gt;&lt;img style=&quot;cursor: pointer; width: 450px; height: 338px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEictVEpj-l45R4br-TbFRAmfY1mS4TytwaP0yyAviOzNPGz4EKyEiCaCn9uqwukUOpvxaUi5ssPycrt-P02HooRUpghobBINplXCtowqmWOue8BJSZGT_ZD42-AhNMB7v1QkyYPQA/s400/fox_photosynthesis_4up.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5252381421039954834&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;The synesthetic affect that Cytowic’s study identifies is also suggestive. Fused audiovisuals can evoke (for some at least) a similar sense of revelation or noesis. 
Fox’s oscilloscope works show us something that feels both self-evidently “right” and surprising or unimaginable; the primal phosphorescent dot shows us its universe, a set of relations that are manifestly coherent and consistent, but whose implications are unforeseeable. Fox’s compositional structure emphasises the process of revelation at times: &lt;span style=&quot;font-style: italic;&quot;&gt;Photosynthesis&lt;/span&gt;&lt;span style=&quot;font-style: italic;&quot;&gt; (AOR)&lt;/span&gt; (above), the opening track on his &lt;span style=&quot;font-style: italic;&quot;&gt;Backscatter&lt;/span&gt; release, offers an initially gentle introduction, as the single point of the trace is buffeted by rhythmic subsonic clicks before slowly unfurling into harmonic pattern; but by the end of that track wave after wave of complex, nested forms have emerged and co-modulated; each point on the path is another noetic moment yet each is consistent and coextensive with the others. &lt;span style=&quot;font-style: italic;&quot;&gt;Mandala I&lt;/span&gt;, following, demonstrates almost the opposite approach, as Fox’s micro-switched digital twitches call up flickering variants on the circular carrier wave; to push the cosmological analogy, this is some kind of faster than light travel – we traverse many places at once – but again there is a revelatory quality as we witness accumulating relations, both momentary – between each sound, its corresponding form and movement – and sequential, between each sound/form/movement and the next, and the next.&lt;br /&gt;&lt;br /&gt;How far can this line go, though, before it falls into the yawning gap in the analogy? Audiovisual works are artefacts; objects of perception, not perceptions. To put it bluntly, synesthesia, by definition, occurs in the perceptual system of a synesthete, not in the crossed connections of a video synth. Once again, we can use the gap as a provocation, rather than an obstacle. 
One response is to think of these works not as replicating human neurology, but rather as something else. “Artificial synesthesia” is the term used by Dutch neuroscientist &lt;a href=&quot;http://www.seeingwithsound.com/asynesth.htm&quot;&gt;Peter Meijer&lt;/a&gt; (n.d.) to describe his work on sensory substitution; his &lt;a href=&quot;http://www.seeingwithsound.com/&quot;&gt;vOICe&lt;/a&gt; system transcodes video from a small camera into synthetic audio, in an attempt to use sound to provide visual information to those with little or no vision. In Meijer’s words, “we are interested in forms of learned synesthesia (acquired synesthesia) that might result from machine-generated crossmodal mappings.” Among other things Meijer’s work suggests that perception is not a fixed set of channels, but a reconfigurable network; over time, blind users of the vOICe seem to integrate image transcoded into sound, as functional vision. A recent paper shows that the lateral-occipital tactile-visual area of the brain, normally associated with the tactile and visual perception of shape, is activated by expert vOICe users (Amedi et al 2007). Other work in the field of sensory substitution suggests that different forms of synesthesia can also be acquired: Peter König’s feelSpace belt conveys orientation through vibrating touch, providing an augmented sense that some volunteers were able to integrate over time (see &lt;a href=&quot;http://www.wired.com/wired/archive/15.04/esp_pr.html&quot;&gt;Bains&lt;/a&gt; 2007).&lt;br /&gt;&lt;br /&gt;Are transcoded audiovisuals some form of sensory substitution or artificial synesthesia? There are two important differences. Sensory substitution operates by mapping an otherwise absent modality into an existing one; absent vision into existing hearing, in the case of the vOICe, and absolute orientation into touch, in the feelSpace belt. However, for most audiences, audiovisual transcoding links two existing modalities, “channels” already in perceptual use. 
Secondly, sensory substitution involves long-term integration and interaction with the environment; we can learn new “channels” but only by feeling out and (literally) incorporating their correlations with our existing sensory matrix. There are some striking parallels, and transcoded AV certainly hints at artificial synesthesia and a rewired sensorium, but as bounded aesthetic objects these works cannot realise that perceptual transformation.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;The Pleasures of Binding&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;Correlation, key to artificial synesthesia, offers an alternative approach to the perceptual aesthetics of fused audiovisuals. At the core of transcoded and other tightly linked audiovisual forms is an experience based on a correlation between auditory and visual elements. While synesthesia offers a neurological analogy for the generation (poetics) of fused AV, this correlated quality leads into the neuroscience of perception, and thus offers a way to frame these works from the other side, the side of reception (or aesthetics).&lt;br /&gt;&lt;br /&gt;The detection of correlations in the perceptual field is a normal, and crucial, perceptual task. From an ecological perspective, correlations underpin the recognition of objects in an organism’s environment. Our perceptual systems “bind” correlated elements into groups that often correspond to objects in our physical environment. A cat hiding in the garden might initially appear as an unrelated set of visual elements – a light grey splodge here, a dark shape there. When we “see” the cat, we detect correlations between those elements that enable us to interpret them as part of an underlying object. 
The image of a hidden Dalmatian dog (below) is often used to illustrate this phenomenon.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7wktgruUJ7ofmHkkNeITu3ZJwMJY6_CthcKIxcPzEq6eeqPu7FHD6N9K-Ac-FhPpnyffKW1oB5xQFO91d_55z77nr3lzAzBHf7IcbGoWUTbcEGGWpa56Ac8gGqutf5hADdp5oNA/s1600-h/dalmatian-2.gif&quot;&gt;&lt;img style=&quot;cursor: pointer;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7wktgruUJ7ofmHkkNeITu3ZJwMJY6_CthcKIxcPzEq6eeqPu7FHD6N9K-Ac-FhPpnyffKW1oB5xQFO91d_55z77nr3lzAzBHf7IcbGoWUTbcEGGWpa56Ac8gGqutf5hADdp5oNA/s400/dalmatian-2.gif&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5252383852826022242&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;As Ramachandran and Hirstein (1999: 21-23) point out, this process of binding has some interesting features. Binding is “sticky” – we seem to hold on to bound perceptual elements. Once seen, the Dalmatian cannot be un-seen without a conscious effort. Moreover, the act of making a binding is pleasurable in itself: “the discovery of the dog and the linking of the dog-relevant splotches generates a pleasant ‘aha’ sensation.” The authors offer an evolutionary rationale for this payoff: “The very process of discovering correlations and of ‘binding’ correlated features to create unitary objects or events must be reinforcing for the organism – in order to provide incentive for discovering such correlations.” Our limbic system apparently rewards us for detecting sensory correlations in our environment, even in advance of the final “recognition” of an object: “at every stage in processing there is generated a ‘Look, here is a clue to something potentially object-like’ signal that produces limbic activation and draws your attention to that region or feature.” These incremental rewards “bootstrap” the final moment of recognition. 
Ramachandran and Hirstein work this perceptual pleasure principle into a neurological theory of aesthetic experience, suggesting that artists and designers seek out and intensify the pleasures of sensory binding, creating artefacts that “tease the system with as many of these ‘potential object’ clues as possible.”&lt;br /&gt;&lt;br /&gt;Ramachandran and Hirstein go further, proposing that the discovery of more abstract correlations is also reinforced by a limbic reward (1999: 31). They relate this to the ecological imperative for classification – our evolved need to establish correlations that group and distinguish objects in our environment: say, edible versus inedible plants. This version of binding operates diachronically, in contrast to the synchronous binding of visual elements into a recognised form. “Being able to see the hidden similarities between successive distinct episodes allows you to link or bind these episodes to create a single super-ordinate category… Consequently the discovery of similarities and the linking of superficially dissimilar events would lead to a limbic activation – in order to ensure that the process is rewarding.”&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Cross-Modal Perception&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;How might processes of binding – the discovery of correlations – operate in fused AV, where the characteristic correlations are between, rather than within, sensory modalities? While studies of perception have traditionally focused on the senses in isolation, as independent neurological “modules,” recent work has begun to explore the relations between sensory modalities. Media-based metaphors for perception encourage us to think of the senses as functionally distinct input channels. If sensory substitution shows that these channels can be re-wired, studies of cross-modal perception show that they are barely even distinct. 
The senses are involved in what Shimojo and Shams (2001: 506) describe as “vigorous interaction and integration,” mirroring Michel Chion’s description of the “mutual contamination” that characterises the audiovisual relationship in film sound ([1990] 1994: 9). Shimojo and Shams review experiments showing the range of these mutual influences: how vision can alter the content and spatial location of perceived sound; and how sound can alter the perceived intensity and timing of visual stimuli. We hear what we see, and see what we hear.&lt;br /&gt;&lt;br /&gt;The perceptual trickery of these experiments is less interesting than what they suggest about normal perception. Just as the binding of visual percepts into a whole enables us to recognise objects in our environment, correlations in different sensory modalities cause us to bind those stimuli into a unified perception. This is illustrated with another trick, an experiment by Sekuler et al (1997), in which subjects were presented with two moving dots on intersecting paths. Two perceptual interpretations of this animation are possible: that the dots pass each other without touching, or that they collide and bounce off each other. Without sound, the former interpretation was dominant; however adding a brief sound at the crossing point biased perception strongly towards collision. This is an instance of cross-modal binding, where correlated stimuli in different modalities become fused into a coherent whole. It also suggests the ecological basis of cross-modal binding; that we interpret correlated events as cues to objects in the environment. The interpretation of sensory data seems to be shaped by pre-conscious processes that bind percepts into wholes; wholes that map onto ecologically plausible events. In the crossing dots experiment, sound binds with vision to alter our interpretation of the event. 
The correlated stimuli point to a common cause, a model that explains their coherence.&lt;br /&gt;&lt;br /&gt;Fused audiovisuals are aesthetic objects founded on cross-modal binding. Ramachandran and Hirstein’s notion of the pleasures of binding applies here; in the transcoded AV of artists such as Fox and Gadow we experience sensory fields that are somehow entirely bound: completely self-consistent, devoid of extraneous elements. The affect that Ramachandran and Hirstein attribute to the moment of binding, the discovery of the Dalmatian – the ‘aha’ of recognition – seems to be intensified and prolonged here. It also suggests a connection between cross-modal binding and the noetic affect Cytowic identifies in synesthetic experience. If we accept the limbic payoff theory of aesthetics, then perhaps fused AV is a manifestation of this pleasure principle in the media arts.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Audiovisuals as Cross-Modal Objects&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;Cross-modal binding is not limited to experimental audiovisuals, however; in fact the opposite is true. Cinema and television constantly rely on our predilection for binding sound and image; this is the basis of Chion’s synchresis, a “spontaneous and irresistible mental fusion” caused by close synchronisation ([1990] 1994: 63). Lip sync is the archetypal example, where audiovisual correlation breathes life into the image of a body. Recall the ecological function of binding: to identify a common cause – an object in the environment. In most audiovisual media the objects are (all too) readily apparent. So if audiovisual correlations refer us to a shared cause, what is that cause in fused or transcoded AV? What is the underlying object, the cat hiding in the garden?&lt;br /&gt;&lt;br /&gt;In a sound-to-image mapping, for example, it seems logical to propose that the cause is the source modality – sound. 
This involves a kind of reflexive redundancy; in Fox’s oscilloscope work, it would mean that the image is simply a pointer to the soundtrack, that it doubles or duplicates the sound. Subjectively at least, the relation seems richer and more complex than that; and it seems at odds with an ecological model of perception. Perhaps the common object is not the sound, but something more abstract: the signal. Signal here refers to a pattern of differences or fluctuations, a flux that, like data, must always be embodied but which, again like data, can be readily transduced between one embodied form and another. Fox’s laptop does not send sound to the oscilloscope, or in fact to the audio amp; it sends signal, a pattern of fluctuating voltage. That pattern is manifest on the scope as phosphorescent image, and when it leaves the speakers, as sound: but their common origin is the flux itself.&lt;br /&gt;&lt;br /&gt;&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiD3RbhWRBupYHSKRnYL76uHda-noDy8IkqwA78U9PhCQmNf05iPNwQu3apfNMYvMc74gEL9DqEduxsCuVc74iD3OAmlFXriVs3OBtLXU5Ua0mWQwBNUM5wLv9xGvJp9yIUzrDIiQ/s1600-h/gadow_4up.jpg&quot;&gt;&lt;img style=&quot;cursor: pointer; width: 450px; height: 338px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiD3RbhWRBupYHSKRnYL76uHda-noDy8IkqwA78U9PhCQmNf05iPNwQu3apfNMYvMc74gEL9DqEduxsCuVc74iD3OAmlFXriVs3OBtLXU5Ua0mWQwBNUM5wLv9xGvJp9yIUzrDIiQ/s400/gadow_4up.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5252381419103318834&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;&lt;br /&gt;In transcoded audiovisuals sound and image perceptually triangulate a third point, the signal, that is imperceptible in and of itself. Signal maps to perception through the contingencies of both media technologies and sensory boundaries, but in itself it traverses these limits. 
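This notion of signal as a transducible flux can be sketched concretely. The following is a toy model only, assuming an idealised digital X-Y display rather than Fox's actual laptop and analog oscilloscope: a single underlying pattern of values is embodied twice, once as an interleaved stereo audio stream and once as scope coordinates, where a 90-degree phase offset between the two channels traces a circle.

```python
import math

# Toy model of one "signal" embodied two ways (not Fox's actual setup).
N = 1000
t = [2 * math.pi * i / N for i in range(N)]

# one underlying flux: two sinusoids, 90 degrees out of phase
left = [math.sin(x) for x in t]
right = [math.sin(x + math.pi / 2) for x in t]

# embodiment 1: interleaved stereo samples, as sent to a DAC and speakers
audio = [s for pair in zip(left, right) for s in pair]

# embodiment 2: the same flux as X-Y scope points (left -> x, right -> y);
# the 90-degree phase offset between channels traces a circle on the display
points = list(zip(left, right))
radii = [x * x + y * y for x, y in points]  # all approximately 1.0
```

On an actual scope this relation is hardwired in the deflection circuitry; the sketch only makes explicit that image and sound share one origin in the fluctuating signal itself.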
This is apparent in Fox’s work, where subsonic fluctuations modulate the audible frequencies to create movements that are easily seen, but felt only as sharp thumps; the speakers struggle to transduce the signal into mechanical energy. Many of the complex, pointillist visual patterns are created by square-edged signal forms that again are acoustically impossible; the scope, more agile, is better able to trace them out. Similar trans-sensory signatures occur in Gadow’s work and that of other transcoders; Gadow’s &lt;span style=&quot;font-style: italic;&quot;&gt;Techne&lt;/span&gt; (2005) opens with a still blue screen and a raw, buzzsaw hum. The hum has no movement or form; as becomes clear as the piece develops, it corresponds to the blue video background. It is the sound of the 50 Hz scan-line structure of the video signal itself; so it looks like almost nothing. This is not to say that transcoded audiovisuals are reducible to the signal, an abstract or perhaps “higher” ideal, as in Goethe’s triangular model of color and sound. Here sensation and experience are foremost; these experiments feel out the ramifications of signal in specific circuits and transductions.&lt;br /&gt;&lt;br /&gt;As Ramachandran and Hirstein suggest (1999:31), perceptual binding is both synchronic and diachronic, instantaneous and sequential. If the moment-by-moment audiovisual binding in these works refers us to their shared cause – the signal – how do these works operate in the diachronic axis? They often share a simple formal structure of establishment, development and elaboration, a successive playing-out of potential. Ramachandran and Hirstein state that perceiving “hidden similarities between successive distinct episodes allows you to link or bind these episodes to create a single super-ordinate category.” We can think of the sequential similarities here as products of the constant, underlying structure that shapes all the outputs of the system. 
That structure is the map, the specific but abstract shape of the audiovisual correspondence. The map is an elusive entity; rather than an object we can think of it as a procedure, a verb or algorithm; a way of transforming between modalities and their shared signal. In Fox’s work the polar mapping of the oscilloscope is an algorithm that transforms phase – local relations in time – and amplitude into circular space. Considered as cross-modal objects, these works direct us to the underlying signal; and the signal is embodied audiovisually through the intermediary of the map. The map describes a space of potential, a range of possible correlations between domains; and it is that territory, I would argue, that these works reveal as they traverse it.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Inframedia and the Map&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;In earlier work on experimental sound I &lt;a href=&quot;http://creative.canberra.edu.au/mitchell/papers/InframediaAudio.pdf&quot;&gt;proposed&lt;/a&gt; the notion of inframedia, “a stratum below or within the mainlines of electronic media” (Whitelaw 2001:51). The noisy textures, resonant fuzz, glitches, crackles and pops of electronic music since the late 90s reveal “the sensory and affective textures of a media substrate, rather than media ‘content.’” That substrate is a critical domain; media infrastructures are more than technological artefacts; they are rapidly changing foci of power. Inframedia aesthetics reflect a consciousness of that domain, while in its processes such work often pursues local and particular manipulations, hacks or diversions of those media technologies.&lt;br /&gt;&lt;br /&gt;Fused audiovisuals – a practice with close cultural links to experimental sound and music – can be approached along similar lines. Like hiss and hum, the audiovisual aesthetics of signal direct us to the abstraction and transduction occurring inside, or underneath, our media streams. 
Glitch-driven audio is founded on cracks in the surface – moments of interruption which allude to, and materialise, their own infrastructure. In a sense transcoded audiovisuals are a prolongation of those moments, leading to a flattening of the surface/depth dichotomy of glitch; the cross-modal coherence of this work is based on a sustained exploration of signal. Instead of mapping signal anthropomorphically onto perceptual “inputs”, these works show us where signal and affect meet or overlap, as well as where they diverge; they show us signal passing into, out of, and through perception.&lt;br /&gt;&lt;br /&gt;These works also direct us to the map – the abstract space of possible transformations between signals. That domain of transformation is also inframedial, a key structure in digital media forms and cultures. Lev Manovich (2002) has described this question as the “built-in existential angst” of digital media: “By allowing us to map anything into anything else ... computer media simultaneously makes all these choices appear arbitrary….” In almost all digital media, the map – the pattern of relations between input and output – is imperceptible, obscured or encoded. This is clearest in the work of artists working explicitly with data inputs. In the work of &lt;a href=&quot;http://sq.ro/&quot;&gt;Alex Dragulescu&lt;/a&gt; for example, spam email is used as the input to an algorithm that creates complex three-dimensional &lt;a href=&quot;http://sq.ro/spamarchitecture.php&quot;&gt;forms&lt;/a&gt;. The mapping – the process that transforms spam into form – is never revealed, and so a concrete, specific process becomes a blank spot filled in with an impression of magical transubstantiation. In some computational work the artist provides source code, an explicit specification of the map but one that is highly encoded and unavailable, in itself, to perception. In most digital media objects, the map is inextricable from the residue or artefact it shapes. 
We perceive only the output, the image, sound or form, in which the input and its transformations are collapsed.&lt;br /&gt;&lt;br /&gt;The wider significance of transcoded audiovisuals is that they approach a perceptual manifestation of  the map, that space of transformation. We sense it, in these works, interpolated between each instant and every other. It’s perhaps not surprising that this characteristically digital figure is manifest through largely analog means; as well as a critical distance from the digital, analog signals offer transformations that are rich and immediate. Crucially the maps themselves are simple and static – highly reduced, compared to their digital counterparts – and so more available to the aesthetic and affective  explorations of transcoded audiovisuals.&lt;br /&gt;&lt;br /&gt;The prospect of somehow apprehending the map is both esoteric and pragmatic. The map is the inescapable intermediary, the necessary condition of our data-experience; but what is the map, what is its shape, how does it transform this into that? What are its conditions, limits, bounds? These works literally feel out the map, and in the process begin to address these questions, offering a sense of the abstract transformations that underpin contemporary digital  culture.&lt;br /&gt;&lt;br /&gt;Synesthesia is a powerful and persistent trope in the audiovisual arts. As shown here it offers some enticing parallels with the techniques and affects of audiovisual practice, yet as a techno-sensory analogy it has inherent limits. As in the visual music tradition, synesthesia plays a largely figurative role, and it demands critical scrutiny as such. In this investigation however, the synesthetic analogy has opened a path towards its more everyday converse, cross-modal perception, which offers a useful framework for a neuro-aesthetics of fused audiovisuals. 
These two approaches converge in the figure of the map, the space of correlation; the feeling of noesis or revelation common to both synesthesia and cross-modal binding, could be described as the affect of the map. That affect is central to the aesthetics of fused audiovisuals; though I would argue it offers more than a neurological hit; it brings us into contact with the abstract but culturally crucial terrain of the map itself.&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;References&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;Amedi, Amir, William M Stern, Joan A Camprodon, Felix Bermpohl, Lotfi Merabet, Stephen Rotman, Christopher Hemond, Peter Meijer and Alvaro Pascual-Leone. 2007. “Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex.” &lt;span style=&quot;font-style: italic;&quot;&gt;Nature Neuroscience &lt;/span&gt;10: pp. 687-689.&lt;br /&gt;&lt;br /&gt;Baker Fish, Bob. 2005. Review of Robin Fox, Backscatter. &lt;span style=&quot;font-style: italic;&quot;&gt;Cyclic Defrost&lt;/span&gt; 10. &lt;a href=&quot;http://www.cyclicdefrost.com/review.php?review=795&quot;&gt;http://www.cyclicdefrost.com/review.php?review=795&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Bains, Sunny. 2007. “Mixed Feelings.” Wired 15.04. &lt;a href=&quot;http://www.wired.com/wired/archive/15.04/esp_pr.html&quot;&gt;http://www.wired.com/wired/archive/15.04/esp_pr.html&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Chion, Michel. [1990] 1994. &lt;span style=&quot;font-style: italic;&quot;&gt;Audio-Vision: Sound on Screen&lt;/span&gt;. New York: Columbia University Press.&lt;br /&gt;&lt;br /&gt;Cook, Nicholas. 1998. &lt;span style=&quot;font-style: italic;&quot;&gt;Analysing Musical Multimedia&lt;/span&gt;. Oxford: Oxford University Press.&lt;br /&gt;&lt;br /&gt;Cytowic, Richard. 1989. &lt;span style=&quot;font-style: italic;&quot;&gt;Synesthesia: a union of the senses&lt;/span&gt;. New York: Springer Verlag.&lt;br /&gt;&lt;br /&gt;Cytowic, Richard. 
1996. “Synesthesia, phenomenology and neuropsychology: a review of current knowledge.” In John E. Harrison and Simon Baron-Cohen (eds), &lt;span style=&quot;font-style: italic;&quot;&gt;Synesthesia: Classic and Contemporary Readings&lt;/span&gt;. London: Blackwell.&lt;br /&gt;&lt;br /&gt;Fischinger, Oskar. 1932. “Sounding Ornaments.” &lt;span style=&quot;font-style: italic;&quot;&gt;Deutsche Allgemeine Zeitung&lt;/span&gt; (July 8). &lt;a href=&quot;http://www.oskarfischinger.org/Sounding.htm&quot;&gt;http://www.oskarfischinger.org/Sounding.htm&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Harrison, John E., and Simon Baron-Cohen. 1996. “Synesthesia: an Introduction.” In John E. Harrison and Simon Baron-Cohen (eds), &lt;span style=&quot;font-style: italic;&quot;&gt;Synesthesia: Classic and Contemporary Readings&lt;/span&gt;. London: Blackwell.&lt;br /&gt;&lt;br /&gt;Hubbard, E.M., and V.S. Ramachandran. 2005. “Neurocognitive mechanisms of synesthesia.” &lt;span style=&quot;font-style: italic;&quot;&gt;Neuron&lt;/span&gt; 48(3): pp. 509-520.&lt;br /&gt;&lt;br /&gt;Kandinsky, Wassily. 1977. &lt;span style=&quot;font-style: italic;&quot;&gt;Concerning the Spiritual in Art&lt;/span&gt;. London: Dover.&lt;br /&gt;&lt;br /&gt;Manovich, Lev. 2002. “The Anti-Sublime Ideal in Data Art.” &lt;a href=&quot;http://www.manovich.net/DOCS/data_art.doc&quot;&gt;http://www.manovich.net/DOCS/data_art.doc&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Marks, Lawrence. 1996. “On colored-hearing synesthesia: cross-modal translations of sensory dimensions.” In John E. Harrison and Simon Baron-Cohen (eds), &lt;span style=&quot;font-style: italic;&quot;&gt;Synesthesia: Classic and Contemporary Readings&lt;/span&gt;. London: Blackwell.&lt;br /&gt;&lt;br /&gt;Meijer, Peter. n.d. “Artificial Synesthesia for Synthetic Vision.” &lt;a href=&quot;http://www.seeingwithsound.com/asynesth.htm&quot;&gt;http://www.seeingwithsound.com/asynesth.htm&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Peel, James. 2006. 
“The Scale and the Spectrum.” &lt;span style=&quot;font-style: italic;&quot;&gt;Cabinet&lt;/span&gt; 22. &lt;a href=&quot;http://www.cabinetmagazine.org/issues/22/peel.php&quot;&gt;http://www.cabinetmagazine.org/issues/22/peel.php&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Ramachandran, V.S., and E.M. Hubbard. 2001. “Synesthesia – A Window into Perception, Thought and Language.” &lt;span style=&quot;font-style: italic;&quot;&gt;Journal of Consciousness Studies&lt;/span&gt; 8(12): pp. 3-34.&lt;br /&gt;&lt;br /&gt;Ramachandran, V.S., and William Hirstein. 1999. “The Science of Art: a Neurological Theory of Aesthetic Experience.” &lt;span style=&quot;font-style: italic;&quot;&gt;Journal of Consciousness Studies&lt;/span&gt; 6(6-7): pp. 15-51.&lt;br /&gt;&lt;br /&gt;Sekuler, Robert, Allison B. Sekuler and Renee Lau. 1997. “Sound alters visual motion perception.” &lt;span style=&quot;font-style: italic;&quot;&gt;Nature&lt;/span&gt; 385: p. 308.&lt;br /&gt;&lt;br /&gt;Shimojo, Shinsuke, and Ladan Shams. 2001. “Sensory modalities are not separate modalities: plasticity and interactions.” &lt;span style=&quot;font-style: italic;&quot;&gt;Current Opinion in Neurobiology&lt;/span&gt; 11: pp. 505-509.&lt;br /&gt;&lt;br /&gt;Simner, Julia, Catherine Mulvenna, Noam Sagiv, Elias Tsakanikos, Sarah A Witherby, Christine Fraser, Kirsten Scott and Jamie Ward. 2006. “Synesthesia: the prevalence of atypical cross-modal experiences.” &lt;span style=&quot;font-style: italic;&quot;&gt;Perception&lt;/span&gt; 35(8): pp. 1024-1033.&lt;br /&gt;&lt;br /&gt;Strick, Jeremy. 2005. “Visual Music.” In &lt;span style=&quot;font-style: italic;&quot;&gt;Visual Music: Synesthesia in Art and Music Since 1900&lt;/span&gt; (exhibition catalog). New York: Thames &amp;amp; Hudson.&lt;br /&gt;&lt;br /&gt;Whitelaw, Mitchell. 2001. “Inframedia Audio.” &lt;span style=&quot;font-style: italic;&quot;&gt;Artlink&lt;/span&gt; 21(3): pp. 49-52.&lt;br /&gt;&lt;br /&gt;Zilczer, Judith. 
2005. “Music for the Eyes: Abstract Painting and Light Art.” In &lt;span style=&quot;font-style: italic;&quot;&gt;Visual Music: Synesthesia in Art and Music Since 1900&lt;/span&gt; (exhibition catalog). New York: Thames &amp;amp; Hudson.&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-weight: bold;&quot;&gt;Filmography&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;Fox, Robin. 2005. &lt;span style=&quot;font-style: italic;&quot;&gt;Backscatter&lt;/span&gt;. Videorecording. Melbourne: Synesthesia Records SYN012 DVD.&lt;br /&gt;&lt;br /&gt;Gadow, Andrew. 2005. &lt;span style=&quot;font-style: italic;&quot;&gt;Techne&lt;/span&gt;. DVD-R courtesy of the artist.&lt;br /&gt;&lt;br /&gt;Hodgin, Robert. 2007. “Trentemøller and Me.” &lt;a href=&quot;http://www.flight404.com/blog/?p=52&quot;&gt;http://www.flight404.com/blog/?p=52&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;McLaren, Norman. 1940. &lt;span style=&quot;font-style: italic;&quot;&gt;Dots&lt;/span&gt;. Animation. Available: &lt;a href=&quot;http://www.youtube.com/watch?v=E3-vsKwQ0Cg&quot;&gt;http://www.youtube.com/watch?v=E3-vsKwQ0Cg&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;McLaren, Norman. 1971. &lt;span style=&quot;font-style: italic;&quot;&gt;Synchromy&lt;/span&gt;. Animation. Available: &lt;a href=&quot;http://www.youtube.com/watch?v=Jqz_tx1-xd4&quot;&gt;http://www.youtube.com/watch?v=Jqz_tx1-xd4&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Timberlake, Justin. 2007. “LoveStoned / I Think She Knows.” Zomba Recording. Music Video. Available: &lt;a href=&quot;http://www.youtube.com/watch?v=GIYXHLlxD8U&quot;&gt;http://www.youtube.com/watch?v=GIYXHLlxD8U&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Watz, Marius. 2005. 
“Video for @c: int.14/37.” &lt;a href=&quot;http://www.unlekker.net/proj/cronica021/&quot;&gt;http://www.unlekker.net/proj/cronica021/&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;/span&gt;</description><link>http://teemingvoid.blogspot.com/2008/10/synesthesia-and-cross-modality-in.html</link><author>noreply@blogger.com (Mitchell)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiD3RbhWRBupYHSKRnYL76uHda-noDy8IkqwA78U9PhCQmNf05iPNwQu3apfNMYvMc74gEL9DqEduxsCuVc74iD3OAmlFXriVs3OBtLXU5Ua0mWQwBNUM5wLv9xGvJp9yIUzrDIiQ/s72-c/gadow_4up.jpg" height="72" width="72"/><thr:total>1</thr:total></item></channel></rss>