<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Cenatus Ltd</title>
    <description>
      Creative Technology Production
    </description>
    <link>https://cenatus.org</link>
    <atom:link href="https://cenatus.org/rss" rel="self" type="application/rss+xml" />
    
      <item>
        <title>
          <![CDATA[Portfolio - Deep Assignments #01]]>
        </title>
        <link>https://cenatus.org/articles/65-deep-assignments-01</link>
        <description>
          <![CDATA[<p>Presented by Deep Assignments, the London-based artist collective founded by Amanda Butterworth and Matt Spendlove, this inaugural evening of experimental sound art and critical dialogue features a lineup of pioneering practitioners working at the intersection of acoustic ecology, AI-driven narratives, and live multichannel performance.</p><ul><li><strong>Machine Listening</strong> presents <em>Environments 12</em> (<strong>Joel Stern</strong>, <strong>James Parker</strong> &amp; <strong>Sean Dockray</strong>): A multichannel reimagining of the classic <em>Environments</em> field-recording series, where human narrators, AI-cloned voices and planetary-scale loudspeakers converge to conjure speculative ecologies, reef lullabies and future-ruined sound worlds. 
</li><li><strong>James Parker:</strong> Delivers a lecture entitled <em>The Planetization of Machine Listening</em></li><li><strong>Kate Carr</strong>: Drawing on her album <em>A Field Guide to Phantasmic Birds</em>, Carr’s work traverses the boundary between naturalistic field recordings and algorithmic fabrication, inviting listeners to question what is real, what is synthesised, and how we encounter “more-than-human” sound.
</li><li><strong>Amanda Butterworth &amp; Matt Spendlove</strong>: Co-founders of Deep Assignments, will present their new collaborative works of sonic architecture, in which emergent algorithmic processes carve a 3D aural space of phasing minimalism. 
</li><li><strong>Q&amp;A hosted by Angela McArthur (UCL)</strong>: In a relaxed salon format, McArthur will guide an open discussion in which audience questions, critical perspectives and peer-to-peer exchange become integral to the evening’s unfolding.
</li></ul><p><strong>What to Expect</strong></p><ul><li><strong>Immersive Sonic Performances</strong>: Multichannel presentations that transform sound into a spatial journey, weaving new sonic architectures.
</li><li><strong>Speculative Eco-Sonic Narratives</strong>: AI-cloned narrators, glitching field recordings and digital artefacts converge to conjure phantom ecologies and future-ruined sound worlds.
</li><li><strong>Open Discussion</strong>: A relaxed salon format where the audience’s questions and perspectives become an active part of the event.
</li><li><strong>Community</strong>: Connect with artists, musicians, technologists and fellow listeners in a supportive, DIY setting dedicated to interdisciplinary listening, thinking and making.
</li></ul><p><strong>Deep Assignments</strong> is a London-based collective founded by artists Amanda Butterworth and Matt Spendlove to foster interdisciplinary listening, thinking and making. Through performances, screenings, happenings, workshops and salons, Deep Assignments cultivates a space for creative discussion, collective support and critical exchange - “a space to honour our assignments, independent of established institutions.” In an age of accelerating technological change and growing social fragmentation, Deep Assignments emphasises the creative endeavour as an existential task, affirming its intrinsic value beyond capital.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Mon, 09 Jun 2025 15:43:52 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/65-deep-assignments-01</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Blog - Music In The Metaverse]]>
        </title>
        <link>https://cenatus.org/articles/32-music-in-the-metaverse</link>
        <description>
          <![CDATA[<p>We’re collaborating with <a href="https://www.cenatus.org/blog/31-call-response-collaboration">Call &amp; Response</a> to deliver “Music In The Metaverse” - a <a href="https://www.ukri.org/what-we-do/browse-our-areas-of-investment-and-support/creative-catalyst/">Creative Catalyst 2024</a> funded initiative that will lower the barrier to entry for audio creatives wishing to work with immersive virtual environments or real world installations with interactive visual content. </p><p>Creative use of spatial audio is integral to delivering high-quality immersive experiences and growth for the creative industries. Poor interoperability, skills deficits and the technical complexity of moving between Digital Audio Workstations (DAWs) and game engines such as Unreal Engine 5 (UE5) currently cause friction.</p><p>The industry standard <a href="https://adm.ebu.io/">Audio Definition Model</a> (ADM) format is used by <a href="https://www.bbc.co.uk/rd/projects/next-generation-audio">market leaders</a> in immersive audio to describe and deliver spatial audio mixes. We’ll build upon existing industry work to research and develop workflows and software to provide interoperability across tools. We’ll create interactive virtual experiences to demonstrate the concepts alongside pre-visualisations of real world experiences. </p><p>Composers and sound designers will be able to prototype faster, communicating their ideas in real time through bi-directional editing and transfer. A simplified production pipeline will remove technical barriers, lower costs and reduce the number of specialist skills needed to implement audio assets into 3D virtual environments.</p><p><a href="https://follow.it/cenatus">Follow along here</a> for further updates and please <a href="https://cenatus.org/contact">get in touch</a> directly.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Wed, 18 Sep 2024 11:47:52 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/32-music-in-the-metaverse</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Blog - Call & Response Collaboration]]>
        </title>
        <link>https://cenatus.org/articles/31-call-response-collaboration</link>
        <description>
          <![CDATA[<p>We’ve collaborated with <a href="https://callandresponse.org.uk">Call &amp; Response Studios</a> on many occasions over the years, most recently mixing spatial audio productions at their glorious 360° listening space. The 32.3 ambisonic / 9.1.4 Atmos Genelec monitoring system is currently situated amongst the vibrant artist studios in <a href="https://www.somersethouse.org.uk/somerset-house-studios">Somerset House</a>. </p><p>Our interests continued to align in recent years, building on 3d sound mixing and immersive audiovisual environments for installation and performance to explore XR applications and Unreal Engine/Unity for artistic creation and previsualisation of IRL experiences.  </p><p>We’re extremely pleased to have cemented the relationship more formally and have been sharing the C&amp;R studio at Somerset House since the start of 2024. We have lots of exciting creative ideas to research and develop for our own projects, clients and the wider community. We have more to announce on that soon so <a href="https://follow.it/cenatus">follow along here</a> for further updates and please <a href="https://cenatus.org/contact">get in touch</a> directly if you prefer.</p><p>Big thanks to Tom Slater for being so generous with his time, knowledge and the studio equipment. Really looking forward to what we’ll create together!</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Thu, 23 May 2024 15:25:44 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/31-call-response-collaboration</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Blog - The Situationists' Walkman - A deep dive into Apple's PHASE audio engine.]]>
        </title>
        <link>https://cenatus.org/articles/30-the-situationists-walkman---a-deep-dive-into-apple-s-phase-audio-engine-</link>
        <description>
          <![CDATA[<p><em>A guest post by <a href="https://www.timcowlishaw.co.uk">Tim Cowlishaw</a>.</em></p><p>One of the challenges we encountered during the <a href="http://cenatus.org/tags/86-cif2021">CIF2021</a> R&amp;D whilst working on the <a href="https://cenatus.org/blog/28-the-situationists-walkman---an-audio-augmented-reality-experience">Situationist’s Walkman</a> was getting <a href="https://developer.apple.com/documentation/phase">Apple’s PHASE spatial audio engine</a> functioning as part of our project. We’ve <a href="https://cenatus.org/blog/29-the-situationists-walkman---tech-deep-dive">written elsewhere</a> about the more general technical challenges involved in the project, but we felt that it’d be worth writing in more detail about this specific aspect, not least because one of the principal challenges we faced was the lack of written introductory documentation (or open source examples - we <em>think</em> we might be <a href="https://github.com/cenatus/situationists-walkman/">the first</a>!) anywhere else online.</p><p>Therefore, I’m going to take you step-by-step through everything we did to get up and running with PHASE - first as a little spike to evaluate its suitability for our project, and later to more fully integrate it into the AR experience we developed. This article is fairly code-heavy and definitely aimed towards developers or anyone else working on the technical implementation of an experience using PHASE, but there’s also information that might prove useful for folks who are more focused on audio too, so I’ll do my best to make sure it’s followable without needing to read or understand the code excerpts.  Our process (and the code) very closely follows the process outlined in the <a href="https://developer.apple.com/videos/play/wwdc2021/10079/">WWDC21 presentation video</a> introducing the new API, so if you’ve already worked through that you might find a lot of this familiar. However, as well as thinking it’d be useful to have a written tutorial and working code examples up online, we also encountered a few gotchas that weren’t (to us) obvious from the video, so hopefully there’ll be useful information here even if you have already seen the video. Finally, for folks specifically interested in using PHASE in their AR applications and experiences, we’ll provide details on the particular integration points you’ll need to pay attention to in that particular use case, as well as a little bonus on integrating the <a href="https://support.apple.com/en-gb/guide/airpods/dev00eb7e0a3/web">head-tracking functionality</a> available in the 3rd generation, Max, and Pro versions of the <a href="https://www.apple.com/airpods-pro/">Apple Airpods</a> earphones.</p><p>Firstly, a brief introduction to PHASE. PHASE stands for Physical Audio Spatialization Engine, and was developed to provide immersive sound environments for games, applications, and xR experiences. <a href="https://developer.apple.com/videos/play/wwdc2021/10265/">This (other) video provides a decent non-technical overview of what that entails</a>. 
The key words here are SPATIALization and PHYSICAL: <em>Spatial</em> audio refers to the ability to place and move sounds in a virtual environment, as well as to move the listener within this virtual environment, with sounds appearing to emanate from their positions relative to the listener (this doesn’t require a special surround-sound speaker setup - just a pair of normal headphones, via the magic of <a href="https://en.wikipedia.org/wiki/Binaural_recording">Binaural reproduction</a> and <a href="https://en.wikipedia.org/wiki/Head-related_transfer_function">HRTF</a>s), and _physical _refers to the ability to model physical properties of that environment such as reverberation and occlusion. The particularly exciting thing about PHASE, from a creative perspective, is that it allows us to use these tools interactively - to produce spatial audio experiences that change over time, and adapt procedurally or in response to user interaction, all within a common consumer technology platform. However, working with these tools requires both familiarity with the APIs available and their underlying concepts, as well as approaches to mixing and producing audio that are not necessarily obvious from a more traditional audio background. We’ll attempt to summarise all these aspects below.</p><p>PHASE is pretty standalone, and isn’t coupled to Realitykit / Scenekit or any other framework for UI, AR, VR or gameplay. As such, while getting started with it is a little complex, the general process shouldn’t vary too much depending on the context in which you use it. The code I’m going to show you is taken from one of our early audio AR experiments - see <a href="https://github.com/cenatus/audio-ar-playground/blob/28d5df44fb593c12bf7d84dcee25c5ae57714117/TestARKitObjectDetection/TestARKitObjectDetection/ViewController.swift">this revision of this file</a> to see it in-situ. It’s an iOS app written in Swift using UIKit, ARKit and Scenekit, so you might have to adjust a few things depending on your own circumstances, but once you’ve finished reading it should be obvious what needs to be changed. To get started, you will need an instance of a <code><a href="https://developer.apple.com/documentation/phase/phaseengine">PHASEEngine</a></code> class - this represents the entirety of your sonic environment and handles all the lifecycle, coordination and DSP required - you will very probably only need one of these in your entire app. In our case, we had a single UIKit <code>UIViewController</code> which handled our app’s AR View, and was the only view which would be making sound, so it made sense to make the <code>PHASEEngine</code> an instance variable on that controller, and instantiate it in the <code>onViewDidLoad</code> method:</p><p></p><p>The first thing you’ll notice is that this engine is instantiated with an <code class="inline">updateMode</code> argument. This refers to <a href="https://developer.apple.com/documentation/phase/phaseengine/updatemode">the strategy PHASE uses to make updates to its internal state</a> in response to changes we make to its configuration (such as the positions of sound emitting objects or the listener). 
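</p><p>A minimal sketch of that first step, assuming a plain UIKit view controller (the class and property names here are illustrative rather than lifted verbatim from our app):</p><pre><code>import UIKit
import PHASE

class ARExperienceViewController: UIViewController {

    // One engine represents the entire sonic environment for the app
    var phaseEngine: PHASEEngine!

    override func viewDidLoad() {
        super.viewDidLoad()
        // .automatic delegates the scheduling of internal updates to PHASE itself
        phaseEngine = PHASEEngine(updateMode: .automatic)
    }
}</code></pre><p>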
We’ve elected to delegate the responsibility of timing these updates to the PHASE engine itself, which will schedule them automatically, but in cases where performance / latency is super critical, you can choose to handle these manually, we won’t go into the details of that, mostly because we haven’t tried it ourselves!</p><p>The <code>phaseEngine</code> also needs to be explicitly started, at which point, in theory, it would start outputting audio, had we actually set up an audio environment for it to output. We’ll get onto that in a second, but first one more bit of housekeeping:</p><p></p><p>…when we’re done with PHASE, we need to explicitly stop it, which will stop audio and tear down its state. Since the audio environment in our app only exists within the context of this View, we do that in the <code class="inline">viewWillDisappear</code> callback.</p><p>One useful thing to know (and of particular interest to those from more of an audio background, is that the <code class="inline">PHASEEngine</code> can be configured with a Reverb preset, you can choose from <a href="https://developer.apple.com/documentation/phase/phasereverbpreset">several included for different types of environment</a>:</p><p></p><p>The important thing to note here is that this is set for the global <code>PHASEEngine</code> object, so it applies across your entire environment, you can’t therefore have different reverberation qualities in different areas of your simulation, or applied to different sound sources. Happily, as we’ll see later, you do at least have the ability to set the reverb send level per-source, so if you need different sound sources to sound more or less wet or dry, this can be achieved no problem.</p><p>Now, we can get on with creating our sound environment, and the first thing we will need within it is a <em>listener</em>, using the class <code><a href="https://developer.apple.com/documentation/phase/phaselistener">PHASEListener</a></code>. This might seem slightly superfluous coming from a traditional audio background based on mixing sources in stereo, where the listener can be safely assumed to be roughly between the two speakers. In an audio <em>environment</em> such as we’re going to define with PHASE, the listener can be positioned anywhere, facing in any direction, and can move! Therefore, we need to model their position within the environment in order that the PHASE engine can render a mix which corresponds to what they would be hearing at that point:</p><p></p><p>We also stash this listener object away in an instance variable, as we’re going to need to access it again when we come to update the listener position later, and add it as a child of the <code>rootObject</code> of the engine. This gives us an important clue about how PHASE’s environments are structured - they’re basically a hierarchical tree of objects, which can perceive sound (like listeners), emit sound (sources), or interfere with it (like occluders, which we won’t cover here).</p><p>We’re now ready to start adding sound sources. In our case all our assets were pre-rendered as mp3s and included as assets in our application bundle. We’ll now show you how to load an asset and add it to the environment, as well as giving you an overview of the types of control you have over how each sound behaves within the environment. This isn’t quite as simple as it sounds though - there’s a lot of different levers and options that can be configured, and a fairly complex graph of objects that need to be plumbed together. 
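</p><p>Before getting into that graph, here's a rough consolidation of the housekeeping described above - starting and stopping the engine, choosing a reverb preset and adding the listener. The preset and the use of try! are illustrative choices rather than recommendations:</p><pre><code>override func viewDidLoad() {
    super.viewDidLoad()
    phaseEngine = PHASEEngine(updateMode: .automatic)

    // Global reverb preset - applies to the whole environment
    phaseEngine.defaultReverbPreset = .mediumRoom

    // The listener models where the user is; stash it in an instance
    // property so its transform can be updated later
    phaseListener = PHASEListener(engine: phaseEngine)
    phaseListener.transform = matrix_identity_float4x4
    try! phaseEngine.rootObject.addChild(phaseListener)

    // Start the engine - it only makes sound once sources are added
    try! phaseEngine.start()
}

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    // Explicitly stop PHASE and tear down its state when the view goes away
    phaseEngine.stop()
}</code></pre><p>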
In our own app we ended up writing <a href="https://github.com/cenatus/situationists-walkman/blob/main/SituationistsWalkman/SituationistsWalkman/SpeakerPHASEPlayer.swift">a couple of classes that abstract over a lot of the PHASE internals</a>, in order to be able to compose our app of simpler, configurable sound sources in a way that is hopefully a little easier to reason about. For now though, I’ll go through everything step by step, first we’ll need a reference to the sound asset in our app bundle which we want to add as a source:</p><p></p><p>This needs to be registered with the phase engine’s <code><a href="https://developer.apple.com/documentation/phase/phaseassetregistry">assetRegistry</a></code>, which manages all the objects used by PHASE:</p><p></p><p>The most important things we learned here is that the <code class="inline">url</code><em>must</em> not be nil, and <em>must </em> point to an existing audio asset that PHASE can read, and that the <code class="inline">identifier</code><em>must</em> be non-nil and unique. These might sound obvious, but PHASE crashes in a profoundly cryptic and non-obvious way if any of these things are untrue, which caused us several hours of head-scratching. Therefore a sensible first step, if faced with any mysterious crashes, is to double check that you’re registering all your assets properly.</p><p>We’ll gloss over the other options here, they’re well explained by the docs, apart from to say that the above is probably a useful starting point for working with mono or stereo sources. To work with multichannel audio, you’ll need to pass a <code class="inline">channelLayout</code> explicitly to tell PHASE how to interpret it, but for mono and stereo this can be derived automatically, hence passing <code class="inline">nil</code>.</p><p>We now have to instruct PHASE how to position and play the audio file we’ve registered. Recall that a PHASE environment is a hierarchical tree of objects representing the entities in that environment, so we have to add a node to this tree which handles playback of our sound asset. The type of node which plays an audio file is a <em>Sampler </em> node, and therefore to create one, we need to instantiate a <code class="inline">PHASESamplerNodeDefinition</code>, which defines that node. However, this definition depends on quite a few other objects, the role of which isn’t necessarily obvious until you’ve wired them all together. For that reason, I’m going to work through the following process backwards, starting with the sampler node definition, and working back through its dependencies, explaining them as I go. Obviously this doesn’t make this post particularly cut-and-paste-able, but hopefully will make the purpose of the code clearer. If you want to just grab the code and get on with it, that’s absolutely fine, but remember to paste the following blocks in reverse order :-)</p><p>So, without further ado, the final step in the chain is to create your sampler node definition add it to the <code class="inline">assetRegistry</code> (as we did with our asset itself), and start it playing:</p><p></p><p>(I’ve commented the objects here which are for the moment undefined, we’ll get to them in a second)</p><p>There’s a fair bit to discuss above, so I’ll start with a broad outline of what’s going on, and a couple of pitfalls we encountered while writing it, then move onto the specific configuration we’re providing and the options it affords. 
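</p><p>To make that concrete, here's a sketch of the chain for a single looping source. The file name, identifiers and level are placeholder values, and the two mixer objects referenced at the end are left undefined here - they're built up in the sketches that follow:</p><pre><code>// 1. Locate the audio asset in the app bundle - the URL must not be nil
guard let url = Bundle.main.url(forResource: "drone", withExtension: "mp3") else {
    fatalError("Missing audio asset")
}

// 2. Register it with the asset registry under a globally unique identifier
try! phaseEngine.assetRegistry.registerSoundAsset(
    url: url,
    identifier: "drone-asset",
    assetType: .resident,
    channelLayout: nil,        // nil works for mono/stereo; pass a layout for multichannel
    normalizationMode: .dynamic)

// 3. Describe a sampler node that plays the asset through a spatial mixer
let samplerNodeDefinition = PHASESamplerNodeDefinition(
    soundAssetIdentifier: "drone-asset",
    mixerDefinition: spatialMixerDefinition)   // defined in a later sketch
samplerNodeDefinition.playbackMode = .looping
samplerNodeDefinition.setCalibrationMode(calibrationMode: .relativeSpl, level: 0)
samplerNodeDefinition.cullOption = .sleepWakeAtRealtimeOffset

// 4. Register the sampler as a sound event asset - this identifier shares a
//    namespace with the sound assets, so it must also be globally unique
try! phaseEngine.assetRegistry.registerSoundEventAsset(
    rootNode: samplerNodeDefinition,
    identifier: "drone-event")

// 5. Create and fire the sound event (the "cue") that starts playback
let soundEvent = try! PHASESoundEvent(
    engine: phaseEngine,
    assetIdentifier: "drone-event",
    mixerParameters: mixerParameters)          // defined in a later sketch
soundEvent.start()</code></pre><p>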
A common pattern in PHASE is that we don’t deal with the objects of its internal representation directly (for example Mixers, Nodes, Assets), but instead create <em>definitions</em> for each of them which are registered as assets, and then referenced by an identifier (allowing PHASE itself to handle the creation and destruction of the objects itself as needed). We see this above, instead of instantiating a Sampler node directly, instead we create a <code><a href="https://developer.apple.com/documentation/phase/phasesamplernodedefinition">PHASESamplerNodeDefinition</a></code> and register it with the <code>assetRegistry</code>, before creating a <code><a href="https://developer.apple.com/documentation/phase/phasesoundevent">PHASESoundEvent</a></code> (you can think of this like a <em>cue</em>, an instruction for something to happen within the audio environment) referring to that definition, and firing it by calling its <code>start</code> method.</p><p>The principle gotcha here, as before, is that the <code class="inline">assetName</code> of your sampler node definition must be non-nil and unique, and crucially, it exists in the same namespace as your sound assets and any other object you register with the <code class="inline">assetRegistry</code>, so it must be <em>globally</em> unique. As before, failing to do this leads to some very cryptic errors, so it’s a good thing to check first if you’re having problems.</p><p>There’s a couple of options here that give us useful fine-grained control over the behaviour of our audio environment, and we’ll run through those now. The sampler node definition’s <code><a href="https://developer.apple.com/documentation/phase/phaseplaybackmode">playbackMode</a></code> property defines whether a sample will play back as a loop or as a one-shot audio event before stopping - here we set it to loop indefinitely.</p><p>We call the <code><a href="https://developer.apple.com/documentation/phase/phasegeneratornodedefinition/3835787-setcalibrationmode">setCalibrationMode</a></code> method to set the calibration mode for sample playback - this is where we can set the overall level of the sound emitted by this node relative to others. Here we choose to do this by defining the <a href="https://en.wikipedia.org/wiki/Sound_pressure#Sound_pressure_level">relative SPL</a> of the sound emitted by the node, and set it to 0dB.</p><p>Finally, we set the <code><a href="https://developer.apple.com/documentation/phase/phaseculloption">cullOption</a></code>, which defines what PHASE does when a sound becomes inaudible (for instance because the user has moved outside the sound source’s radius). Here we have chosen that the sound should, in effect, continue indefinitely, inaudibly, muting and unmuting as necessary, with the imagined ‘start point’ being the point where the PHASE environment was started. There’s a bunch of options available here which can be used to interesting creative effect, including restarting the audio each time it comes into earshot, and starting randomly at a different point each time.</p><p>So, having described our sampler node definition, we now need to work our way back and look at its dependencies, of which there are two - the similarly-named (and related!) <code class="inline">phaseSpatialMixerDefinition</code>, and <code class="inline">mixerParameters</code>. 
Since we’re working backwards, I’m going to start with the mixer parameters, as it _also _depends on the spatial mixer definition:</p><p></p><p>The mixer parameters identify how a specific sound is mixed, for a given listener, within the environment. Here we pass in the <code class="inline">PHASEListener</code> we defined right back at the start, the <code class="inline">PHASESpatialMixerDefinition</code> that we will deal with shortly, and a <code><a href="https://developer.apple.com/documentation/phase/phasesource">PHASESource</a></code>, which encapsulates the physical properties of the sound source within our world:</p><p></p><p>This is reasonably simple - we define a Source, which is made up of several <code><a href="https://developer.apple.com/documentation/phase/phaseshape">PHASEShapes</a></code>, each of which is defined by an <code><a href="https://developer.apple.com/documentation/modelio/mdlmesh">MDLMesh</a></code> which gives its shape and volume. A slight caveat at this point - in our fairly ad-hoc and subjective testing, we weren’t able to verify that changing the radius of this mesh had any effect at all on the sound itself (there are other parameters which affect the travel and diffusion of sound which we’ll get onto later). However, we didn’t look into this in a particularly detailed or rigorous manner, so your mileage may vary!</p><p>The other important thing to identify here is how you set the position (and scale, and rotation) of the object within the sound environment - by setting the transform of the source. This is one of the points where PHASE needs to integrate with whatever gameplay / xR framework you’re using, so we’ll highlight this and leave it undefined for now. The transform is expressed as an <a href="https://www.brainvoyager.com/bv/doc/UsersGuide/CoordsAndTransforms/SpatialTransformationMatrices.html">affine transformation matrix </a>expressed as a <code><a href="https://developer.apple.com/documentation/accelerate/simd_float4x4">simd_float4x4</a></code> - if this is starting to sound scarily mathematical to you (as it is to me, to be honest), there’s no need to panic - if you’re working with other Apple game and xR frameworks (such as ARKit or RealityKit) and attaching sounds to objects that exist within those (for instance visual assets or AR anchors), this is the same format as the transform property of those objects, so all you need to do is plumb them together. Be aware though, that depending on the lifecycle of your own application, you might not have this information available at the point where you instantiate the PHASE object graph, so you will likely need to keep a reference to this source object somewhere so that this can be updated later. At the end of this post, we’ll look in more detail into how this is done with ARKit as an example.</p><p>Having set up our mixer definition the only dependency that remains to sort out is our spatial mixer definition, which, however, comes with its own chain of dependencies we’ll build out as we go.</p><p>One thing that tripped me up here was the name of this particular object - from my heavily audio-engineering influenced perspective, you have one ‘mixer’ which handles mixing lots of different sources, and so initially I followed this intuition and created one spatial mixer definition which was shared between all my sources. This was frustrating because it meant that all the sources had the same parameters - FX sends, radius, level, etc. 
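</p><p>In forward, compilable order (rather than the backwards order of the prose), a sketch of the source and its mixer parameters - the mesh radius and the anchorTransform value are placeholders, and spatialMixerDefinition is built in the next sketch:</p><pre><code>// A source is built from one or more shapes, each backed by a ModelIO mesh
let mesh = MDLMesh.newIcosahedron(withRadius: 0.1, inwardNormals: false, allocator: nil)
let shape = PHASEShape(engine: phaseEngine, mesh: mesh)
let source = PHASESource(engine: phaseEngine, shapes: [shape])

// Position the source in the environment - here anchorTransform stands in for
// a simd_float4x4 taken from whatever AR or game framework you're integrating with
source.transform = anchorTransform
try! phaseEngine.rootObject.addChild(source)

// Mixer parameters tie this particular source and listener to a spatial mixer
let mixerParameters = PHASEMixerParameters()
mixerParameters.addSpatialMixerParameters(
    identifier: spatialMixerDefinition.identifier,
    source: source,
    listener: phaseListener)</code></pre><p>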
However, after some experimentation, I realised there’s no requirement for each source to share a spatial mixer definition - you can think of it more like a mixer <em>channel</em> or <em>bus</em> - you can create as many as are necessary - either one per channel, or one each for groups of sources that have the same properties. In our case, we create one per channel, which may not be the most efficient, but worked fine for our purposes:</p><p></p><p>So, the spatial mixer definition itself just links together two other objects, a <code><a href="https://developer.apple.com/documentation/phase/phasespatialpipeline">PHASESpatialPipeline</a>, and a <a href="https://developer.apple.com/documentation/phase/phasedistancemodelparameters">PHASEDistanceModelParameters</a>.</code></p><p>The <code class="inline">PHASESpatialPipeline</code> controls the various layers of how the environmental sound is built up, and you control it by passing it a set of <code><a href="https://developer.apple.com/documentation/phase/phasespatialpipeline/flags?changes=lat_2__8_1___2">Flags</a></code>which control the layers of sound rendered - direct transmission (so the sound arriving directly at the ears of the listener from the source, early reflections, and late reverb. Here, we configure ours to add reverberation (recall we set the reverb preset on the engine earlier), and set the send level for this source:</p><p></p><p>So, the final piece of the jigsaw is the <code><a href="https://developer.apple.com/documentation/phase/phasedistancemodelparameters">PHASEDistanceModelParameters</a></code>, which is where we set the crucial properties that control the behaviour of our sound sources in the world we’re building:</p><p></p><p>The PHASEDistanceModelParameters class itself is an abstract superclass, and there’s a couple of concrete implementations we can choose from, allowing us to specify the behaviour of the sound source in different ways. Here we choose to use an instance of <code><a href="https://developer.apple.com/documentation/phase/phasegeometricspreadingdistancemodelparameters">PHASEGeometricSpreadingDistanceModelParameters</a></code>, which gives us a decent tradeoff of natural-sounding sound spreading, and ease of configuration, as it only has two parameters - the <em>cull distance </em> and <em>rolloff factor</em>. The cull distance is the distance (in metres) at which the sound becomes inaudible and stops playing (what happens when the user moves back <em>into</em> the source’s radius is defined by the <code>cullOption</code> we set earlier on our sampler node definition). The rolloff factor controls the steepness of the curve with which the sound level decreases over this distance - a value of 1.0 gives a halving of level with a doubling of distance, higher values give a faster decay, and lower values slower. Beware though that these parameters aren’t linked as such, and we found that particularly with small cull distances, we needed to manually tune a bit in order to avoid a very obviously audible hard cutoff at the cull radius. 
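</p><p>A sketch of those final pieces - the send level, cull distance and rolloff factor are placeholder values you'd tune by ear:</p><pre><code>// Direct transmission plus late reverb (the reverb preset was set on the engine earlier)
let pipelineFlags: PHASESpatialPipeline.Flags = [.directPathTransmission, .lateReverb]
let spatialPipeline = PHASESpatialPipeline(flags: pipelineFlags)!
spatialPipeline.entries[PHASESpatialCategory.lateReverb]!.sendLevel = 0.2  // per-source reverb send

// One spatial mixer definition per source, like a mixer channel
let spatialMixerDefinition = PHASESpatialMixerDefinition(spatialPipeline: spatialPipeline)

// Geometric spreading: just a cull distance (metres) and a rolloff factor
let distanceModel = PHASEGeometricSpreadingDistanceModelParameters()
distanceModel.fadeOutParameters = PHASEDistanceModelFadeOutParameters(cullDistance: 10.0)
distanceModel.rolloffFactor = 1.0  // level halves for each doubling of distance
spatialMixerDefinition.distanceModelParameters = distanceModel</code></pre><p>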
However, this was fairly easy to do, and much simpler than the other distance model parameters implementations, which allow you to, for instance, define an envelope over which the sound decays.</p><p>This completes the chain of dependencies for our sound source, so now, all being well, our project will compile, and if we’ve set the distance model parameters and positions of our source and listener such that the listener is in range of the source, we’ll hear some audio!</p><p>Without the ability to move our listener around though, the effect of all this setup is rather lost. Thankfully, at least from PHASE’s point of view, that is all very simple - you just set the <code class="inline">transform</code>property of our <code>phaseListener<em></em></code>object whenever the listener position updates. This, like the source position we saw earlier, is expressed as an <a href="https://www.brainvoyager.com/bv/doc/UsersGuide/CoordsAndTransforms/SpatialTransformationMatrices.html">affine transformation matrix </a>expressed as a <code><a href="https://developer.apple.com/documentation/accelerate/simd_float4x4">simd_float4x4</a></code>. Of course, where you get this <em>from</em>, and where you choose to update the listener position, might be more complicated, and depends entirely on the environment in which you’re attempting to integrate PHASE. In our case, however, integrating with <a href="https://developer.apple.com/augmented-reality/arkit/">ARKit</a>, this also turned out to be simple, assuming you’re already familiar with common patterns in iOS development, and the ARKit library. In our <code><a href="https://developer.apple.com/documentation/arkit/arsessiondelegate">ARSessionDelegate</a></code>, we needed to implement the <code><a href="https://developer.apple.com/documentation/arkit/arsessiondelegate/2865611-session">session(_:didUpdate:)</a></code> callback, from which it’s trivial to get the current transform of the listener in AR space from the frame object that’s passed in:</p><p></p><p>There’s one important caveat though - this transformation matrix does <em>not </em> take into account the rotation of the device at all, and, while it tracks position in-world accurately, the rotation around the head only works when the device is held horizontally, with the camera in the top left corner. This took <em>ages</em> to identify, and highlights a more general point - that it’s very difficult indeed to debug based purely on audio feedback - it’s easy to tell when something ‘sounds wrong’ or ‘isn’t working’, but quite hard to work out exactly what the issue is. For this reason, we recommend adding a simple visual debug mode to your app as early as possible if you’re working with audio-only experiences like us. In our case, the solution to this problem was to constrain the application to only display in landscape mode, and prompt the user to hold the phone in landscape while in the experience. This is far from ideal, but due to time constraints it was the pragmatic choice. However, with a little more time this should be easily solved - it should be possible to listen for the device rotation, then multiply the ARKit camera transformation matrix through a constant rotation matrix corresponding to the current rotation, before passing it into PHASE. 
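</p><p>A sketch of that delegate callback, assuming the listener created earlier is stashed in a phaseListener property:</p><pre><code>// ARSessionDelegate - called on every AR frame with the latest camera pose
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // The camera transform tracks the device's position in AR world space,
    // but note the caveat above about device rotation and landscape orientation
    phaseListener.transform = frame.camera.transform
}</code></pre><p>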
I’ll leave this as an exercise for the reader for now, but will update this post if and when I get back into the project and fix this :-)</p><p>As a final bonus, I wanted to quickly show how to integrate the head tracking available in Apple headphones such as the <a href="https://www.apple.com/airpods-pro/">Airpods pro</a>, as this is very easily achieved (which came as a very pleasant surprise to us).</p><p>The head tracking information is available from the <a href="https://developer.apple.com/documentation/coremotion">CoreMotion framework</a>, via the <a href="https://developer.apple.com/documentation/coremotion/cmheadphonemotionmanager">CMHeadphoneMotionManager</a> class:</p><p></p><p>At first glance there’s quite a lot going on here, but it’s actually very simple. In our view controller where we’re presenting the AR experience, we create a new headphone motion manager, as well as two <code class="inline">simd_4x4</code> matrices, to store the current device position and headphone position respectively. In the <code class="inline">onViewDidLoad</code> callback we then request device motion updates from the headphone motion manager, passing in a callback which simply calls the <code class="inline">handleHeadMovement</code>function with the result. In the definition of <code class="inline">handleHeadMovement</code>, we have to do a little bit of song and dance to convert the <code><a href="https://developer.apple.com/documentation/coremotion/cmrotationmatrix">CMRotationMatrix</a></code> that Core Motion gives us to the <code>simd_float4x4</code> that PHASE expects, which we then assign to the <code>headphoneTransform</code> variable we created earlier. Finally, we set our listener position to the device’s position <em>multiplied by </em>the head rotation, which, via the magic of linear algebra (and assuming the listener has their phone close by and roughly in front of them), gives us their listening position including the rotation of their head.</p><p>The only other change we need to make is to change our definition of <code class="inline">session(_:didUpdate:)</code> slightly, firstly to store the <code class="inline">deviceTransform</code> in an instance variable so we can get at it when the head motion callback is called, and secondly to apply the current head position transformation when the device position is updated too (this prevents glitches when the device position updates but the head position hasn’t yet updated).</p><p>So, that’s about it for now - you can see <a href="https://cenatus.org/blog/28-the-situationists-walkman---an-audio-augmented-reality-experience">a demo of the finished experience</a> in the video on our blog, and <a href="https://github.com/cenatus/situationists-walkman/">the code for it</a> on Github (as well as some <a href="https://github.com/cenatus/audio-ar-playground">previous prototypes and experiments</a>). If you’re within travelling distance of London, have a recent iPhone or iPad and would like to try out the experience, please <a href="https://cenatus.org/contact">get in touch</a> with us, and we’ll let you know when we launch! Any other questions and comments are always very welcome and you can <a href="https://follow.it/cenatus">follow our updates here</a>. If you need advice, consultancy, creative or development work done on an AR or spatial audio project, we’re <a href="https://cenatus.org/contact">available for hire</a>! Thanks for reading.</p><p>Photo credit <a href="https://unsplash.com/@obuol">Auguras Pipiras</a>.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Wed, 02 Mar 2022 13:40:56 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/30-the-situationists-walkman---a-deep-dive-into-apple-s-phase-audio-engine-</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Blog - The Situationists' Walkman - An Audio Augmented Reality Experience]]>
        </title>
        <link>https://cenatus.org/articles/28-the-situationists-walkman---an-audio-augmented-reality-experience</link>
        <description>
          <![CDATA[<p>We focussed our research &amp; development work for the <a href="https://cenatus.org/blog/26-creative-industries-fund-2021">CIF2021 award</a> around two eXtended Reality (XR) experiences with a specific focus on 3D or spatial sound. Building on <a href="https://sharawadji.cat/">previous projects</a> and <a href="https://www.bbc.co.uk/rd/blog/2019-11-audio-augmented-reality-guide-tips">research work</a> undertaken with colleagues at BBC R&amp;D, <a href="https://timcowlishaw.co.uk">Tim Cowlishaw</a> and I planned a prototype to investigate the use of headtracked, 3D audio to provide the foundation for overlaying exciting sonic artworks, stories and experiences that transport us from, or augment our real world environment.</p><h3>Goals</h3><p>We set out to further validate the hypothesis that using Audio Augmented Reality would create a convincing and compelling immersive experience. Unlike computer graphics running on most consumer hardware, the fidelity of recorded audio is great enough that our ears are already convinced by reality of what we perceive. The missing factor is sound emanating from a fixed point in space, and being <a href="https://research.mach1.tech/glossary/general-terms/#spatial-audio">rendered spatially</a>. In short, sounding as natural as possible and closely approximating what we hear IRL.</p><p>There are plenty of <em>headlocked</em>, “spatial” audio experiences but it was important for us to investigate delivery of audio that is <strong>headtracked</strong> (doesn’t reposition as you move), has the sense of <strong>externalisation</strong> (the sound in relation to you) and <strong>presence</strong> (<em>really</em> being there).</p><p>Conceptually, we were interested in creating a playful, non-directed experience that offered surprising, uncanny sonic augmentation to the world around you. Using the city as a playground - a canvas upon which we could project directed audio stimulus that’s unconstrained by the physicality and hardware of a gallery setting, augmenting familiar surroundings with another layer of perceptual information. A 3D graphical score with a participant conductor. </p><p>One particular reference we kept returning to was the <a href="https://en.wikipedia.org/wiki/Situationist_International">Situationist</a> idea of the <em>dérive - “a mode of experimental behaviour linked to the conditions of urban society: a technique of rapid passage through varied ambiances.” </em> - an undirected, spontaneous exploration of the urban environment. This seemed particularly germane during the months in which we were working on this project - Facebook had just announced their “metaverse” strategy and rebrand, and a lot of the commentary and discourse around the announcement (as well as other <a href="http://hyper-reality.co">artistic references</a> we’d been inspired by) were concerned with the possibility that AR technologies might lead to further enclosure and commercialisation of public spaces. In this context, the idea that we might use the project both as an exploration of both the playful, open-ended possibilities of the technology for artistic expression, and a means of celebrating the importance of open public spaces and expanding them into AR-space, made the idea of a situationist-like intervention even more pertinent. </p><p>The result was <em>The Situationists’ Walkman</em> - a<em> digital dérive</em>, playing out within a small area of East London around Arnold Circus. 
An updated version of the Situationists’ vision using audio led exploration of the urban environment that prompts us to reconsider our relationship to this constructed or delineated space.  </p><a href="https://res.cloudinary.com/cenatus/image/upload/v1644422138/blog-situationists-walkman/sw-p-Screenshot_2021-11-16_164022.png"><img src="https://res.cloudinary.com/cenatus/image/upload/v1644422138/blog-situationists-walkman/sw-p-Screenshot_2021-11-16_164022.png" alt="augmented audio"/></a><p>We commissioned a number of artists to make sonic artworks hosted inside overlapping zones around Arnold Circus and the Boundary Estate. Each artist provided a number of individual tracks or stems as well as directions of how to “attach’ them to the topography of the zone. </p><a href="https://res.cloudinary.com/cenatus/image/upload/v1644422141/blog-situationists-walkman/sw-t-Screenshot_2021-11-16_164310.png"><img src="https://res.cloudinary.com/cenatus/image/upload/v1644422141/blog-situationists-walkman/sw-t-Screenshot_2021-11-16_164310.png" alt="augmented audio plan view"/></a><h3>Strategy</h3><p>The <a href="https://developer.apple.com/videos/play/wwdc2021/10079/">WWDC21 announcement</a> of the <a href="https://developer.apple.com/documentation/phase">PHASE</a> audio engine from Apple really piqued our interest. That and the head tracking available via their Airpod headphones pointed at tools worthy of investigation for the project.</p><p>We evaluated numerous software environments (honorary shout out to <a href="https://roundware.org">Roundware</a>) and frameworks en route to the prototype and what became clear is we had a number of challenges to solve:</p><ol><li>Listener location - <em>accurately</em> position them in world space
</li><li>Listener orientation - the direction of their head/ears 
</li><li>Virtual speaker positioning
</li><li>Sound emission shape, direction and attenuation
</li><li>Sound occlusion
</li><li>3D or spatial audio
</li></ol><p>Modern game engines such as Unity and Unreal offer the tools to design audio experiences in a constructed, virtual environment and provide all of the properties listed above. That is, assuming you design a sound stage and navigate around it in a fully immersed Virtual Reality environment via a hardware headset. We require a similar experience but instead, overlaid IRL. One of our guiding principles is for the technology to disappear into the background so the participant can become fully immersed, without getting distracted by the medium. It seemed like a good time to refamiliarise ourselves with these <a href="https://www.bbc.co.uk/rd/blog/2020-11-audio-augmented-reality-recommendations-guide">tips for creating audio AR experiences</a> from Henry Cooke and BBC R&amp;D friends before kicking off to avoid any pitfalls they uncovered.</p><p>You can take a <a href="http://cenatus.org/blog/29-the-situationists-walkman---tech-deep-dive">deeper dive into the technology</a> we used and how we arrived at the final experience over at this post.</p><h3>Results</h3><p>After a lot of work finessing onsite, we had an experience that we were really happy with! It’s very effective wandering around and discovering augmented sounds emanating from various points around the site.</p><p>Here’s a DIY video of part of the experience. It can’t really capture what it feels like to explore the site but will give you a sense of it. <strong>Make sure you wear headphones otherwise the spatial effect will be lost.</strong></p><p>During testing, we noticed a really uncanny perceptual side effect whereby after exposure to augmented audio, particularly any material that was clearly not coming from the local environment (that was not always obvious), your ear <em>really</em> tuned back into the natural sounds in a way that focussed you sharply back into the location - almost like a heightened state of sensory perception!</p><p>Another interesting observation and an area we’d explore further is we preferred the experience without the Airpods head tracking data! This is counterintuitive but having the hardware device held against your chest meant limiting the amount your head can turn before you twist your body anyway. There also seemed to be some lag between quick head movement and the soundstage reorientation. </p><p>We plan to run some tests with a small group of participants onsite in March. After that, we’ll release the experience to a wider group via Test Flight before pushing to the App Store. <a href="https://cenatus.org/contact">Drop us a line</a> if you’d like to be part of the test groups and get early access! N.B. -  you’ll need a LIDAR capable Apple device and iOS 15 or later.</p><h4>Thanks and credits</h4><p>Concept, Design and Development by: Matt Spendlove &amp; Tim Cowlishaw</p><p>Featured artists:</p><ul><li>Elvin Brandhi
</li><li>Tim Cowlishaw &amp; Constanza Piffardi
</li><li>Sally Golding 
</li><li>Mark Harwood
</li><li>Nick Luscombe
</li><li>The Nonument Group
</li><li>Ruaridh Law
</li><li>Spatial &amp; Oliver Coates
</li></ul><p>View a map of the <a href="https://www.google.com/maps/d/edit?mid=1z8k3vLh3j7iv4D46FEJX26YEzs4noh0A&ll=51.52595005176665%252C-0.07529080000001187&z=17">participating artist zones here</a>.</p><p>Produced by <strong>Cenatus</strong> with the generous support of the Innovate UK Creative Industries Fund.</p><p>Thanks to Dave Johnston, Irini Papadimitriou and Henry Cooke for mentoring and advice.</p><p>If you’re interested in our work you can <a href="https://follow.it/cenatus">follow along here</a> or please <a href="https://cenatus.org/contact">get in touch</a> if you need some help designing spatial audio experiences!</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Wed, 09 Feb 2022 19:07:06 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/28-the-situationists-walkman---an-audio-augmented-reality-experience</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Blog - The Situationists' Walkman - Tech deep dive]]>
        </title>
        <link>https://cenatus.org/articles/29-the-situationists-walkman---tech-deep-dive</link>
        <description>
          <![CDATA[<p>This post follows on from our <a href="http://cenatus.org/blog/28-the-situationists-walkman---an-audio-augmented-reality-experience">introductory entry</a> where we introduce the project goals and concepts and present our findings. It is intended for a technical audience and takes a deep dive into the process.</p><p>To recap and contextualise, these are the challenges we needed to solve:</p><ol><li>Listener location - <em>accurately</em> position them in world space
</li><li>Listener orientation - the direction of their head/ears 
</li><li>Virtual speaker positioning
</li><li>Sound emission shape, direction and attenuation
</li><li>Sound occlusion
</li><li>3D or spatial audio
</li></ol><h3>Prototyping &amp; technology</h3><p>One early idea was to construct a 3D “digital twin” of Arnold Circus via mapping and LIDAR data. The hypothesis was we could then work on that model in either XCode or Unity/Unreal to place the virtual speakers and design the artist zones, leaning on an audio engine such as PHASE or Google’s Resonance Audio for spatial sound. An advantage of this approach would be the ability to develop and test offsite on a PC or even in VR. The trick would be to later swap out the first person navigation from joystick/keyboard with realworld listener positional data via GPS. This approach seemed advantageous by leaning on a lot of tried and tested subsystems. </p><p>We were warned a few times about the lack of precision from GPS and infrequent updates but what didn’t really coalesce until discussing amongst the wider XR community was that the accuracy was a likely problem for the <em>listener positioning</em> rather than speaker location. The analogy would be a first person shooter video game with a very low frame rate. Rather than jittery graphics we’d likely experience audio glitches as the audio engine calculations tried to keep up with the low resolution updates. We couldn’t afford any playback issues that would break the sense of immersion. I should clarify that we didn’t have capacity to test this out with a prototype so had to reason it through. That’s something we would change in hindsight.</p><p>Whilst looking into <a href="https://developer.apple.com/documentation/arkit">ARKit</a>, we became aware of  <code class="inline">ARGeoAnchors</code> as one of the possible <a href="https://developer.apple.com/documentation/arkit/content_anchors">content anchors</a>. These offer the ability to anchor AR overlays to persistent world locations, alongside the more familiar surface, image, object and body or face recognition. One important restriction is that geo anchors are currently limited to <a href="https://developer.apple.com/documentation/arkit/argeotrackingconfiguration">a number of U.S. cities and London</a> - we just snuck in there! </p><p>At this stage the first big compromise started to become clear. To allow us to achieve 1) above - if we lean on geo anchors, then the participant <em>must</em> have their phone out! To enable the kind of accurate location lock AR experiences need, the Apple device and ARKit will get an initial location from GPS data (similar to how you navigate in Maps) and then download pre scanned 3D image data around your approximate location and compare that to feedback from the device’s LIDAR depth sensor. We decided to accept the tradeoff initially and dive deeper into ARKit to see what other advantages it brings. Ultimately, we could still prompt the user to position their phone in a way that doesn’t encourage them to look at it e.g. hold it to their chest and provide experience feedback or instructions aurally, discouraging any need to look at the screen. A third eye, if you will!</p><p>To dig deeper into the capabilities of ARKit, we needed to get familiar with creating a model or an <a href="https://developer.apple.com/documentation/arkit/arreferenceobject">ARReferenceObject</a> (*.arobject files) to trigger content anchors. To do this we used some sample code from Apple to <a href="https://developer.apple.com/documentation/arkit/content_anchors/scanning_and_detecting_3d_objects">scan and detect 3D objects</a> and create the model a water bottle (amongst other things). 
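</p><p>As a quick aside before returning to object detection, checking for geo-tracking support and dropping an ARGeoAnchor looks roughly like this - arView is assumed to be a RealityKit ARView, and the coordinates are placeholders rather than our actual speaker positions:</p><pre><code>// Geo tracking only works in supported cities - check before running the session
ARGeoTrackingConfiguration.checkAvailability { available, _ in
    guard available else { return }
    arView.session.run(ARGeoTrackingConfiguration())
}

// Anchor content to a real-world coordinate (placeholder values)
let coordinate = CLLocationCoordinate2D(latitude: 51.5265, longitude: -0.0757)
let geoAnchor = ARGeoAnchor(coordinate: coordinate)
arView.session.add(anchor: geoAnchor)</code></pre><p>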
One can then set up code in XCode to detect a model and trigger AR content. </p><a href="https://res.cloudinary.com/cenatus/image/upload/v1644422146/blog-situationists-walkman/object-scanning-img_0004.png"><img src="https://res.cloudinary.com/cenatus/image/upload/v1644422146/blog-situationists-walkman/object-scanning-img_0004.png" alt="augmented beer can"/></a><p>A far simpler way to test an object or model recognition is to load it into a Reality Composer project. You can see an example in this <a href="https://github.com/cenatus/audio-ar-playground/tree/main/water-bottle.rcproject">Github repo</a> where we animate properties of a simple geometric shape to appear via a <a href="https://developer.apple.com/documentation/realitykit/creating_3d_content_with_reality_composer/bringing_a_reality_composer_scene_to_life">behaviour</a> once the object is detected. It’s worth noting at this stage, if you create a new project in XCode from the “Augmented Reality App” template alongside SwiftUI, you’ll generate the boilerplate code to run up and load a Reality Composer <code class="inline">*.rcproject</code> file into a RealityKit runtime. This theoretically means you can prototype or design in Reality Composer before you fully integrate into your experience.</p><a href="https://res.cloudinary.com/cenatus/image/upload/v1644422140/blog-situationists-walkman/reality-composer-bottle-2022-02-07-at-13.59.09.png"><img src="https://res.cloudinary.com/cenatus/image/upload/v1644422140/blog-situationists-walkman/reality-composer-bottle-2022-02-07-at-13.59.09.png" alt="reality composer"/></a><p>Next we worked on building an indoor AR experience that triggered audio on object recognition. Starting simple with just one <a href="https://github.com/cenatus/audio-ar-playground/commit/6b9d969bfe11484da6ff34b34a0fb67c49c3caba#diff-3d225fa2edae514be55ee7d2a42c92c781d04c0897bacfc517756c63dcacda1a">object + sound</a> before adding a <a href="https://github.com/cenatus/audio-ar-playground/commit/ef1105460308ea599ae35626ac47105623fec2a6#diff-3d225fa2edae514be55ee7d2a42c92c781d04c0897bacfc517756c63dcacda1a">few into the scene</a> and creating a mix of audio as you move around.</p><p>That worked pretty well but alas, we’re not quite done yet as there didn’t seem to be a way to control the radius and attenuation of the sound using the built in <a href="https://developer.apple.com/documentation/scenekit/scnaudioplayer">SCNAudioPlayer</a>. We’re putting the finishing touches to another post focussing specifically on the various audio engine options but we <a href="https://github.com/cenatus/audio-ar-playground/commit/d26e9ba6e47ec732bc499994052410a29878570e">trialled and discounted AVFoundation</a> (attenuation settings were global, not per source) en route to getting PHASE going as in <a href="https://github.com/cenatus/audio-ar-playground/commit/6bb44b5e40829a45349fc5e9df799a6d877c3fed#diff-6004d7062484ff853d562e53a82da30950b4a8f5f1815652035c2416f7b31896">our test AR project</a>. The subsequent repo commits tweak the settings and a visualisation wireframe mesh. Stay tuned for a <a href="http://cenatus.org/blog/30-the-situationists-walkman---a-deep-dive-into-apple-s-phase-audio-engine-">tutorial on using PHASE</a> in projects such as these, as the official documentation is rather sparse, and we ran into some interesting problems along the way.</p><p>This gave us some confidence that we could use PHASE alongside an AR experience but also lead to a bit of confusion. 
The code examples we followed and used as the basis for our prototypes lead us to working directly with ARKit, with SceneKit for rendering. That’s all good but Apple seems to be pushing towards <a href="https://developer.apple.com/documentation/realitykit/">RealityKit</a> as the preferred option for AR and the APIs and delegate callbacks are different enough to require a bit of work to switch between. </p><p>Also, the examples we found that <a href="https://developer.apple.com/documentation/arkit/content_anchors/tracking_geographic_locations_in_ar">implemented ARGeoAnchors</a> for a real world experience were using RealityKit! The current message is a bit mixed from Apple but the important distinction is to recognise that RealityKit has its own rendering engine and doesn’t require <a href="https://developer.apple.com/documentation/scenekit">SceneKit</a>. This doesn’t really matter when using PHASE for audio but obviously you couldn’t then use the Scenekit <code class="inline">SCNAudioPlayer</code>.</p><p>Once we verified using PHASE with ARGeoAnchors, we could start thinking about UI. We settled on <a href="https://developer.apple.com/xcode/swiftui/">SwiftUI</a> which is the newer, more familiar, declarative UI toolkit. The structure of SwiftUI projects is a little different from traditional UIKit projects so we needed to work out a bit of plumbing to translate the correct lifecycle elements from our UIKit prototypes in SwiftUI style. </p><p>The most important part of this is to have a struct that implements <a href="https://developer.apple.com/documentation/swiftui/uiviewrepresentable">UIViewRepresentable</a>, acting as a bridge to traditional UIKit code. Our <a href="https://github.com/cenatus/situationists-walkman/blob/main/SituationistsWalkman/SituationistsWalkman/ARViewContainer.swift">ARViewContainer</a> uses a Coordinator class to implement the necessary RealityKit (and underlying ARKit) delegate lifecycle callbacks. This gave a reasonable container for most of our experience code where we could do tasks like set up the <a href="https://developer.apple.com/documentation/arkit/arcoachingoverlayview">AR coaching overlay</a>, parse and load our virtual speaker information from <a href="https://github.com/cenatus/situationists-walkman/blob/main/SituationistsWalkman/SituationistsWalkman/speakers.gpx">hacked version</a> of the <a href="https://www.topografix.com/gpx.asp">GPX format</a> and receive AR frame ticks to update the listener position in PHASE and generally manage its lifecycle.</p><p>We now have the majority of our challenges solved with a couple of caveats.</p><p>We ran out of time to dig deeper and test sound shape and direction from 4), only using attenuation, which was enough to design a first pass of the experience. The bigger omission is 5) Occlusion. Whilst this is certainly possible with PHASE, we’d need to manually model all the building geometry in our zone and teach the engine about it. This was just too big a task to be valuable given the time constraints and we are happy enough to drop that for this iteration.</p><p>Please check out <a href="http://cenatus.org/blog/28-the-situationists-walkman---an-audio-augmented-reality-experience">the intro post</a> to preview the experience and in the meantime, if you’re interested in our work you can <a href="https://follow.it/cenatus">follow along here</a> or please <a href="https://cenatus.org/contact">get in touch</a> if you need some help designing spatial audio experiences!</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Wed, 09 Feb 2022 20:28:04 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/29-the-situationists-walkman---tech-deep-dive</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Blog - Building an FM Synth with MetaSounds]]>
        </title>
        <link>https://cenatus.org/articles/27-building-an-fm-synth-with-metasounds</link>
        <description>
<![CDATA[<p>My initial exploration into DSP and 3D audio with game engines mostly focussed on Unity but I <a href="https://cenatus.org/blog/22-sprint-4---dsp-3d-audio">noted at the time</a> that Epic’s Unreal Engine was actually way ahead in terms of native DSP and procedural audio. Fast forward to mid-2021 and Epic announced MetaSounds, the latest iteration and culmination of a lot of hard work modernising the audio in their engine! They describe it as follows:</p><blockquote><p><em>MetaSounds are fundamentally a Digital Signal Processing (DSP) rendering graph. They provide audio designers the ability to construct arbitrarily-complex procedural audio systems that offer sample-accurate timing and control at the audio-buffer level. </em></p></blockquote><p>This is a <em>huge</em> leap forward and they provide a graphical patch editor to wire together low-level audio components to generate and process sound. These patches can then expose parameters to the engine runtime and enable tight integration between gameplay (or experience play) and sound. The UI is similar in concept to Pure Data or Max MSP and clearly targets sound designers and composers. Colour me very interested!</p><p>I really wanted to explore the possibilities for use in immersive environments so as part of our <a href="https://cenatus.org/tags/86-cif2021">CIF2021</a> R&amp;D work I proposed a prototype to research the use of procedural audio in creating an audiovisual artwork that can adapt and transform in real time within a Virtual Reality “activated environment”, based upon user exploration and interaction.</p><h3>Creative possibilities</h3><p>The potential of this style of integrated software is HUGE and affords the kind of control we can only dream about when creating physical <a href="https://cenatus.org/tags/92-gallery">gallery installations</a> or <a href="https://cenatus.org/tags/94-expanded-cinema">expanded cinema</a> work. My experience building audiovisual performances taught me that fundamentally we’re working on a perceptual level using <a href="https://cenatus.org/tags/69-psychophysics">psychophysical principles</a>. Our brains are fantastic pattern-matching systems so providing the right cues and mappings can lead to convincing, integrated multi-sensory experiences. </p><p>A simplistic, concrete example would be using synthesiser envelope data to shape audio and visual stimulus in unison. When used in audio synthesis, an envelope represents the change in some value over time. Mapping that same value data to some visual property, e.g. the opacity of a primitive shape, can really bring the visual to life, make the animation dance and solidify the connection in our brains.</p><a href="https://res.cloudinary.com/cenatus/image/upload/v1644065402/blog-metasounds/ADSR_parameter.svg"><img src="https://res.cloudinary.com/cenatus/image/upload/v1644065402/blog-metasounds/ADSR_parameter.svg" alt="ADSR envelope"/></a><h3>Technology overview</h3><p>As you can see in the header image, MetaSound patches are a graph of nodes and connections that represent an audio signal path. The editor provides access to insert and connect these nodes to build out the patch.</p><p>At the time of writing, MetaSounds are only available via plugin as part of the Unreal Engine 5 early access program and are under active development. 
Our research took place on the EA2 branch but I know from following the <code class="inline">#metasounds</code> channel on <a href="https://unrealslackers.org">Discord</a> that the API has changed a bit since EA2.</p><p>It’s crucial to grasp that source nodes in your MetaSounds graph <em>could</em> be traditional, static audio wave files for sample playback and processing but <em>far more interestingly</em> can be waveform generators like the classic Saw, Sine, Square, and Triangle shapes. Combining audio sources with filters, envelope generators, LFOs, mixers and maths functions provides building blocks to create everything from familiar subtractive synths to elaborate, custom hybrid synth patches - all with a set of exposed interface parameters that can be controlled by anything else in the game engine.</p><p>Alongside these prebuilt nodes exposed via the patching UI, it’s also possible to create your own nodes by writing some C++ and including them in a manual, source build of the engine.</p><h3>FM Synth</h3><p>The classic audio generators handily include a frequency modulation input for FM synthesis techniques so I decided to build an FM synth patch as the basis for my interactive experiments.</p><p>In FM synthesis, we use a “modulator” signal to modulate the pitch of another “carrier” signal that’s in a similar audio range. This modulation creates new frequency information in the resulting sound, changing the timbre or colour of the sound by adding more partials.</p><p>In recent years, I spent time learning synthesis techniques in greater detail via hardware modular systems and software like Supercollider. The latter is an incredible open source, real-time, interactive synth engine and an essential learning resource for DSP. I revisited <a href="https://www.youtube.com/playlist?list=PLPYzvS8A_rTaNDweXe6PX4CXSGq4iEWYC">Eli Fieldsteel’s essential course materials</a>, focussing on the FM lessons (part <a href="https://www.youtube.com/watch?v=UoXMUQIqFk4&list=PLPYzvS8A_rTaNDweXe6PX4CXSGq4iEWYC&index=22">1</a> &amp; <a href="https://www.youtube.com/watch?v=dLMSR2Kjq6Y&list=PLPYzvS8A_rTaNDweXe6PX4CXSGq4iEWYC&index=24">2</a>) and worked on porting that across into a MetaSounds patch. </p><p>I won’t repeat the instructions here but if you follow along with his video it should be simple enough to translate. I’d encourage you to do so, particularly if you want to get a sense of the wide range of potential sounds from even the most basic FM synth, whilst getting a demonstration of how powerful randomising the control values can be for sound design. This is the crux of why procedural audio in game engines is so appealing for developers.</p><h4>FM Basic</h4><p>In the short video below, we can see the MetaSounds implementation of Eli’s basic FM synth from Supercollider:</p><h4>FM Ratio Index</h4><p>Following on from that is a slightly more advanced example from the second lesson, which we use as the basis for further experiments.</p>
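<p>If it helps to see the modulator/carrier relationship written down rather than patched, here’s a tiny illustrative sketch in plain Swift. It has nothing to do with MetaSounds or Unreal itself and the numbers are arbitrary; it just shows the maths described above, where the modulation depth (or index) controls how strongly the extra partials appear:</p>
<pre><code>import Foundation

// Two-operator FM: a modulator at modFreq (scaled by modDepth) wobbles the
// phase of a carrier at carrierFreq, creating new partials in the output.
func fmSample(at t: Double, carrierFreq: Double, modFreq: Double, modDepth: Double) -> Double {
    let modulator = modDepth * sin(2.0 * Double.pi * modFreq * t)
    return sin(2.0 * Double.pi * carrierFreq * t + modulator)
}

// Render one second at 48kHz, e.g. a 220Hz carrier with a 2:1 modulator ratio.
let sampleRate = 48_000.0
var samples = [Double]()
for n in stride(from: 0.0, to: sampleRate, by: 1.0) {
    samples.append(fmSample(at: n / sampleRate, carrierFreq: 220, modFreq: 440, modDepth: 5))
}</code></pre>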
<p>You can use the following screenshots to recreate the patch:</p><a href="https://res.cloudinary.com/cenatus/image/upload/v1644065251/blog-metasounds/FM_Ratio_Index_1_Screenshot_2022-02-02_163453.png"><img src="https://res.cloudinary.com/cenatus/image/upload/v1644065251/blog-metasounds/FM_Ratio_Index_1_Screenshot_2022-02-02_163453.png" alt="FM Ratio Index 1"/></a><a href="https://res.cloudinary.com/cenatus/image/upload/v1644065254/blog-metasounds/FM_Ratio_Index_2_Screenshot_2022-02-02_163528.png"><img src="https://res.cloudinary.com/cenatus/image/upload/v1644065254/blog-metasounds/FM_Ratio_Index_2_Screenshot_2022-02-02_163528.png" alt="FM Ratio Index 2"/></a><br/><p>The playlist below demonstrates the effect of changing the modulator/carrier ratios and other inputs:</p><br/><h4>Blueprints</h4><p>All the examples so far show patching and playback directly from within the MetaSounds editor. To actually <em>publish</em> the MetaSound inputs to other components in UE5 for runtime control, you currently need to wrap the MetaSound into a Blueprint. You can see an example below where we wrap and expose the MetaSound inputs as Blueprint variables:</p><a href="https://res.cloudinary.com/cenatus/image/upload/v1644065251/blog-metasounds/BP_MSPAVUgen_02_02_2022_17_18_51.png"><img src="https://res.cloudinary.com/cenatus/image/upload/v1644065251/blog-metasounds/BP_MSPAVUgen_02_02_2022_17_18_51.png" alt="MetaSounds and Blueprints"/></a><p>The next screenshot shows an example of using the UE5 sequencer tool to modulate some of those parameters over time. You could, of course, use any aspect of your game to manipulate the synthesiser:</p><a href="https://res.cloudinary.com/cenatus/image/upload/v1644065251/blog-metasounds/Proun_-_Unreal_Editor_02_02_2022_17_22_10.png"><img src="https://res.cloudinary.com/cenatus/image/upload/v1644065251/blog-metasounds/Proun_-_Unreal_Editor_02_02_2022_17_22_10.png" alt="UE5 Sequencer"/></a><h4>Summary</h4><p>We’ve seen how to build synth patches in MetaSounds, porting an FM synth over from Supercollider and learning the basics of FM synthesis along the way. <a href="https://follow.it/cenatus">Stay tuned</a> for further posts where we’ll show some of the work we built utilising these techniques!</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Sat, 05 Feb 2022 13:10:43 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/27-building-an-fm-synth-with-metasounds</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Blog - Creative Industries Fund 2021]]>
        </title>
        <link>https://cenatus.org/articles/26-creative-industries-fund-2021</link>
        <description>
<![CDATA[<p>We’re very pleased to announce participation in this year’s <a href="https://apply-for-innovation-funding.service.gov.uk/competition/919/overview">Creative Industries Fund</a>!</p><p>The award is funded by <strong>Innovate UK</strong>, the UK’s innovation agency. Their remit is to drive productivity and economic growth by supporting businesses to develop and realise the potential of new ideas. They fund business and research collaborations to accelerate innovation and drive business investment into R&amp;D. Their support is available to businesses across all economic sectors, value chains and UK regions. Innovate UK is part of the wider <a href="https://www.innovateuk.ukri.org">UK Research and Innovation</a> (UKRI).</p><h4>eXtended Reality (XR) experiences</h4><p>Our application is to undertake research and development of two prototype XR experiences, with a particular focus on 3D sound. </p><p>The first explores the use of <a href="https://daracrawford.com/new-blog-3/what-is-procedural-audio">procedural audio</a> to create an audiovisual artwork that can adapt and transform in real time within a Virtual Reality “activated environment”, based upon user exploration and interaction.</p><p>The second will investigate the use of head-tracked 3D audio to provide the foundation to overlay exciting sonic artworks, stories and experiences that transport us from, or <a href="https://www.bbc.co.uk/rd/blog/2019-11-audio-augmented-reality-guide-tips">augment</a>, our real world environment.</p><p>The funding helps us design and build these prototypes and will bolster a portfolio of work aimed at generating new client services and helping identify market gaps for new product development.</p><h4>Team</h4><p>Working alongside frequent collaborator and ex BBC R&amp;D colleague <a href="https://www.timcowlishaw.co.uk/">Tim Cowlishaw</a>, we’ll create a playful, research lab environment over a three-month period to explore these areas. Our work will be guided by mentors David Johnston, technologist at <a href="https://www.digicatapult.org.uk/">Digital Catapult</a>, Irini Papadimitriou, creative director at <a href="https://futureeverything.org/">FutureEverything</a>, and Nick Luscombe, creative director of <a href="https://www.mscty.space/">Musicity</a>, broadcaster and ex BBC Late Junction host. Alongside our official mentors, we’ll be sharing and evaluating our work within our network of artists and technologists to get valuable feedback.</p><p>We are extremely excited to get started and explore the potential of this space and thank Innovate UK for their confidence and support!</p><p>If you’re interested in our progress then <a href="https://www.specificfeeds.com/cenatus?subParam=followPub">subscribe</a> to this blog for updates, ideas and code samples as we proceed. You can view all related posts using the <a href="https://cenatus.org/tags/86-cif2021">CIF2021</a> tag.</p><p>Photo by <a href="https://unsplash.com/@tannerboriack?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Tanner Boriack</a></p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Mon, 25 Oct 2021 17:25:28 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/26-creative-industries-fund-2021</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Blog - XR Development Tools]]>
        </title>
        <link>https://cenatus.org/articles/25-xr-development-tools</link>
        <description>
<![CDATA[<h4>MacBook Pro and Bootcamp</h4><p>I nearly left the good ship Apple when they seemed incapable of releasing a laptop that was an actual worthwhile upgrade from my previous model. They redeemed themselves at the 11th hour upon release of the MacBook Pro 16,1 in 2019.</p><p>I’d already spent a decent amount of time researching Windows based machines but ultimately I didn’t <em>really</em> want to switch. I have a lot of investment in the Apple ecosystem and my audio production work still leans very much towards Mac. That meant compromise, as the 3D and XR world is still very much in Windows land, with the Apple GPU profile not really targeting 3D and gaming. In addition, Oculus don’t support OS X with their <a href="https://www.oculus.com/setup/">Rift or Link</a> software and whilst the Steam gaming platform will run, they dropped support for <a href="https://appleinsider.com/articles/20/05/01/valve-abandons-the-macos-version-of-steamvr">SteamVR on OS X</a>.</p><p>My compromise was to buy the Mac, start my XR work on that platform and see how far it takes me. My fallback options are to <a href="https://support.apple.com/en-us/HT208544#:~:text=%2520An%2520eGPU%2520lets%2520you%2520do%2520all%2520this,while%2520a%2520user%2520is%2520logged%2520in%2520More">add an eGPU</a>, dual-boot the machine into Windows via <a href="https://support.apple.com/boot-camp">Bootcamp</a> or, ultimately, consider a Windows desktop or gaming laptop later down the line if required.</p><p>That served me well enough for a while but I eventually had cause to reconsider. I attended an online <a href="https://musichackspace.org/events/immersive-av-composition-live-session-2-sessions/">course</a> on <a href="https://github.com/bDunph/ImmersAV">ImmersAV</a> for audiovisual art with <a href="http://www.bryandunphy.com/">Bryan Dunphy</a> at the London Music Hackspace. It’s an impressive-looking toolkit or custom engine that allows parameter mapping between raymarched OpenGL graphics and the Csound DSP audio engine via a C++ interface. The results can be previewed locally in 2D dev mode and deployed to a SteamVR compatible headset using <a href="https://github.com/ValveSoftware/openvr">OpenVR</a>. Whilst I was able to build and test the codebase on my MacBook, I was unable to run it on my Quest due to the lack of Oculus Link compatibility. </p><p>The second nail in the coffin was hammered in by working on a <a href="https://github.com/msp/virtual-gallery">prototype</a> where I wanted to explore mapping light and sound in Unity using real-time synthesis, rather than sample-based audio - think something along the lines of Anthony McCall’s <em><a href="https://www.tate.org.uk/art/artworks/mccall-line-describing-a-cone-t12031">Line Describing a Cone</a></em>, but with live DSP. The subtle tweaks I needed to make to the synthesizer parameter mapping required constant previewing and adjusting, so the offline build and deploy process to target my Quest was laborious and unmanageable. I bit the bullet and decided to dual-boot my MacBook with Windows. This would allow me to tether the Quest to the laptop via an Oculus Link cable and preview directly from within Unity itself.</p><h4>Windows tools</h4><p>The Windows install process via Bootcamp was straightforward enough, but I quickly found myself to be very inept at navigating around Windows due to the muscle memory I have for OS X. 
I added a few tools to make it more Mac-like and make switching between the two operating systems less painful.</p><p>After Unity itself, the first thing I installed was Git (<a href="https://gitforwindows.org/">for Windows</a>). This comes with the incredibly useful Git Bash application which gives you a familiar CLI to work with. So far I haven’t needed another terminal utility. Next I installed <a href="http://trackpad.forbootcamp.org/">Trackpad++</a> to tame the MacBook trackpad and scrolling. I then installed <a href="https://github.com/randyrants/sharpkeys">SharpKeys</a> but, writing this post some time afterwards, I can’t seem to get that app to open, so I’m not entirely sure if I used it! I think I likely just remapped the Ctrl and Windows keys on my Apple Keyboard in the Registry so I could use the Cmd key as the Windows Ctrl key, which is what I’d instinctively try and hit. I definitely did use the <a href="https://docs.microsoft.com/en-gb/windows/powertoys/">Microsoft PowerToys utility</a> to remap some of my common OS X keyboard shortcuts. I enabled PowerToys Run with Ctrl/Cmd + Space, which opens a quick launcher similar to Spotlight on OS X - the way I tend to open everything. Next I mapped some keyboard shortcuts:</p><a href="https://res.cloudinary.com/cenatus/image/upload/v1634672626/remap-shortcuts_qxsdei.png"><img src="https://res.cloudinary.com/cenatus/image/upload/c_thumb,w_200,g_face/v1634672626/remap-shortcuts_qxsdei.png" alt="Windows / OS X mapping"/></a><p>Now that I could sensibly navigate my way around the Windows environment I installed the <a href="https://www.oculus.com/setup/">Oculus Link software</a> so I could enable Link on my device and have a more immediate development flow in Unity :)</p><p>I have a suspicion this all might be another stepping stone towards having a dedicated Windows desktop or laptop for XR work but I’ll see how far this next iteration takes me first!</p><p><strong>Image Credit</strong>: Installation view of <a href="https://www.xibtmagazine.com/en/2019/01/anthony-mccall-split-second-sean-kelly-gallery/"><em>Anthony McCall: Split Second</em> at Sean Kelly, New York / Photography: Jason Wyche, New York</a>.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Tue, 19 Oct 2021 20:00:26 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/25-xr-development-tools</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - Assembly Now [online]]]>
        </title>
        <link>https://cenatus.org/articles/24-assembly-now-online-</link>
        <description>
<![CDATA[<p><a href="https://metroarts.com.au/assembly-now-online/">Assembly Now [online]</a> is a virtual experience of <a href="https://sallygolding.com/">Sally Golding</a> &amp; <a href="http://spatial.infrasonics.net/">Spatial’s</a> recent installation, which inaugurated Metro Arts’ Gallery One at West Village, as part of Brisbane Festival 2020, on view during September 2020.</p><p>Simulating and expanding the physical installation, participants in the online version will be immersed in a work that plays with perception, interactivity and unexpected encounters – including the viewer’s own reflection captured and integrated within the artwork.</p><p>Assembly Now [online] uses the interface of the mirror to elaborate the psychology and technology of emergent algorithmic software, which functions as a contemporary screen filtering our emotions.</p><p>We live in an age of ubiquitous photography (selfie culture) and surveillance capitalism (facial recognition and eye tracking). Emotion analytics software used in neuro-marketing and image recognition is a blend of psychology and technology – capturing data on expression to assume correlations in mood determined by machine learning.</p><p>A discussion between artist, Sally Golding, and writer and researcher, Kate Warren, about <a href="https://metroarts.com.au/assembly-now/">Assembly Now</a>, can be read <a href="https://metroarts.com.au/wp-content/uploads/2020/09/Assembly-Now_Catalogue_Web1.pdf">here</a>.</p><br/><p>Cenatus collaborated with Sally and <a href="https://www.timcowlishaw.co.uk">Tim Cowlishaw</a> on this project from concept through technical implementation and presentation.</p><ul><li><a href="https://assembly-now.net/">Experience</a></li><li><a href="https://bit.ly/an-press-kit">Press Kit</a></li><li><a href="http://github.com/msp/assembly-now-web">Github repo</a></li></ul><br/><br/>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Fri, 30 Jul 2021 13:30:31 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/24-assembly-now-online-</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - Oily Cart: All Wrapped Up]]>
        </title>
        <link>https://cenatus.org/articles/23-oily-cart-all-wrapped-up</link>
        <description>
<![CDATA[<p><em>Unwrap a world of imagination with Oily Cart’s mischievous, wintery show for under fives. Magical characters, hilarious creatures and hidden lands are brought to life through light, shadow and music. Join us as we uncover the sensory stories hidden in scrunched up paper, and create your own after the show.</em></p><p>Cenatus worked on multi-channel sound design and provided creative technological solutions. We created a series of tools in Supercollider for producing generative sound clips that could be triggered and improvised with in response to the actors’ movements and the changing dynamics of a live theatre scene. It was important for the audio narrative to be responsive to the actors in real time and to provide interesting variations around sonic motifs.</p><p>Alongside original sound design, we worked in collaboration with artist Sally Golding, providing tooling for her own sound design and sonic responses to the creative brief.</p><p>You can <a href="https://oilycart.org.uk/shows/all-wrapped-up/">watch a video</a> over at Oily Cart.</p><p>Source code is available via <a href="https://github.com/msp/oily-cart-awu/tree/master/supercollider">Github</a>.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Thu, 23 Apr 2020 11:46:19 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/23-oily-cart-all-wrapped-up</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Blog - Sprint 4 - DSP & 3D Audio]]>
        </title>
        <link>https://cenatus.org/articles/22-sprint-4---dsp-3d-audio</link>
        <description>
<![CDATA[<p>I needed to deviate from the plan a little in Sprint 4 and spend time researching native audio and DSP in Unity and reviewing solutions for 3D Sound. Ultimately, given the wide range of skills required to create a complete immersive experience of any size, I decided this would be a smart focus for future projects.</p><p>The Unity audio engine is based around <a href="https://docs.unity3d.com/Manual/class-AudioSource.html">Audio Sources</a> that can either be audio files or tracker modules, routed through a hierarchy of mixers to balance levels and optionally apply FX. The sources can be played back in either 2D or 3D. This video from <a href="https://www.youtube.com/watch?v=tJGGkDkQYvs">Unite 2015</a> gives a decent overview of the concepts.</p><h4>DSP</h4><p>That’s cool but I was a little surprised (perhaps naively) that there was no built-in DSP for procedural audio - a term I was <a href="http://www.netaudiolondon.org/2011/event/procedural-audio-workshop.html">introduced to</a> by Andy Farnell back in 2011. Certainly for the prototype ideas I’m considering, I was hoping to work directly with DSP on device. The <a href="https://youtu.be/tJGGkDkQYvs?t=2665">approach from Unity</a> seems to be to expose a plugin SDK and leave synthesiser creation as an exercise for the user if required. The audio dev community has filled in some of the gaps - the <a href="https://forum.juce.com/t/experimental-support-for-unity-native-audio-plugins-on-the-develop-branch/27621/21">JUCE</a> framework from ROLI now supports Unity as a build target (desktop only) and there are projects to publish <a href="https://github.com/grame-cncm/faust/tree/master-dev/architecture/unity">Faust</a>-based instruments, and PD-based instruments via <a href="https://github.com/playdots/UnityPd">UnityPd</a> or <a href="https://github.com/enzienaudio/hvcc/tree/master/docs">Heavy</a>, as plugins. </p><p>For simpler scenarios, you can bind to <code class="inline">OnAudioFilterRead</code> directly from a C# script as in this <a href="https://www.youtube.com/watch?v=GqHFGMy_51c">example oscillator</a>, but I suspect there’s a performance limit to how far you can go with this approach - here’s a quote from the <a href="https://docs.unity3d.com/Manual/AudioMixerNativeAudioPlugin.html">Native Audio Plugin SDK</a>:</p><p>“Unlike scripts and because of the high demands on performance this has to be compiled for any platform that you want to support, possibly with platform-specific optimizations”</p>
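<p>Going back to the <code class="inline">OnAudioFilterRead</code> approach for a moment, the general shape of such a callback is worth seeing, even as a sketch. Below is a minimal phase-accumulating sine oscillator, written in Swift purely for illustration rather than as Unity code - in Unity the equivalent would live in a C# MonoBehaviour’s <code class="inline">OnAudioFilterRead(float[] data, int channels)</code>, which hands you an interleaved sample buffer to fill on the audio thread:</p>
<pre><code>import Foundation

// Illustrative only: the kind of work a per-buffer DSP callback performs.
// A phase accumulator advances once per sample and writes a sine wave into
// an interleaved buffer (e.g. [L, R, L, R, ...] for two channels).
final class SineOscillator {
    var frequency = 440.0
    var gain = 0.2
    private var phase = 0.0
    private let sampleRate: Double

    init(sampleRate: Double = 48_000) {
        self.sampleRate = sampleRate
    }

    func render(into data: inout [Float], channels: Int) {
        let increment = 2.0 * Double.pi * frequency / sampleRate
        var index = 0
        while index + channels - 1 < data.count {
            let sample = Float(sin(phase) * gain)
            var channel = 0
            while channel < channels {
                data[index + channel] = sample // same signal on every channel
                channel += 1
            }
            phase += increment
            if phase > 2.0 * Double.pi { phase -= 2.0 * Double.pi }
            index += channels
        }
    }
}</code></pre>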
<p>A higher-level commercial plugin solution does exist in the form of <a href="https://assetstore.unity.com/packages/tools/audio/audio-helm-live-music-creator-86984">Audio Helm</a> which looks really promising and supports desktop and mobile targets. It has a built-in sequencer, sampler and synth engine with GUI and scripting support. There’s also a standalone version of the synth engine you can use <a href="https://youtu.be/1oNWX0igFMo?t=58">for sound design</a> and then publish the patch to Unity for native app use. Your patches can then have their parameters scripted, which is exactly what I was hoping for. It costs ~70 EUR but seems like a good workflow with minimal fuss.</p><p>Worth noting here is that Unreal Engine 4 from Epic seems to be well ahead in this regard as this <a href="https://www.youtube.com/watch?v=ErejaBCicds">video from 2017 details</a>. There still doesn’t seem to be much documentation around but the <a href="https://docs.unrealengine.com/en-US/API/Plugins/Synthesis/index.html">API is here</a>. It’s not clear to me if this stuff is out of beta but this functionality alone is enough for me to take another look at UE4!</p><h4>3D Sound</h4><p>Audio immersion is essential for realistic XR experiences and Unity ships with spatialization baked in - it includes a basic version of the Oculus Native Spatializer (ONSP) which can be upgraded to the full version or swapped out for solutions from <a href="https://docs.microsoft.com/en-us/windows/mixed-reality/spatial-sound-in-unity">Microsoft</a> or Google’s <a href="https://resonance-audio.github.io/resonance-audio/">Resonance Audio</a> project. Check <a href="https://developer.oculus.com/documentation/quest/latest/concepts/unity-audio/">here</a> for the high level 3D sound concepts from Oculus and this guide for the <a href="https://developer.oculus.com/documentation/audiosdk/latest/concepts/book-ospnative-unity/">details</a>. I found this <a href="https://www.youtube.com/watch?v=sKDXksI7S6o">Spatial Audio video</a> from Oculus Connect to be a really good overview of the concepts and tooling required for 3D sound.</p><p>For a real-world, end-to-end immersive audio production guide check out this recent <a href="https://www.youtube.com/watch?v=Owd0dbG76YM&list=PLL2xVXGs1SP78z-KC3S1ZYV_67gT7Sk9i&index=26&t=0s">Spatialized Music for AR/VR</a> presentation. It follows the entire process of producing the audio experience for the interactive Oculus First Steps tutorial that ships with the Quest - from sound designer/composer, through studio recording of the orchestra and post-production, to final application integration. It covers the strengths and weaknesses of ambisonic mixing along the way, and when to use head tracking for positioning audio or when to just mix in quad.</p><p>I also bookmarked the Facebook 360 Spatial Workstation which looks interesting: “..a software suite for designing spatial audio for 360 video and cinematic VR. It includes plugins for popular audio workstations, a time synchronised 360 video player and utilities to help design and publish spatial audio in a variety of formats.” This <a href="https://creator.oculus.com/learn/spatial-audio/">Spatial Audio for Cinematic VR and 360 Videos</a> article covers creating immersive audio content targeting Facebook and Youtube as well as XR hardware.</p><h4>Ambisonics</h4><p>Unity supports the integration of Ambisonics <a href="https://docs.unity3d.com/Manual/AmbisonicAudio.html">out of the box</a> so you can import multi-channel pre-spatialized sound files, but you’ll need a decoder from either the Oculus or Google audio SDKs I mentioned above.</p><h4>Middleware</h4><p>Fully featured audio middleware and end-to-end solutions exist from <a href="https://www.fmod.com">FMOD</a> and <a href="https://www.audiokinetic.com/products/wwise/">Wwise</a> - standalone immersive audio production solutions with integrations into Unity and UE4.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Wed, 18 Dec 2019 18:52:12 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/22-sprint-4---dsp-3d-audio</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Blog - Sprint 3 - Applied Learning]]>
        </title>
        <link>https://cenatus.org/articles/21-sprint-3---applied-learning</link>
        <description>
<![CDATA[<h4>Creative Coding in Unity</h4><p>I started Sprint 3 keen to build something and revisited the <a href="https://channel9.msdn.com/Series/UnityCreativeCoding">creative coding tutorial in Unity</a> recommended by David Johnston at the beginning of the project. The thing that initially struck me about this tutorial is how it stripped back the Unity ceremony and “complexity”; the latter is actually aimed at simplifying the process for game devs but can be daunting for a new user with so many options available. The focus here is on pixels and a canvas, taking you a little closer to the metal, which aligns with my previous experience of Open Frameworks and Processing.</p><p>What follows are the notes I made for each episode whilst reviewing the tutorial. This is mostly for my own reference but could be helpful if you need to dissect the basics of creating dynamic animations in Unity. I’ve included a direct video link and Github commit for each section, where relevant.</p><p>Episode 1 helps to orientate you inside the virtual space, explaining your development perspective, the camera perspective and the default skybox. It guides you through the creation of a <a href="https://en.wikipedia.org/wiki/Cornell_box">Cornell Box</a>, and helps explain the default lighting within a scene and how you can take control of it.</p><p><a href="https://channel9.msdn.com/Series/UnityCreativeCoding/Deconstructing-Darkness">video</a></p><p>Episode 2 brings the focus inside the Cornell Box, setting up our own lighting and dealing with optimisations like light prerendering. It also introduces the concept of reusable materials that can be applied to any object in your scene - specifically those that emit light in this case, which are really powerful.</p><p><a href="https://channel9.msdn.com/Series/UnityCreativeCoding/Let-There-Be-Light">video</a> | <a href="https://github.com/msp/c9-creative-coding-in-unity/commit/ee94fc5ede1a124dfefc968691fee7e2e98d0e48">code</a>*</p><p>*mostly Unity metadata so not much to read, subsequent commits are more useful.</p><p>Episode 3 deals with attaching scripts to game objects and the basics of manipulating object properties via transforms over time to create animation. Also fundamental is the decoupling of scripts and objects, and how you must apply a script to an object to get it running. Another core concept here is publishing public attributes within a script that are made available in the Unity editor or to other code clients for dev and runtime interaction.</p><p><a href="https://channel9.msdn.com/Series/UnityCreativeCoding/Its-Alive">video</a> | <a href="https://github.com/msp/c9-creative-coding-in-unity/commit/76d7e35fb1fdd4edfb6169d006c0b6a9dc72853d">code</a></p><p>Episode 4 continues the familiarisation with animating an object in space. The important concept introduced is the creation of a <a href="https://docs.unity3d.com/Manual/Prefabs.html">prefab</a> - essentially a prototype game object and script that can be reused from other scripts. 
In this instance, we create a “spinning cube” prefab to allow us to add many cubes to the scene from one reusable definition.</p><p><a href="https://channel9.msdn.com/Series/UnityCreativeCoding/Spinning-On-That-Dizzy-Edge">video</a> | <a href="https://github.com/msp/c9-creative-coding-in-unity/commit/986c0818ef1a6dd0567e118d4dc89d04098665d9">code</a> | <a href="https://github.com/msp/c9-creative-coding-in-unity/commit/2d654fcc62405ccab940e0555e5d3e75060d5972">code</a></p><p>Episode 5 is about the creation and management of game objects programmatically. Now that we have a reusable spinning cube component or prefab we need some code to manage the instances. That’s where the game controller or sketch object (in line with Processing nomenclature) comes in - this is the entry point to our scene.</p><p><a href="https://channel9.msdn.com/Series/UnityCreativeCoding/Absolutely-PreFabulous">video</a> | <a href="https://github.com/msp/c9-creative-coding-in-unity/commit/8ab43b2550944d04bfc9d4abd26872a367e3390a">code</a></p><p>Episode 6 is more of a general creative coding / animation tutorial covering trigonometric functions and easing techniques. Very useful background and not specifically related to Unity, but still really valuable.</p><p><a href="https://channel9.msdn.com/Series/UnityCreativeCoding/SIN-City">video</a> | <a href="https://github.com/msp/c9-creative-coding-in-unity/commit/6c5791092c2fe8bb30ac53f09f3a50f7db95c13d">code</a></p><p>Episode 7 covers processor-efficient techniques for ensuring dynamic prefabs respond correctly to static lighting via Light Probes - a mechanism to pre-render light at specific node positions that objects can inherit.</p><p><a href="https://channel9.msdn.com/Series/UnityCreativeCoding/Some-Light-Probing">video</a> | <a href="https://github.com/msp/c9-creative-coding-in-unity/commit/9ee8195cba501f999b93bf619ddf98a11bbe217c">code</a></p><p>Episode 8 is the final episode in the series and reviews the work so far before investigating post-processing camera effects such as Bloom and Depth of Field, using pre-existing packages for the first time. The tutorial is a little out of date regarding post-processing effects so you can read an up-to-date guide over at <a href="https://docs.unity3d.com/Packages/com.unity.postprocessing@2.1/manual/Installation.html">the Unity docs</a>.</p><p>The tutorial wraps up with a couple of resources that inspired the series: <a href="https://catlikecoding.com">Catlike Coding</a> by Jasper Flick and the <a href="https://www.amazon.com/Holistic-Game-Development-Unity-All/dp/0240819330">Holistic Game Development</a> book by Penny de Byl.</p><p><a href="https://channel9.msdn.com/Series/UnityCreativeCoding/Ready-For-My-Close-Up">video</a></p><p>I really learned a lot from these and felt much more confident in Unity on completion. I hope you find the breakdown in my notes useful too. Many thanks to Rick Barraza for the series!</p><h4>Prototyping Concepts</h4><p>Energised by the tutorial, I started thinking about what I might like to make. An obvious follow-up to this would be to port a version of my <a href="http://spatial.infrasonics.net/transduction">Transduction</a> project. The geometric animation would be great as a 360-degree XR experience and I now understand the basics of how to implement that.</p>
<p>The event pattern generation for the original project is created via <a href="https://tidalcycles.org">Tidal Cycles</a> and fed to <a href="https://supercollider.github.io">Supercollider</a> (the audio engine) and <a href="https://processing.org">Processing</a> (the visual engine). It would be pretty trivial to keep a similar architecture and just swap Processing out for Unity, as the transport protocol is all OSC, but I’d like the application to be self-contained on the Oculus Quest.</p><p>I’d like to start with event pattern generation and a DSP audio engine inside Unity, which may be less dynamic (i.e. there’s no livecoding interface), but I’m sure I can still create something satisfying. The next challenge is to look into DSP and 3D Sound options for Unity so I’ll tackle that next time.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Mon, 16 Dec 2019 21:30:39 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/21-sprint-3---applied-learning</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Blog - Sprint 2 - Install & Configure]]>
        </title>
        <link>https://cenatus.org/articles/20-sprint-2---install-configure</link>
        <description>
<![CDATA[<h4>Installation</h4><p>The focus for Sprint 2 was to get a basic <a href="https://en.wikipedia.org/wiki/%22Hello,_World!%22_program">“hello world”</a> example running on my Oculus Quest. This type of program is normally trivial but it proves out the correct configuration of the hardware and software environment, laying the foundations for more interesting projects further down the line and building confidence with the tooling.</p><p>The first challenge was to settle on a version of Unity. The <a href="https://cenatus.org/blog/19-sprint-1---discovery">aforementioned</a> Coursera XR intro course suggested fixing to an older version but that was restrictive for other online tutorials so I settled on 2018.4, installed via the Unity Hub. That also meant I no longer needed the Android Studio app and accompanying SDK as it’s now included in Unity as a <a href="https://docs.unity3d.com/Manual/GettingStartedAddingEditorComponents.html">module</a> via the Unity Hub. I did manage to build and deploy the “VR campus” app from within the XR course but I quickly realised I needed to start with a more basic example that required fewer hacks and libraries.</p><p>The installation was much more of a dance than it sounds as my Internet connection is “challenged” at home, so I needed to go and find a café with a good connection and, ideally, only install the essentials. You really notice how much development tooling assumes a solid connection when you don’t have one!</p><p>I generally found the guides from Unity / Oculus not only sufficient but usually better than third-party tutorials. The caveat with the Unity guides is the availability of so many versions of their software and help docs. You need to be really careful what version of the docs you’re reading after following Google links. I guess this really highlights the pace at which the industry is moving right now!</p><h4>Hello World</h4><p>Once everything was configured I followed the Oculus <a href="https://developer.oculus.com/documentation/unity/latest/concepts/unity-tutorial/">Build Your First VR App</a> tutorial from the Getting Started docs. This also guides you in creating basic Game Objects in Unity - useful if, like me, you didn’t have too much game dev experience before starting to learn XR. It didn’t take long to develop the app and I got it deployed to my headset following the latter part of this <a href="https://skarredghost.com/2019/06/08/how-get-started-oculus-quest-development-unity/">Quest dev tutorial</a> - success!</p><p>That’s a great start but I already began to wonder about the development feedback cycle. I’m familiar with using tooling (REPLs, live coding etc), tests and the running application to trial my ideas with quick feedback. Do I really have to build and deploy an Android Package (APK) to the device to test it out? The short answer is yes, I’m afraid. The situation is different for a Rift, as you might expect with it being a tethered device. More about that in a later post, probably.</p><h4>Inspiration</h4><p>Alongside the techy stuff, I also reached out to my network and beyond to anyone working in XR-related fields for advice. 
This started with my third mentor meeting, hosted by <a href="https://futureeverything.org/people/irini-papadimitriou/">Irini Papadimitriou</a>, ex V&amp;A digital programmes manager and now creative director of the brilliant <a href="https://futureeverything.org">Future Everything</a> festival in Manchester - an organisation I’ve <a href="http://www.netaudiolondon.org/tag/futureeverything.html">collaborated with in the past</a>. Irini’s mentor role on this project is as a curator and producer. Her enthusiasm was infectious in our meeting and she gave me an extensive brain dump of XR projects she’d experienced.</p><p>From Irini’s suggestions there were too many great projects to mention but a couple stood out. The first was <em>Hack The Senses</em> and their <a href="https://www.hackthesenses.com/recent">2017 V&amp;A collaboration</a>. The first two installations had a lot of crossover with my own interests - particularly the sensory perception topics I touched on in the previous post. The second was the celebrated <em>Notes On Blindness</em> recordings, film and accompanying VR experience. The film is based around John Hull’s audio diaries that he kept whilst losing his sight. The diaries are an articulate, compelling reflection on his situation and the changes to the world as he perceives it. Following a short film and subsequent BAFTA-winning feature film, the VR experience is a triumph - a spellbinding narrative delivered using the strengths of immersive technology. The scenes are visually opaque, abstract yet magical. Careful 3D sound design really accentuates the diary narrative and helps to place the listener inside John’s story and experience - well, as much as anything can hope to. Smart, subtle gestures highlight or reveal parts of the scene. I particularly liked it when John was talking theoretically about the sound of rain hitting an object being useful in emphasising and positioning it in space, and lamenting the lack of indoor rain for the same purpose. The VR creation of this scene gradually reveals each new object in the room visually and, more importantly, sonically through gaze-based gestures. This additive process subtly forges a wonderful rhythmic musique concrète soundtrack. The 3D sound design here particularly shines. I really can’t recommend this story, film and immersive experience enough!</p><h4>3D Sound</h4><p>Talking of 3D sound, I reached out to Tom Slater at <a href="https://www.callandresponse.org.uk">Call and Response</a>, another organisation I’ve worked with a few times in the past. They curated an 8-channel surround sound <a href="http://www.netaudiolondon.org/2011/event/8-channel-audio-programme.html">sonic art programme</a> for our Netaudio festival way back. I also mixed an ambisonic release for Zimoun’s <a href="http://www.leerraum.ch">Leerraum</a> project at their Deptford 18.1 studio and completed a great ambisonic workshop they ran last year. Alas, logistics have prevented us from meeting in person so far, but I was pleased to read they were successfully awarded funding for exciting projects from <a href="https://creativexr.co.uk/cohorts/">Creative XR</a> and <a href="https://jerwoodarts.org/projects/somerset-house/">The Jerwood Foundation</a> and hope to follow up later.</p><h4>Industry XR</h4><p>Lastly, I got some good insight into the commercial XR and game development industry from my old friend Craig Gabell who co-founded the Brighton-based <a href="https://www.westpierstudio.com">West Pier Studios</a>. 
Of particular interest was the mixed use of VR / AR technology in the <a href="http://myvirtualspaceapp.com">My Virtual Space</a> application they developed as a tool for exploring and customising environments. </p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Mon, 16 Dec 2019 21:19:48 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/20-sprint-2---install-configure</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Blog - Sprint 1 - Discovery]]>
        </title>
        <link>https://cenatus.org/articles/19-sprint-1---discovery</link>
        <description>
<![CDATA[<p>I’ve just finished the first two-week discovery “sprint” as part of my <a href="https://cenatus.org/blog/18-hello-immersive-world-">XR Research Project</a>. I’ll be sharing my insights from each sprint on the blog as it’s completed.</p><p>Once I started investigating XR properly I realised it’s an absolutely vast field, even within one of the <a href="https://cenatus.org/blog/18-hello-immersive-world-">disciplines</a>. I’d provisionally secured support from a number of mentors as part of my application so was very glad to lean on their expertise to help distill some of the information out there during my initial sprint.</p><h4>Hardware &amp; learning</h4><p>My first mentor meeting was with <a href="https://www.bbc.co.uk/rd/people/david-johnston">David Johnston</a> from BBC R&amp;D, a rare and valuable mix of technologist and producer. We had a lively discussion and I instantly felt more confident with his support.</p><p>After we met, David sent over a load of resource links covering hardware and software. He recommended the online learning from both <a href="https://learn.unity.com/">Unity</a> and <a href="https://developer.oculus.com/documentation/quest/latest/concepts/book-unity-gsg/">Oculus</a>, which both seem great, and also included a very interesting <a href="https://channel9.msdn.com/Series/UnityCreativeCoding">creative coding tutorial in Unity</a> which aligned with my experience in Processing and Open Frameworks.</p><p>David also helped me home in on purchasing an Oculus Quest which seemed to offer the right balance of ease of use and flexibility for my project. I’m not a hardcore gamer and my aging MacBook Pro is not compatible with many of the tethered headsets on the market, so the all-in-one device appealed to me, even at the expense of some rendering power. I was also tempted by some <a href="https://www.bose.com/en_us/products/frames.html">Bose Frames</a>, especially as some BBC R&amp;D friends were <a href="https://www.bbc.co.uk/rd/blog/2019-03-audio-augmented-reality-spatial-sound">working with them</a> for audio AR prototypes, but the Quest offers more bang for my limited buck.</p><p>The Quest runs on Android so as long as my PC can manage Unity then I’m good. I did briefly consider other game engines but already had a bias of interest towards Unity before this started and couldn’t find a compelling reason not to follow that.</p><p>I enjoyed exploring the Quest - the ease of use for creating a guardian zone was impressive (a safe play space for standing 6DOF VR experiences). I don’t have much to compare it to but I’m very happy with the product overall.</p><h4>Perception Hacking</h4><p>My second mentor meeting was with <a href="https://www.ucl.ac.uk/pals/research/experimental-psychology/person/john-greenwood/">Dr. John Greenwood</a>, a researcher in Experimental Psychology at UCL. John’s research is fascinating, focusing on visual perception and its disorders via <a href="https://en.wikipedia.org/wiki/Psychophysics">psychophysics</a> - “the scientific study of the relation between stimulus and sensation”. There’s a great overview of his projects over at <a href="http://eccentricvision.com">Eccentric Vision</a>.</p><p>He started by demonstrating the power and trickery of peripheral vision and the problems with visual crowding before going on to explain the relationship between conditions such as Amblyopia (often called lazy eye) and diseases such as dementia and Alzheimer’s. 
A big takeaway for me was to understand failing peripheral vision as a possible predictor for dementia.</p><p>We talked about various perception hacks/flaws including the <a href="https://en.wikipedia.org/wiki/Wagon-wheel_effect">Wagon Wheel Effect</a>, and how the visual system can override the auditory one as demonstrated by the <a href="https://www.youtube.com/watch?v=G-lN8vWm3m0">McGurk Effect</a>. I had a written note about the “Bunny Hop Illusion” which could refer to either of these: the first is a tactile illusion called the <a href="https://en.wikipedia.org/wiki/Cutaneous_rabbit_illusion">Cutaneous Rabbit Illusion</a> and the second is the <a href="https://www.popularmechanics.com/science/a23726484/rabbit-illusion-optical-illusion/">Rabbit Illusion</a> from Caltech.</p><p>John followed up with some slides on Spatial Vision: “the perception of the distribution of light across the visual field”, which are “the ‘building blocks’ of object perception in the early stages of visual processing”.</p><p>Whilst studying the slides I also learned that Fourier transforms can be used to analyse images. I understood that sound can be created / distilled from simple sine waves but didn’t know that this generalises to any signal. Whilst looking for a good link to explain the theory I found <em><a href="http://www.jezzamon.com/fourier/">An Interactive Introduction to Fourier Transforms</a></em> by Jez Swanson - one of the best things I’ve seen on the web for ages! It breaks down the concepts in a really fun way, making them simpler to understand. The slides referenced <em><a href="https://www.amazon.com/dp/019957202X/ref=rdr_ext_tmb">Basic Vision: An Introduction to Visual Perception</a></em>, which I ordered instantly.</p><h4>Concepts</h4><p>If you’re wondering what all this has to do with my XR research then take a look at my alpha <a href="https://drive.google.com/file/d/1LJroP1NrbZT6XmzV_vX3dssP7_EdsoUR/view?usp=sharing">mind map</a> I created before I met John. Gestalt Psychology has been of interest for a while so I’m keen to explore other areas of perception and the way the brain tricks us into understanding the world.</p><h4>Online learning</h4><p>I worked through part of the aforementioned <a href="https://www.coursera.org/learn/xr-introduction/">Introduction to XR: VR, AR, and MR Foundations</a> course on Coursera. The videos and general structure were very welcome but I’m not so keen on the peer assessments and am a bit suspicious of the “brainstorming XR apps” module. The biggest criticism by far, though, is the lack of engagement on the forums. They are pretty much a ghost town, with zero input from the tutor, so I struggle to see the value. It might make more sense to follow the free <a href="https://learn.unity.com">Unity learning</a> resources and take your chances in their forums.</p><h4>Blogging software</h4><p>Last but not least, I also spent some time extending this very website to give me blogging capabilities. It’s a <a href="https://github.com/msp/cenatus-ltd">custom site</a> built with Elixir/Phoenix and I considered adding a static blog generator to the domain instead, but I want to limit the amount of software I’m maintaining, so I was happy to absorb the development overhead to stay streamlined and encourage some general maintenance of the site, which was well overdue.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Mon, 14 Oct 2019 14:30:56 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/19-sprint-1---discovery</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Blog - Hello Immersive World!]]>
        </title>
        <link>https://cenatus.org/articles/18-hello-immersive-world-</link>
        <description>
          <![CDATA[<p>I’m currently undertaking a research &amp; development project investigating the use of XR technologies to create an immersive art installation.</p><p>For the uninitiated, XR stands for Extended Reality and is an umbrella term for Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR) and concepts like 3D Sound and 3D Video. The distinctions can be blurry, but here’s a one liner on each:</p><ul><li>VR: creates a digital environment that <em>replaces</em> the real world.
</li><li>AR: overlays digital content into the real world.
</li><li>MR: Umm, a seamless blend of the two?!
</li><li>3D Sound: Either recorded with multiple microphones or reproduced via a speaker array.
</li><li>3D Video: Similarly, multiple cameras record different perspectives on the same shot.
</li></ul><p>Looking around when kicking off, I found this <a href="https://www.coursera.org/learn/xr-introduction/">Introduction to XR: VR, AR, and MR Foundations</a> course on Coursera. Pretty useful so far.</p><p>Armed with that, a Unity install, an Oculus Quest and a load of curiosity - let’s see where this goes. Should be a fun ride!</p><p>I’ll be posting updates in the <a href="http://cenatus.org/blog">blog</a> section and you can subscribe via <a href="https://www.specificfeeds.com/cenatus">rss</a> or <a href="https://www.specificfeeds.com/cenatus?subParam=followPub">email</a> for updates.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Sat, 10 Aug 2019 13:08:50 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/18-hello-immersive-world-</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - BBC R&D]]>
        </title>
        <link>https://cenatus.org/articles/6-bbc-r-d</link>
        <description>
<![CDATA[<p>BBC Research &amp; Development is the national technical research department of the BBC.</p><p>It has responsibility for researching and developing advanced and emerging media technologies for the benefit of the corporation, and wider UK and European media industries, and is also the technical design authority for a number of major technical infrastructure transformation projects for the UK broadcasting industry.</p><p>Cenatus have worked on a number of projects within R&amp;D:</p><h4>Not A Robot v2</h4><p>Not A Robot v2 is a multiplayer <a href="https://www.bbc.co.uk/rd/blog/2019-03-audio-augmented-reality-spatial-sound">Audio AR</a> game where participants are invited to deprogram themselves from their digital rituals and practice real human interactions, to listen to each other and the world around them. A voice assistant guides the group through a number of exercises, which gradually increase the level of human interaction. The project allows users to interact in real time and space with each other, without the barrier of a screen, by using <a href="https://www.bose.com/en_us/products/frames.html">Bose Frames</a>.</p><p>Extending the initial <a href="https://www.bbc.co.uk/rd/blog/2019-11-audio-augmented-reality-guide-tips">research project</a>, Cenatus started working with the team to brainstorm new concepts and chapters in the experience. We provided placeholder Supercollider generative audio sequences during the workshops that were later incorporated within the game.</p><p>After the initial workshops, we implemented ambisonic, adaptive audio sequences and gameplay in Unity for further user testing, including a chapter allowing a collaborative music-making session amongst the participants.</p><h4>Editorial Algorithms</h4><p>At the heart of the experimental system created through this project is a scalable pipeline for the ingestion and analysis of online textual content. </p><p>Cenatus worked on an API to allow the creation and storage of arbitrary streams of data ingested by the system. We also built a separate UI component allowing end users to configure streams of data based upon a subset of data sources ingested by the system, filtered by entities from DBpedia. <a href="http://www.bbc.co.uk/rd/projects/editorial-algorithms">[more info]</a>.</p><h4>CODAM</h4><p>CODAM is a collaborative project funded by Innovate UK in the area of video fingerprinting and visual search. The partners in the project are the University of Surrey and Visual Atoms. The project aims to deliver a toolkit to track and identify footage in very large video archives. This could be of significant help to broadcasters who quickly need to find a piece of footage for legal reasons, or for programme makers who want to find shots of a particular object or scene.</p><p>Cenatus worked on developing and enhancing the existing web front end that allowed video ingestion and let users search for frames or video clips ingested by the system, as well as search for ‘similar’ objects within a particular frame. <a href="http://www.bbc.co.uk/rd/projects/codam">[more info]</a>.</p><h4>Quote Attribution</h4><p><a href="https://www.bbc.co.uk/rd/blog/2018-01-irfs-weeknotes-number-258">Citron</a> is a BBC R&amp;D exploration into understanding how machines might help judge the veracity of a news article. 
In order to understand the veracity of a claim, it is helpful to know who is making it and what their interests are, and this is something a machine can assist with.</p><p>Cenatus <a href="https://www.bbc.co.uk/rd/blog/2018-04-irfs-weeknotes-number-264">worked on an prototype API / UI</a> to expose a searchable database of all quotations cited in BBC news articles over the past few years. This was used to demonstrate the value of the underlying data to journalists and audiences - creating tools that allow people to understand the types of claims made in the news media and their provenance.</p><h4>DSRP</h4><p>The <a href="https://www.bbc.co.uk/rd/projects/data-science-research-partnership">Data Science Research Partnership</a> is a five-year research partnership with eight UK Universities to unlock the potential of data in the media, commencing October 2017. The Data Science Research Partnership aims to be at the forefront of machine learning in the media industry, helping create a more personal BBC that can inform, educate and entertain in new ways.</p><p>Cenatus built a <a href="https://www.bbc.co.uk/rd/blog/2018-08-irfs-weeknotes-number-273">survey application</a> that will be used to evaluate participants viewing / reading habits within the BBC platform against their perceived behaviour, in a collaborative project with the University of Manchester. </p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Thu, 01 Jun 2017 20:47:28 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/6-bbc-r-d</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - Tyneside Cinema: Decompression]]>
        </title>
        <link>https://cenatus.org/articles/17-tyneside-cinema-decompression</link>
        <description>
          <![CDATA[<p>An immersive space situated between cinema and club. Specially made for a re-invented cinema context.</p><p><a href="https://sallygolding.com/performance#/decompression">Decompression</a> unravels in the spirit of a ‘happening’ – an encounter between audiences, space and the elements of cinematic abstraction, explored via the artists’ longstanding interest in expanded cinema and sound system culture.</p><p>Decompression brings scope to the examination of cinematic conditions by introducing live coding that results in electronic sound and LED light compositions, and improvised direct-to-camera manipulations of frame-rate interference, eliciting stroboscopic and sonic states. Threaded throughout the performance is an examination of the original text Expanded Cinema by Gene Youngblood (1970), which, re-purposed, poses questions concerning contemporary technological utopia, pressing the issue of the viewing experience and mass entertainment into relevance with regard to today’s globally connected audiences. The performance also features a ‘prepared’ light-sensitive screen, tactile projection beams, and surround sound.</p><p>Decompression considers the point at which contemporary expanded cinema meets algorithmic event. An ongoing point of enquiry, the live audiovisual composition seeks to highlight: the ‘dematerialisation’ of technology via spatial activation of the sound system, programmed LED lighting and manipulated projection beams; ‘deterritorialization’, touching on the audience’s potential role as participant; and ‘detemporalization’, in which the concept of time-based media is instead driven by generative software processes.</p><p>The performance explores the frequency of light and sound as both reductive and complex - resulting in a sensory and hypnotic live set which challenges the viewer’s expectation of auditory and visual perception.</p><p>Materials: 4 x Raspberry Pi LED wall-mounted lights, live coding, web camera, Strobotac, library sound FX, various synthesisers, data projector, haze machine.</p><br/><p>Cenatus worked on this project from concept through technical implementation and performance.</p><ul><li><a href="https://sallygolding.com/performance#/decompression">Project page</a></li><li><a href="https://github.com/msp/tyneside-cinema">Github repo</a></li></ul><br/><br/><p><em>Photo by Dee Chaneva for Tyneside Cinema</em></p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Tue, 15 Jan 2019 11:27:28 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/17-tyneside-cinema-decompression</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - Unconscious Archives Festival]]>
        </title>
        <link>https://cenatus.org/articles/16-unconscious-archives-festival</link>
        <description>
          <![CDATA[<p>Since launching back in 2011, London’s Unconscious Archives has established itself as a leader in the capital’s leftfield arts and music circles, welcoming over 70 artists across 25 events.</p><p><a href="https://ua2017.unconscious-archives.org/">Unconscious Archives Festival</a> took place between 20th and 30th September 2017 over four unique shows, exploring the relationship between sound, art, and performance.</p><p>Four unique events make up the first Unconscious Archives Festival, from experimental live film events at Shoreditch’s intimate Close-Up Film Centre to electronic explorations focused on the hand-built and coded at beloved South London club Corsica Studios. It’s an expansive programme across the ten days, brought to you by UA founder, artist-curator Sally Golding.</p><p>Breakout artists merge with established names from the noise / art circuits to allow for discovery, with a large focus on the boundaries of audience / performer interaction and DIY music culture. UA2017 also features a special focus on the Austrian experimental music and art scene, operating in partnership with the Austrian Cultural Forum London.</p><br/><p>Cenatus worked as associate producers on the festival and also provided the website and technical support.</p><br/><br/><p><em>Image: Myriam Bleau - Soft Revolvers, photo by Severin Smith</em></p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Tue, 15 Jan 2019 10:31:40 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/16-unconscious-archives-festival</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - Cyanometer]]>
        </title>
        <link>https://cenatus.org/articles/2-cyanometer</link>
        <description>
          <![CDATA[<p>The <a href="https://cyanometer.net/">Cyanometer</a> by Martin Bricelj Baraga is a monument to the blueness of the sky. It is inspired by the original cyanometer invented by Horace-Bénédict de Saussure. His cyanometer, a blue color wheel, forms the core of the monument, gently directing our gaze back to the sky.</p><p>The Cyanometer is both a monument and software that periodically collects images of the sky. The monolith gathers data about the blueness of the sky and the quality of the air and visualises them, thus becoming an instrument that raises awareness of the quality of one of the crucial elements of life. Together with air quality data, the Cyanometer website creates a special kind of online archive and retrospective calendar, measuring and documenting the changes to our environment.</p><p>Cenatus worked on concepts for the project, and designed and built the online infrastructure to store and visualise the collected data. Each installation communicates with an <a href="https://github.com/msp/cyanometer/tree/master/web/controllers">API</a> to store data, and two web components visualise that data via the <a href="http://cyanometer.net/">main website</a> and an <a href="http://archive.cyanometer.net">archive</a>.</p><p>Github:</p><ul><li><a href="https://github.com/msp/cyanometer">msp/cyanometer</a></li><li><a href="https://github.com/msp/cyanometer-archive">msp/cyanometer-archive</a></li></ul>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Mon, 19 Sep 2016 21:16:26 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/2-cyanometer</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - South London Gallery]]>
        </title>
        <link>https://cenatus.org/articles/11-south-london-gallery</link>
        <description>
          <![CDATA[<p>An exploration in perception and phenomenology which considers audiovisual art as a participatory experience. Involving multi-sensory projection, optical sheeting, reflection, lighting, and sonic composition, <a href="https://sallygolding.com/installation#/your-double-my-double-our-ghost/">‘Your Double My Double Our Ghost’</a> extends Golding’s fascination with the Double and Other, breaching both psychiatry and fiction.</p><p>Cenatus worked in collaboration with artist Sally Golding, creating concepts and realising technological solutions for dual installations at the <a href="http://www.southlondongallery.org/page/sally-golding">South London Gallery</a>. Both works had generative properties in the evolution of the sonic and visual components and included a simple interface for the artist to configure timelines of events.</p><p>Github:</p><ul><li><a href="https://github.com/msp/generative-projections-slg">msp/generative-projections-slg</a></li><li><a href="https://github.com/msp/generative-lights-slg">msp/generative-lights-slg</a></li></ul>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Sun, 04 Jun 2017 23:09:54 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/11-south-london-gallery</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - Picfair]]>
        </title>
        <link>https://cenatus.org/articles/10-picfair</link>
        <description>
          <![CDATA[<p>Picfair is a democratised marketplace for images, connecting buyers direct with photographers and cutting out the hefty fees of image licensing middlemen.</p><p>Cenatus headed up engineering during 2015-16, changing the culture to one of close collaboration between product &amp; technology and ensuring a focus on regular delivery working towards the business KPIs. We created visibility of work streams, fostered accountability of tasks and allowed developers to work with clear focus. We owned the technical architecture and created a suite of repeatable automated tests to allow agility within the product &amp; codebase. We developed and successfully delivered many new features into production that significantly enhanced the product.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Sun, 04 Jun 2017 22:39:35 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/10-picfair</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - Melbourne International Film Festival]]>
        </title>
        <link>https://cenatus.org/articles/12-melbourne-internation-film-festival</link>
        <description>
          <![CDATA[<p>16mm film projection &amp; optical soundtrack, Kinect camera, data projector, camera flash units, sound composition, portable sonic device built by <a href="http://www.phantomchips.com/">Phantom Chips</a>.</p><p>A new immersive audiovisual performance, taking the audience on a hallucinogenic dark carnival ride exploring the slippage between parapsychology and technology.</p><p>Interweaving science and superstition, philosophy and pulp through sound composition, live 16mm film projection and light environments, Breaching Transmissions invites the audience to enter a hypnotic, sensory zone. Through themes of ‘transmission’ and ‘medium’, this live performance implicates the audience as a participant by harnessing abstract infrared camera images, and reconfiguring the typical performer-technology-audience relationship.</p><p>Cenatus worked on the project from concept to performance. We realised the technology required to create the desired atmosphere and created a complementary audio narrative.</p><p>More info &amp; video clips <a href="https://sallygolding.com/performance#/breaching-transmissions/">here</a>.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Wed, 07 Jun 2017 12:09:04 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/12-melbourne-internation-film-festival</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - ROLI]]>
        </title>
        <link>https://cenatus.org/articles/7-roli</link>
        <description>
          <![CDATA[<p>ROLI are a design-led music technology company. Their hardware and software products are designed to expand the bandwidth of interaction between people and technology.</p><p>Cenatus conducted a quality review of their back-end and user-facing web systems to ensure robust engineering practices backed by good integration tests. We worked on various enhancements and new features, including integration with the OAuth authentication component used by the front end website and VST software registration systems.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Thu, 01 Jun 2017 21:10:17 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/7-roli</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - Primitives]]>
        </title>
        <link>https://cenatus.org/articles/13-primitives</link>
        <description>
          <![CDATA[<p><em>Primitives</em> is a performance-based opti-sonic installation. A DVD of short films, 12” vinyl of audio interpretations, <a href="https://github.com/msp/mspUgen">source code</a> and executable software were <a href="http://store.broken20.com/album/spatial-primitives">released</a> by the excellent Broken20 imprint in 2014.</p><p><em>Primitives</em> uses custom-made software to explore sonic and optical intensity articulated by simple geometric figures and extreme frequencies. Projected images drive a sensory assault, consumed by your eyes, then ears and existing somewhere between perceptions.</p><p>Cenatus conceived, built and executed the project, touring the live performance at international events and festivals.</p><p>There’s more information for curators, including video clips <a href="http://spatial.infrasonics.net/primitives">here</a>. There’s also a <a href="http://primitives.infrasonics.net">microsite</a> for the project that was launched along with the vinyl/DVD release.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Wed, 07 Jun 2017 14:24:53 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/13-primitives</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - Matt Spendlove]]>
        </title>
        <link>https://cenatus.org/articles/4-matt-spendlove</link>
        <description>
          <![CDATA[<p>Matt Spendlove is the director of Cenatus, a creative producer, technologist, electronic musician and multimedia artist with 20 years’ experience in delivering projects which bring together arts, technology and networked culture. He co-directed and co-curated the <a href="http://netaudiolondon.org">Netaudio</a> London digital arts festival between 2006 and 2011, is an associate director of London’s Unconscious Archives sound art and new media series and festival, and produced numerous events whilst resident at the Apiary Studios arts hub between 2009 and 2018. Matt’s long-term work in composing and engineering new electronic music and designing innovative digital audiovisual experiences has included international solo and collaborative commissions.</p><p>Productions, exhibitions and performances have included Melbourne International Film Festival (AU), BBC Research &amp; Development (UK), The Roundhouse (UK), Club Transmediale (DE), South London Gallery (UK), Sound and Music (UK), Future Everything (UK), Mutek Festival (CA), Serralves Museum (PT), Cafe OTO (UK), Digital Culture Centre (MX), Contemporary Art Madrid/CA2M (ES), Unsound (PL), Museum of Transitory Art (SI), High Zero Festival (US), E-FEST (TN), Tyneside Cinema (UK) and SHAPE Platform (Sound, Heterogeneous Art and Performance in Europe).</p><p>Matt can be found on <a href="http://github.com/msp">Github</a>, <a href="http://twitter.com/mattspendlove">Twitter</a>, <a href="http://www.last.fm/user/polymorphic/">Last.fm</a> and <a href="http://www.linkedin.com/in/mattspendlove">Linkedin</a> and you can check out the latest projects <a href="http://cenatus.org/">here</a>.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Thu, 01 Jun 2017 18:25:43 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/4-matt-spendlove</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - About Cenatus]]>
        </title>
        <link>https://cenatus.org/articles/5-about-cenatus</link>
        <description>
          <![CDATA[<p>Cenatus operates in the fields of creative production and software design for live events, sound art installations, online media projects and immersive experiences.</p><p>We promote new music and media, develop artists and enable wider public use of innovative digital technologies for creativity.</p><p>Cenatus produced Netaudio London, the UK’s foremost digital culture festival, incorporating strands for Live Music, Sound Art, Conference and Broadcast, which operated in 2006, 2008 and 2011.</p><p>Cenatus was originally set up as a not-for-profit membership organisation in 2005 and incorporated as a community interest company in October 2009 by <a href="http://cenatus.org/articles/4-matt-spendlove">Matt Spendlove</a> and <a href="https://archive.cenatus.org/people/andi-studer/">Andi Studer</a>, who led the company as directors, supported by a network of associates and <a href="https://archive.cenatus.org/search/by-tag/internship">staff</a>. Cenatus grew into a creative technology company in 2012. You can read more history in the <a href="https://archive.cenatus.org/about.html">archive</a>.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Thu, 01 Jun 2017 18:31:35 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/5-about-cenatus</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - Freelance Creative Technologist]]>
        </title>
        <link>https://cenatus.org/articles/8-freelance-creative-technologist</link>
        <description>
          <![CDATA[<p>Matt Spendlove</p><p>What would you like to hear? (insert current industry meme here). <strong>I make things</strong>, using technology. I have done since 2000.</p><p><strong>Pragmatic</strong>, adaptive, self-organising <strong>creative technologist</strong> with an eye for elegant, <strong>simple</strong> solutions?</p><p><strong>Value</strong> focussed?</p><p>Generalising specialist?</p><p>Interested in <strong>process</strong> and <strong>interaction</strong>?</p><p>Worked across industries and on research projects: web, art, music, finance, advertising? From small start-ups to FTSE100 giants.</p><p>C via Java &amp; Ruby to Functional Programming, mostly <strong>Elixir/Phoenix</strong> these days and the usual <strong>HTML/CSS</strong> &amp; <strong>Javascript</strong> niceties. Infrastructure is <strong>Git</strong> and cloud services like <strong>Heroku &amp; Amazon</strong>, but I’ve had plenty of <strong>Linux/OSX</strong> experience before they existed?</p><p>I run a bunch of creative projects, including being a director of <a href="http://cenatus.org/about">Cenatus</a> and previously the <a href="http://netaudiolondon.org/">Netaudio Festival</a>. You probably don’t need to hear about that, but it might make me more fun to work with. Who knows?</p><p>That I sound <em>far less arrogant</em> in person?</p><p>I can be found on <a href="http://github.com/msp">Github</a>, <a href="http://www.linkedin.com/in/mattspendlove">Linkedin</a>, <a href="http://twitter.com/mattspendlove">Twitter</a> and <a href="http://www.last.fm/user/polymorphic/">Last.fm</a>. <em>Sometimes</em>. Here’s a very old <a href="http://bit.ly/msp-cv-2012">CV</a> for the more traditional amongst you.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Sat, 03 Jun 2017 12:38:50 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/8-freelance-creative-technologist</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - Lighthouse ]]>
        </title>
        <link>https://cenatus.org/articles/14-lighthouse-</link>
        <description>
          <![CDATA[<p>Lighthouse is an arts and culture agency that connects new developments in art, technology and society. They produce commissions, exhibitions, events and education schemes that support radical new contemporary art, digital culture, music, film and much more.</p><p>The creative agency Dandelion &amp; Burdock contracted Cenatus to implement a <a href="http://www.lighthouse.org.uk/">company website</a> for the Brighton-based arts organisation.</p><p>We implemented an image-led responsive site designed by Dandelion &amp; Burdock, and created a custom content management system. We transferred knowledge to the in-house team, who now maintain the site.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Wed, 07 Jun 2017 15:11:27 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/14-lighthouse-</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - SAM / Cafe OTO commission]]>
        </title>
        <link>https://cenatus.org/articles/1-sam-cafe-oto-commission</link>
        <description>
          <![CDATA[<p><a href="http://www.penultimatepress.com">Penultimate Press</a> and Cenatus in partnership with Cafe OTO and Sound and Music present two disparate figures from the French experimental tradition of musique concrète: two nights, one featuring musical outsider Ghédalia Tazartès and the other his mentor and friend, the theoretician, producer and musician Michel Chion.</p><ul><li><p><strong>24 March 2011: Michel Chion</strong> will present the world premiere of Live In Prose, A Symphony Concrète. This piece was composed in the period 2006-2010 and has previously been presented only in extracts and drafts in Paris, Montreal (Canada) and Yokohama (Japan). The entire work is “for fixed sounds”. Michel Chion was born in 1947 in Creil (France). In the ’70s he was assistant to Pierre Schaeffer at the Paris Conservatoire national de musique, producer of broadcasts for the GRM, and publications director for the Ina-GRM, of which he was a member from 1971 to 1976. Parallel to these activities, he composed important musique concrète works in the studios of the GRM, including the classic ‘Requiem’.</p></li>
<li><p><strong>25 March 2011: Ghédalia Tazartès</strong> will present an idiosyncratic solo performance for voice and electronics. His public appearances remain exceptional events as he rarely performs in concert. The autodidact, born in 1947 in Paris, has spent 30+ years within musical practice and experimentation, letting his musical work wander from chant to rhythm, from one voice to another. Utilising magnetic tape recorders, he paves the way for the electric and the vocal paths, between the muezzin psalmody and the screaming of a rocker. He traces vague landscapes where the mitre of the white clown, the plumes of the sorcerer, the helmet of a cop and Parisian an hydride collide into polyphonic ceremonies. “Don’t become a black, an Arab, a Tibetan monk, a Jew, a woman or an animal but to feel all this stirring deep inside of you.” Additional acts to be announced.</p></li></ul><p>Support on the night will be provided by the excellent <strong>Rashad Becker</strong>, who “plays traditional music of imaginary species”.</p><p>Venue: <a href="http://cafeoto.co.uk/">Cafe OTO</a>, 18 – 22 Ashwin Street, London E8 3DL</p><p>Tickets: £10 in advance / £18 for a 2-day pass</p><p>Commissioned by <a href="http://www.soundandmusic.org/projects/samoto">Sound and Music and Cafe OTO</a>.</p><p>Documentary <a href="https://vimeo.com/24145976">video</a> on the SAM/OTO series.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Mon, 19 Sep 2016 20:36:39 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/1-sam-cafe-oto-commission</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - Drowned In Sound]]>
        </title>
        <link>https://cenatus.org/articles/15-drowned-in-sound</link>
        <description>
          <![CDATA[<p><a href="http://drownedinsound.com/">Drowned In Sound</a> (DiS) was started in 2000 by Sean Adams and is one of the largest independent music communities in the UK. In 2009 it was voted 9th out of the top 25 websites by the Observer. It has user-driven content covering gigs and reviews with an extremely active members board. The site receives around 30K unique visitors daily and has contributors from all over the globe.</p><p>Cenatus worked with DiS on numerous occasions. The biggest project was the 10th anniversary redesign that included a rebuild &amp; re-architecture of much of the user-facing sections of the site. The last project was to implement a CDN to drastically reduce server costs.</p><p>More info can be found in the <a href="https://archive.cenatus.org/web-design/drowned-in-sound/">archive</a>.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Wed, 07 Jun 2017 15:33:48 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/15-drowned-in-sound</guid>
      </item>
    
      <item>
        <title>
          <![CDATA[Portfolio - Netaudio London]]>
        </title>
        <link>https://cenatus.org/articles/3-netaudio-london</link>
        <description>
          <![CDATA[<p>Netaudio [2006 - 2011], the UK’s foremost festival dedicated to the musical sounds of the Internet, promotes the creative output of musicians who use digital and network technologies to explore new boundaries in their work, and actively supports the development of new talent.</p><p>In 2008 Netaudio took over the spacious Shunt Lounge for 4 days. The programme included 8 audio-visual installations and 45 live performances with a total of 81 artists involved. The festival welcomed over 2700 visitors.</p><p>2009 appearances:
Netaudio floor at <a href="http://www.ctm-festival.de/news/">Club Transmediale</a>, Berlin, 27 January 09
Netaudio stage at <a href="http://www.freerotation.com">Freerotation</a>, Wales, 14-16 August 09
Netaudio London stage at <a href="http://www.netaudioberlin.de">Netaudio Berlin</a>, 8-11 October 09</p><p>In 2010 we undertook an extensive <a href="https://archive.cenatus.org/production/netaudio-research-perspectives-in-digital-music/">R&amp;D project</a>.</p><p>2011 saw Netaudio taking over the Roundhouse Studios for <a href="http://www.roundhouse.org.uk/whats-on/archive/netaudio-london/">Short Circuit</a> with accompanying shows at KOKO, Cafe OTO and the Apiary Studios – check out the <a href="http://netaudiolondon.org/2011/festival">full programme</a>.</p>]]>
        </description>
        <dc:creator>msp</dc:creator>
        <pubDate>Mon, 19 Sep 2016 21:16:37 GMT</pubDate>
        <guid isPermaLink="true">https://cenatus.org/articles/3-netaudio-london</guid>
      </item>
    
  </channel>
</rss>
