<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>{Under reconstruction}</title>
	<atom:link href="http://blog.pansapiens.com/feed/" rel="self" type="application/rss+xml" />
	<link>http://blog.pansapiens.com</link>
	<description>The blog formerly known as &#34;Your bones got a little machine&#34;</description>
	<lastBuildDate>Thu, 28 Jun 2018 02:19:22 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=5.0.7</generator>
	<item>
		<title>TrinotateWeb in a Docker container</title>
		<link>http://blog.pansapiens.com/2018/06/28/trinotateweb-in-a-docker-container/</link>
		<comments>http://blog.pansapiens.com/2018/06/28/trinotateweb-in-a-docker-container/#respond</comments>
		<pubDate>Thu, 28 Jun 2018 02:17:19 +0000</pubDate>
		<dc:creator><![CDATA[Andrew Perry]]></dc:creator>
				<category><![CDATA[bioinformatics]]></category>
		<category><![CDATA[annotation]]></category>
		<category><![CDATA[docker]]></category>
		<category><![CDATA[sysadmin]]></category>
		<category><![CDATA[web]]></category>

		<guid isPermaLink="false">http://blog.pansapiens.com/?p=383</guid>
		<description><![CDATA[TrinotateWeb shows some reports from Trinotate. I know very little about it (please don&#8217;t ask me how to run Trinotate or interpret your results), but I wanted to serve up the reports. To make the services we provide to end-users &#8230; <a href="http://blog.pansapiens.com/2018/06/28/trinotateweb-in-a-docker-container/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><a href="https://trinotate.github.io/TrinotateWeb.html">TrinotateWeb</a> shows some reports from Trinotate. I know very little about it (please <strong>don&#8217;t</strong> ask me how to run Trinotate or interpret your results), but I wanted to serve up the reports. To make the services we provide to end-users a little more portable and reproducible, we tend to wrap them up as Docker containers. Even if we never actually move the images/containers between hosts, the Dockerfile acts as &#8216;runnable documentation&#8217; for how a key part of the service is set up.</p>
<p>We do a similar thing for private instances of <a href="https://www.sequenceserver.com/">SequenceServer</a> when researchers want a convenient interface for BLAST searching some of their private (hopefully eventually open!) sequence databases.</p>
<p>The container here is self-contained, with the data baked in. You may not want this, but an immutable container holding the analysis is what we wanted.</p>
<p>The code lives here: <a href="https://github.com/MonashBioinformaticsPlatform/bio-service-containers/tree/master/TrinotateWeb">github.com/MonashBioinformaticsPlatform/bio-service-containers/</a></p>
<p><strong>Requires:</strong></p>
<ul>
<li><code>TrinotateAnno.sqlite</code> &#8211; the database generated via Trinotate</li>
<li><a href="https://github.com/MonashBioinformaticsPlatform/bio-service-containers/blob/master/TrinotateWeb/lighttpd.conf"><code>lighttpd.conf</code></a> (provided) &#8211; preconfigured, don&#8217;t edit.</li>
</ul>
<pre><code class="Dockerfile">FROM debian:buster-slim

ENV TRINOTATE_HOME=/app/Trinotate
ENV TRINOTATE_VERSION=3.1.1

WORKDIR /app

RUN apt-get -y update &amp;&amp; \
    apt-get install -y lighttpd libhtml-template-perl libdbd-sqlite3-perl &amp;&amp; \
    rm -rf /var/lib/apt/lists/*

# Not required, using Debian packages instead
# RUN apt-get install -y cpanminus build-essential
# RUN cpanm -i DBI &amp;&amp; \
#     cpanm -i HTML &amp;&amp; \
#     cpanm -i HTML::Template &amp;&amp; \
#     cpanm -i DBD::SQLite

ADD https://github.com/Trinotate/Trinotate/archive/Trinotate-v${TRINOTATE_VERSION}.tar.gz Trinotate-v${TRINOTATE_VERSION}.tar.gz
RUN tar xvzf Trinotate-v${TRINOTATE_VERSION}.tar.gz &amp;&amp; \
    rm Trinotate-v${TRINOTATE_VERSION}.tar.gz &amp;&amp; \
    mv Trinotate-Trinotate-v${TRINOTATE_VERSION} Trinotate

COPY TrinotateAnno.sqlite /data/TrinotateAnno.sqlite 
COPY lighttpd.conf /app/lighttpd.conf

RUN chown -R www-data:www-data /app

EXPOSE 80

ENTRYPOINT ["lighttpd", "-D", "-f", "/app/lighttpd.conf"]
</code></pre>
<h2>&#8220;Production&#8221;</h2>
<p>On port 4569.</p>
<pre><code class="bash">docker run --name DatasetName_Trinotate --restart=always -it -d -p 4569:80 trinotate:dataset_name
</code></pre>
<p>Use Apache to proxy requests to the container for a nicer URL (eg /apps/trinotate/DatasetName), protected by <code>.htaccess</code> Basic Auth.</p>
<h2>Option: external data and config in a host directory</h2>
<p>With a few small edits to the <code>Dockerfile</code> (comment out the Trinotate download and the sqlite database <code>COPY</code>), you can instead mount an external copy of Trinotate and a database from the host.<br />
You might want this while the data is still in flux, before baking it permanently into a container.</p>
<pre><code class="bash">docker run --name DatasetName_Trinotate --rm -it -d -p 4569:80 -v $(pwd):/app -v /home/perry/bin/Trinotate-Trinotate-v3.1.1/:/app/Trinotate -v $(pwd)/TrinotateAnno.sqlite:/data/TrinotateAnno.sqlite trinotate:dataset_name
</code></pre>
<h2>Apache config</h2>
<p>Use this to forward incoming requests for <code>/apps/trinotate/DatasetName/</code> to the port published by the Docker container (4569), with a custom htaccess file for Basic Auth.</p>
<pre><code class="apache2">    # /apps/trinotate/DatasetName
    &lt;Proxy "http://localhost:4569/*"&gt;
      Order deny,allow
      Allow from all
      AuthType Basic
      AuthName "Restricted Content"
      AuthUserFile /etc/apache2/htaccess/DatasetName
      Require valid-user
    &lt;/Proxy&gt;

    RewriteEngine on

    # For TrinotateWeb inside a Docker container - absolute URLs mean /css and /js links break
    # when proxied, unless we use this RewriteCond trick detecting referrer. 
    RewriteCond "%{HTTP_REFERER}" ".*bioinformatics.erc.monash.edu(?:.au)?/apps/trinotate/DatasetName/.*" [NV]
    RewriteRule ^/css/(.*)$ "http://localhost:4569/css/$1" [P]
    RewriteCond "%{HTTP_REFERER}" ".*bioinformatics.erc.monash.edu(?:.au)?/apps/trinotate/DatasetName/.*" [NV]
    RewriteRule ^/js/(.*)$ "http://localhost:4569/js/$1" [P]
    RewriteRule ^/apps/trinotate/DatasetName$ /apps/trinotate/DatasetName/ [R]
    RewriteRule ^/apps/trinotate/DatasetName/(.*)$ "http://localhost:4569/$1" [P]
</code></pre>
<p>TrinotateWeb makes requests to <a href="https://canvasxpress.org/">https://canvasxpress.org/</a> &#8211; as of <strong>28-Jun-2018</strong>, the site&#8217;s HTTPS certificate has expired. Users should visit <a href="https://canvasxpress.org/">https://canvasxpress.org/</a> first and accept the insecure certificate so that icons in TrinotateWeb load correctly.</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.pansapiens.com/2018/06/28/trinotateweb-in-a-docker-container/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>mPartsRegistry : small update</title>
		<link>http://blog.pansapiens.com/2012/07/30/mpartsregistry-small-update/</link>
		<comments>http://blog.pansapiens.com/2012/07/30/mpartsregistry-small-update/#respond</comments>
		<pubDate>Mon, 30 Jul 2012 05:24:30 +0000</pubDate>
		<dc:creator><![CDATA[Andrew Perry]]></dc:creator>
				<category><![CDATA[science]]></category>
		<category><![CDATA[software]]></category>
		<category><![CDATA[synthetic biology]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[web2.0]]></category>
		<category><![CDATA[igem]]></category>

		<guid isPermaLink="false">http://blog.pansapiens.com/?p=351</guid>
		<description><![CDATA[I just made a small update to mPartsRegistry, the mobile interface I wrote to make browsing the Registry of Standard Biological Parts a little easier on smartphones. This update adds a &#8220;Random Part&#8221; button &#8211; it&#8217;s mostly just so people who &#8230; <a href="http://blog.pansapiens.com/2012/07/30/mpartsregistry-small-update/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>I just made a small update to <a href="http://mpartsregistry.appspot.com/">mPartsRegistry</a>, the mobile interface I wrote to make browsing the <a href="http://partsregistry.org/">Registry of Standard Biological Parts</a> a little easier on smartphones.</p>
<p><a href="http://blog.pansapiens.com/wp-content/uploads/2012/07/Selection_008.png" rel="lightbox[351]"><img class="aligncenter size-medium wp-image-354" title="mPartsRegistry Random button" src="http://blog.pansapiens.com/wp-content/uploads/2012/07/Selection_008-300x273.png" alt="" width="300" height="273" srcset="http://blog.pansapiens.com/wp-content/uploads/2012/07/Selection_008-300x273.png 300w, http://blog.pansapiens.com/wp-content/uploads/2012/07/Selection_008-328x300.png 328w, http://blog.pansapiens.com/wp-content/uploads/2012/07/Selection_008.png 356w" sizes="(max-width: 300px) 100vw, 300px" /></a></p>
<p>This update adds a &#8220;<a href="http://mpartsregistry.appspot.com/html/random">Random Part</a>&#8221; button &#8211; it&#8217;s mostly just so people who want to play with it without actually knowing a part ID can get some instant gratification. This is in addition to the quiet update I made a few months ago to replace jQTouch with <a href="http://jquerymobile.com/">JQuery Mobile</a>, since jQTouch development stagnated for a while and never really properly supported most mobile browsers.</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.pansapiens.com/2012/07/30/mpartsregistry-small-update/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Wiider postmortem</title>
		<link>http://blog.pansapiens.com/2012/01/28/wiider-postmortem/</link>
		<comments>http://blog.pansapiens.com/2012/01/28/wiider-postmortem/#respond</comments>
		<pubDate>Sat, 28 Jan 2012 03:00:54 +0000</pubDate>
		<dc:creator><![CDATA[Andrew Perry]]></dc:creator>
				<category><![CDATA[software]]></category>
		<category><![CDATA[code]]></category>
		<category><![CDATA[nintendo]]></category>
		<category><![CDATA[python]]></category>
		<category><![CDATA[web2.0]]></category>
		<category><![CDATA[wii]]></category>

		<guid isPermaLink="false">http://blog.pansapiens.com/?p=270</guid>
		<description><![CDATA[I always intended to write this postmortem earlier &#8230; now three years after development ceased, I&#8217;m finally getting around to it. Warning &#8211; retrospective rambling ahead. In mid 2007, Nintendo released the Opera-powered browser for their Wii gaming console which &#8230; <a href="http://blog.pansapiens.com/2012/01/28/wiider-postmortem/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><em>I always intended to write this postmortem earlier &#8230; now three years after development ceased, I&#8217;m finally getting around to it. Warning &#8211; retrospective rambling ahead.</em></p>
<p>In mid 2007, Nintendo released the Opera-powered browser for their Wii gaming console, which they called the <a href="http://en.wikipedia.org/wiki/Internet_Channel">Internet Channel</a>. For many people, including myself, this was the first time they had been able to use &#8220;Internet on the TV&#8221;. Because of the typical viewing distance, the low resolution of CRT-based televisions, and the unique navigation interface using the Wiimote, many web sites were functional but not particularly comfortable to use. Many sites targeted at desktop PCs were too complex and heavyweight for the Internet Channel, and fonts were often too small, so cumbersome zooming and scrolling were required. I felt this was a good opportunity to write a Wii-browser-specific app &#8211; in particular, I wanted a news reader that was comfortable to use in a lounge room setting, controlled via the Wiimote.</p>
<p>I started the Wiider project around Dec 2007, as the successor to a Wii-specific news aggregator service I had set up called WiiRSS. The last SVN commit for Wiider was in Dec 2008.</p>
<p>The goal of the Wiider project was to create a web-based news feed reader optimized for the Nintendo Wii Internet Channel. Features included:</p>
<ul>
<li>Wii-friendly user interface &#8211; large TV friendly fonts, simple navigation</li>
<li>Cookie-less view-only access for a personal feed list (via ?key=xxx, bookmarked on the Wii once you&#8217;ve logged in)</li>
<li>Wiimote navigation controls, beyond what the browser provides</li>
<li>Painless image zooming (eg Lightbox)</li>
<li>RSS and ATOM feed support</li>
<li>Easy feed discovery using the <a href="http://code.google.com/apis/feed/">Google Feed API</a></li>
</ul>
<div style="text-align: center;"><a href="http://blog.pansapiens.com/wp-content/uploads/2010/02/wiider_front_page.png" rel="lightbox[270]"><img class="aligncenter size-medium wp-image-291" title="Wiider.com Front Page" src="http://blog.pansapiens.com/wp-content/uploads/2010/02/wiider_front_page-300x154.png" alt="" width="300" height="154" srcset="http://blog.pansapiens.com/wp-content/uploads/2010/02/wiider_front_page-300x154.png 300w, http://blog.pansapiens.com/wp-content/uploads/2010/02/wiider_front_page.png 811w" sizes="(max-width: 300px) 100vw, 300px" /></a></div>
<div>While I&#8217;ve since retired the project, I felt it would be good to document some of the insights I gained as a result of developing it.</div>
<div><span id="more-270"></span></div>
<h3>Choosing the domain name</h3>
<p>Every cool web two-point-oh-ish app needs a cool domain name. It&#8217;s a given. Here&#8217;s the list of available domains I brainstormed at the time (good, bad and cheesy):</p>
<ul>
<li><em>wiigregator.com</em></li>
<li><em>newswiider.com</em></li>
<li><em>wii-feeds.com, feedwii.com, feedmywii.com, feedthewii.com</em></li>
<li><em>wiilovenews.com, wiinoos.com</em></li>
<li><em>atomicwii.com</em></li>
<li><em>wiireadr.com</em></li>
<li><em>wiiwiire.com</em></li>
<li><em>(wiifeeds.com, wiifeeder.com and wiireader.com were taken)</em></li>
</ul>
<p>Ultimately, I settled on <em>wiider.com</em>. The <a href="http://www.zdnet.com/blog/google/google-launches-google-reader-for-the-wii/557">term &#8216;wiider&#8217; had already been used around the tech press</a> as a pun in reference to a (now defunct) Wii-friendly interface to Google Reader, released by Google. There were advantages and disadvantages to this &#8211; I felt it was more of an advantage, since people might feel compelled to search for &#8216;wiider&#8217; and find my app. I wasn&#8217;t too worried about competition from Google&#8217;s offering, since my implementation was far better for its purpose.</p>
<p style="text-align: center;"><a href="http://blog.pansapiens.com/wp-content/uploads/2010/02/wiider_about.png" rel="lightbox[270]"><img class="aligncenter size-medium wp-image-302" title="Wiider About page" src="http://blog.pansapiens.com/wp-content/uploads/2010/02/wiider_about-300x190.png" alt="" width="300" height="190" srcset="http://blog.pansapiens.com/wp-content/uploads/2010/02/wiider_about-300x190.png 300w, http://blog.pansapiens.com/wp-content/uploads/2010/02/wiider_about.png 812w" sizes="(max-width: 300px) 100vw, 300px" /></a></p>
<h3>Choice of programming language &amp; web app framework</h3>
<p>I prefer Python. I dabbled briefly with Ruby (on Rails), but ultimately still prefer Python. However, when I began this project, there was no clear &#8220;one Python framework to rule them all&#8221;. Django was one good option, Turbogears was another. Ruby on Rails was exciting (if not hyped), but was a bit of a moving target and its application server was very crashy. I chose the <a href="http://turbogears.org/">Turbogears</a> 1.0.x framework for Wiider, since I liked the philosophy of bringing together well-tested components rather than reinventing the wheel (eg CherryPy as the web request handler, SQLObject as the ORM). While I now appreciate the Django templating language, at the time I also preferred the approach of Turbogears&#8217; default template package, Kid (now largely superseded by Genshi), which maintains valid XML templates and allows arbitrary inline Python code if required (MVC separation be damned). Turbogears did the job just fine, but it turned out to be a fast-moving project in flux. The 2.0.x release made enough key changes that migration from 1.0.x wasn&#8217;t trivial, so I was essentially stuck on 1.0.x for Wiider while the core Turbogears development focus moved on to 2.0.x and beyond. Not a showstopper, but a little annoying.</p>
<p>If I were to start the project again today, I&#8217;d very likely use <a href="https://www.djangoproject.com">Django</a>, since <em>in my opinion</em> it now provides a more battle-tested and stable base with a larger library of useful optional modules. Alternatively, these days there are enough Python microframeworks (eg, <a href="http://code.google.com/p/webapp-improved/">webapp2</a>, <a href="http://flask.pocoo.org/">Flask</a>) that can be easily coupled with a templating language and an ORM (everyone seems to like <a href="http://www.sqlalchemy.org/">SQLAlchemy</a>) that you can roll your own preferred web app environment with a few &#8220;pip install&#8221; commands, and any of these would have been appropriate for a small project like Wiider.</p>
<h3>What worked well</h3>
<p><em><strong>You could read feeds on your TV, via the Wii, lounging on your couch.</strong></em> I used it personally in some kind of beta state, on and off, for about 12 months, reading feeds of interest over my morning coffee. Feeds were &#8216;auto-managing&#8217; &#8211; no read/unread state, just the latest items, based on a per-feed setting for how old items were allowed to be. You could add feeds directly via URL, or via Google&#8217;s feed search service embedded in the page and styled to look like it belonged there. I&#8217;d tried to design the app to scale &#8211; the database model only stored a feed with a particular URL once, even if multiple users subscribed to it.</p>
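<p>The deduplicated feed storage described above can be sketched in a few lines of modern Python (names are invented for illustration; Wiider&#8217;s actual SQLObject model differed):</p>

```python
class FeedStore:
    """Each feed URL is stored once, no matter how many users subscribe to it."""

    def __init__(self):
        self.feeds = {}          # url -> shared feed record, fetched once per refresh
        self.subscriptions = {}  # user_id -> {url: max_age_days} per-feed age setting

    def subscribe(self, user_id, url, max_age_days=7):
        # setdefault ensures a second subscriber reuses the existing shared record
        self.feeds.setdefault(url, {"url": url, "items": []})
        self.subscriptions.setdefault(user_id, {})[url] = max_age_days
```

<p>With this shape, a refresh pass walks <code>feeds</code> once, regardless of how many subscribers each feed has.</p>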
<p style="text-align: center;"><a href="http://blog.pansapiens.com/wp-content/uploads/2010/02/wiider_google_feed_search.png" rel="lightbox[270]"><img class="aligncenter size-medium wp-image-301" title="Wiider Google feed search" src="http://blog.pansapiens.com/wp-content/uploads/2010/02/wiider_google_feed_search-300x249.png" alt="" width="300" height="249" srcset="http://blog.pansapiens.com/wp-content/uploads/2010/02/wiider_google_feed_search-300x249.png 300w, http://blog.pansapiens.com/wp-content/uploads/2010/02/wiider_google_feed_search.png 813w" sizes="(max-width: 300px) 100vw, 300px" /></a></p>
<p><strong><em>I learned a lot. </em></strong>About handling ATOM and RSS feeds. About &#8216;small screen devices&#8217; and modern Python web development. Optimizing web pages for comfortable reading on the Wii presents similar challenges to producing mobile sites for smartphones and tablets &#8211; skills that I&#8217;ve used on other projects since. Also, I honed skills in mundane things like keeping good project documentation in a wiki (which I, maybe strangely, enjoy) and using a bug tracker (all via <a href="http://trac.edgewall.org/">Trac</a>).</p>
<p><em><strong>Zero login</strong></em>. One reason why desktop-targeted feed readers were cumbersome on the Wii was the difficulty of logging in using the on-screen keyboard. It could be done, but it was slow and cumbersome, and since the Internet Channel clears its cookies between restarts, you had to log back in every time you used the app. I solved this problem in Wiider by providing a &#8216;secret URL&#8217; that would allow users to view their feeds without logging in. The user was prompted to bookmark this page for future use. To add or delete feeds they would still need to log in, but typically I expected that people would add feeds using their desktop computer, and use the Wii only for reading. This of course meant that feed lists and content were not guaranteed 100% private, since the secret URL could be sniffed on the open network or inadvertently shared; users were warned of this danger. I believe it was a reasonable trade-off between usability, privacy and security.</p>
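<p>The secret-URL scheme boils down to minting an unguessable token per user and treating it as a read-only capability. A minimal sketch in modern Python (the <code>secrets</code> module and these function names are illustrative, not Wiider&#8217;s actual code, which predates that module):</p>

```python
import secrets

# key -> user id; in a real app this mapping would live in the database
_view_keys = {}

def issue_view_key(user_id):
    """Mint an unguessable 'secret URL' key granting read-only access."""
    key = secrets.token_urlsafe(24)  # 24 random bytes, URL-safe encoded
    _view_keys[key] = user_id
    return key

def resolve_view_key(key):
    """Look up the user for ?key=... -- no cookies, no login, view-only."""
    return _view_keys.get(key)
```

<p>The bookmarked URL would then look something like <code>/feeds?key=&lt;key&gt;</code>, with write operations still gated behind a real login.</p>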
<p><strong><em>Wiimote controls. </em></strong>The Opera browser maps Wiimote button presses to Javascript keycodes &#8211; this allowed mapping of the D-pad left and right buttons to &#8220;scroll page up&#8221; and &#8220;scroll page down&#8221; and the &#8216;1&#8217; button to &#8220;scroll to bottom&#8221; functions. This made navigation far easier than pointing and dragging to scroll large distances.</p>
<h3>What didn&#8217;t work</h3>
<p><strong><em>I&#8217;m no web designer.</em></strong> It was functional, but could have been prettier. It did however have some smooth show/hide transitions courtesy of JQuery.</p>
<p><em><strong>Refreshing feeds sometimes failed.</strong></em> Feeds were fetched on demand by the server when the page loaded &#8211; not a good approach, since it often gave long page load times, and timeouts sometimes occurred. It&#8217;s not quite as bad as it sounds &#8211; <em>Etag</em> and <em>Last-Modified</em> headers were respected, so updates only happened when required. I had no interest in DoS&#8217;ing feed providers <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Feed updates should have been decoupled from page rendering via cron or a task queue &#8211; that&#8217;s something I would have added if the project had continued.</p>
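<p>The conditional-fetch part mentioned above can be expressed as a small pure function (an illustrative sketch, not Wiider&#8217;s code): given the <em>ETag</em> and <em>Last-Modified</em> values saved from the previous fetch, build the request headers so the server can answer <code>304 Not Modified</code> instead of re-sending the feed body.</p>

```python
def conditional_headers(cached):
    """Build conditional GET headers from a prior fetch's saved metadata.

    `cached` holds optional 'etag' and 'last_modified' strings; a server whose
    content is unchanged replies 304 and the body is not transferred again.
    """
    headers = {}
    if cached.get("etag"):
        headers["If-None-Match"] = cached["etag"]
    if cached.get("last_modified"):
        headers["If-Modified-Since"] = cached["last_modified"]
    return headers
```

<p>A background refresher (cron or a task queue) would call this per feed, store any new ETag/Last-Modified from the response, and only reparse on a 200.</p>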
<p><strong><em>Unicode.</em></strong> I never quite cracked a few lingering Unicode bugs. Somewhere between the feed parsing with BeautifulSoup, the database ORM provider, the template engine and the web application server, something was munging the Unicode. I learnt everything I never wanted to know about character encodings, but never quite managed to fix it.</p>
<p><strong><em>Boot time and startup time matter.</em></strong> The goal was to have a news reader where you could sit down, turn on the TV and read the latest feeds quickly, without too much messing about. This is typically the appeal of a games console over a fully fledged desktop PC &#8211; startup speed, reliability and simplicity. Personally, for me, the Internet Channel cannot provide that in its current form. To launch Wiider on your Wii, you needed to:</p>
<ol>
<li>Turn on the TV, turn on your Wii.</li>
<li>Press &#8216;A&#8217; while waiting for Nintendo&#8217;s <span style="text-decoration: underline;">unskippable</span> health and safety message to disappear.</li>
<li>Launch the Internet Channel. Wait a little.</li>
<li>Go to bookmarks, launch Wiider, wait a little.</li>
</ol>
<div>While this may still sound simple, in practice I felt it was too much time and too much work for a user wanting instant gratification. There are two key waiting times (2 and 3) and one extraneous interaction (3) that I had no control over, due to limitations of the device. If Nintendo had allowed the health and safety message to be disabled, or immediately skippable, and also allowed bookmarks to web pages to appear as &#8216;Channels&#8217; on the front page of the Wii, I believe the launch time and simplicity needed for &#8216;instant gratification&#8217; would have been met. I&#8217;d anticipated that Nintendo would continue to develop the Wii as a lightweight browsing appliance; however, a feature allowing bookmarks on the home screen never appeared, and I&#8217;m pretty sure it never will, given that the Wii will be considered &#8216;obsolete&#8217; next year after the release of the Wii U. In fact, Nintendo have recently started to give indications that the Wii U will make better use of the browser and &#8216;apps&#8217;, so it will be interesting to see what they do with it &#8211; Wiider may rise from the ashes as WiiderU.</div>
<p>I&#8217;m certainly not blaming Nintendo for the ultimate retirement of my little web app. While Nintendo&#8217;s lack of interest in developing the Internet Channel to its full potential limited the utility of my project, they aren&#8217;t at fault. They do games, and they do them well. History has shown that they rarely deviate from this formula &#8211; facilitating easier access to the wilds of the open web is just not in their nature, even if it&#8217;s within their reach. I took a known risk, scratching my own itch and investing some time in a project while making some guesses about where they might head with it. My guesses turned out to be wrong.</p>
<p>Since I never felt the application was ready for casual use, I never really promoted Wiider publicly. I did have a single random signup by a user who must have stumbled upon it and added some feeds. Not really sure how they found it.</p>
<p>These days I prefer the immediacy of my smartphone or tablet for reading feeds over firing up the Wii. On the TV, things like Google TV are emerging, and no doubt many more people have home theater PCs that would run Google Reader or Feedly in a usable fashion. Many people prefer reading &#8216;feeds&#8217; via Twitter/Facebook/Google+/Reddit link streams. I think Wiider&#8217;s niche has narrowed. Ultimately, since I couldn&#8217;t see myself using it anymore, I decided to retire the project. But it&#8217;s nice to have something &#8216;finished&#8217;, even if it was never really complete.</p>
<p>Here&#8217;s an export of the documentation wiki and the source code (tailored to run on WebFaction), for posterity: <a href="/wp-content/uploads/wiider/wiider_source_postmortem.zip">wiider_source_postmortem.zip</a></p>
]]></content:encoded>
			<wfw:commentRss>http://blog.pansapiens.com/2012/01/28/wiider-postmortem/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Running a local JABAWS server for Jalview on Ubuntu (11.04 Natty)</title>
		<link>http://blog.pansapiens.com/2011/10/14/running-a-local-jabaws-server-for-jalview-on-ubuntu-11-04-natty/</link>
		<comments>http://blog.pansapiens.com/2011/10/14/running-a-local-jabaws-server-for-jalview-on-ubuntu-11-04-natty/#respond</comments>
		<pubDate>Fri, 14 Oct 2011 03:52:52 +0000</pubDate>
		<dc:creator><![CDATA[Andrew Perry]]></dc:creator>
				<category><![CDATA[bioinformatics]]></category>
		<category><![CDATA[howto]]></category>
		<category><![CDATA[linux]]></category>

		<guid isPermaLink="false">http://blog.pansapiens.com/?p=308</guid>
		<description><![CDATA[The excellent Jalview sequence alignment visualization and editing tool has the ability to send a set of sequences to a multiple sequence alignment web service (&#8220;JABAWS&#8221;) and receive the results in a new alignment window. This is really convenient when you &#8230; <a href="http://blog.pansapiens.com/2011/10/14/running-a-local-jabaws-server-for-jalview-on-ubuntu-11-04-natty/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>The excellent Jalview sequence alignment visualization and editing tool has the ability to send a set of sequences to a multiple sequence alignment web service (&#8220;JABAWS&#8221;) and receive the results in a new alignment window. This is really convenient when you are doing lots of sequence analysis, and Geoff Barton&#8217;s group at the University of Dundee provide a JABAWS server that Jalview will use by default.</p>
<p>But maybe the Dundee server is down. Or maybe you think your local machine will do things faster. Or maybe you work on über-secret sequences in some Faraday-cage bunker with no permanent network connection. In each of these cases, you may want to run your own local JABAWS server and use that instead. If so, read on.</p>
<p><span id="more-308"></span></p>
<p><a href="http://www.compbio.dundee.ac.uk/jabaws/download.html">Download the JABAWS war file</a> (direct link <a href="http://www.compbio.dundee.ac.uk/jabaws/archive/jaba.war">here</a>).</p>
<p>Install Apache Tomcat and the management interface:</p>
<pre style="padding-left: 30px;">sudo apt-get install tomcat6 tomcat6-admin</pre>
<p>As root, edit the<em> /etc/tomcat6/tomcat-users.xml</em> file to enable admin access.</p>
<p>Between the <em>&lt;tomcat-users&gt;&lt;/tomcat-users&gt;</em> tags, add:</p>
<pre style="padding-left: 30px;">&lt;role rolename="admin"/&gt;
&lt;user username="tomcat" password="s3cret" roles="admin"/&gt;</pre>
<p>where &#8216;s3cret&#8217; is a secret password for the user &#8216;tomcat&#8217;.</p>
<p>Go to <a href="http://localhost:8080/manager/html/">http://localhost:8080/manager/html/</a> and log in as &#8216;tomcat&#8217; with the password you set.</p>
<p>Under <em>&#8220;WAR file to deploy&#8221;</em>, click on the <em>&#8220;Choose File&#8221;</em> button, and select the <em>jaba.war</em> file you downloaded.</p>
<p>Now you need to set the permissions of the Muscle/Mafft/Clustal etc binaries that come packaged with JABAWS. Type the following commands:</p>
<pre style="padding-left: 30px;">cd /var/lib/tomcat6/webapps/jaba/binaries/src</pre>
<pre style="padding-left: 30px;">sudo chmod +x setexecflag.sh</pre>
<pre style="padding-left: 30px;">sudo ./setexecflag.sh</pre>
<p>That should do it. In <a href="http://www.jalview.org/">Jalview</a>, go to Preferences, and under the &#8220;Web Services&#8221; tab add a new service URL &#8220;http://localhost:8080/jaba&#8221; (no quotes, no trailing slash).  Now when you load an alignment, your local JABAWS server should appear under the &#8220;Web Service &#8594; JABAWS Alignment&#8221; menu.</p>
<p><em>For the record .. I tried this under the version of Jetty packaged with Ubuntu 11.04, but I couldn&#8217;t get it to work so I gave up and just did it with Tomcat as per the JABAWS documentation.</em></p>
<h2>Links:</h2>
<p>This HOWTO is an Ubuntu specific regurgitation of the docs below.</p>
<pre><a href="http://www.compbio.dundee.ac.uk/jabaws/manual_qs_war.html">http://www.compbio.dundee.ac.uk/jabaws/manual_qs_war.html</a></pre>
<pre><a href="https://help.ubuntu.com/10.04/serverguide/C/tomcat.html">https://help.ubuntu.com/10.04/serverguide/C/tomcat.html</a></pre>
]]></content:encoded>
			<wfw:commentRss>http://blog.pansapiens.com/2011/10/14/running-a-local-jabaws-server-for-jalview-on-ubuntu-11-04-natty/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>A mobile interface to the Registry of Standard Biological Parts</title>
		<link>http://blog.pansapiens.com/2010/10/24/a-mobile-interface-to-the-registry-of-standard-biological-parts/</link>
		<comments>http://blog.pansapiens.com/2010/10/24/a-mobile-interface-to-the-registry-of-standard-biological-parts/#respond</comments>
		<pubDate>Sun, 24 Oct 2010 08:37:47 +0000</pubDate>
		<dc:creator><![CDATA[Andrew Perry]]></dc:creator>
				<category><![CDATA[android]]></category>
		<category><![CDATA[code]]></category>
		<category><![CDATA[python]]></category>
		<category><![CDATA[synthetic biology]]></category>
		<category><![CDATA[app engine]]></category>
		<category><![CDATA[gae]]></category>
		<category><![CDATA[Google App Engine]]></category>
		<category><![CDATA[igem]]></category>
		<category><![CDATA[iphone]]></category>
		<category><![CDATA[jQTouch]]></category>
		<category><![CDATA[mobile]]></category>

		<guid isPermaLink="false">http://blog.pansapiens.com/?p=241</guid>
		<description><![CDATA[Recently I developed a simple mobile interface to the Registry of Standard Biological Parts &#8211; the database that is currently the focal point for parts-based synthetic biology. I&#8217;ve called this mobile interface mPartsRegistry and I thought it would be worth &#8230; <a href="http://blog.pansapiens.com/2010/10/24/a-mobile-interface-to-the-registry-of-standard-biological-parts/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Recently I developed a simple mobile interface to the <a href="http://partsregistry.org/">Registry of Standard Biological Parts</a> &#8211; the database that is currently the focal point for parts-based synthetic biology. I&#8217;ve called this mobile interface <a href="http://mpartsregistry.appspot.com/">mPartsRegistry</a> and I thought it would be worth outlining its features and sharing some notes about the project, in case someone else finds it useful.</p>
<p><a href="http://mpartsregistry.appspot.com/">mPartsRegistry</a> is a simple interface to the Registry of Standard Biological Parts aimed at mobile smartphone browsers. It&#8217;s powered by the <a href="http://partsregistry.org/Registry_API">Parts Registry API</a>, which provides a simple RESTful interface to key metadata about parts in the database. It features:</p>
<ul>
<li>A simple interface tailored for mobile WebKit browsers (Android browser, mobile Safari, probably others). Web-based, zero-installation required.</li>
<li>Basic search of the Registry by part name.</li>
<li>&#8220;Favorite parts&#8221; to locally bookmark parts on your device.</li>
<li>Provides basic metadata associated with parts, including size, description, authors, DNA sequence, categories and availability.</li>
<li>Freely available and recyclable source code, released under the MIT License (<a href="http://github.com/pansapiens/mPartsRegistry">fork it on GitHub</a>).</li>
</ul>
<p><img class="alignleft" style="border: 2px solid black;" src="http://mpartsregistry.appspot.com/img/screenshot1.png" alt="" width="224" height="336" /></p>
<p>The idea for a mobile interface to the Registry came out of a moment in the wet lab, where I was supervising the Monash iGEM team, and someone asked &#8220;How many basepairs is that part again ?&#8221;. I&#8217;ve found most ideas for smartphone apps in the lab a little contrived; nothing more than an excuse to jump on the Android or iOS app bandwagon, with limited practical utility. This was a situation where I could genuinely see a use for a simple mobile interface to look up some reference information, so I thought I&#8217;d create it.</p>
<p>The goal is not to completely replicate the functionality of the Registry (at this stage the API would not allow that anyhow), but to provide a simple, mobile-friendly interface to quickly look up important data about BioBrick(tm) parts in a laboratory setting, where accessing a desktop computer is often less convenient. In this context, you generally know the part name (eg B0034) that is written on a tube, but would like to quickly look up some details.</p>
<p>The project consists of two main parts: the web frontend, built using <a href="http://jqtouch.com/">jQTouch</a> and Django templates hosted on <a href="http://code.google.com/appengine/">Google App Engine</a>, and the parser backend (<em>partsregistry.py</em>) that deals with directly querying the Registry API.</p>
<p><img class="alignright" style="border: 2px solid black;" src="http://mpartsregistry.appspot.com/img/screenshot2.png" alt="" width="224" height="336" /></p>
<p>The application uses <a href="http://www.crummy.com/software/BeautifulSoup/">BeautifulSoup</a> on the server side to parse the XML served by the Registry&#8217;s API. This parser may be useful as a generic Python interface to the Registry API for other projects, although it is not yet feature complete. Why parse the XML on the server rather than the client ? The Registry API does not offer JSONP callbacks, making direct client-to-API queries by a web app served from another domain tricky (Same Origin Policy, yadda yadda). While this <em>probably</em> could have been done in straight clientside Javascript if I&#8217;d used some type of cross-domain AJAX hack, parsing on the server side also opens the possibility in the future to &#8216;value-add&#8217; to the data in some way, potentially incorporating extra data not served directly by the Registry API, before it&#8217;s sent to the client.</p>
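<p>The server-side parsing step can be sketched roughly like this. Note this is a standalone illustration, not the actual <em>partsregistry.py</em> code: the real backend used BeautifulSoup against the live Registry API, whereas this sketch uses the Python standard library&#8217;s ElementTree on a hard-coded XML snippet whose element names only approximate the Registry API&#8217;s format.</p>

```python
# Minimal sketch of server-side parsing of Registry-API-style XML.
# The element names and sample record are illustrative, not a faithful
# copy of the real API response; the real app used BeautifulSoup.
import xml.etree.ElementTree as ET

SAMPLE_XML = """<rsbpml>
  <part_list>
    <part>
      <part_name>BBa_B0034</part_name>
      <part_short_desc>RBS (Elowitz 1999) -- defines RBS efficiency</part_short_desc>
      <sequences><seq_data>aaagaggagaaa</seq_data></sequences>
    </part>
  </part_list>
</rsbpml>"""

def parse_parts(xml_text):
    """Extract key metadata for each part into a plain dict."""
    root = ET.fromstring(xml_text)
    parts = []
    for part in root.iter("part"):
        seq = (part.findtext("sequences/seq_data", default="") or "").strip()
        parts.append({
            "name": part.findtext("part_name"),
            "description": part.findtext("part_short_desc"),
            "sequence": seq,
            "length_bp": len(seq),
        })
    return parts

parts = parse_parts(SAMPLE_XML)
print(parts[0]["name"], parts[0]["length_bp"])  # BBa_B0034 12
```

<p>The dicts produced by a parser like this are easy to hand to Django templates on the GAE side, which is one more reason to keep the parsing on the server.</p>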
<p>Google App Engine works as a cheap hosting solution for a low traffic app like this, which is likely to stay within the free quotas. Also, GAE supports Python, and I like Python. <a href="http://jqtouch.com/">jQTouch</a> makes for a reasonable cross-platform mobile web interface, since it is optimized for WebKit-based browsers. While officially jQTouch supports iPhone/iPod Touch and doesn&#8217;t have official Android support, in my hands it works well enough on Android (and in fact displayed some minor bugs on Mobile Safari that were not evident on Android). Typically when using jQTouch you are expected to load multiple &#8216;pages&#8217; as separate div sections lumped into a single HTML document. jQTouch then does the Javascript+CSS magic to render fast page switching, while actually working within a single HTML document. Since the main action of this app is to &#8216;search&#8217;, we don&#8217;t yet know what the results page will be, so this nice feature of jQTouch is barely used.</p>
<p>Searching for the same part all the time can get annoying, so mPartsRegistry provides a simple &#8216;bookmarking&#8217; feature where a list of favorite parts can be managed and stored on the device. This is implemented via HTML5 localStorage &#8211; if there was demand then this could easily be turned into server side storage, but I doubt it&#8217;s necessary. In the future, it might make sense to pre-cache the metadata for any of these &#8220;favorite parts&#8221; so that the fast page switching features in jQTouch can be used to full advantage.</p>
<p>Currently, the interface does not show information about sequence features, subparts and twins, however I plan to implement these at some point. The Registry API currently does not provide information about samples, literature references or lab groups, but once these are enabled I plan to support this metadata within mPartsRegistry too.</p>
<p>Okay, that&#8217;s all kids .. and remember .. take off your gloves before using your smartphone in the lab !</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.pansapiens.com/2010/10/24/a-mobile-interface-to-the-registry-of-standard-biological-parts/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Stack Exchange sites for science</title>
		<link>http://blog.pansapiens.com/2010/05/12/stackexchange-sites-for-science/</link>
		<comments>http://blog.pansapiens.com/2010/05/12/stackexchange-sites-for-science/#comments</comments>
		<pubDate>Wed, 12 May 2010 05:33:48 +0000</pubDate>
		<dc:creator><![CDATA[Andrew Perry]]></dc:creator>
				<category><![CDATA[bioinformatics]]></category>
		<category><![CDATA[linux]]></category>
		<category><![CDATA[science]]></category>
		<category><![CDATA[web2.0]]></category>
		<category><![CDATA[crystallography]]></category>
		<category><![CDATA[friendfeed]]></category>
		<category><![CDATA[nmr]]></category>
		<category><![CDATA[stack exchange]]></category>

		<guid isPermaLink="false">http://blog.pansapiens.com/?p=205</guid>
		<description><![CDATA[Recently I&#8217;ve noticed the emergence of several Stack Overflow-style sites for science-related questions and answers. For those unfamiliar with Stack Overflow &#8211; it&#8217;s a question and answer &#8216;forum&#8217; for computer programmers that keeps the signal-to-noise ratio very high through a &#8230; <a href="http://blog.pansapiens.com/2010/05/12/stackexchange-sites-for-science/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Recently I&#8217;ve noticed the emergence of several Stack Overflow-style sites for science-related questions and answers. For those unfamiliar with Stack Overflow &#8211; it&#8217;s a question and answer &#8216;forum&#8217; for computer programmers that keeps the signal-to-noise ratio very high through a carefully refined reputation system. Late last year the creators of Stack Overflow launched a hosted service called Stack Exchange, which allows anyone to start their own &#8220;Stack Overflow&#8221; based around any topic.</p>
<div id="attachment_222" style="width: 310px" class="wp-caption aligncenter"><a href="http://www.flickr.com/photos/alicebartlett/2363694735/"><img class="size-medium wp-image-222   " style="margin-top: 2px; margin-bottom: 2px;" src="http://blog.pansapiens.com/wp-content/uploads/2010/05/2363694735_507a4eea3b_o-300x237.jpg" alt="2363694735_507a4eea3b_o" width="300" height="237" srcset="http://blog.pansapiens.com/wp-content/uploads/2010/05/2363694735_507a4eea3b_o-300x237.jpg 300w, http://blog.pansapiens.com/wp-content/uploads/2010/05/2363694735_507a4eea3b_o.jpg 688w" sizes="(max-width: 300px) 100vw, 300px" /></a><p class="wp-caption-text"> http://www.flickr.com/photos/alicebartlett/ / CC BY-NC 2.0</p></div>
<div>The service was a little pricey ($129+/month), and I suspect this is one reason why a few open source clones inspired by Stack Overflow also exist. Since then, Stack Exchange sites (or clones) have proliferated &#8211; and those working as scientists (or those interested in science) haven&#8217;t been neglected. Here are my favorites:</div>
<ul>
<li><a href="http://majorgroove.org/">MajorGroove.org</a> pitches itself as a &#8216;forum for biologists&#8217;, which it is, although most of the content currently focuses on X-ray crystallography and associated techniques. It is currently in &#8216;bootstrap mode&#8217;, which means that reputation requirements are a little less strict until the userbase and site activity have grown to a critical size. Is there even a need for a Stack Exchange forum for biological crystallography ? Macromolecular crystallography already has a single, central, <em>de facto </em>standard forum &#8211; the <a href="https://www.jiscmail.ac.uk/cgi-bin/webadmin?S1=CCP4BB">CCP4BB mailing list</a>. While it may be antiquated by Web2.0 standards, CCP4BB works well for a lot of people, and there is a huge amount of useful and important information buried in its archives. For many crystallographers, it seems CCP4BB would only be extracted from their &#8220;cold dead hands&#8221;. Despite this, I think the Stack Overflow format will be very beneficial for people new to the field.  <em>As a side note </em>&#8211; I discovered MajorGroove via <a href="http://xia2.blogspot.com/">Graeme Winter&#8217;s XIA2 blog</a> right around the time when I was considering kickstarting a &#8220;Stack Overflow for crystallography&#8221;. At the moment it seems that a small userbase of crystallographers is already established on MajorGroove and there would be no purpose for another near-identical forum. Even if questions about other techniques in the biosciences start to dilute out the structural biology, one click on the &#8216;<a href="http://www.majorgroove.org/questions/tagged/crystallography">crystallography</a>&#8216; tag or the &#8216;<a href="http://www.majorgroove.org/questions/tagged/ccp4">ccp4</a>&#8216; tag, and you can get straight to the good stuff. 
(In fact this feature was deemed useful enough by Google that they decided to bless the &#8216;<a href="http://stackoverflow.com/questions/tagged/android">android</a>&#8216; tag on Stack Overflow as the official Android Q&amp;A forum).</li>
<li>NMRWiki Q&amp;A (<a href="http://qa.nmrwiki.org/">http://qa.nmrwiki.org/</a>) is a StackExchange-clone for magnetic resonance, mostly focused on NMR, but also open to EPR/ESR and MRI users. It&#8217;s not actually running on the StackExchange platform, but uses the open source <a href="http://github.com/cnprog/CNPROG/network">OSQA / CNPROG</a> clone, built on top of Django. As far as I know, there is no &#8220;CCP4BB for NMR&#8221;, which makes the NMRWiki Q&amp;A site potentially even more valuable to structural biologists than its crystallography-centric cousin, MajorGroove. Back when I was doing my PhD using protein NMR spectroscopy as my primary technique, there were very few good resources like this online &#8211; I do less NMR these days, but you can bet that I&#8217;ll be using the NMRWiki Q&amp;A and its associated wiki to refresh my memory and catch up on new methodological developments in the future.</li>
<li>BioStar (<a href="http://biostar.stackexchange.com/">http://biostar.stackexchange.com/</a>), a StackExchange for bioinformatics, computational genomics and systems biology questions and answers. This one is busier and better established than the above mentioned forums, probably by virtue of the fact that bioinformaticians spend more time in front of the computer than your average molecular biologist or structural biologist.</li>
<li>And, for a bit of fun: Skeptic Exchange (<a href="http://exchange.bristolskeptics.co.uk/">http://exchange.bristolskeptics.co.uk/</a>), which covers rational questions and answers to various topics including pseudoscience, faith healing, the supernatural and alternative medicine.</li>
</ul>
<p>Want more ? There are a bunch of science related StackExchanges listed under &#8220;Science&#8221; here: <a href="http://meta.stackexchange.com/questions/4/list-of-stackexchange-sites">http://meta.stackexchange.com/questions/4/list-of-stackexchange-sites</a> .. and digging back through the <a href="http://friendfeed.com/todd-lab/dd6ae79e/some-stack-exchange-based-science-discussion#">FriendFeed archives I see Matt Todd initiated a concise listing</a> (which if I&#8217;d seen, I probably never would have started this post).</p>
<p>And now, the latest<strong>*</strong> news: <a href="http://blog.stackexchange.com/post/518474918/stack-exchange-2-0">Stack Exchange 2.0 will be &#8216;free&#8217;</a>. It looks like they are trying to structure the new Stack Exchange ecosystem a bit like the Usenet hierarchy (comp.*, rec.* etc), with a fairly involved discussion, proposal and acceptance process for new sites &#8211; it&#8217;s unclear yet whether this approach is going to work out better than just open sourcing the whole shebang, but time will tell. My guess is that BioStar, MajorGroove and probably even an incarnation of NMRWiki Q&amp;A will eventually become part of this formalized ecosystem.</p>
<p>On one hand making StackExchange sites free to run is great &#8211; it lowers the barrier to entry, allowing many more sites to emerge and operate. On the other hand, as we have seen with the acquisition of FriendFeed by Facebook, not having a clear revenue stream can ultimately leave communities (such as <a href="http://friendfeed.com/the-life-scientists">The Life Scientists</a>) without any certainty about the site&#8217;s future, potentially impacting growth and participation. Personally I&#8217;m much more inclined to invest time in a site if it is something like Wikipedia, where I know my contributions are very likely to live on, in some form, for decades (centuries ?) to come. Ideally the archives of these new Stack Exchange sites could become useful online resources for decades to come &#8211; but with a single company at the helm and a &#8220;Web 2.0 business model&#8221;, continued operation for even a decade seems unlikely. The one saving grace: all content on the new Stack Exchange sites will be licensed under a Creative Commons license &#8211; so if Stack Exchange itself is acquired and shut down, we will always be able to preemptively leech the archives and provide them online elsewhere. Maybe it&#8217;s strange that I&#8217;m already thinking about archiving the new Stack Exchange upon its demise before it&#8217;s even begun, but I think it&#8217;s important to take the long term view with our data and recorded wisdom. Unlike in 1994 when GeoCities (<a href="http://www.oocities.com/">R.I.P.</a>) was started, teh Internets is no longer a fad &#8211; the hard disks connected to it are fast becoming the sum of all accessible human knowledge, so we&#8217;d better make sure we can retain the good bits for a little longer than 10 years.</p>
<p><em>* &#8211; as all too common these days .. I&#8217;m a little behind the curve on this one. I meant to finish this post a month ago, but with a busy time pre-holiday, then the actual holiday, a month has gone by.</em></p>
]]></content:encoded>
			<wfw:commentRss>http://blog.pansapiens.com/2010/05/12/stackexchange-sites-for-science/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>The Great Australian Internet Blackout WordPress Plugin</title>
		<link>http://blog.pansapiens.com/2010/01/22/great-australian-internet-blackout-wordpress-plugin/</link>
		<comments>http://blog.pansapiens.com/2010/01/22/great-australian-internet-blackout-wordpress-plugin/#respond</comments>
		<pubDate>Fri, 22 Jan 2010 00:28:49 +0000</pubDate>
		<dc:creator><![CDATA[Andrew Perry]]></dc:creator>
				<category><![CDATA[code]]></category>
		<category><![CDATA[meta]]></category>
		<category><![CDATA[software]]></category>
		<category><![CDATA[wordpress]]></category>
		<category><![CDATA[censorship]]></category>
		<category><![CDATA[nocleanfeed]]></category>
		<category><![CDATA[plugin]]></category>

		<guid isPermaLink="false">http://blog.pansapiens.com/?p=193</guid>
		<description><![CDATA[Normally I stick to posts about science and technology on this blog. Like most Australians, I vote in elections, try to remain informed, but otherwise stay away from getting involved in politics. However, occasionally certain things become important enough issues &#8230; <a href="http://blog.pansapiens.com/2010/01/22/great-australian-internet-blackout-wordpress-plugin/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Normally I stick to posts about science and technology on this blog. Like most Australians, I vote in elections, try to remain informed, but otherwise stay away from getting involved in politics. However, occasionally certain things become important enough issues that they need to be advertised more widely.</p>
<p>As you may know, the Australian Federal Government is attempting to censor the Internet within Australia by forcing ISPs to block a list of websites. This <strong>proposed internet filter will not be optional</strong>; it will affect all Australians, and the blocklist will be compiled by a small group of people. The<strong> list of blocked sites will remain secret</strong>, so the Australian public will find it difficult to determine if this power is being abused. It <strong>will not prevent the spread of illegal material</strong>, which is typically shared via peer-to-peer networks that will not be blocked by the internet filter. If it is not already self-evident why this approach to internet censorship is <strong>ineffective, a waste of resources</strong> and a potential threat to the freedom of information flow required for a healthy democracy, you can read more at the <a href="http://www.internetblackout.com.au/">Great Australian Internet Blackout site</a> and the <a href="http://www.efa.org.au/">Electronic Frontiers Australia</a> site.</p>
<p>The Great Australian Internet Blackout is a combined online and offline demonstration against this imposed online censorship. For one week – January 25-29th – Aussie websites will &#8220;black out&#8221; to inform an even wider audience about the threat of imposed censorship.</p>
<div id="attachment_194" style="width: 302px" class="wp-caption aligncenter"><img class="size-medium wp-image-194" title="The Great Australian Internet Blackout popup" src="http://blog.pansapiens.com/wp-content/uploads/2010/01/tgib_popup-292x300.png" alt="This is what it looks like right now. I'm guessing that on January 25th something exciting will appear inside that popup box !" width="292" height="300" srcset="http://blog.pansapiens.com/wp-content/uploads/2010/01/tgib_popup-292x300.png 292w, http://blog.pansapiens.com/wp-content/uploads/2010/01/tgib_popup.png 709w" sizes="(max-width: 292px) 100vw, 292px" /><p class="wp-caption-text">This is what it looks like right now. I&#39;m guessing that on January 25th something exciting (or educational) will appear inside that popup box !</p></div>
<p><span id="more-193"></span></p>
<p>I&#8217;ve created a simple WordPress plugin that makes it a little easier to participate in the demonstration and spread the word. It uses the &#8216;blackout.js&#8217; script written by <a href="http://inodes.org/">John Ferlito</a> to display a popup box that tells the user about the Great Australian Internet Blackout, while &#8220;blacking out&#8221; (significantly darkening) your website in the background. Once the user closes the box things go back to normal &#8211; it uses cookies so they only see the popup once.</p>
<h3 style="text-align: center;"><strong><a href="/wp-content/uploads/internet-blackout-wordpress-plugin/internet-blackout-wordpress-plugin_0.9.zip">Download the Internet Blackout WordPress Plugin</a></strong></h3>
<p style="text-align: center;"><span style="font-weight: normal;">(version 0.9, md5: 16522abb4d492f445a4c5ffccd845c73 )</span></p>
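<p>Since the MD5 is published above, it&#8217;s worth a quick check before installing. Here&#8217;s a minimal sketch of verifying the download in Python (the filename matches the download link above; any path on your machine will do):</p>

```python
# Minimal sketch: verify the downloaded plugin zip against the MD5
# published in this post before unpacking it into wp-content/plugins/.
import hashlib

EXPECTED_MD5 = "16522abb4d492f445a4c5ffccd845c73"

def md5_of(path, chunk_size=65536):
    """Compute the MD5 hex digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage, e.g.:
# if md5_of("internet-blackout-wordpress-plugin_0.9.zip") != EXPECTED_MD5:
#     raise SystemExit("Checksum mismatch - don't install this zip !")
```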
<p><a href="http://codex.wordpress.org/Managing_Plugins">Install it as you would any other simple WordPress plugin</a> &#8211; eg, unzip the archive in your <em>wp-content/plugins/</em> directory on the server. Also, online demonstrations are all well and good, but that shouldn&#8217;t be where it ends. Finish the installation by <a href="http://nocleanfeed.com/action.html">Contacting your Member of Parliament</a>.</p>
<p>This is my first WordPress plugin, so it may be sub-optimal (or even contain bugs !). I&#8217;ve put the <a href="http://github.com/pansapiens/internet-blackout-wordpress-plugin">Internet Blackout plugin  source on Github</a> so that programmer-types can fix it, if need be.</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.pansapiens.com/2010/01/22/great-australian-internet-blackout-wordpress-plugin/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>2009 &#8211; the posts that never made it</title>
		<link>http://blog.pansapiens.com/2010/01/02/2009-the-posts-that-never-made-it/</link>
		<comments>http://blog.pansapiens.com/2010/01/02/2009-the-posts-that-never-made-it/#comments</comments>
		<pubDate>Sat, 02 Jan 2010 11:57:23 +0000</pubDate>
		<dc:creator><![CDATA[Andrew Perry]]></dc:creator>
				<category><![CDATA[meta]]></category>
		<category><![CDATA[science]]></category>
		<category><![CDATA[2009]]></category>
		<category><![CDATA[diybio]]></category>
		<category><![CDATA[icecondor]]></category>
		<category><![CDATA[synthetic biology]]></category>

		<guid isPermaLink="false">http://blog.pansapiens.com/?p=168</guid>
		<description><![CDATA[So, people tell me 2009 ended recently. Apparently there were fireworks and stuff. This blog as seen very little action during 2009, despite my various good intentions for a blog &#8216;reboot&#8217; (ala Pawel). Like many of my online friends, I &#8230; <a href="http://blog.pansapiens.com/2010/01/02/2009-the-posts-that-never-made-it/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>So, people tell me 2009 ended recently. Apparently there were fireworks and stuff. This blog has seen very little action during 2009, despite my various good intentions for a blog &#8216;reboot&#8217; (ala <a href="http://freelancingscience.com/2009/01/11/science-and-art-new-theme-for-the-new-year/">Pawel</a>).</p>
<p>Like many of my online friends, I blame FriendFeed. I find commenting on a FriendFeed post a much more productive way of having a conversation around some new development sweeping the web than writing a dedicated blog post. Still, despite this being my &#8220;year of FriendFeed&#8221;, I <em>started</em> writing a few blog posts / articles / essays this year which never made it out of the Drafts folder. There is a positive side to unpublished drafts &#8211; they serve to nicely organize some thoughts, even if they are ultimately never shared. Anyhow, it&#8217;s time to clean them out and move on &#8211; and as part of that process &#8211; here are the highlights of my posts that never were.</p>
<p><span id="more-168"></span></p>
<h3>&#8220;Why biohacking cannot come of age&#8221;</h3>
<p>I wrote quite a long essay around the time that synthetic biology was getting lots of press, and just before DIYbio appeared on the scene (as a side note: the name &#8220;DIYbio&#8221; is PR genius &#8211; taking the &#8216;hacking&#8217; out of biohacking to help avoid misinterpretation by the mass media was a smart move). The opening of this defunct post pretty much sums up its contention:</p>
<blockquote><p>&#8220;A healthy biohacking ecosystem requires the participation of hobbyists, and will fail to flourish in the same way &#8216;Information Technology&#8217; and &#8216;The Internet&#8217; have flourished if participants remain confined to academic and commercial labs.&#8221;.</p></blockquote>
<p>The old Silicon Valley example (myth?) of the two guys, both called Steve, launching technology from their garage was cited. I then went on to state the obvious &#8211; current regulatory frameworks surrounding recombinant DNA and genetic modification make most serious pursuits by hobbyists acting alone legally dubious. Ultimately, I chickened out and decided it was better left unpublished, but a highly modified version may emerge one day. Key links:</p>
<ul>
<li><a href="http://www.wired.com/medtech/health/news/2004/06/63637">The case of Professor Steven Kurtz</a></li>
<li>&#8220;<em>The bio-security framework is going to collapse</em>. — <a href="http://www.edge.org/3rd_culture/endy08/endy08_index.html">Drew Endy</a>&#8220;</li>
<li><a href="http://www.wired.com/wired/archive/14.06/chemistry.html">Good chemistry kits are hard to buy these days</a></li>
<li>The “<a href="http://en.wikipedia.org/wiki/Precautionary_principle">precautionary principle</a>”</li>
</ul>
<h3>IceCondor &#8211; continuous location tracking</h3>
<p>Around the end of 2008 when I was momentarily in employment limbo, I began to write an Android mobile geolocation app and started playing with Don Park&#8217;s <a href="http://icecondor.com/">IceCondor</a>. I decided to highlight it with a blog post, but never got around to publishing it. Essentially, IceCondor is/was a location sharing app, but unlike BrightKite, FireEagle, Google Latitude, Foursquare (&amp; Twitter, these days), IceCondor does <em>continuous location tracking</em>. eg, your GPS location can be shared every 30 seconds via 3G on your Android device (although high frequency updates eat the battery quickly, so lower frequency updates are more practical). IceCondor (initially) didn&#8217;t include any privacy settings &#8211; all locations were openly shared online, with individuals identifiable via their OpenID. As far as I could tell, the only two individuals that gave it any significant use were Don Park, and myself. My main point for writing about IceCondor was to argue that wilfully sharing your location in realtime and opting out of some privacy may actually be <em>safer</em> than not sharing your location. I believe that for most people there is more chance of being randomly mugged than actively stalked, so letting people know where you are is a Good Thing(tm). Don has since changed the focus of IceCondor (at least in the version on the Android Market) to be a simple GeoRSS reader. I get the impression that he is <a href="http://everyonedelivers.com/">working on other things</a> these days, but the original software and its potential uses are pretty cool &#8211; it lives on <a href="http://github.com/donpdonp/icecondor-client-android">at GitHub</a>, and I notice he has been poking at it again recently.</p>
<h3>(Re)-discovering Pymol</h3>
<p>I get a little sad thinking about this particular post. I&#8217;d planned to write about some lesser known functions of Pymol that I had recently discovered (namely the -p, -R and -G commandline options), but never got round to investigating them thoroughly enough to warrant a blog post. Some time after starting the draft and then leaving it to languish, the author of Pymol, <a href="http://www.jmdelano.com/">Warren DeLano</a>, tragically passed away at the age of 37. I never met Warren, but I was a grateful user of his amazing software, and I wish his family well over what must have been a difficult festive season without him.</p>
<h3>Protein sequence clustering tools</h3>
<p>I planned to write an article comparing protein sequence clustering tools. I still might, but here is the unannotated list so far:</p>
<ul>
<li>CLANS</li>
<li>Blastclust</li>
<li>CD-HIT</li>
<li>MCL / TribeMCL (<a href="http://micans.org/mcl/"> http://micans.org/mcl/</a> )</li>
<li>an excellent <a href="http://en.wikipedia.org/wiki/Sequence_clustering">list of sequence clustering tools on Wikipedia</a></li>
</ul>
<h3>Spam as an indicator of social network success ?</h3>
<p>Surely there are already multiple essays on this topic by social media and internet culture enthusiasts. I&#8217;ve only searched briefly. The idea for this post was stimulated by some advertising that was sent to me via my delicious inbox (On an unrelated note: 2009 was the year <a href="http://www.diigo.com/user/pansapiens">I moved to Diigo for social bookmarking</a>). This spam wasn&#8217;t as indiscriminate as the usual &#8220;enlarge your whatever&#8221; you expect by email, but rather some fairly niche advertising for cheminformatics software &#8230; while probably not spam in the strictest sense, it was nonetheless &#8220;spammish&#8221; in nature since numerous others were also targeted (via delicious &#8220;for:&#8221; tags). <a href="http://nsaunders.wordpress.com/">Neil Saunders</a> also noted that he had seen some spam on Slideshare. Key ideas:</p>
<ul>
<li>Is spam an indicator of social network self-sustainability, &#8216;viral growth&#8217; or &#8216;critical mass&#8217; ?</li>
<li>or is it an indicator that &#8216;stationary phase&#8217;, the slowing of growth, has begun ?</li>
<li>Just as &#8220;<em>the network interprets censorship as damage and routes around it</em>&#8220;, does spam &#8220;<em>interpret small networks as inviable, and avoid them</em>&#8221; ?</li>
<li>How does this relate to the cost / reward &#8211; ie. cost of spamming vs. potential audience &#8211; see <a href="http://www.schneier.com/blog/archives/2008/11/the_economics_o.html">Economics of Spam</a>.</li>
</ul>
<h3>Synthetic biology 4.0: reflections on the state of play</h3>
<p>This is one I&#8217;d totally forgotten about until now, from late 2008, written shortly after I&#8217;d attended the Synthetic Biology 4.0 conference in Hong Kong. It contained the picture below, along with lots of <em>opinion.</em></p>
<p><a href="http://blog.pansapiens.com/wp-content/uploads/2008/10/sb_gartner_hype_cycle.png" rel="lightbox[168]"><img class="size-medium wp-image-85" title="Synthetic biology: where is it on the hype cycle ?" src="http://blog.pansapiens.com/wp-content/uploads/2008/10/sb_gartner_hype_cycle-300x194.png" alt="Modified from Jeremy Kemps version at Wikipedia, used under Creave Commons Attribution-ShareAlike 3.0 license." width="300" height="194" srcset="http://blog.pansapiens.com/wp-content/uploads/2008/10/sb_gartner_hype_cycle-300x194.png 300w, http://blog.pansapiens.com/wp-content/uploads/2008/10/sb_gartner_hype_cycle.png 559w" sizes="(max-width: 300px) 100vw, 300px" /></a></p>
<p><a href="http://en.wikipedia.org/wiki/Technology_hype">Gartner&#8217;s hype cycle</a></p>
<p>On re-reading it, I&#8217;ve decided to make some final changes and <a href="http://blog.pansapiens.com/2008/10/16/synthetic-biology-4-0-reflections-on-the-state-of-play">retro-publish it anyway</a>. It&#8217;s not the most coherent article I&#8217;ve ever written, and some of my opinions have probably changed in the last 12 months, but I couldn&#8217;t bring myself to just trash it.</p>
<h3>More thoughts on Biopython from a non-contributing shoegazer</h3>
<p>This post was a little bit of a rant/analysis that probably better belongs on the Biopython development mailing list. It was started by <a href="http://igotgenes.blogspot.com/2008/08/not-biopythonista-i-thought-id-be.html">Chris Lasher lamenting that academic researchers are rarely encouraged to work on tools like Biopython,</a> and continued by summarizing <a href="http://ivory.idyll.org/blog/sep-08/the-future-of-bioinformatics-part-1a.html">various</a> <a href="http://www.davispj.com/posts/python-in-bioinformatics.html">people&#8217;s</a> ideas on why Bioperl still remains dominant over Biopython. My main conclusion (if there was one) was that the Biopython team has, over the years, done a good job of maintaining a high standard of quality by deprecating unused, undocumented and unit-test-less code &#8230; but sometimes perfect has been the enemy of good. Plus, Bioperl had a head start <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<h3>The Golden ratio in molecular biology ?</h3>
<p>This one has been sitting in Drafts since 2007. I really should just dump it, but the idea still appeals to me. The Golden ratio does appear in nature at the macroscopic level, so why not at the micro- or nano-scale ?</p>
<p>Here&#8217;s a choice quote from my notes that may explain why I haven&#8217;t yet finished this post:</p>
<blockquote><p>I think one difficulty in searching for this type of stuff is that the Golden ratio is popular with those into &#8220;numerical mysticism&#8221;, so if PubMed gives you naught, you have to wade through a lot of kooky pseudoscience in the Google hits before you find the &#8220;real science&#8221;.</p></blockquote>
<p>Maybe it will see the light of day in 2010, you never know.</p>
<h3>Computation in a single cell &#8230; how many logic gates would fit ?</h3>
<p>Well &#8230; you tell me <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
]]></content:encoded>
			<wfw:commentRss>http://blog.pansapiens.com/2010/01/02/2009-the-posts-that-never-made-it/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>A proposal for encouraging user contributed annotations to Uniprot</title>
		<link>http://blog.pansapiens.com/2009/08/03/a-proposal-for-encouraging-user-contributed-annotations-to-uniprot/</link>
		<comments>http://blog.pansapiens.com/2009/08/03/a-proposal-for-encouraging-user-contributed-annotations-to-uniprot/#comments</comments>
		<pubDate>Mon, 03 Aug 2009 09:21:56 +0000</pubDate>
		<dc:creator><![CDATA[Andrew Perry]]></dc:creator>
				<category><![CDATA[bioinformatics]]></category>
		<category><![CDATA[science]]></category>
		<category><![CDATA[uniprot]]></category>

		<guid isPermaLink="false">http://blog.pansapiens.com/?p=143</guid>
		<description><![CDATA[Today I attended a presentation by Maria J Martin about Uniprot and various other EBI database services. At the end of the talk, someone asked something to the effect of &#8220;How about simplifying user submission of annotations / corrections&#8221; &#8211; &#8230; <a href="http://blog.pansapiens.com/2009/08/03/a-proposal-for-encouraging-user-contributed-annotations-to-uniprot/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p>Today I attended a presentation by Maria J Martin about Uniprot and various other EBI database services. At the end of the talk, someone asked something to the effect of &#8220;How about simplifying user submission of annotations / corrections&#8221; &#8211; they wanted something in addition to the current &#8216;free text&#8217; feedback and comments forms, and wanted a way to easily suggest annotations in a structured way. There was some suggestion of wiki&#8217;s etc, and how this had been tried to some extent, but they hadn&#8217;t got it right yet.</p>
<p>Here is my take on an approach to user submitted content to Uniprot. Essentially users should be able to add/change annotations piecewise, directly via the standard Uniprot web page for each protein record. These changes would &#8216;go live&#8217; immediately, but since a large part of the value in Uniprot lies in its curation by expert annotators, the interface would also provide a very clear separation between user-submitted &#8216;uncurated&#8217; annotations and the current expertly curated annotations.</p>
<p>I&#8217;ve made some mockups of how some parts of the UI may look in my little fantasy world:</p>
<p style="text-align: left;"><a href="http://blog.pansapiens.com/wp-content/uploads/2009/08/mockup1_history_crop.png" rel="lightbox[143]"><img class="aligncenter size-medium wp-image-144" title="Uniprot mockup 1, User/annotations and History" src="http://blog.pansapiens.com/wp-content/uploads/2009/08/mockup1_history_crop-300x97.png" alt="Uniprot mockup 1, User/annotations and History" width="300" height="97" srcset="http://blog.pansapiens.com/wp-content/uploads/2009/08/mockup1_history_crop-300x97.png 300w, http://blog.pansapiens.com/wp-content/uploads/2009/08/mockup1_history_crop.png 1002w" sizes="(max-width: 300px) 100vw, 300px" /></a><span id="more-143"></span><br />
• User login box at top (eg, OpenID)<br />
• A History tab at the top.<br />
• User submitted changes tab.<br />
• Maybe a &#8220;Discussion&#8221; tab, ala Wikipedia (not pictured).<br />
• Each field, or block of related fields, would have an Add/edit button at the top right of the block. (I&#8217;ve chosen the <a href="http://universaleditbutton.org/Universal_Edit_Button">Universal Edit Button</a> as an example)</p>
<p style="text-align: left;"><em>Afterthought: Maybe putting these features under tabs isn&#8217;t quite the best place, since the existing tabs are &#8216;actions&#8217; that can be taken rather than &#8216;extra info&#8217; to be viewed. This UI detail could certainly be refined.</em></p>
<p style="text-align: left;">
<a href="http://blog.pansapiens.com/wp-content/uploads/2009/08/mockup2_edit_button_crop.png" rel="lightbox[143]"><img class="aligncenter size-medium wp-image-145" title="Uniprot mockup 2, an edit button" src="http://blog.pansapiens.com/wp-content/uploads/2009/08/mockup2_edit_button_crop-300x88.png" alt="Uniprot mockup 2, an edit button" width="300" height="88" srcset="http://blog.pansapiens.com/wp-content/uploads/2009/08/mockup2_edit_button_crop-300x88.png 300w, http://blog.pansapiens.com/wp-content/uploads/2009/08/mockup2_edit_button_crop.png 1002w" sizes="(max-width: 300px) 100vw, 300px" /></a><br />
This proposal has many wiki-like features (history, attribution, open editing, curation by trusted users and potentially page/section locking) but doesn&#8217;t really fit my definition of a wiki since the input format is not free-form wiki-text, but is instead constrained by the interface to enforce the submission of (mostly) structured data (eg, a traditional data entry into an HTML form, or in-line editing of fields).</p>
<p>Any authenticated user would be able to add or edit fields by clicking on the &#8220;Add/edit annotations&#8221; button associated with that block (see mockup above). They would then be sent to a page where they can click to edit a particular field (in this case a point mutation and associated change in function), or click &#8220;Add new&#8221; to add a new mutation field and fill out the details (I didn&#8217;t make a mockup picture for this &#8230; use your imagination). They must also specify one of the standard &#8220;evidence codes&#8221; from a dropdown box for each change/addition, including the PMID of a publication if relevant. User submissions are automatically flagged with some type of &#8216;user submitted&#8217; flag too, and a username. Homologs (from UniRef clusters) could also be listed here to remind the user that certain annotations might need to be propagated to other members of the same family, if required (otherwise the curators would do this part, when applicable, for the next Uniprot release). For all I know, Uniprot may already have an interface similar to this, already in use by their professional curators. In effect, I&#8217;d like to see the 37signals &#8220;<a href="http://gettingreal.37signals.com/ch09_One_Interface.php">One interface</a>&#8221; dictum applied.</p>
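<p>To make the idea of structured (rather than free-text) submission concrete, here is a minimal sketch of the kind of record such a form might produce. The field names, class name and evidence-code value are illustrative assumptions on my part, not Uniprot&#8217;s actual schema:</p>

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class UserAnnotation:
    """Illustrative structure for one user-submitted annotation (hypothetical schema)."""
    accession: str          # Uniprot record being annotated
    field_name: str         # which block/field is being edited, e.g. a mutagenesis feature
    value: str              # the proposed annotation text
    evidence_code: str      # chosen from a fixed dropdown of evidence codes
    pmid: str = ""          # supporting publication, if any
    submitted_by: str = ""  # authenticated username
    curated: bool = False   # stays False until a curator accepts it for a release
    submitted_at: datetime = field(default_factory=datetime.utcnow)

annotation = UserAnnotation(
    accession="P12345",
    field_name="mutagenesis",
    value="A123G abolishes catalytic activity",
    evidence_code="ECO:0000269",  # illustrative evidence-code value
    pmid="12345678",
    submitted_by="some_sensible_username",
)
```

<p>Constraining the input to records like this, rather than wiki-text, is what would let curators review and merge submissions mechanically.</p>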
<p>User submitted changes would not automatically go live on the main Uniprot record page, but can be seen by clicking the &#8220;User submitted&#8221; tab at the top. Alternatively, the user submitted annotations could be put at the bottom of the page, like most blog comments, but clearly differentiated from the curated data by colour and other visual cues. The REST API could be told to include/exclude uncurated user annotations in responses by an extra query flag in the request (eg &amp;userannotations=true). Uniprot curators can periodically review user submitted annotations and integrate them into the official Uniprot release as they see fit.</p>
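<p>A minimal sketch of how a client might build such a request URL. To be clear, the <em>userannotations</em> flag is part of this proposal, not an existing Uniprot API parameter:</p>

```python
from urllib.parse import urlencode

def uniprot_record_url(accession, include_user_annotations=False):
    """Build a record URL, optionally requesting uncurated user annotations
    via the (proposed, hypothetical) userannotations query flag."""
    base = "http://www.uniprot.org/uniprot/%s.xml" % accession
    params = {}
    if include_user_annotations:
        params["userannotations"] = "true"
    return base + ("?" + urlencode(params) if params else "")
```

<p>By defaulting to curated-only output, existing consumers of the API would be unaffected unless they opt in.</p>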
<p>Under the History tab, the history of changes to that Uniprot record, both by user submitted changes and by Uniprot release would be available. This functionality is already mostly available under &#8220;Entry history-&gt;Complete history&#8221; at the bottom of the page, but user submitted annotations would also be included here with appropriate diff colouring (eg, coloured differently to curated changes, until they are officially accepted).</p>
<p>Providing user pages at a URL: <em>http://www.uniprot.org/user/some_sensible_username</em> with an associated RSS/ATOM feed would encourage participation by highlighting individual user contributions, and potentially allow a Wikipedia-like community of expert/fanatical annotators to emerge.</p>
<p>The Discussion tab would be used in much the same way Wikipedia Talk pages are &#8211; passive users, contributors and curators would be able to discuss the finer details of any submitted annotations. I&#8217;m of two minds about this one, since anyone who has read Wikipedia Talk pages knows things can get quite ugly there sometimes. On the other hand, the communication it allows would be important for building a community of annotators and helping clarify contributions.</p>
<p>PS: I&#8217;m a Uniprot fanboy. Can you tell ? <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
]]></content:encoded>
			<wfw:commentRss>http://blog.pansapiens.com/2009/08/03/a-proposal-for-encouraging-user-contributed-annotations-to-uniprot/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Occyd : tagging for locations</title>
		<link>http://blog.pansapiens.com/2009/02/14/occyd-tagging-for-locations/</link>
		<comments>http://blog.pansapiens.com/2009/02/14/occyd-tagging-for-locations/#respond</comments>
		<pubDate>Sat, 14 Feb 2009 02:08:16 +0000</pubDate>
		<dc:creator><![CDATA[Andrew Perry]]></dc:creator>
				<category><![CDATA[android]]></category>
		<category><![CDATA[code]]></category>
		<category><![CDATA[occyd]]></category>
		<category><![CDATA[python]]></category>
		<category><![CDATA[software]]></category>
		<category><![CDATA[gae]]></category>
		<category><![CDATA[geotagging]]></category>
		<category><![CDATA[Google App Engine]]></category>

		<guid isPermaLink="false">http://blog.pansapiens.com/?p=109</guid>
		<description><![CDATA[Those who have been watching may have noticed I quietly started developing an Android application in the last month or so. It&#8217;s still super-buggy and far from feature complete, but I thought it was time to announce it here (&#8220;release &#8230; <a href="http://blog.pansapiens.com/2009/02/14/occyd-tagging-for-locations/">Continue reading <span class="meta-nav">&#8594;</span></a>]]></description>
				<content:encoded><![CDATA[<p><img class="alignright size-full wp-image-113" title="Occyd Map View (search results)" src="http://blog.pansapiens.com/wp-content/uploads/2009/02/occyd_mapview.png" alt="Occyd Map View (search results)" width="320" height="480" srcset="http://blog.pansapiens.com/wp-content/uploads/2009/02/occyd_mapview.png 320w, http://blog.pansapiens.com/wp-content/uploads/2009/02/occyd_mapview-200x300.png 200w" sizes="(max-width: 320px) 100vw, 320px" /></p>
<p>Those who have been watching may have noticed I quietly started developing an Android application in the last month or so. <strong>It&#8217;s still super-buggy and far from feature complete</strong>, but I thought it was time to announce it here (&#8220;release early, release often&#8221;). It&#8217;s not ready for real users yet, but developers may like to take a little look.</p>
<p><span id="more-109"></span></p>
<p>Occyd (<a href="http://www.bartleby.com/61/12.html"><img src="http://www.bartleby.com/images/pronunciation/obreve.gif" alt="" align="absbottom" />-k <img src="http://www.bartleby.com/images/pronunciation/emacr.gif" alt="" align="absbottom" />d</a> <em>.. sounds like rockied or oggied</em>) is an application for tagging geolocations, aimed at GPS-enabled network-connected devices. It currently consists of an Android client, and a server backend running on Google App Engine. The (evolving) API is simple enough that it should be easy to write clients (or servers) for various platforms. The idea is to enable people to tag locations on the surface of the planet with a list of keywords, just like they can tag web pages with <a href="http://delicious.com/">delicious</a>. They should also be able to search for tagged locations, based on tag(s), distance from their current location and recency of the post.</p>
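<p>The core of that search (match tags, then filter by great-circle distance and recency) can be sketched in a few lines of Python. The post dictionaries and function names below are illustrative, not the actual Occyd API:</p>

```python
import math
from datetime import datetime, timedelta

EARTH_RADIUS_M = 6371000  # mean Earth radius, metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in metres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def search_posts(posts, tag, here, radius_m, max_age_days, now=None):
    """Return posts carrying `tag`, within radius_m of `here` (lat, lon),
    posted within the last max_age_days."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=max_age_days)
    lat, lon = here
    return [p for p in posts
            if tag in p["tags"]
            and p["posted"] >= cutoff
            and haversine_m(lat, lon, p["lat"], p["lon"]) <= radius_m]
```

<p>A real server would of course use a spatial index rather than a linear scan, but the filtering semantics are the same.</p>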
<p><img class="alignright size-full wp-image-111" title="Occyd posting screen" src="http://blog.pansapiens.com/wp-content/uploads/2009/02/occyd_post.png" alt="Occyd posting screen" width="320" height="480" srcset="http://blog.pansapiens.com/wp-content/uploads/2009/02/occyd_post.png 320w, http://blog.pansapiens.com/wp-content/uploads/2009/02/occyd_post-200x300.png 200w" sizes="(max-width: 320px) 100vw, 320px" /></p>
<p>Here&#8217;s one possible elevator pitch (for a very long, slow elevator ride):</p>
<blockquote><p>&#8220;You are a member of a large bird watching club. Your members like to record where they have spotted various species, and use Occyd to share the locations at which they have sighted various birds. You are out in the park, when you spot the rare Orange Bellied Parrot. You pull out your Android phone, fire up the Occyd client which automatically knows your location via GPS, and tag that current location &#8216;orangebelliedparrot parrot birds&#8217;. You then decide to see if others have spotted parrots in the area. You search for &#8216;parrot&#8217; in the Occyd client; a map appears showing the locations of all the other sightings tagged &#8216;parrot&#8217; in your vicinity. You tweak the search settings to show only &#8216;parrot&#8217; sightings within 100 metres and 14 days &#8230; on the map you see that your friend <em>RobHill</em> spotted an Orange Bellied Parrot here last week &#8211; looks like the numbers of this population are recovering !&#8221;</p>
<p><img class="alignright size-full wp-image-112" title="Occyd searching screen" src="http://blog.pansapiens.com/wp-content/uploads/2009/02/occyd_search.png" alt="Occyd searching screen" width="320" height="480" srcset="http://blog.pansapiens.com/wp-content/uploads/2009/02/occyd_search.png 320w, http://blog.pansapiens.com/wp-content/uploads/2009/02/occyd_search-200x300.png 200w" sizes="(max-width: 320px) 100vw, 320px" /></p></blockquote>
<p>Ponder for a bit, and I&#8217;m sure you can think up at least a handful of other great uses (tagging good fishing spots, favorite cafes, or maybe even sightings of parking inspectors <img src="https://s.w.org/images/core/emoji/11/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> ).</p>
<p>As with any new project, there are lots more ideas than time to implement them (and I have a day job that doesn&#8217;t involve Occyd &#8230;). The <a href="http://github.com/pansapiens/occyd-android/tree/master">Occyd Android client</a> and <a href="http://github.com/pansapiens/occyd-gae-server/tree/master">Occyd GAE server</a> source is currently available under the GPL v3 on GitHub, and I&#8217;m keeping all my documentation and notes on the <a href="http://wiki.github.com/pansapiens/occyd-android">Occyd Android client wiki</a> provided at GitHub. Watch this space &#8230;.</p>
]]></content:encoded>
			<wfw:commentRss>http://blog.pansapiens.com/2009/02/14/occyd-tagging-for-locations/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
