<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#" xmlns:georss="http://www.georss.org/georss" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:media="http://search.yahoo.com/mrss/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0">

<channel>
	<title>Algocracy and Transhumanism Podcast</title>
	<atom:link href="https://algocracy.wordpress.com/category/podcast/feed/" rel="self" type="application/rss+xml"/>
	<link>https://algocracy.wordpress.com</link>
	<description>Interviews with experts and occasional audio essays about the philosophy of the future.</description>
	<lastBuildDate>Tue, 27 Oct 2020 11:00:52 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.com/</generator>
<site xmlns="com-wordpress:feed-additions:1">104718194</site><atom:link href="https://algocracy.wordpress.com/osd.xml" rel="search" title="Algocracy and the Transhumanist Project" type="application/opensearchdescription+xml"/>
	<atom:link href="https://algocracy.wordpress.com/?pushpress=hub" rel="hub"/>
<itunes:summary>Interviews with leading experts about algorithmic governance, political values, human enhancement and transhumanism</itunes:summary>
<googleplay:description>Interviews with experts and occasional audio essays about the philosophy of the future.</googleplay:description>
<itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<copyright>Creative Commons</copyright>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:image href="https://3.bp.blogspot.com/-sqC6nVf2DP8/WRh2uPNc9GI/AAAAAAAAEoE/E5-UfpAYAYs_N7wBI1oBx9JQpSXDCFzcwCLcB/s1600/Podcast%2B.001%2B%25281%2529.png"/>
<googleplay:image href="https://i0.wp.com/algocracy.wordpress.com/wp-content/uploads/2016/03/podcast-001-1.png?fit=3000%2C3000&amp;ssl=1"/>



	<itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords><itunes:subtitle>The future of governance and values in the posthuman era</itunes:subtitle><itunes:category text="Education"/><itunes:owner><itunes:email>john.danaher@nuigalway.ie</itunes:email><itunes:name>John Danaher</itunes:name></itunes:owner><item>
		<title>85 – The Internet and the Tyranny of Perceived Opinion</title>
		<link>https://algocracy.wordpress.com/2020/10/27/85-the-internet-and-the-tyranny-of-perceived-opinion/</link>
					<comments>https://algocracy.wordpress.com/2020/10/27/85-the-internet-and-the-tyranny-of-perceived-opinion/#respond</comments>
		
		
		<pubDate>Tue, 27 Oct 2020 10:55:37 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2898</guid>

					<description><![CDATA[Are we losing our liberty as a result of digital technologies and algorithmic power? In particular, might algorithmically curated filter bubbles be creating a world that encourages both increased polarisation and increased conformity at the same time? In today’s podcast, I discuss these issues with Henrik Skaug Sætra. Henrik is a political scientist working in &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/10/27/85-the-internet-and-the-tyranny-of-perceived-opinion/">More <span class="screen-reader-text">85 &#8211; The Internet and the Tyranny of Perceived&#160;Opinion</span></a>]]></description>
										<content:encoded><![CDATA[<p><img data-attachment-id="2905" data-permalink="https://algocracy.wordpress.com/2020/10/27/85-the-internet-and-the-tyranny-of-perceived-opinion/henrik-skaug-saetra/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2020/10/henrik-skaug-saetra.jpg" data-orig-size="1280,1920" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Henrik Skaug Saetra" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2020/10/henrik-skaug-saetra.jpg?w=683" class="alignnone  wp-image-2905" src="https://algocracy.wordpress.com/wp-content/uploads/2020/10/henrik-skaug-saetra.jpg?w=463&#038;h=695" alt="Henrik Skaug Saetra" width="463" height="695" srcset="https://algocracy.wordpress.com/wp-content/uploads/2020/10/henrik-skaug-saetra.jpg?w=463&amp;h=695 463w, https://algocracy.wordpress.com/wp-content/uploads/2020/10/henrik-skaug-saetra.jpg?w=926&amp;h=1389 926w, https://algocracy.wordpress.com/wp-content/uploads/2020/10/henrik-skaug-saetra.jpg?w=100&amp;h=150 100w, https://algocracy.wordpress.com/wp-content/uploads/2020/10/henrik-skaug-saetra.jpg?w=200&amp;h=300 200w, https://algocracy.wordpress.com/wp-content/uploads/2020/10/henrik-skaug-saetra.jpg?w=768&amp;h=1152 768w, https://algocracy.wordpress.com/wp-content/uploads/2020/10/henrik-skaug-saetra.jpg?w=683&amp;h=1024 683w" sizes="(max-width: 463px) 100vw, 463px"></p>
<p>Are we losing our liberty as a result of digital technologies and algorithmic power? In particular, might algorithmically curated filter bubbles be creating a world that encourages both increased polarisation and increased conformity at the same time? In today’s podcast, I discuss these issues with Henrik Skaug Sætra. Henrik is a political scientist working in the Faculty of Business, Languages and Social Science at Østfold University College in Norway. He has a particular interest in political theory and philosophy, and has worked extensively on Thomas Hobbes and social contract theory, environmental ethics and game theory. At the moment his work focuses mainly on issues involving the dynamics between human individuals, society and technology.</p>
<p>You can download the episode <a href="https://archive.org/download/henrik-saetra-master-27-10-2020-10.37/Henrik%20Saetra%20Master%20-%2027%3A10%3A2020%2C%2010.37.mp3">here</a> or listen below. You can also subscribe on&nbsp;<a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>,&nbsp;<a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a>,&nbsp;<a href="https://open.spotify.com/show/2WSuOPqOUR4pJnYXigG2zW">Spotify</a>&nbsp;and other&nbsp;<a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">podcasting services</a>&nbsp;(the RSS feed is&nbsp;<a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/henrik-saetra-master-27-10-2020-10.37" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div>
<h3>Show Notes</h3>
<p>Topics discussed include:</p>
<ul>
<li>Selective Exposure and Confirmation Bias</li>
<li>How algorithms curate our informational ecology</li>
<li>Filter Bubbles</li>
<li>Echo Chambers</li>
<li>How the internet has created more internally conformist but externally polarised groups</li>
<li>The nature of political freedom</li>
<li>Tocqueville and the tyranny of the majority</li>
<li>Mill and the importance of individuality</li>
<li>How algorithmic curation of speech is undermining our liberty</li>
<li>What can be done about this problem?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://www.hiof.no/oss/english/people/aca/henrsatr/index.html">Henrik&#8217;s faculty homepage</a></li>
<li>Henrik on Researchgate</li>
<li>Henrik on Twitter</li>
<li><a href="https://www.researchgate.net/publication/334309245_The_tyranny_of_perceived_opinion_Freedom_and_information_in_the_era_of_big_data">&#8216;The Tyranny of Perceived Opinion: Freedom and information in the era of big data&#8217;</a> by Henrik</li>
<li>&#8216;<a href="https://www.researchgate.net/publication/344385430_Privacy_as_an_aggregate_public_good">Privacy as an aggregate public good</a>&#8216; by Henrik</li>
<li>&#8216;<a href="https://www.researchgate.net/publication/334292227_Freedom_under_the_gaze_of_Big_Brother_Preparing_the_grounds_for_a_liberal_defence_of_privacy_in_the_era_of_Big_Data">Freedom under the gaze of Big Brother: Preparing the grounds for a liberal defence of privacy in the era of Big Data</a>&#8216; by Henrik</li>
<li>&#8216;<a href="http://When nudge comes to shove: Liberty and nudging in the era of big data">When nudge comes to shove: Liberty and nudging in the era of big data</a>&#8216; by Henrik</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/10/27/85-the-internet-and-the-tyranny-of-perceived-opinion/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="105932927" type="audio/mpeg" url="http://archive.org/download/henrik-saetra-master-27-10-2020-10.37/Henrik%20Saetra%20Master%20-%2027%3A10%3A2020%2C%2010.37.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2898</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>Are we losing our liberty as a result of digital technologies and algorithmic power? In particular, might algorithmically curated filter bubbles be creating a world that encourages both increased polarisation and increased conformity at the same time? In today’s podcast, I discuss these issues with Henrik Skaug Sætra. Henrik is a political scientist working in … More 85 – The Internet and the Tyranny of Perceived Opinion</itunes:summary>
<googleplay:description>Are we losing our liberty as a result of digital technologies and algorithmic power? In particular, might algorithmically curated filter bubbles be creating a world that encourages both increased polarisation and increased conformity at the same time? In today’s podcast, I discuss these issues with Henrik Skaug Sætra. Henrik is a political scientist working in … More 85 – The Internet and the Tyranny of Perceived Opinion</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2020/10/henrik-skaug-saetra.jpg">
			<media:title type="html">Henrik Skaug Saetra</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>Are we losing our liberty as a result of digital technologies and algorithmic power? In particular, might algorithmically curated filter bubbles be creating a world that encourages both increased polarisation and increased conformity at the same time? In today’s podcast, I discuss these issues with Henrik Skaug Sætra. Henrik is a political scientist working in &amp;#8230; More 85 &amp;#8211; The Internet and the Tyranny of Perceived&amp;#160;Opinion</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>84 – Social Media, COVID-19 and Value Change</title>
		<link>https://algocracy.wordpress.com/2020/10/20/84-social-media-covid-19-and-value-change/</link>
					<comments>https://algocracy.wordpress.com/2020/10/20/84-social-media-covid-19-and-value-change/#respond</comments>
		
		
		<pubDate>Tue, 20 Oct 2020 08:57:56 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2888</guid>

					<description><![CDATA[Do our values change over time? What role do emotions and technology play in altering our values? In this episode I talk to Steffen Steinert about these issues. Steffen is a postdoctoral researcher on the Value Change project at TU Delft. His research focuses on the philosophy of technology, ethics of technology, emotions, and &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/10/20/84-social-media-covid-19-and-value-change/">More <span class="screen-reader-text">84 &#8211; Social Media, COVID-19 and Value&#160;Change</span></a>]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large is-resized"><img data-attachment-id="2896" data-permalink="https://algocracy.wordpress.com/steffen-steinert/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2020/10/steffen-steinert.jpg" data-orig-size="301,301" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="steffen-steinert" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2020/10/steffen-steinert.jpg?w=301" src="https://algocracy.wordpress.com/wp-content/uploads/2020/10/steffen-steinert.jpg?w=301" alt="" class="wp-image-2896" width="328" height="328" srcset="https://algocracy.wordpress.com/wp-content/uploads/2020/10/steffen-steinert.jpg 301w, https://algocracy.wordpress.com/wp-content/uploads/2020/10/steffen-steinert.jpg?w=150 150w, https://algocracy.wordpress.com/wp-content/uploads/2020/10/steffen-steinert.jpg?w=300 300w" sizes="(max-width: 328px) 100vw, 328px" /></figure>


<p>Do our values change over time? What role do emotions and technology play in altering our values? In this episode I talk to Steffen Steinert about these issues. Steffen is a postdoctoral researcher on the Value Change project at TU Delft. His research focuses on the philosophy of technology, ethics of technology, emotions, and aesthetics. He has published papers on roboethics, art and technology, and philosophy of science. In his previous research he also explored philosophical issues related to humor and amusement.</p>
<p>You can download the episode <a href="https://archive.org/download/steinert-master-19-10-2020-09.51/Steinert%20Master%20-%2019%3A10%3A2020%2C%2009.51.mp3">here</a> or listen below. You can also subscribe on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a>, <a href="https://open.spotify.com/show/2WSuOPqOUR4pJnYXigG2zW">Spotify</a> and other <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>

<div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/steinert-master-19-10-2020-09.51" width="640" height="140" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div>



<h2 class="wp-block-heading">Show Notes</h2>



<p class="wp-block-paragraph">Topics discussed include:</p>



<ul class="wp-block-list"><li>What is a value?</li><li>Descriptive vs normative theories of value</li><li>Psychological theories of personal values</li><li>The nature of emotions</li><li>The connection between emotions and values</li><li>Emotional contagion</li><li>Emotional climates vs emotional atmospheres</li><li>The role of social media in causing emotional contagion</li><li>Is the coronavirus promoting a negative emotional climate?</li><li>Will this affect our political preferences and policies?</li><li>General lessons for technology and value change</li></ul>






<h2 class="wp-block-heading">Relevant Links</h2>



<ul class="wp-block-list"><li><a href="https://www.tudelft.nl/tbm/over-de-faculteit/afdelingen/values-technology-and-innovation/people/postdocs/dr-s-steffen-steinert/">Steffen&#8217;s Homepage</a></li><li><a href="https://www.valuechange.eu/">The Designing for Changing Values Project </a>@ TU Delft</li><li><a href="https://link.springer.com/article/10.1007/s10676-020-09545-z">Corona and Value Change</a> by Steffen</li><li><a href="https://link.springer.com/article/10.1007%2Fs11948-020-00195-4">&#8216;Unleashing the Constructive Potential of Emotions&#8217;</a> by Steffen and Sabine Roeser</li><li><a href="https://scholarworks.gvsu.edu/cgi/viewcontent.cgi?article=1116&amp;context=orpc">An Overview of the Schwartz Theory of Basic Personal Values</a></li></ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/10/20/84-social-media-covid-19-and-value-change/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="97195281" type="audio/mpeg" url="http://archive.org/download/steinert-master-19-10-2020-09.51/Steinert%20Master%20-%2019%3A10%3A2020%2C%2009.51.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2888</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>Do our values change over time? What role do emotions and technology play in altering our values? In this episode I talk to Steffen Steinert about these issues. Steffen is a postdoctoral researcher on the Value Change project at TU Delft. His research focuses on the philosophy of technology, ethics of technology, emotions, and … More 84 – Social Media, COVID-19 and Value Change</itunes:summary>
<googleplay:description>Do our values change over time? What role do emotions and technology play in altering our values? In this episode I talk to Steffen Steinert about these issues. Steffen is a postdoctoral researcher on the Value Change project at TU Delft. His research focuses on the philosophy of technology, ethics of technology, emotions, and … More 84 – Social Media, COVID-19 and Value Change</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2020/10/steffen-steinert.jpg?w=301"/>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>Do our values change over time? What role do emotions and technology play in altering our values? In this episode I talk to Steffen Steinert about these issues. Steffen is a postdoctoral researcher on the Value Change project at TU Delft. His research focuses on the philosophy of technology, ethics of technology, emotions, and &amp;#8230; More 84 &amp;#8211; Social Media, COVID-19 and Value&amp;#160;Change</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>83 – Privacy is Power</title>
		<link>https://algocracy.wordpress.com/2020/10/10/83-privacy-is-power/</link>
					<comments>https://algocracy.wordpress.com/2020/10/10/83-privacy-is-power/#respond</comments>
		
		
		<pubDate>Sat, 10 Oct 2020 12:50:04 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2878</guid>

					<description><![CDATA[Are you being watched, tracked and traced every minute of the day? Probably. The digital world thrives on surveillance. What should we do about this? My guest today is Carissa Véliz. Carissa is an Associate Professor at the Faculty of Philosophy and the Institute of Ethics in AI at Oxford University. She is also a &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/10/10/83-privacy-is-power/">More <span class="screen-reader-text">83 &#8211; Privacy is&#160;Power</span></a>]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large is-resized"><img data-attachment-id="2885" data-permalink="https://algocracy.wordpress.com/privacy-is-power/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2020/10/privacy-is-power.png" data-orig-size="177,285" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="privacy-is-power" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2020/10/privacy-is-power.png?w=177" src="https://algocracy.wordpress.com/wp-content/uploads/2020/10/privacy-is-power.png?w=177" alt="" class="wp-image-2885" width="273" height="439" srcset="https://algocracy.wordpress.com/wp-content/uploads/2020/10/privacy-is-power.png 177w, https://algocracy.wordpress.com/wp-content/uploads/2020/10/privacy-is-power.png?w=93 93w" sizes="(max-width: 273px) 100vw, 273px" /></figure>



<p class="wp-block-paragraph">Are you being watched, tracked and traced every minute of the day? Probably. The digital world thrives on surveillance. What should we do about this? My guest today is Carissa Véliz. Carissa is an Associate Professor at the Faculty of Philosophy and the Institute of Ethics in AI at Oxford University. She is also a Tutorial Fellow at Hertford College Oxford. She works on privacy, technology, moral and political philosophy and public policy. She has also been a guest on this podcast on two previous occasions. Today, we’ll be talking about her recently published book <em>Privacy is Power</em>.</p>



<p class="wp-block-paragraph">You can download the episode <a href="https://ia801505.us.archive.org/33/items/privacy-is-power-09-10-2020-15.59/Privacy%20is%20Power%20-%2009%3A10%3A2020%2C%2015.59.mp3">here</a> or listen below. You can also subscribe on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a>, <a href="https://open.spotify.com/show/2WSuOPqOUR4pJnYXigG2zW">Spotify</a> and other <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>). </p>


<div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/privacy-is-power-09-10-2020-15.59" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div>






<h2 class="wp-block-heading">Show Notes</h2>



<p class="wp-block-paragraph">Topics discussed in this show include:</p>



<ul class="wp-block-list"><li>The most surprising examples of digital surveillance</li><li>The nature of privacy</li><li>Is privacy dead?</li><li>Privacy as an intrinsic and instrumental value</li><li>The relationship between privacy and autonomy</li><li>Does surveillance help with security and health?</li><li>The problem with mass surveillance</li><li>The phenomenon of toxic data</li><li>How surveillance undermines democracy and freedom</li><li>Are we willing to trade privacy for convenient services?</li><li>And much more</li></ul>






<h2 class="wp-block-heading">Relevant Links</h2>



<ul class="wp-block-list"><li><a href="http://www.carissaveliz.com/">Carissa&#8217;s Webpage</a></li><li><em><a href="https://www.amazon.co.uk/Privacy-Power-Should-Take-Control/dp/1787634043/">Privacy is Power</a></em> by Carissa</li><li>Summary of <em><a href="https://aeon.co/essays/privacy-matters-because-it-empowers-us-all">Privacy is Power</a></em> in <em>Aeon</em></li><li><a href="https://www.theguardian.com/books/2020/sep/28/carissa-veliz-intrusion-privacy-is-power-data">Review of <em>Privacy is Power</em></a> in <em>The Guardian</em> </li><li><a href="https://twitter.com/carissaveliz">Carissa&#8217;s Twitter feed</a> (a treasure trove of links about privacy and surveillance)</li><li><a href="https://philpapers.org/rec/BROVOP-3">Views on Privacy: A Survey</a> by Sian Brooke and Carissa Véliz</li><li><a href="https://philpapers.org/rec/VLIPM">Data, Privacy and the Individual</a> by Carissa Véliz</li></ul>



]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/10/10/83-privacy-is-power/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="92273185" type="audio/mpeg" url="http://ia801505.us.archive.org/33/items/privacy-is-power-09-10-2020-15.59/Privacy%20is%20Power%20-%2009%3A10%3A2020%2C%2015.59.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2878</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>Are you being watched, tracked and traced every minute of the day? Probably. The digital world thrives on surveillance. What should we do about this? My guest today is Carissa Véliz. Carissa is an Associate Professor at the Faculty of Philosophy and the Institute of Ethics in AI at Oxford University. She is also a … More 83 – Privacy is Power</itunes:summary>
<googleplay:description>Are you being watched, tracked and traced every minute of the day? Probably. The digital world thrives on surveillance. What should we do about this? My guest today is Carissa Véliz. Carissa is an Associate Professor at the Faculty of Philosophy and the Institute of Ethics in AI at Oxford University. She is also a … More 83 – Privacy is Power</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2020/10/privacy-is-power.png?w=177"/>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>Are you being watched, tracked and traced every minute of the day? Probably. The digital world thrives on surveillance. What should we do about this? My guest today is Carissa Véliz. Carissa is an Associate Professor at the Faculty of Philosophy and the Institute of Ethics in AI at Oxford University. She is also a &amp;#8230; More 83 &amp;#8211; Privacy is&amp;#160;Power</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>82 – What should we do about facial recognition?</title>
		<link>https://algocracy.wordpress.com/2020/09/23/82-what-should-we-do-about-facial-recognition/</link>
					<comments>https://algocracy.wordpress.com/2020/09/23/82-what-should-we-do-about-facial-recognition/#respond</comments>
		
		
		<pubDate>Wed, 23 Sep 2020 21:26:46 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2870</guid>

					<description><![CDATA[Facial recognition technology has seen its fair share of both media and popular attention in the past 12 months. The coverage runs the gamut from controversial uses by governments and police forces, to coordinated campaigns to ban or limit its use. What should we do about it? In this episode, I talk to Brenda Leong about &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/09/23/82-what-should-we-do-about-facial-recognition/">More <span class="screen-reader-text">82 &#8211; What should we do about facial&#160;recognition?</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2874" data-permalink="https://algocracy.wordpress.com/2020/09/23/82-what-should-we-do-about-facial-recognition/brenda-leong/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2020/09/brenda-leong.jpg" data-orig-size="768,1024" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Brenda Leong" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2020/09/brenda-leong.jpg?w=748" class="alignnone  wp-image-2874" src="https://algocracy.wordpress.com/wp-content/uploads/2020/09/brenda-leong.jpg" alt="Brenda Leong" width="486" height="648" srcset="https://algocracy.wordpress.com/wp-content/uploads/2020/09/brenda-leong.jpg?w=486&amp;h=648 486w, https://algocracy.wordpress.com/wp-content/uploads/2020/09/brenda-leong.jpg?w=113&amp;h=150 113w, https://algocracy.wordpress.com/wp-content/uploads/2020/09/brenda-leong.jpg?w=225&amp;h=300 225w, https://algocracy.wordpress.com/wp-content/uploads/2020/09/brenda-leong.jpg 768w" sizes="(max-width: 486px) 100vw, 486px" /></p>
<p>Facial recognition technology has seen its fair share of both media and popular attention in the past 12 months. The coverage runs the gamut from controversial uses by governments and police forces to coordinated campaigns to ban or limit its use. What should we do about it? In this episode, I talk to Brenda Leong about this issue. Brenda is Senior Counsel and Director of Artificial Intelligence and Ethics at Future of Privacy Forum. She manages the FPF portfolio on biometrics, particularly facial recognition. She authored the FPF Privacy Expert’s Guide to AI, and co-authored the paper, “Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models.” Prior to working at FPF, Brenda served in the U.S. Air Force.</p>
<p>You can listen to the episode below or download <a href="https://archive.org/download/brenda-leong-master/Brenda%20Leong%20Master.mp3">here</a>. You can also subscribe on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a>, <a href="https://open.spotify.com/show/2WSuOPqOUR4pJnYXigG2zW">Spotify</a> and other <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/brenda-leong-master" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show notes</h3>
<p>Topics discussed include:</p>
<ul>
<li>What is facial recognition anyway?</li>
<li>Are there multiple forms that are confused and conflated?</li>
<li>What&#8217;s the history of facial recognition? What has changed recently?</li>
<li>How is the technology used?</li>
<li>What are the benefits of facial recognition?</li>
<li>What&#8217;s bad about it? What are the privacy and other risks?</li>
<li>Is there something unique about the face that should make us more worried about facial biometrics when compared to other forms?</li>
<li>What can we do to address the risks? Should we regulate or ban?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://fpf.org/brenda-leong/">Brenda&#8217;s Homepage</a></li>
<li><a href="https://twitter.com/BrendaKLeong">Brenda on Twitter</a></li>
<li><a href="https://fpf.org/wp-content/uploads/2018/10/FPF_Artificial-Intelligence_Digital.pdf">&#8216;The Privacy Expert&#8217;s Guide to AI and Machine Learning&#8217;</a> by Brenda (at FPF)</li>
<li><a href="https://oversight.house.gov/sites/democrats.oversight.house.gov/files/documents/Leong%20Testimony.pdf">Brenda&#8217;s US Congress Testimony on Facial Recognition</a></li>
<li><a href="https://www.tandfonline.com/doi/abs/10.1080/00963402.2019.1604886">&#8216;Facial recognition and the future of privacy: I always feel like … somebody’s watching me&#8217;</a> by Brenda</li>
<li>&#8216;<a href="https://tjcinstitute.com/research/the-case-for-banning-law-enforcement-from-using-facial-recognition-technology/">The Case for Banning Law Enforcement From Using Facial Recognition Technology&#8217;</a> by Evan Selinger and Woodrow Hartzog</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/09/23/82-what-should-we-do-about-facial-recognition/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="90097707" type="audio/mpeg" url="http://archive.org/download/brenda-leong-master/Brenda%20Leong%20Master.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2870</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>Facial recognition technology has seen its fair share of both media and popular attention in the past 12 months. The coverage runs the gamut from controversial uses by governments and police forces to coordinated campaigns to ban or limit its use. What should we do about it? In this episode, I talk to Brenda Leong about … More 82 – What should we do about facial recognition?</itunes:summary>
<googleplay:description>Facial recognition technology has seen its fair share of both media and popular attention in the past 12 months. The coverage runs the gamut from controversial uses by governments and police forces to coordinated campaigns to ban or limit its use. What should we do about it? In this episode, I talk to Brenda Leong about … More 82 – What should we do about facial recognition?</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2020/09/brenda-leong.jpg">
			<media:title type="html">Brenda Leong</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>Facial recognition technology has seen its fair share of both media and popular attention in the past 12 months. The coverage runs the gamut from controversial uses by governments and police forces to coordinated campaigns to ban or limit its use. What should we do about it? In this episode, I talk to Brenda Leong about &amp;#8230; More 82 &amp;#8211; What should we do about facial&amp;#160;recognition?</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords>
		<item>
		<title>81 – Consumer Credit, Big Tech and AI Crime</title>
		<link>https://algocracy.wordpress.com/2020/09/18/81-consumer-credit-big-tech-and-ai-crime/</link>
					<comments>https://algocracy.wordpress.com/2020/09/18/81-consumer-credit-big-tech-and-ai-crime/#respond</comments>
		
		
		<pubDate>Fri, 18 Sep 2020 13:09:17 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2856</guid>

					<description><![CDATA[In today&#8217;s episode, I talk to Nikita Aggarwal about the legal and regulatory aspects of AI and algorithmic governance. We focus, in particular, on three topics: (i) algorithmic credit scoring; (ii) the problem of &#8216;too big to fail&#8217; tech platforms and (iii) AI crime. Nikita is a DPhil (PhD) candidate at the Faculty of Law &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/09/18/81-consumer-credit-big-tech-and-ai-crime/">More <span class="screen-reader-text">81 &#8211; Consumer Credit, Big Tech and AI&#160;Crime</span></a>]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large"><img loading="lazy" width="350" height="350" data-attachment-id="2864" data-permalink="https://algocracy.wordpress.com/nikita/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2020/09/nikita.png" data-orig-size="350,350" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="nikita" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2020/09/nikita.png?w=350" src="https://algocracy.wordpress.com/wp-content/uploads/2020/09/nikita.png?w=350" alt="" class="wp-image-2864" srcset="https://algocracy.wordpress.com/wp-content/uploads/2020/09/nikita.png 350w, https://algocracy.wordpress.com/wp-content/uploads/2020/09/nikita.png?w=150 150w, https://algocracy.wordpress.com/wp-content/uploads/2020/09/nikita.png?w=300 300w" sizes="(max-width: 350px) 100vw, 350px" /></figure>



<p class="wp-block-paragraph">In today&#8217;s episode, I talk to Nikita Aggarwal about the legal and regulatory aspects of AI and algorithmic governance. We focus, in particular, on three topics: (i) algorithmic credit scoring; (ii) the problem of &#8216;too big to fail&#8217; tech platforms and (iii) AI crime. Nikita is a DPhil (PhD) candidate at the Faculty of Law at Oxford, as well as a Research Associate at the Oxford Internet Institute&#8217;s Digital Ethics Lab. Her research examines the legal and ethical challenges arising from emerging data-driven technologies, with a particular focus on machine learning in consumer lending. Prior to entering academia, she was an attorney in the legal department of the International Monetary Fund, where she advised on financial sector law reform in the Euro area.</p>



<p class="wp-block-paragraph">You can listen to the episode below or download <a href="https://archive.org/download/nikita-aggarwal-master-18-09-2020-13.49/Nikita%20Aggarwal%20Master%20-%2018%3A09%3A2020%2C%2013.49.mp3">here</a>. You can also subscribe on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a>, <a href="https://open.spotify.com/show/2WSuOPqOUR4pJnYXigG2zW">Spotify</a> and other <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>


<div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/nikita-aggarwal-master-18-09-2020-13.49" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div>



<h2 class="wp-block-heading">Show Notes</h2>



<p class="wp-block-paragraph">Topics discussed include:</p>



<ul class="wp-block-list">
<li>The digitisation, datafication and disintermediation of consumer credit markets</li>
<li>Algorithmic credit scoring</li>
<li>The problems of risk and bias in credit scoring</li>
<li>How law and regulation can address these problems</li>
<li>Tech platforms that are too big to fail</li>
<li>What should we do if Facebook fails?</li>
<li>The forms of AI crime</li>
<li>How to address the problem of AI crime</li>
</ul>



<h2 class="wp-block-heading">Relevant Links</h2>



<ul class="wp-block-list">
<li><a href="https://www.law.ox.ac.uk/people/nikita-aggarwal">Nikita&#8217;s homepage</a></li>
<li><a href="https://twitter.com/nikitaggarwal">Nikita on Twitter</a></li>
<li><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3569083">&#8216;The Norms of Algorithmic Credit Scoring&#8217;</a> by Nikita</li>
<li>&#8216;<a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3494144">What if Facebook Goes Down? Ethical and Legal Considerations for the Demise of Big Tech Platforms</a>&#8217; by Carl &#214;hman and Nikita</li>
<li>&#8216;<a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3183238">Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions</a>&#8217; by Thomas King, Nikita, Mariarosaria Taddeo and Luciano Floridi</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/09/18/81-consumer-credit-big-tech-and-ai-crime/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="91987928" type="audio/mpeg" url="http://archive.org/download/nikita-aggarwal-master-18-09-2020-13.49/Nikita%20Aggarwal%20Master%20-%2018%3A09%3A2020%2C%2013.49.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2856</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In today’s episode, I talk to Nikita Aggarwal about the legal and regulatory aspects of AI and algorithmic governance. We focus, in particular, on three topics: (i) algorithmic credit scoring; (ii) the problem of ‘too big to fail’ tech platforms and (iii) AI crime. Nikita is a DPhil (PhD) candidate at the Faculty of Law … More 81 – Consumer Credit, Big Tech and AI Crime</itunes:summary>
<googleplay:description>In today’s episode, I talk to Nikita Aggarwal about the legal and regulatory aspects of AI and algorithmic governance. We focus, in particular, on three topics: (i) algorithmic credit scoring; (ii) the problem of ‘too big to fail’ tech platforms and (iii) AI crime. Nikita is a DPhil (PhD) candidate at the Faculty of Law … More 81 – Consumer Credit, Big Tech and AI Crime</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2020/09/nikita.png?w=350"/>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In today&amp;#8217;s episode, I talk to Nikita Aggarwal about the legal and regulatory aspects of AI and algorithmic governance. We focus, in particular, on three topics: (i) algorithmic credit scoring; (ii) the problem of &amp;#8216;too big to fail&amp;#8217; tech platforms and (iii) AI crime. Nikita is a DPhil (PhD) candidate at the Faculty of Law &amp;#8230; More 81 &amp;#8211; Consumer Credit, Big Tech and AI&amp;#160;Crime</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>79 – Is There a Techno-Responsibility Gap?</title>
		<link>https://algocracy.wordpress.com/2020/08/05/79-is-there-a-techno-responsibility-gap/</link>
					<comments>https://algocracy.wordpress.com/2020/08/05/79-is-there-a-techno-responsibility-gap/#respond</comments>
		
		
		<pubDate>Wed, 05 Aug 2020 08:35:51 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2832</guid>

					<description><![CDATA[What happens if an autonomous machine does something wrong? Who, if anyone, should be held responsible for the machine&#8217;s actions? That&#8217;s the topic I discuss in this episode with Daniel Tigard. Daniel Tigard is a Senior Research Associate in the Institute for History &#38; Ethics of Medicine, at the Technical University of Munich. His current &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/08/05/79-is-there-a-techno-responsibility-gap/">More <span class="screen-reader-text">79 &#8211; Is There a Techno-Responsibility Gap?</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2836" data-permalink="https://algocracy.wordpress.com/2020/08/05/79-is-there-a-techno-responsibility-gap/daniel_tigard/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2020/08/daniel_tigard.jpg" data-orig-size="512,512" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Daniel_Tigard" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2020/08/daniel_tigard.jpg?w=512" class="alignnone  wp-image-2836" src="https://algocracy.wordpress.com/wp-content/uploads/2020/08/daniel_tigard.jpg" alt="Daniel_Tigard" width="310" height="310" srcset="https://algocracy.wordpress.com/wp-content/uploads/2020/08/daniel_tigard.jpg?w=310&amp;h=310 310w, https://algocracy.wordpress.com/wp-content/uploads/2020/08/daniel_tigard.jpg?w=150&amp;h=150 150w, https://algocracy.wordpress.com/wp-content/uploads/2020/08/daniel_tigard.jpg?w=300&amp;h=300 300w, https://algocracy.wordpress.com/wp-content/uploads/2020/08/daniel_tigard.jpg 512w" sizes="(max-width: 310px) 100vw, 310px" /></p>
<p>What happens if an autonomous machine does something wrong? Who, if anyone, should be held responsible for the machine&#8217;s actions? That&#8217;s the topic I discuss in this episode with Daniel Tigard. Daniel Tigard is a Senior Research Associate in the Institute for History &amp; Ethics of Medicine, at the Technical University of Munich. His current work addresses issues of moral responsibility in emerging technology. He is the author of several papers on moral distress and responsibility in medical ethics as well as, more recently, papers on moral responsibility and autonomous systems.</p>
<p>You can download the episode <a href="https://ia601401.us.archive.org/11/items/daniel-tigard-master-04-08-2020-23.23/Daniel%20Tigard%20Master%20-%2004%3A08%3A2020%2C%2023.23.mp3">here</a> or listen below. You can also subscribe on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a>, <a href="https://open.spotify.com/show/2WSuOPqOUR4pJnYXigG2zW">Spotify</a> and other <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/daniel-tigard-master-04-08-2020-23.23" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<p>Topics discussed include:</p>
<ul>
<li>What is responsibility? Why is it so complex?</li>
<li>The three faces of responsibility: attribution, accountability and answerability</li>
<li>Why are people so worried about responsibility gaps for autonomous systems?</li>
<li>What are some of the alleged solutions to the &#8220;gap&#8221; problem?</li>
<li>Who are the techno-pessimists and who are the techno-optimists?</li>
<li>Why does Daniel think that there is no techno-responsibility gap?</li>
<li>Is our application of responsibility concepts to machines overly metaphorical?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://www.researchgate.net/profile/Daniel_Tigard">Daniel&#8217;s ResearchGate profile</a></li>
<li><a href="https://philpapers.org/s/Daniel%20W.%20Tigard">Daniel&#8217;s papers on Philpapers</a></li>
<li><a href="https://link.springer.com/article/10.1007/s13347-020-00414-7">&#8220;There is no Techno-Responsibility Gap</a>&#8221; by Daniel</li>
<li><a href="https://link.springer.com/article/10.1007/s11948-019-00146-8">&#8220;Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability</a>&#8221; by Mark Coeckelbergh</li>
<li><a href="https://www.taylorfrancis.com/books/e/9781315201399/chapters/10.4324/9781315201399-4">Technologically blurred accountability?</a> by K&#246;hler, Roughley and Sauer</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/08/05/79-is-there-a-techno-responsibility-gap/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="109349116" type="audio/mpeg" url="http://ia601401.us.archive.org/11/items/daniel-tigard-master-04-08-2020-23.23/Daniel%20Tigard%20Master%20-%2004%3A08%3A2020%2C%2023.23.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2832</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>What happens if an autonomous machine does something wrong? Who, if anyone, should be held responsible for the machine’s actions? That’s the topic I discuss in this episode with Daniel Tigard. Daniel Tigard is a Senior Research Associate in the Institute for History &amp; Ethics of Medicine, at the Technical University of Munich. His current … More 79 – Is There a Techno-Responsibility Gap?</itunes:summary>
<googleplay:description>What happens if an autonomous machine does something wrong? Who, if anyone, should be held responsible for the machine’s actions? That’s the topic I discuss in this episode with Daniel Tigard. Daniel Tigard is a Senior Research Associate in the Institute for History &amp; Ethics of Medicine, at the Technical University of Munich. His current … More 79 – Is There a Techno-Responsibility Gap?</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2020/08/daniel_tigard.jpg">
			<media:title type="html">Daniel_Tigard</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>What happens if an autonomous machine does something wrong? Who, if anyone, should be held responsible for the machine&amp;#8217;s actions? That&amp;#8217;s the topic I discuss in this episode with Daniel Tigard. Daniel Tigard is a Senior Research Associate in the Institute for History &amp;#38; Ethics of Medicine, at the Technical University of Munich. His current &amp;#8230; More 79 &amp;#8211; Is There a Techno-Responsibility Gap?</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>78 – Humans and Robots: Ethics, Agency and Anthropomorphism</title>
		<link>https://algocracy.wordpress.com/2020/07/27/78-humans-and-robots-ethics-agency-and-anthropomorphism/</link>
					<comments>https://algocracy.wordpress.com/2020/07/27/78-humans-and-robots-ethics-agency-and-anthropomorphism/#respond</comments>
		
		
		<pubDate>Mon, 27 Jul 2020 20:50:09 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2827</guid>

					<description><![CDATA[Are robots like humans? Are they agents? Can we have relationships with them? These are just some of the questions I explore with today&#8217;s guest, Sven Nyholm. Sven is an assistant professor of philosophy at Utrecht University in the Netherlands. His research focuses on ethics, particularly the ethics of technology. He is a friend &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/07/27/78-humans-and-robots-ethics-agency-and-anthropomorphism/">More <span class="screen-reader-text">78 &#8211; Humans and Robots: Ethics, Agency and Anthropomorphism</span></a>]]></description>
										<content:encoded><![CDATA[
<p><img loading="lazy" data-attachment-id="2620" data-permalink="https://algocracy.wordpress.com/2018/06/29/episode-40-nyholm-on-accident-algorithms-and-the-ethics-of-self-driving-cars/sven-nyholm/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/06/sven-nyholm.jpg" data-orig-size="1000,1500" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;Angeline_Swinkels&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1485871704&quot;,&quot;copyright&quot;:&quot;Angeline Swinkels | fotograaf&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Sven-Nyholm" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/06/sven-nyholm.jpg?w=683" class="alignnone  wp-image-2620" src="https://algocracy.wordpress.com/wp-content/uploads/2018/06/sven-nyholm.jpg" alt="Sven-Nyholm" width="283" height="425" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/06/sven-nyholm.jpg?w=283&amp;h=424 283w, https://algocracy.wordpress.com/wp-content/uploads/2018/06/sven-nyholm.jpg?w=566&amp;h=849 566w, https://algocracy.wordpress.com/wp-content/uploads/2018/06/sven-nyholm.jpg?w=100&amp;h=150 100w, https://algocracy.wordpress.com/wp-content/uploads/2018/06/sven-nyholm.jpg?w=200&amp;h=300 200w" sizes="(max-width: 283px) 100vw, 283px" /></p>
<p>Are robots like humans? Are they agents? Can we have relationships with them? These are just some of the questions I explore with today&#8217;s guest, Sven Nyholm. Sven is an assistant professor of philosophy at Utrecht University in the Netherlands. His research focuses on ethics, particularly the ethics of technology. He is a friend of the show, having appeared twice before. In this episode, we talk about his excellent recent book <i>Humans and Robots: Ethics, Agency and Anthropomorphism</i>.</p>
<p>You can download the episode here or listen below. You can also subscribe on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a>, <a href="https://open.spotify.com/show/2WSuOPqOUR4pJnYXigG2zW">Spotify</a> and other <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/sven-nyholm-master-27-07-2020-21.24" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes:</h3>
<p>Topics covered in this episode include:</p>
<ul>
<li>Why did Sven play football with a robot? Who won?</li>
<li>What is a robot?</li>
<li>What is an agent?</li>
<li>Why does it matter if robots are agents?</li>
<li>Why does Sven worry about a normative mismatch between humans and robots? What should we do about this normative mismatch?</li>
<li>Why are people worried about responsibility gaps arising as a result of the widespread deployment of robots?</li>
<li>How should we think about human-robot collaborations?</li>
<li>Why should human drivers be more like self-driving cars?</li>
<li>Can we be friends with a robot?</li>
<li>Why does Sven reject my theory of ethical behaviourism?</li>
<li>Should we be pessimistic about the future of roboethics?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://www.uu.nl/staff/SRNyholm">Sven&#8217;s Homepage</a></li>
<li><a href="https://philpeople.org/profiles/sven-nyholm">Sven on Philpapers</a></li>
<li><em><a href="https://www.amazon.co.uk/Humans-Robots-Anthropomorphism-Philosophy-Technology/dp/1786612275">Humans and Robots: Ethics, Agency and Anthropomorphism</a></em></li>
<li>&#8216;<a href="https://link.springer.com/article/10.1007%2Fs11948-019-00172-6">Can a robot be a good colleague?</a>&#8216; by Sven and Jilles Smids</li>
<li>&#8216;<a href="https://link.springer.com/article/10.1007%2Fs11948-017-9943-x">Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci</a>&#8216; by Sven</li>
<li>&#8216;<a href="https://link.springer.com/article/10.1007%2Fs10676-018-9445-9">Automated Cars Meet Human Drivers: Responsible Human-Robot Coordination and The Ethics of Mixed Traffic</a>&#8216; by Sven and Jilles Smids</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/07/27/78-humans-and-robots-ethics-agency-and-anthropomorphism/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2827</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>Are robots like humans? Are they agents? Can we have relationships with them? These are just some of the questions I explore with today’s guest, Sven Nyholm. Sven is an assistant professor of philosophy at Utrecht University in the Netherlands. His research focuses on ethics, particularly the ethics of technology. He is a friend … More 78 – Humans and Robots: Ethics, Agency and Anthropomorphism</itunes:summary>
<googleplay:description>Are robots like humans? Are they agents? Can we have relationships with them? These are just some of the questions I explore with today’s guest, Sven Nyholm. Sven is an assistant professor of philosophy at Utrecht University in the Netherlands. His research focuses on ethics, particularly the ethics of technology. He is a friend … More 78 – Humans and Robots: Ethics, Agency and Anthropomorphism</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/06/sven-nyholm.jpg">
			<media:title type="html">Sven-Nyholm</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator></item>
		<item>
		<title>77 – Should AI be Explainable?</title>
		<link>https://algocracy.wordpress.com/2020/07/20/77-should-ai-be-explainable/</link>
					<comments>https://algocracy.wordpress.com/2020/07/20/77-should-ai-be-explainable/#respond</comments>
		
		
		<pubDate>Mon, 20 Jul 2020 11:03:21 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2821</guid>

					<description><![CDATA[If an AI system makes a decision, should its reasons for making that decision be explainable to you? In this episode, I chat to Scott Robbins about this issue. Scott is currently completing his PhD in the ethics of artificial intelligence at the Technical University of Delft. He has a B.Sc. in Computer Science from California &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/07/20/77-should-ai-be-explainable/">More <span class="screen-reader-text">77 &#8211; Should AI be&#160;Explainable?</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2824" data-permalink="https://algocracy.wordpress.com/2020/07/20/77-should-ai-be-explainable/scott-robbins/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2020/07/scott-robbins.jpg" data-orig-size="301,301" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="scott robbins" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2020/07/scott-robbins.jpg?w=301" class="alignnone  wp-image-2824" src="https://algocracy.wordpress.com/wp-content/uploads/2020/07/scott-robbins.jpg" alt="scott robbins" width="265" height="265" srcset="https://algocracy.wordpress.com/wp-content/uploads/2020/07/scott-robbins.jpg?w=265&amp;h=265 265w, https://algocracy.wordpress.com/wp-content/uploads/2020/07/scott-robbins.jpg?w=150&amp;h=150 150w, https://algocracy.wordpress.com/wp-content/uploads/2020/07/scott-robbins.jpg?w=300&amp;h=300 300w, https://algocracy.wordpress.com/wp-content/uploads/2020/07/scott-robbins.jpg 301w" sizes="(max-width: 265px) 100vw, 265px" /></p>
<p>If an AI system makes a decision, should its reasons for making that decision be explainable to you? In this episode, I chat to Scott Robbins about this issue. Scott is currently completing his PhD in the ethics of artificial intelligence at the Technical University of Delft. He has a B.Sc. in Computer Science from California State University, Chico and an M.Sc. in Ethics of Technology from the University of Twente. He is a founding member of the Foundation for Responsible Robotics and a member of the 4TU Centre for Ethics and Technology. Scott is skeptical of AI as a grand solution to societal problems and argues that AI should be boring.</p>
<p>You can download the episode <a href="https://ia601407.us.archive.org/10/items/scott-robbins-master-20-07-2020-11.31/Scott%20Robbins%20Master%20-%2020%3A07%3A2020%2C%2011.31.mp3">here</a> or listen below. You can also subscribe on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a>, <a href="https://open.spotify.com/show/2WSuOPqOUR4pJnYXigG2zW">Spotify</a> and other <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/scott-robbins-master-20-07-2020-11.31" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<p>Topics covered include:</p>
<ul>
<li>Why do people worry about the opacity of AI?</li>
<li>What&#8217;s the difference between explainability and transparency?</li>
<li>What&#8217;s the moral value or function of explainable AI?</li>
<li>Must we distinguish between the ethical value of an explanation and its epistemic value?</li>
<li>Why is it so technically difficult to make AI explainable?</li>
<li>Will we ever have a technical solution to the explanation problem?</li>
<li>Why does Scott think there is a Catch-22 involved in insisting on explainable AI?</li>
<li>When should we insist on explanations and when are they unnecessary?</li>
<li>Should we insist on using boring AI?</li>
</ul>
<p>&nbsp;</p>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://scottrobbins.org/">Scott&#8217;s webpage</a></li>
<li>Scott&#8217;s paper &#8220;<a href="https://scottrobbins.org/2019/10/270/">A Misdirected Principle with a Catch: Explicability for AI</a>&#8221;</li>
<li>Scott&#8217;s paper &#8220;<a href="https://scottrobbins.org/2017/08/the-value-of-transparency-bulk-data-and-authoritarianism/">The Value of Transparency: Bulk Data and Authoritarianism</a>&#8221;</li>
<li>&#8220;<a href="https://scholar.law.colorado.edu/articles/1227/">The Right to an Explanation Explained&#8221;</a> by Margot Kaminski</li>
<li><a href="https://philosophicaldisquisitions.blogspot.com/2018/01/episode-36-wachter-on-algorithms.html">Episode 36 &#8211; Wachter on Algorithms and Explanations</a></li>
</ul>
<p>&nbsp;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/07/20/77-should-ai-be-explainable/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="82179470" type="audio/mpeg" url="http://ia601407.us.archive.org/10/items/scott-robbins-master-20-07-2020-11.31/Scott%20Robbins%20Master%20-%2020%3A07%3A2020%2C%2011.31.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2821</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>If an AI system makes a decision, should its reasons for making that decision be explainable to you? In this episode, I chat to Scott Robbins about this issue. Scott is currently completing his PhD in the ethics of artificial intelligence at the Technical University of Delft. He has a B.Sc. in Computer Science from California … More 77 – Should AI be Explainable?</itunes:summary>
<googleplay:description>If an AI system makes a decision, should its reasons for making that decision be explainable to you? In this episode, I chat to Scott Robbins about this issue. Scott is currently completing his PhD in the ethics of artificial intelligence at the Technical University of Delft. He has a B.Sc. in Computer Science from California … More 77 – Should AI be Explainable?</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2020/07/scott-robbins.jpg">
			<media:title type="html">scott robbins</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>If an AI system makes a decision, should its reasons for making that decision be explainable to you? In this episode, I chat to Scott Robbins about this issue. Scott is currently completing his PhD in the ethics of artificial intelligence at the Technical University of Delft. He has a B.Sc. in Computer Science from California &amp;#8230; More 77 &amp;#8211; Should AI be&amp;#160;Explainable?</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>76 – Surveillance, Privacy and COVID 19</title>
		<link>https://algocracy.wordpress.com/2020/04/18/76-surveillance-privacy-and-covid-19/</link>
					<comments>https://algocracy.wordpress.com/2020/04/18/76-surveillance-privacy-and-covid-19/#respond</comments>
		
		
		<pubDate>Sat, 18 Apr 2020 08:47:19 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2813</guid>

					<description><![CDATA[How do we get back to normal after the COVID-19 pandemic? One suggestion is that we use increased amounts of surveillance and tracking to identify and isolate infected and at-risk persons. While this might be a valid public health strategy it does raise some tricky ethical questions. In this episode I talk to Carissa Véliz &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/04/18/76-surveillance-privacy-and-covid-19/">More <span class="screen-reader-text">76 &#8211; Surveillance, Privacy and COVID&#160;19</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2708" data-permalink="https://algocracy.wordpress.com/2019/05/20/60-veliz-on-how-to-improve-online-speech-with-pseudonymity/carissa-veliz/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/05/carissa-veliz.jpg" data-orig-size="512,512" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Carissa Veliz" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/05/carissa-veliz.jpg?w=512" class="alignnone  wp-image-2708" src="https://algocracy.wordpress.com/wp-content/uploads/2019/05/carissa-veliz.jpg" alt="Carissa Veliz" width="391" height="391" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/05/carissa-veliz.jpg?w=391&amp;h=391 391w, https://algocracy.wordpress.com/wp-content/uploads/2019/05/carissa-veliz.jpg?w=150&amp;h=150 150w, https://algocracy.wordpress.com/wp-content/uploads/2019/05/carissa-veliz.jpg?w=300&amp;h=300 300w, https://algocracy.wordpress.com/wp-content/uploads/2019/05/carissa-veliz.jpg 512w" sizes="(max-width: 391px) 100vw, 391px" /></p>
<p>How do we get back to normal after the COVID-19 pandemic? One suggestion is that we use increased amounts of surveillance and tracking to identify and isolate infected and at-risk persons. While this might be a valid public health strategy it does raise some tricky ethical questions. In this episode I talk to Carissa Véliz about these questions. Carissa is a Research Fellow at the Uehiro Centre for Practical Ethics at Oxford and the Wellcome Centre for Ethics and Humanities, also at Oxford. She is the editor of the <em>Oxford Handbook of Digital Ethics </em>as well as two forthcoming solo-authored books <em>Privacy is Power </em>(Transworld) and <em>The Ethics of Privacy</em> (Oxford University Press).</p>
<p>You can download the episode <a href="https://ia801408.us.archive.org/13/items/carissa-veliz-covid-19-17-04-2020-09.35/Carissa%20Veliz%20-%20Covid%2019%20-%2017%3A04%3A2020%2C%2009.35.mp3">here</a> or listen below. You can also subscribe to the podcast on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple</a>, <a href="https://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a range of <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">other podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/carissa-veliz-covid-19-17-04-2020-09.35" width="640" height="140" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<p>Topics discussed include:</p>
<ul>
<li>The value of privacy</li>
<li>Do we balance privacy against other rights/values?</li>
<li>The significance of consent in debates about privacy</li>
<li>Digital contact tracing and digital quarantines</li>
<li>The ethics of digital contact tracing</li>
<li>Is the value of digital contact tracing being oversold?</li>
<li>The relationship between testing and contact tracing</li>
<li>COVID 19 as an important moment in the fight for privacy</li>
<li>The data economy in light of COVID 19</li>
<li>The ethics of immunity passports</li>
<li>The importance of focusing on the right things in responding to COVID 19</li>
</ul>
<p> </p>
<h3>Relevant Links</h3>
<ul>
<li><a href="http://www.carissaveliz.com/">Carissa&#8217;s Webpage</a></li>
<li><a href="https://twitter.com/carissaveliz">Carissa&#8217;s Twitter feed</a> (a treasure trove of links about privacy and surveillance)</li>
<li><a href="https://philpapers.org/rec/BROVOP-3">Views on Privacy: A Survey</a> by Sian Brooke and Carissa Véliz</li>
<li><a href="https://philpapers.org/rec/VLIPM">Data, Privacy and the Individual</a> by Carissa Véliz</li>
<li><a href="https://science.sciencemag.org/content/early/2020/04/09/science.abb6936">Science paper on the value of digital contact tracing</a></li>
<li><a href="https://www.theverge.com/2020/4/14/21220644/apple-googles-bluetooth-low-energy-le-coronavirus-tracking-contact-tracing">The Apple-Google proposal for digital contact tracing</a></li>
<li>&#8216;<a href="https://www.theguardian.com/world/2020/mar/09/the-new-normal-chinas-excessive-coronavirus-public-monitoring-could-be-here-to-stay">&#8216;The new normal&#8217;: China&#8217;s excessive coronavirus public monitoring could be here to stay</a>&#8217;</li>
<li>&#8216;<a href="https://www.nytimes.com/2020/03/01/business/china-coronavirus-surveillance.html">In Coronavirus Fight, China Gives Citizens a Color Code, With Red Flags&#8217;</a></li>
<li>&#8216;<a href="https://www.economist.com/china/2020/02/29/to-curb-covid-19-china-is-using-its-high-tech-surveillance-tools">To curb covid-19, China is using its high-tech surveillance tools&#8217;</a></li>
<li><a href="https://www.amnesty.org/en/latest/news/2020/04/covid19-digital-surveillance-ngo/">&#8216;Digital surveillance to fight COVID-19 can only be justified if it respects human rights&#8217;</a></li>
<li>&#8216;<a href="https://medium.com/@cansucanca/why-mandatory-privacy-preserving-digital-contact-tracing-is-the-ethical-measure-against-covid-19-a0d143b7c3b6">Why ‘Mandatory Privacy-Preserving Digital Contact Tracing’ is the Ethical Measure against COVID-19&#8242;</a> by Cansu Canca</li>
<li>&#8216;<a href="https://www.bloomberg.com/opinion/articles/2020-04-15/the-covid-19-tracking-app-won-t-work">The COVID-19 Tracking App Won&#8217;t Work&#8217; </a></li>
<li>&#8216;<a href="https://thehill.com/changing-america/well-being/prevention-cures/492699-what-are-immunity-passports-and-how-could-they">What are &#8216;immunity passports&#8217; and could they help us end the coronavirus lockdown?&#8217;</a></li>
<li><a href="https://www.vox.com/2020/4/13/21215133/coronavirus-testing-covid-19-tests-screening">&#8216;The case for ending the Covid-19 pandemic with mass testing&#8217;</a></li>
</ul>
<p> </p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/04/18/76-surveillance-privacy-and-covid-19/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="76580280" type="audio/mpeg" url="http://ia801408.us.archive.org/13/items/carissa-veliz-covid-19-17-04-2020-09.35/Carissa%20Veliz%20-%20Covid%2019%20-%2017%3A04%3A2020%2C%2009.35.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2813</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>How do we get back to normal after the COVID-19 pandemic? One suggestion is that we use increased amounts of surveillance and tracking to identify and isolate infected and at-risk persons. While this might be a valid public health strategy it does raise some tricky ethical questions. In this episode I talk to Carissa Véliz … More 76 – Surveillance, Privacy and COVID 19</itunes:summary>
<googleplay:description>How do we get back to normal after the COVID-19 pandemic? One suggestion is that we use increased amounts of surveillance and tracking to identify and isolate infected and at-risk persons. While this might be a valid public health strategy it does raise some tricky ethical questions. In this episode I talk to Carissa Véliz … More 76 – Surveillance, Privacy and COVID 19</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/05/carissa-veliz.jpg">
			<media:title type="html">Carissa Veliz</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>How do we get back to normal after the COVID-19 pandemic? One suggestion is that we use increased amounts of surveillance and tracking to identify and isolate infected and at-risk persons. While this might be a valid public health strategy it does raise some tricky ethical questions. In this episode I talk to Carissa Véliz &amp;#8230; More 76 &amp;#8211; Surveillance, Privacy and COVID&amp;#160;19</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>75 – The Vital Ethical Contexts of COVID 19</title>
		<link>https://algocracy.wordpress.com/2020/04/14/75-the-vital-ethical-contexts-for-covid-19/</link>
					<comments>https://algocracy.wordpress.com/2020/04/14/75-the-vital-ethical-contexts-for-covid-19/#respond</comments>
		
		
		<pubDate>Tue, 14 Apr 2020 13:37:37 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2804</guid>

					<description><![CDATA[There is a lot of data and reporting out there about the COVID 19 pandemic. How should we make sense of that data? Do the media narratives misrepresent or mislead us as to the true risks associated with the disease? Have governments mishandled the response? These are the questions I discuss with my guest on &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/04/14/75-the-vital-ethical-contexts-for-covid-19/">More <span class="screen-reader-text">75 &#8211; The Vital Ethical Contexts of COVID&#160;19</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2808" data-permalink="https://algocracy.wordpress.com/2020/04/14/75-the-vital-ethical-contexts-for-covid-19/david-shaw/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2020/04/david-shaw.jpg" data-orig-size="960,1280" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="David Shaw" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2020/04/david-shaw.jpg?w=748" class="alignnone  wp-image-2808" src="https://algocracy.wordpress.com/wp-content/uploads/2020/04/david-shaw.jpg" alt="David Shaw" width="331" height="441" srcset="https://algocracy.wordpress.com/wp-content/uploads/2020/04/david-shaw.jpg?w=331&amp;h=441 331w, https://algocracy.wordpress.com/wp-content/uploads/2020/04/david-shaw.jpg?w=662&amp;h=883 662w, https://algocracy.wordpress.com/wp-content/uploads/2020/04/david-shaw.jpg?w=113&amp;h=150 113w, https://algocracy.wordpress.com/wp-content/uploads/2020/04/david-shaw.jpg?w=225&amp;h=300 225w" sizes="(max-width: 331px) 100vw, 331px" /></p>
<p>There is a lot of data and reporting out there about the COVID 19 pandemic. How should we make sense of that data? Do the media narratives misrepresent or mislead us as to the true risks associated with the disease? Have governments mishandled the response? These are the questions I discuss with my guest on today&#8217;s show: David Shaw. David is a Senior Researcher at the Institute for Biomedical Ethics at the University of Basel and an Assistant Professor at the Care and Public Health Research Institute, Maastricht University. We discuss some recent writing David has been doing on the Journal of Medical Ethics blog about the coronavirus crisis.</p>
<p>You can download the episode <a href="https://ia601507.us.archive.org/6/items/davidshaw1404202013.56/David%20Shaw%20-%2014%3A04%3A2020%2C%2013.56.mp3">here</a> or listen below. You can also subscribe to the podcast on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple</a>, <a href="https://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a range of <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">other podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/davidshaw1404202013.56" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<p>Topics discussed include&#8230;</p>
<ul>
<li>Why is it important to keep death rates and other data in context?</li>
<li>Is media reporting of deaths misleading?</li>
<li>Why do the media discuss &#8216;soaring&#8217; death rates and &#8216;grim&#8217; statistics?</li>
<li>Are we ignoring the unintended health consequences of COVID 19?</li>
<li>Should we take the economic costs more seriously given the link between poverty/inequality and health outcomes?</li>
<li>Did the UK government mishandle the response to the crisis? Are they blameworthy for what they did?</li>
<li>Is it fair to criticise governments for their handling of the crisis?</li>
<li>Is it okay for governments to experiment on their populations in response to the crisis?</li>
</ul>
<p> </p>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://ibmb.unibas.ch/en/persons/david-shaw/cv-of-david-martin-shaw/">David&#8217;s Profile Page at the University of Basel</a></li>
<li>&#8216;<a href="https://blogs.bmj.com/medical-ethics/2020/04/02/the-vital-contexts-of-coronavirus/">The Vital Contexts of Coronavirus&#8217;</a> by David</li>
<li>&#8216;<a href="https://blogs.bmj.com/medical-ethics/2020/03/28/the-slow-dragon-and-the-dim-sloth-what-can-the-world-learn-from-coronavirus-responses-in-italy-and-the-uk/">The Slow Dragon and the Dim Sloth: What can the world learn from coronavirus responses in Italy and the UK?&#8217;</a> by Marcello Ienca and David Shaw</li>
<li>&#8216;<a href="https://blogs.bmj.com/medical-ethics/2020/03/26/dont-let-the-ethics-of-despair-infect-the-intensive-care-unit/">Don&#8217;t let the ethics of despair infect the ICU</a>&#8216; by David Shaw, Dan Harvey and Dale Gardiner</li>
<li>&#8216;<a href="https://www.nytimes.com/interactive/2020/04/10/upshot/coronavirus-deaths-new-york-city.html">Deaths in New York City Are More Than Double the Usual Total</a>&#8216; in the NYT (getting the context right?!)</li>
<li><a href="https://www.technologyreview.com/2020/04/09/999015/blood-tests-show-15-of-people-are-now-immune-to-covid-19-in-one-town-in-germany/">Preliminary results from German Antibody tests in one town: 14% of the population infected</a></li>
<li><a href="https://www.nature.com/articles/d41586-019-00210-0">Do Death Rates Go Down in a Recession?</a></li>
<li><a href="https://www.thenational.scot/news/18373672.sun-faces-huge-online-backlash-good-friday-front-page/">The Sun&#8217;s Good Friday headline</a></li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/04/14/75-the-vital-ethical-contexts-for-covid-19/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="101403294" type="audio/mpeg" url="http://ia601507.us.archive.org/6/items/davidshaw1404202013.56/David%20Shaw%20-%2014%3A04%3A2020%2C%2013.56.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2804</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>There is a lot of data and reporting out there about the COVID 19 pandemic. How should we make sense of that data? Do the media narratives misrepresent or mislead us as to the true risks associated with the disease? Have governments mishandled the response? These are the questions I discuss with my guest on … More 75 – The Vital Ethical Contexts of COVID 19</itunes:summary>
<googleplay:description>There is a lot of data and reporting out there about the COVID 19 pandemic. How should we make sense of that data? Do the media narratives misrepresent or mislead us as to the true risks associated with the disease? Have governments mishandled the response? These are the questions I discuss with my guest on … More 75 – The Vital Ethical Contexts of COVID 19</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2020/04/david-shaw.jpg">
			<media:title type="html">David Shaw</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>There is a lot of data and reporting out there about the COVID 19 pandemic. How should we make sense of that data? Do the media narratives misrepresent or mislead us as to the true risks associated with the disease? Have governments mishandled the response? These are the questions I discuss with my guest on &amp;#8230; More 75 &amp;#8211; The Vital Ethical Contexts of COVID&amp;#160;19</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>74 – How to Understand COVID 19</title>
		<link>https://algocracy.wordpress.com/2020/04/10/74-how-to-understand-covid-19/</link>
					<comments>https://algocracy.wordpress.com/2020/04/10/74-how-to-understand-covid-19/#respond</comments>
		
		
		<pubDate>Fri, 10 Apr 2020 08:42:21 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2796</guid>

					<description><![CDATA[I&#8217;m still thinking a lot about the COVID-19 pandemic. In this episode I turn away from some of the &#8216;classical&#8217; ethical questions about the disease and talk more about how to understand it and form reasonable beliefs about the public health information that has been issued in response to it. To help me do this &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/04/10/74-how-to-understand-covid-19/">More <span class="screen-reader-text">74 &#8211; How to Understand COVID&#160;19</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2801" data-permalink="https://algocracy.wordpress.com/2020/04/10/74-how-to-understand-covid-19/katherine-furman/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2020/04/katherine-furman.jpg" data-orig-size="400,400" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Katherine Furman" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2020/04/katherine-furman.jpg?w=400" class="alignnone size-full wp-image-2801" src="https://algocracy.wordpress.com/wp-content/uploads/2020/04/katherine-furman.jpg" alt="Katherine Furman" width="400" height="400" srcset="https://algocracy.wordpress.com/wp-content/uploads/2020/04/katherine-furman.jpg 400w, https://algocracy.wordpress.com/wp-content/uploads/2020/04/katherine-furman.jpg?w=150&amp;h=150 150w, https://algocracy.wordpress.com/wp-content/uploads/2020/04/katherine-furman.jpg?w=300&amp;h=300 300w" sizes="(max-width: 400px) 100vw, 400px" /></p>
<p>I&#8217;m still thinking a lot about the COVID-19 pandemic. In this episode I turn away from some of the &#8216;classical&#8217; ethical questions about the disease and talk more about how to understand it and form reasonable beliefs about the public health information that has been issued in response to it. To help me do this I will be talking to Katherine Furman. Katherine is a lecturer in philosophy at the University of Liverpool. Her research interests are at the intersection of Philosophy and Health Policy. She is interested in how laypeople understand issues of science, objectivity in the sciences and social sciences, and public trust in science. Her previous work has focused on the HIV/AIDS pandemic and the Ebola outbreak in West Africa in 2014-2015. We will be talking about the lessons we can draw from this work for how we think about the COVID-19 pandemic.</p>
<p>You can download the episode <a href="https://ia801501.us.archive.org/13/items/katherinefurman0904202021.27/Katherine%20Furman%20-%2009%3A04%3A2020%2C%2021.27.mp3">here</a> or listen below. You can also subscribe to the podcast on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple</a>, <a href="https://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a range of <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">other podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/katherinefurman0904202021.27" width="640" height="140" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<p>Topics discussed include:</p>
<ul>
<li>The history of explaining the causes of disease</li>
<li>Mono-causal theories of disease</li>
<li>Multi-causal theories of disease</li>
<li>Lessons learned from the HIV/AIDS pandemic</li>
<li>The practical importance of understanding the causes of disease in the current pandemic</li>
<li>Is there an ethics of belief?</li>
<li>Do we have epistemic duties in relation to COVID-19?</li>
<li>Is it reasonable to believe &#8216;rumours&#8217; about the disease?</li>
<li>Lessons learned from the 2014-2015 Ebola outbreak</li>
<li>The importance of values in the public understanding of science</li>
</ul>
<p> </p>
<h3>Relevant Links</h3>
<ul>
<li><a href="http://katherinefurman.com/">Katherine&#8217;s Homepage</a></li>
<li><a href="https://www.liverpool.ac.uk/philosophy/staff/katherine-furman/">Katherine @ University of Liverpool</a></li>
<li>&#8220;<a href="https://link.springer.com/article/10.1007%2Fs10912-017-9441-9">Mono-Causal and Multi-Causal Theories of Disease: How to Think Virally and Socially about the Aetiology of AIDS</a>&#8221; by Katherine</li>
<li>&#8220;<a href="https://www.tandfonline.com/doi/abs/10.1080/02691728.2018.1512173?journalCode=tsep20">Moral Responsibility, Culpable Ignorance, and Suppressed Disagreement</a>&#8221; by Katherine</li>
<li>&#8220;<a href="https://blogs.lse.ac.uk/africaatlse/2014/08/20/the-international-response-to-the-ebola-outbreak-has-excluded-africans-and-their-interests/">The international response to the Ebola outbreak has excluded Africans and their interests</a>&#8221; by Katherine</li>
<li><a href="https://www.imperial.ac.uk/media/imperial-college/medicine/sph/ide/gida-fellowships/Imperial-College-COVID19-NPI-modelling-16-03-2020.pdf">Imperial College paper on COVID-19 scenarios</a></li>
<li><a href="https://www.medrxiv.org/content/10.1101/2020.03.24.20042291v1">Oxford Paper on possible exposure levels to novel Coronavirus</a></li>
</ul>
<p> </p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/04/10/74-how-to-understand-covid-19/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="92344656" type="audio/mpeg" url="http://ia801501.us.archive.org/13/items/katherinefurman0904202021.27/Katherine%20Furman%20-%2009%3A04%3A2020%2C%2021.27.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2796</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>I’m still thinking a lot about the COVID-19 pandemic. In this episode I turn away from some of the ‘classical’ ethical questions about the disease and talk more about how to understand it and form reasonable beliefs about the public health information that has been issued in response to it. To help me do this … More 74 – How to Understand COVID 19</itunes:summary>
<googleplay:description>I’m still thinking a lot about the COVID-19 pandemic. In this episode I turn away from some of the ‘classical’ ethical questions about the disease and talk more about how to understand it and form reasonable beliefs about the public health information that has been issued in response to it. To help me do this … More 74 – How to Understand COVID 19</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2020/04/katherine-furman.jpg">
			<media:title type="html">Katherine Furman</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>I&amp;#8217;m still thinking a lot about the COVID-19 pandemic. In this episode I turn away from some of the &amp;#8216;classical&amp;#8217; ethical questions about the disease and talk more about how to understand it and form reasonable beliefs about the public health information that has been issued in response to it. To help me do this &amp;#8230; More 74 &amp;#8211; How to Understand COVID&amp;#160;19</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>73 – The Ethics of Healthcare Prioritisation during COVID 19</title>
		<link>https://algocracy.wordpress.com/2020/04/03/73-the-ethics-of-healthcare-prioritisation-during-covid-19/</link>
					<comments>https://algocracy.wordpress.com/2020/04/03/73-the-ethics-of-healthcare-prioritisation-during-covid-19/#respond</comments>
		
		
		<pubDate>Fri, 03 Apr 2020 14:39:26 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2790</guid>

					<description><![CDATA[We have a limited number of ventilators. Who should get access to them? In this episode I talk to Lars Sandman. Lars is a Professor of Healthcare Ethics at Linköping University, Sweden. Lars’s research involves studying ethical aspects of distributing scarce resources within health care and studying and developing methods for ethical analyses of health-care &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/04/03/73-the-ethics-of-healthcare-prioritisation-during-covid-19/">More <span class="screen-reader-text">73 &#8211; The Ethics of Healthcare Prioritisation during COVID&#160;19</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2791" data-permalink="https://algocracy.wordpress.com/2020/04/03/73-the-ethics-of-healthcare-prioritisation-during-covid-19/lars_sandman/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2020/04/lars_sandman.jpeg" data-orig-size="4000,3000" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;5.8&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;Canon PowerShot SX40 HS&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1380213316&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;105.366&quot;,&quot;iso&quot;:&quot;800&quot;,&quot;shutter_speed&quot;:&quot;0.01&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="Lars_Sandman" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2020/04/lars_sandman.jpeg?w=748" class="alignnone  wp-image-2791" src="https://algocracy.wordpress.com/wp-content/uploads/2020/04/lars_sandman.jpeg" alt="Lars_Sandman" width="401" height="301" srcset="https://algocracy.wordpress.com/wp-content/uploads/2020/04/lars_sandman.jpeg?w=401&amp;h=301 401w, https://algocracy.wordpress.com/wp-content/uploads/2020/04/lars_sandman.jpeg?w=802&amp;h=602 802w, https://algocracy.wordpress.com/wp-content/uploads/2020/04/lars_sandman.jpeg?w=150&amp;h=113 150w, https://algocracy.wordpress.com/wp-content/uploads/2020/04/lars_sandman.jpeg?w=300&amp;h=225 300w, https://algocracy.wordpress.com/wp-content/uploads/2020/04/lars_sandman.jpeg?w=768&amp;h=576 768w" sizes="(max-width: 401px) 100vw, 401px" /></p>
<p>We have a limited number of ventilators. Who should get access to them? In this episode I talk to Lars Sandman. Lars is a Professor of Healthcare Ethics at Linköping University, Sweden. Lars’s research involves studying ethical aspects of distributing scarce resources within health care and studying and developing methods for ethical analyses of health-care procedures. We are going to be talking about the ethics of healthcare prioritisation in the midst of the COVID 19 pandemic, focusing specifically on some principles Lars, along with others, developed for the Swedish government.</p>
<p>You can download the episode <a href="https://ia601408.us.archive.org/4/items/larssandman/Lars%20Sandman.mp3">here</a> or listen below. You can also subscribe to the podcast on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple</a>, <a href="https://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a range of <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">other podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/larssandman" width="640" height="140" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>The prioritisation challenges we currently face</li>
<li>Ethical principles for prioritisation in healthcare</li>
<li>Problems with applying ethical theories in practice</li>
<li>Swedish legal principles on healthcare prioritisation</li>
<li>Principles for access to ICU during the COVID 19 pandemic</li>
<li>Do we prioritise younger people?</li>
<li>Chronological age versus biological age</li>
<li>Could we use a lottery principle?</li>
<li>Should we prioritise healthcare workers?</li>
<li>Impact of COVID 19 prioritisation on other healthcare priorities</li>
</ul>
<p> </p>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://liu.se/en/employee/larsa09">Lar&#8217;s Webpage</a></li>
<li><a href="https://liu.se/en/article/the-ethical-platform-for-priority-setting">Swedish Legal Principles</a></li>
<li><a href="http://liu.diva-portal.org/smash/get/diva2:759770/FULLTEXT01.pdf">Background to the Swedish Law</a></li>
<li><a href="https://philosophicalcomment.blogspot.com/2020/03/new-swedish-guidelines-for-icu-priority.html">New priority principles in Sweden</a> (English Translation by Christian Munthe)</li>
<li>&#8220;<a href="https://www.academia.edu/454991/Principles_for_Allocation_of_Scarce_Medical_Interventions">Principles for allocation of scarce medical interventions</a>&#8221; by Persad, Werthheimer and Emanuel (good overview of the ethical debate)</li>
<li><a href="https://www.vox.com/coronavirus-covid19/2020/3/31/21199721/coronavirus-covid-19-hospitals-triage-rationing-italy-new-york">The grim ethical dilemma of rationing medical care, explained</a> &#8211; Vox.com</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/04/03/73-the-ethics-of-healthcare-prioritisation-during-covid-19/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="101368186" type="audio/mpeg" url="http://ia601408.us.archive.org/4/items/larssandman/Lars%20Sandman.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2790</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>We have a limited number of ventilators. Who should get access to them? In this episode I talk to Lars Sandman. Lars is a Professor of Healthcare Ethics at Linköping University, Sweden. Lars’s research involves studying ethical aspects of distributing scarce resources within health care and studying and developing methods for ethical analyses of health-care … More 73 – The Ethics of Healthcare Prioritisation during COVID 19</itunes:summary>
<googleplay:description>We have a limited number of ventilators. Who should get access to them? In this episode I talk to Lars Sandman. Lars is a Professor of Healthcare Ethics at Linköping University, Sweden. Lars’s research involves studying ethical aspects of distributing scarce resources within health care and studying and developing methods for ethical analyses of health-care … More 73 – The Ethics of Healthcare Prioritisation during COVID 19</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2020/04/lars_sandman.jpeg">
			<media:title type="html">Lars_Sandman</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>We have a limited number of ventilators. Who should get access to them? In this episode I talk to Lars Sandman. Lars is a Professor of Healthcare Ethics at Linköping University, Sweden. Lars’s research involves studying ethical aspects of distributing scarce resources within health care and studying and developing methods for ethical analyses of health-care &amp;#8230; More 73 &amp;#8211; The Ethics of Healthcare Prioritisation during COVID&amp;#160;19</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>72 – Grief in the Time of a Pandemic</title>
		<link>https://algocracy.wordpress.com/2020/03/30/72-grief-in-the-time-of-a-pandemic/</link>
					<comments>https://algocracy.wordpress.com/2020/03/30/72-grief-in-the-time-of-a-pandemic/#respond</comments>
		
		
		<pubDate>Mon, 30 Mar 2020 15:04:21 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2781</guid>

					<description><![CDATA[Lots of people are dying right now. But people die all the time. How should we respond to all this death? In this episode I talk to Michael Cholbi about the philosophy of grief. Michael Cholbi is Professor of Philosophy at the University of Edinburgh. He has published widely in ethical theory, practical ethics, and &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/03/30/72-grief-in-the-time-of-a-pandemic/">More <span class="screen-reader-text">72 &#8211; Grief in the Time of a&#160;Pandemic</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2785" data-permalink="https://algocracy.wordpress.com/2020/03/30/72-grief-in-the-time-of-a-pandemic/mcholbi-head-tight-269x300/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2020/03/mcholbi-head-tight-269x300-1.png" data-orig-size="269,300" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="MCholbi-Head-Tight-269&amp;#215;300" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2020/03/mcholbi-head-tight-269x300-1.png?w=269" class="alignnone  wp-image-2785" src="https://algocracy.wordpress.com/wp-content/uploads/2020/03/mcholbi-head-tight-269x300-1.png" alt="MCholbi-Head-Tight-269x300" width="281" height="313" srcset="https://algocracy.wordpress.com/wp-content/uploads/2020/03/mcholbi-head-tight-269x300-1.png 269w, https://algocracy.wordpress.com/wp-content/uploads/2020/03/mcholbi-head-tight-269x300-1.png?w=135&amp;h=150 135w" sizes="(max-width: 281px) 100vw, 281px" /></p>
<p>Lots of people are dying right now. But people die all the time. How should we respond to all this death? In this episode I talk to Michael Cholbi about the philosophy of grief. Michael Cholbi is Professor of Philosophy at the University of Edinburgh. He has published widely in ethical theory, practical ethics, and the philosophy of death and dying. We discuss the nature of grief, the ethics of grief and how grief might change in the midst of a pandemic.</p>
<p>You can download the episode <a href="https://ia801506.us.archive.org/1/items/michaelcholbi3003202009.55/Michael%20Cholbi%20-%2030%3A03%3A2020%2C%2009.55.mp3">here</a> or listen below. You can also subscribe to the podcast on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple</a>, <a href="https://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a range of <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">other podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/michaelcholbi3003202009.55" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<p>Topics discussed include&#8230;</p>
<ul>
<li>What is grief?</li>
<li>What are the different forms of grief?</li>
<li>Is grief always about death?</li>
<li>Is grief a good thing?</li>
<li>Is grief a bad thing?</li>
<li>Does the cause of death make a difference to grief?</li>
<li>How does the COVID 19 pandemic disrupt grief?</li>
<li>What are the politics of grief?</li>
<li>Will future societies memorialise the deaths of people in the pandemic?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://michael.cholbi.com/">Michael&#8217;s Homepage</a></li>
<li><a href="https://philpapers.org/rec/CHORRA-2">Regret, Resilience and the Nature of Grief</a> by Michael</li>
<li><a href="https://philpapers.org/rec/CHOFTG">Finding the Good in Grief</a> by Michael</li>
<li><a href="https://philpapers.org/rec/CHOGRB">Grief&#8217;s Rationality, Backward and Forward</a> by Michael</li>
<li><a href="https://philosophicaldisquisitions.blogspot.com/2018/05/coping-with-grief-series-index.html">Coping with Grief: A Series of Philosophical Disquisitions</a> by me</li>
<li><a href="https://www.ft.com/content/5e30a130-d62a-4c4c-a81f-b89f2448d9c8">Grieving alone — coronavirus upends funeral rites</a> (Financial Times)</li>
<li><a href="https://www.bbc.com/news/health-52031539">Coronavirus: How Covid-19 is denying dignity to the dead in Italy</a> (BBC)</li>
<li><a href="https://wellcomecollection.org/articles/W7TfGRAAAP5F0eKS">Why the 1918 Spanish flu defied both memory and imagination</a></li>
<li><a href="https://theconversation.com/100-years-later-why-dont-we-commemorate-the-victims-and-heroes-of-spanish-flu-109885">100 years later, why don’t we commemorate the victims and heroes of ‘Spanish flu’?</a></li>
</ul>
<p>&nbsp;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/03/30/72-grief-in-the-time-of-a-pandemic/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="102081015" type="audio/mpeg" url="http://ia801506.us.archive.org/1/items/michaelcholbi3003202009.55/Michael%20Cholbi%20-%2030%3A03%3A2020%2C%2009.55.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2781</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>Lots of people are dying right now. But people die all the time. How should we respond to all this death? In this episode I talk to Michael Cholbi about the philosophy of grief. Michael Cholbi is Professor of Philosophy at the University of Edinburgh. He has published widely in ethical theory, practical ethics, and … More 72 – Grief in the Time of a Pandemic</itunes:summary>
<googleplay:description>Lots of people are dying right now. But people die all the time. How should we respond to all this death? In this episode I talk to Michael Cholbi about the philosophy of grief. Michael Cholbi is Professor of Philosophy at the University of Edinburgh. He has published widely in ethical theory, practical ethics, and … More 72 – Grief in the Time of a Pandemic</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2020/03/mcholbi-head-tight-269x300-1.png">
			<media:title type="html">MCholbi-Head-Tight-269x300</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>Lots of people are dying right now. But people die all the time. How should we respond to all this death? In this episode I talk to Michael Cholbi about the philosophy of grief. Michael Cholbi is Professor of Philosophy at the University of Edinburgh. He has published widely in ethical theory, practical ethics, and &amp;#8230; More 72 &amp;#8211; Grief in the Time of a&amp;#160;Pandemic</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>71 – COVID 19 and the Ethics of Infectious Disease Control</title>
		<link>https://algocracy.wordpress.com/2020/03/25/71-covid-19-and-the-ethics-of-infectious-disease-control/</link>
					<comments>https://algocracy.wordpress.com/2020/03/25/71-covid-19-and-the-ethics-of-infectious-disease-control/#respond</comments>
		
		
		<pubDate>Wed, 25 Mar 2020 23:25:56 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2771</guid>

					<description><![CDATA[As nearly half the world&#8217;s population is now under some form of quarantine or lockdown, it seems like an apt time to consider the ethics of infectious disease control measures of this sort. In this episode, I chat to Jonathan Pugh and Tom Douglas, both of whom are Senior Research Fellows at the Uehiro Centre &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/03/25/71-covid-19-and-the-ethics-of-infectious-disease-control/">More <span class="screen-reader-text">71 &#8211; COVID 19 and the Ethics of Infectious Disease&#160;Control</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2777" data-permalink="https://algocracy.wordpress.com/2020/03/25/71-covid-19-and-the-ethics-of-infectious-disease-control/mercado_de_mariscos_de_wuhan_cerrado_tras_detectarse_ahi_por_primera_vez_el_nuevo_coronavirus_/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2020/03/mercado_de_mariscos_de_wuhan_cerrado_tras_detectarse_ahi_por_primera_vez_el_nuevo_coronavirus_.jpg" data-orig-size="800,450" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Mercado_de_mariscos_de_Wuhan_cerrado_tras_detectarse_ahi_por_primera_vez_el_Nuevo_Coronavirus_" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2020/03/mercado_de_mariscos_de_wuhan_cerrado_tras_detectarse_ahi_por_primera_vez_el_nuevo_coronavirus_.jpg?w=748" class="alignnone size-full wp-image-2777" src="https://algocracy.wordpress.com/wp-content/uploads/2020/03/mercado_de_mariscos_de_wuhan_cerrado_tras_detectarse_ahi_por_primera_vez_el_nuevo_coronavirus_.jpg" alt="Mercado_de_mariscos_de_Wuhan_cerrado_tras_detectarse_ahi_por_primera_vez_el_Nuevo_Coronavirus_" width="800" height="450" srcset="https://algocracy.wordpress.com/wp-content/uploads/2020/03/mercado_de_mariscos_de_wuhan_cerrado_tras_detectarse_ahi_por_primera_vez_el_nuevo_coronavirus_.jpg 800w, https://algocracy.wordpress.com/wp-content/uploads/2020/03/mercado_de_mariscos_de_wuhan_cerrado_tras_detectarse_ahi_por_primera_vez_el_nuevo_coronavirus_.jpg?w=150&amp;h=84 150w, 
https://algocracy.wordpress.com/wp-content/uploads/2020/03/mercado_de_mariscos_de_wuhan_cerrado_tras_detectarse_ahi_por_primera_vez_el_nuevo_coronavirus_.jpg?w=300&amp;h=169 300w, https://algocracy.wordpress.com/wp-content/uploads/2020/03/mercado_de_mariscos_de_wuhan_cerrado_tras_detectarse_ahi_por_primera_vez_el_nuevo_coronavirus_.jpg?w=768&amp;h=432 768w" sizes="(max-width: 800px) 100vw, 800px" /></p>
<p>As nearly half the world&#8217;s population is now under some form of quarantine or lockdown, it seems like an apt time to consider the ethics of infectious disease control measures of this sort. In this episode, I chat to Jonathan Pugh and Tom Douglas, both of whom are Senior Research Fellows at the Uehiro Centre for Practical Ethics in Oxford, about this very issue. We talk about the moral principles that should apply to our evaluation of infectious disease control and some of the typical objections to it. Throughout we focus specifically on some of the different interventions that are being applied to tackle COVID-19.</p>
<p>You can download the episode <a href="https://ia801407.us.archive.org/8/items/tomdouglasandjonathanpugh2503202022.40/Tom%20Douglas%20and%20Jonathan%20Pugh%20-%2025%3A03%3A2020%2C%2022.40.mp3">here</a> or listen below. You can also subscribe to the podcast on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple</a>, <a href="https://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a range of <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">other podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/tomdouglasandjonathanpugh2503202022.40" width="640" height="140" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<p>Topics covered include:</p>
<ul>
<li>Methods of infectious disease control</li>
<li>Consequentialist justifications for disease control</li>
<li>Non-consequentialist justifications</li>
<li>The proportionality of disease control measures</li>
<li>Could these measures stigmatise certain populations?</li>
<li>Could they exacerbate inequality or fuel discrimination?</li>
<li>Must we err on the side of precaution in the midst of a novel pandemic?</li>
<li>Is ethical evaluation a luxury at a time like this?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://jonathanpughethics.wordpress.com/">Jonathan Pugh&#8217;s Homepage</a></li>
<li><a href="https://sites.google.com/view/tomdouglas">Tom Douglas&#8217;s Homepage</a></li>
<li>&#8216;<a href="http://blog.practicalethics.ox.ac.uk/2020/03/pandemic-ethics-infectious-pathogen-control-measures-and-moral-philosophy/">Pandemic Ethics: Infectious Pathogen Control Measures and Moral Philosophy&#8217;</a> by Jonathan and Tom</li>
<li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5312796/#EN0021">&#8216;Justifications for Non-Consensual Medical Intervention: From Infectious Disease Control to Criminal Rehabilitation</a>&#8216; by Jonathan and Tom</li>
<li>&#8216;<a href="https://link.springer.com/article/10.1007/s40592-019-00103-y">Infection Control for Third-Party Benefit: Lessons from Criminal Justice</a>&#8216; by Tom</li>
<li><a href="https://www.statnews.com/2020/03/20/understanding-what-works-how-some-countries-are-beating-back-the-coronavirus/">How Different Asian Countries Responded to COVID 19</a></li>
</ul>
<p>&nbsp;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/03/25/71-covid-19-and-the-ethics-of-infectious-disease-control/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="108989253" type="audio/mpeg" url="http://ia801407.us.archive.org/8/items/tomdouglasandjonathanpugh2503202022.40/Tom%20Douglas%20and%20Jonathan%20Pugh%20-%2025%3A03%3A2020%2C%2022.40.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2771</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>As nearly half the world’s population is now under some form of quarantine or lockdown, it seems like an apt time to consider the ethics of infectious disease control measures of this sort. In this episode, I chat to Jonathan Pugh and Tom Douglas, both of whom are Senior Research Fellows at the Uehiro Centre … More 71 – COVID 19 and the Ethics of Infectious Disease Control</itunes:summary>
<googleplay:description>As nearly half the world’s population is now under some form of quarantine or lockdown, it seems like an apt time to consider the ethics of infectious disease control measures of this sort. In this episode, I chat to Jonathan Pugh and Tom Douglas, both of whom are Senior Research Fellows at the Uehiro Centre … More 71 – COVID 19 and the Ethics of Infectious Disease Control</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2020/03/mercado_de_mariscos_de_wuhan_cerrado_tras_detectarse_ahi_por_primera_vez_el_nuevo_coronavirus_.jpg">
			<media:title type="html">Mercado_de_mariscos_de_Wuhan_cerrado_tras_detectarse_ahi_por_primera_vez_el_Nuevo_Coronavirus_</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>As nearly half the world&amp;#8217;s population is now under some form of quarantine or lockdown, it seems like an apt time to consider the ethics of infectious disease control measures of this sort. In this episode, I chat to Jonathan Pugh and Tom Douglas, both of whom are Senior Research Fellows at the Uehiro Centre &amp;#8230; More 71 &amp;#8211; COVID 19 and the Ethics of Infectious Disease&amp;#160;Control</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>70 – Ethics in the time of Corona</title>
		<link>https://algocracy.wordpress.com/2020/03/17/70-ethics-in-the-time-of-corona/</link>
					<comments>https://algocracy.wordpress.com/2020/03/17/70-ethics-in-the-time-of-corona/#respond</comments>
		
		
		<pubDate>Tue, 17 Mar 2020 22:59:19 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2763</guid>

					<description><![CDATA[Like almost everyone else, I have been obsessing over the novel coronavirus pandemic for the past few months. Given the dramatic escalation in the pandemic in the past week, and the tricky ethical questions it raises for everyone, I thought it was about time to do an episode about it. So I reached out to &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/03/17/70-ethics-in-the-time-of-corona/">More <span class="screen-reader-text">70 &#8211; Ethics in the time of&#160;Corona</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2767" data-permalink="https://algocracy.wordpress.com/2020/03/17/70-ethics-in-the-time-of-corona/coronavirus_3-696x477/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2020/03/coronavirus_3-696x477-1.jpg" data-orig-size="696,477" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Coronavirus_3-696&amp;#215;477" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2020/03/coronavirus_3-696x477-1.jpg?w=696" class="alignnone size-full wp-image-2767" src="https://algocracy.wordpress.com/wp-content/uploads/2020/03/coronavirus_3-696x477-1.jpg" alt="Coronavirus_3-696x477" width="696" height="477" srcset="https://algocracy.wordpress.com/wp-content/uploads/2020/03/coronavirus_3-696x477-1.jpg 696w, https://algocracy.wordpress.com/wp-content/uploads/2020/03/coronavirus_3-696x477-1.jpg?w=150&amp;h=103 150w, https://algocracy.wordpress.com/wp-content/uploads/2020/03/coronavirus_3-696x477-1.jpg?w=300&amp;h=206 300w" sizes="(max-width: 696px) 100vw, 696px" /></p>
<p>Like almost everyone else, I have been obsessing over the novel coronavirus pandemic for the past few months. Given the dramatic escalation in the pandemic in the past week, and the tricky ethical questions it raises for everyone, I thought it was about time to do an episode about it. So I reached out to people on Twitter and Jeff Sebo kindly volunteered himself to join me for a conversation. Jeff is a Clinical Assistant Professor of Environmental Studies, Affiliated Professor of Bioethics, Medical Ethics, and Philosophy, and Director of the Animal Studies M.A. Program at New York University. Jeff’s research focuses on bioethics, animal ethics, and environmental ethics. <span style="color:var(--color-text);">This episode was put together in a hurry but I think it covers a lot of important ground.</span><span style="color:var(--color-text);"> I hope you find it informative and useful. Be safe!</span></p>
<p>You can download the episode <a href="https://ia601502.us.archive.org/28/items/covid19ethics1703202016.58/COVID%2019%20ETHICS%20-%2017%3A03%3A2020%2C%2016.58.mp3">here</a> or listen below. You can also subscribe to the podcast on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="https://open.spotify.com/show/2WSuOPqOUR4pJnYXigG2zW">Spotify</a>, <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and many other <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/covid19ethics1703202016.58" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<p> </p>
<h3>Show Notes</h3>
<p>Topics covered include:</p>
<ul>
<li>Individual duties and responsibilities to stop the spread</li>
<li>Medical ethics and medical triage</li>
<li>Balancing short-term versus long-term interests</li>
<li>Health versus well-being and other goods</li>
<li>State responsibilities and the social safety net</li>
<li>The duties of politicians and public officials</li>
<li>The risk of authoritarianism and the erosion of democratic values</li>
<li>Global justice and racism/xenophobia</li>
<li>Our duties to frontline workers and vulnerable members of society</li>
<li>Animal ethics and the risks of industrial agriculture</li>
<li>The ethical upside of the pandemic: will this lead to more solidarity and sustainability?</li>
<li>Pandemics and global catastrophic risks</li>
<li>What should we be doing right now?</li>
</ul>
<p> </p>
<h3>Some Relevant Links</h3>
<ul>
<li><a href="https://jeffsebo.net/">Jeff&#8217;s webpage</a></li>
<li><a href="https://graphics.reuters.com/CHINA-HEALTH-SOUTHKOREA-CLUSTERS/0100B5G33SB/index.html">Patient 31 in South Korea</a></li>
<li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6267229/">The Duty to Vaccinate and collective action problems</a></li>
<li><a href="http://www.siaarti.it/SiteAssets/News/COVID19%20-%20documenti%20SIAARTI/SIAARTI%20-%20Covid19%20-%20Raccomandazioni%20di%20etica%20clinica.pdf">Italian medical ethics recommendations</a></li>
<li><a href="https://philosophicaldisquisitions.blogspot.com/2020/03/covid-19-and-impossibility-of-morality.html">COVID 19 and the Impossibility of Morality</a></li>
<li><a href="https://unherd.com/2020/03/the-scientific-case-against-herd-immunity/">The problem with the UK government&#8217;s (former) &#8216;herd immunity&#8217; approach</a></li>
<li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2862342/">A history of the Spanish Flu</a></li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/03/17/70-ethics-in-the-time-of-corona/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="154474289" type="audio/mpeg" url="http://ia601502.us.archive.org/28/items/covid19ethics1703202016.58/COVID%2019%20ETHICS%20-%2017%3A03%3A2020%2C%2016.58.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2763</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>Like almost everyone else, I have been obsessing over the novel coronavirus pandemic for the past few months. Given the dramatic escalation in the pandemic in the past week, and the tricky ethical questions it raises for everyone, I thought it was about time to do an episode about it. So I reached out to … More 70 – Ethics in the time of Corona</itunes:summary>
<googleplay:description>Like almost everyone else, I have been obsessing over the novel coronavirus pandemic for the past few months. Given the dramatic escalation in the pandemic in the past week, and the tricky ethical questions it raises for everyone, I thought it was about time to do an episode about it. So I reached out to … More 70 – Ethics in the time of Corona</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2020/03/coronavirus_3-696x477-1.jpg">
			<media:title type="html">Coronavirus_3-696x477</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>Like almost everyone else, I have been obsessing over the novel coronavirus pandemic for the past few months. Given the dramatic escalation in the pandemic in the past week, and the tricky ethical questions it raises for everyone, I thought it was about time to do an episode about it. So I reached out to &amp;#8230; More 70 &amp;#8211; Ethics in the time of&amp;#160;Corona</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>69 – Wood on Sustainable Superabundance</title>
		<link>https://algocracy.wordpress.com/2020/02/24/69-wood-on-sustainable-superabundance/</link>
					<comments>https://algocracy.wordpress.com/2020/02/24/69-wood-on-sustainable-superabundance/#respond</comments>
		
		
		<pubDate>Mon, 24 Feb 2020 18:40:41 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2758</guid>

					<description><![CDATA[In this episode I talk to David Wood. David is currently the chair of the London Futurists group and a full-time futurist speaker, analyst, commentator, and writer. He studied the philosophy of science at Cambridge University. He has a background in designing, architecting, implementing, supporting, and avidly using smart mobile devices. He is the author &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/02/24/69-wood-on-sustainable-superabundance/">More <span class="screen-reader-text">69 &#8211; Wood on Sustainable&#160;Superabundance</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2760" data-permalink="https://algocracy.wordpress.com/2020/02/24/69-wood-on-sustainable-superabundance/david-wood/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2020/02/david-wood.jpg" data-orig-size="512,512" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="David Wood" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2020/02/david-wood.jpg?w=512" class="alignnone  wp-image-2760" src="https://algocracy.wordpress.com/wp-content/uploads/2020/02/david-wood.jpg" alt="David Wood" width="284" height="284" srcset="https://algocracy.wordpress.com/wp-content/uploads/2020/02/david-wood.jpg?w=284&amp;h=284 284w, https://algocracy.wordpress.com/wp-content/uploads/2020/02/david-wood.jpg?w=150&amp;h=150 150w, https://algocracy.wordpress.com/wp-content/uploads/2020/02/david-wood.jpg?w=300&amp;h=300 300w, https://algocracy.wordpress.com/wp-content/uploads/2020/02/david-wood.jpg 512w" sizes="(max-width: 284px) 100vw, 284px" /></p>
<p>In this episode I talk to David Wood. David is currently the chair of the London Futurists group and a full-time futurist speaker, analyst, commentator, and writer. He studied the philosophy of science at Cambridge University. He has a background in designing, architecting, implementing, supporting, and avidly using smart mobile devices. He is the author or lead editor of nine books including, &#8220;RAFT 2035&#8221;, &#8220;The Abolition of Aging&#8221;, &#8220;Transcending Politics&#8221;, and &#8220;Sustainable Superabundance&#8221;. We chat about the last book on this list &#8212; Sustainable Superabundance &#8212; and its case for an optimistic future.</p>
<p>You can download the episode <a href="https://ia801501.us.archive.org/32/items/davidwood2402202018.19/David%20Wood%20-%2024%3A02%3A2020%2C%2018.19.mp3">here</a> or listen below. You can also subscribe on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a>, <a href="https://open.spotify.com/show/2WSuOPqOUR4pJnYXigG2zW">Spotify</a> and other podcasting services (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/davidwood2402202018.19" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:40 &#8211; Who are the London Futurists? What do they do?</li>
<li>3:34 &#8211; Why did David write <em>Sustainable Superabundance</em>?</li>
<li>7:22 &#8211; What is sustainable superabundance?</li>
<li>11:05 &#8211; Seven spheres of flourishing and seven types of superabundance?</li>
<li>16:16 &#8211; Why is David a transhumanist?</li>
<li>20:20 &#8211; Dealing with two criticisms of transhumanism: (i) isn&#8217;t it naive and pollyannaish? (ii) isn&#8217;t it elitist, inegalitarian and dangerous?</li>
<li>30:00 &#8211; Key principles of transhumanism</li>
<li>34:52 &#8211; How will we address energy needs of the future?</li>
<li>40:35 &#8211; How optimistic can we really be about the future of energy?</li>
<li>46:20 &#8211; Dealing with pessimism about food production?</li>
<li>52:48 &#8211; Are we heading for another AI winter?</li>
<li>1:01:08 &#8211; The politics of superabundance &#8211; what needs to change?</li>
</ul>
<p>&nbsp;</p>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://twitter.com/dw2">David Wood on Twitter</a></li>
<li><a href="https://londonfuturists.com/">London Futurists website</a></li>
<li><a href="https://www.youtube.com/channel/UCEOIGoSFzsjgrPHdbOMsIAQ">London Futurists Youtube</a></li>
<li><a href="https://transpolitica.org/projects/abundance-manifesto/">Sustainable Superabundance by David</a></li>
<li><a href="https://transpolitica.org/">Other books in the Transpolitica series</a></li>
<li><a href="https://www.amazon.com/Be-Machine-Adventures-Utopians-Futurists/dp/0385540418">To be a machine by Mark O&#8217;Connell</a></li>
<li><a href="https://algocracy.wordpress.com/2016/05/03/episode-2-james-hughes-on-the-transhumanist-political-project/">Previous episode with James Hughes about techno-progressive transhumanism</a></li>
<li><a href="https://algocracy.wordpress.com/2016/10/06/episode-12-rick-searle-on-the-dark-side-of-transhumanism/">Previous episode with Rick Searle about the dark side of transhumanism</a></li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/02/24/69-wood-on-sustainable-superabundance/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="97658589" type="audio/mpeg" url="http://ia801501.us.archive.org/32/items/davidwood2402202018.19/David%20Wood%20-%2024%3A02%3A2020%2C%2018.19.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2758</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to David Wood. David is currently the chair of the London Futurists group and a full-time futurist speaker, analyst, commentator, and writer. He studied the philosophy of science at Cambridge University. He has a background in designing, architecting, implementing, supporting, and avidly using smart mobile devices. He is the author … More 69 – Wood on Sustainable Superabundance</itunes:summary>
<googleplay:description>In this episode I talk to David Wood. David is currently the chair of the London Futurists group and a full-time futurist speaker, analyst, commentator, and writer. He studied the philosophy of science at Cambridge University. He has a background in designing, architecting, implementing, supporting, and avidly using smart mobile devices. He is the author … More 69 – Wood on Sustainable Superabundance</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2020/02/david-wood.jpg">
			<media:title type="html">David Wood</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to David Wood. David is currently the chair of the London Futurists group and a full-time futurist speaker, analyst, commentator, and writer. He studied the philosophy of science at Cambridge University. He has a background in designing, architecting, implementing, supporting, and avidly using smart mobile devices. He is the author &amp;#8230; More 69 &amp;#8211; Wood on Sustainable&amp;#160;Superabundance</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>68 – Earp on the Ethics of Love Drugs</title>
		<link>https://algocracy.wordpress.com/2020/02/06/68-earp-on-the-ethics-of-love-drugs/</link>
					<comments>https://algocracy.wordpress.com/2020/02/06/68-earp-on-the-ethics-of-love-drugs/#respond</comments>
		
		
		<pubDate>Thu, 06 Feb 2020 11:00:46 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2752</guid>

					<description><![CDATA[In this episode I talk (again) to Brian Earp. Brian is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center, and a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. Brian has diverse research interests in ethics, psychology, and the &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2020/02/06/68-earp-on-the-ethics-of-love-drugs/">More <span class="screen-reader-text">68 &#8211; Earp on the Ethics of Love&#160;Drugs</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2626" data-permalink="https://algocracy.wordpress.com/2018/07/25/episode-42-earp-on-psychedelics-and-moral-enhancement/brian-earp/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg" data-orig-size="1142,853" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;2.8&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;Canon EOS 5D Mark III&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1421458316&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;70&quot;,&quot;iso&quot;:&quot;800&quot;,&quot;shutter_speed&quot;:&quot;0.004&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="Brian Earp" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg?w=748" class="alignnone  wp-image-2626" src="https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg" alt="Brian Earp" width="470" height="351" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg?w=470&amp;h=351 470w, https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg?w=940&amp;h=702 940w, https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg?w=150&amp;h=112 150w, https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg?w=300&amp;h=224 300w, https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg?w=768&amp;h=574 768w" sizes="(max-width: 470px) 100vw, 470px" /></p>
<p>In this episode I talk (again) to Brian Earp. Brian is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center, and a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. Brian has diverse research interests in ethics, psychology, and the philosophy of science. His research has been covered in Nature, Popular Science, The Chronicle of Higher Education, The Atlantic, New Scientist, and other major outlets. We talk about his latest book, co-authored with Julian Savulescu, on love drugs.</p>
<p>You can listen to the episode below or download it <a href="https://ia601506.us.archive.org/13/items/brianearpmaster0602202010.35/Brian%20Earp%20Master%20-%2006%3A02%3A2020%2C%2010.35.mp3">here</a>. You can also subscribe to the podcast on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a>, <a href="https://open.spotify.com/show/2WSuOPqOUR4pJnYXigG2zW">Spotify</a> and other leading <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/brianearpmaster0602202010.35" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<p>&nbsp;</p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>2:17 &#8211; What is love? (Baby don&#8217;t hurt me) What is a love drug?</li>
<li>7:30 &#8211; What are the biological underpinnings of love?</li>
<li>10:00 &#8211; How constraining is the biological foundation to love?</li>
<li>13:45 &#8211; So we&#8217;re not natural born monogamists or polyamorists?</li>
<li>17:48 &#8211; Examples of actual love drugs</li>
<li>23:32 &#8211; MDMA in couples therapy</li>
<li>27:55 &#8211; The situational ethics of love drugs</li>
<li>33:25 &#8211; The non-specific nature of love drugs</li>
<li>39:00 &#8211; The basic case in favour of love drugs</li>
<li>40:48 &#8211; The ethics of anti-love drugs</li>
<li>44:00 &#8211; The ethics of conversion therapy</li>
<li>48:15 &#8211; Individuals vs systemic change</li>
<li>50:20 &#8211; Do love drugs undermine autonomy or authenticity?</li>
<li>54:20 &#8211; The Vice of In-Principlism</li>
<li>56:30 &#8211; The future of love drugs</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://oxford.academia.edu/BrianDEarp">Brian&#8217;s Academia.edu page</a> (freely accessible papers)</li>
<li><a href="https://www.researchgate.net/profile/Brian_Earp">Brian&#8217;s Researchgate page</a> (freely accessible papers)</li>
<li><a href="https://www.youtube.com/watch?v=UuuTOpZxwRk">Brian asking Sam Harris a question</a></li>
<li>The book: <em><a href="https://www.sup.org/books/title/?id=27130">Love Drugs</a></em> or <em><a href="https://manchesteruniversitypress.co.uk/9781526145413/">Love is the Drug</a></em></li>
<li>&#8216;<a href="https://www.academia.edu/38747629/Love_and_enhancement_technology">Love and enhancement technology</a>&#8217; by Brian Earp</li>
<li>&#8216;<a href="https://www.academia.edu/4703716/The_Vice_of_In-Principlism_and_the_Harmfulness_of_Love">The Vice of In-principlism and the Harmfulness of Love&#8217;</a> by me</li>
</ul>
<p>&nbsp;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2020/02/06/68-earp-on-the-ethics-of-love-drugs/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="85639545" type="audio/mpeg" url="http://ia601506.us.archive.org/13/items/brianearpmaster0602202010.35/Brian%20Earp%20Master%20-%2006%3A02%3A2020%2C%2010.35.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2752</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk (again) to Brian Earp. Brian is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center, and a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. Brian has diverse research interests in ethics, psychology, and the … More 68 – Earp on the Ethics of Love Drugs</itunes:summary>
<googleplay:description>In this episode I talk (again) to Brian Earp. Brian is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center, and a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. Brian has diverse research interests in ethics, psychology, and the … More 68 – Earp on the Ethics of Love Drugs</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg">
			<media:title type="html">Brian Earp</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk (again) to Brian Earp. Brian is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center, and a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. Brian has diverse research interests in ethics, psychology, and the &amp;#8230; More 68 &amp;#8211; Earp on the Ethics of Love&amp;#160;Drugs</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>67 – Rini on Deepfakes and the Epistemic Backstop</title>
		<link>https://algocracy.wordpress.com/2019/12/17/67-rini-on-deepfakes-and-the-epistemic-backstop/</link>
					<comments>https://algocracy.wordpress.com/2019/12/17/67-rini-on-deepfakes-and-the-epistemic-backstop/#respond</comments>
		
		
		<pubDate>Tue, 17 Dec 2019 15:07:53 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2746</guid>

					<description><![CDATA[In this episode I talk to Dr Regina Rini. Dr Rini currently teaches in the Philosophy Department at York University, Toronto where she holds the Canada Research Chair in Philosophy of Moral and Social Cognition. She has a PhD from NYU and before coming to York in 2017 was an Assistant Professor / Faculty Fellow &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/12/17/67-rini-on-deepfakes-and-the-epistemic-backstop/">More <span class="screen-reader-text">67 &#8211; Rini on Deepfakes and the Epistemic&#160;Backstop</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2749" data-permalink="https://algocracy.wordpress.com/2019/12/17/67-rini-on-deepfakes-and-the-epistemic-backstop/reginarini/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/12/reginarini.jpg" data-orig-size="2061,2575" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;4.6&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;DMC-TZ5&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1260109261&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;12.2&quot;,&quot;iso&quot;:&quot;400&quot;,&quot;shutter_speed&quot;:&quot;0.04&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="reginarini" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/12/reginarini.jpg?w=748" class="alignnone  wp-image-2749" src="https://algocracy.wordpress.com/wp-content/uploads/2019/12/reginarini.jpg" alt="reginarini" width="276" height="345" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/12/reginarini.jpg?w=276&amp;h=345 276w, https://algocracy.wordpress.com/wp-content/uploads/2019/12/reginarini.jpg?w=552&amp;h=690 552w, https://algocracy.wordpress.com/wp-content/uploads/2019/12/reginarini.jpg?w=120&amp;h=150 120w, https://algocracy.wordpress.com/wp-content/uploads/2019/12/reginarini.jpg?w=240&amp;h=300 240w" sizes="(max-width: 276px) 100vw, 276px" /></p>
<p>In this episode I talk to Dr Regina Rini. Dr Rini currently teaches in the Philosophy Department at York University, Toronto where she holds the Canada Research Chair in Philosophy of Moral and Social Cognition. She has a PhD from NYU and before coming to York in 2017 was an Assistant Professor / Faculty Fellow at the NYU Center for Bioethics, a postdoctoral research fellow in philosophy at Oxford University and a junior research fellow of Jesus College Oxford. We talk about the political and epistemological consequences of deepfakes. This is a fascinating and timely conversation.</p>
<p>You can download this episode <a href="https://ia601501.us.archive.org/19/items/reginarinimaster1312201921.22/Regina%20Rini%20Master%20-%2013%3A12%3A2019%2C%2021.22.mp3">here</a> or listen below. You can also subscribe to the podcast on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="https://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a variety of <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">other podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/reginarinimaster1312201921.22" width="640" height="140" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>3:20 &#8211; What are deepfakes?</li>
<li>7:35 &#8211; What is the academic justification for creating deepfakes (if any)?</li>
<li>11:35 &#8211; The different uses of deepfakes: Porn versus Politics</li>
<li>16:00 &#8211; The epistemic backstop and the role of audiovisual recordings</li>
<li>22:50 &#8211; Two ways that recordings regulate our testimonial practices</li>
<li>26:00 &#8211; But recordings aren&#8217;t a window onto the truth, are they?</li>
<li>34:34 &#8211; Is the Golden Age of recordings over?</li>
<li>39:36 &#8211; Will the rise of deepfakes lead to the rise of epistemic elites?</li>
<li>44:32 &#8211; How will deepfakes fuel political partisanship?</li>
<li>50:28 &#8211; Deepfakes and the end of public reason</li>
<li>54:15 &#8211; Is there something particularly disruptive about deepfakes?</li>
<li>58:25 &#8211; What can be done to address the problem?</li>
</ul>
<p> </p>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://reginarini.net/">Regina&#8217;s Homepage</a></li>
<li><a href="https://philpeople.org/profiles/regina-rini">Regina&#8217;s Philpapers Page</a></li>
<li>&#8220;<a href="https://philpapers.org/archive/RINDAT.pdf">Deepfakes and the Epistemic Backstop</a>&#8221; by Regina</li>
<li><a href="https://philpapers.org/rec/RINFNA">&#8220;Fake News and Partisan Epistemology&#8221;</a> by Regina</li>
<li><a href="https://www.youtube.com/watch?v=EkfnjAeHFAk">Jeremy Corbyn and Boris Johnson Deepfake Video</a></li>
<li>&#8220;<a href="https://www.wired.com/story/opinion-californias-anti-deepfake-law-is-far-too-feeble/#">California’s Anti-Deepfake Law Is Far Too Feeble</a>&#8221; Op-Ed in Wired</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/12/17/67-rini-on-deepfakes-and-the-epistemic-backstop/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="93449949" type="audio/mpeg" url="http://ia601501.us.archive.org/19/items/reginarinimaster1312201921.22/Regina%20Rini%20Master%20-%2013%3A12%3A2019%2C%2021.22.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2746</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Dr Regina Rini. Dr Rini currently teaches in the Philosophy Department at York University, Toronto where she holds the Canada Research Chair in Philosophy of Moral and Social Cognition. She has a PhD from NYU and before coming to York in 2017 was an Assistant Professor / Faculty Fellow … More 67 – Rini on Deepfakes and the Epistemic Backstop</itunes:summary>
<googleplay:description>In this episode I talk to Dr Regina Rini. Dr Rini currently teaches in the Philosophy Department at York University, Toronto where she holds the Canada Research Chair in Philosophy of Moral and Social Cognition. She has a PhD from NYU and before coming to York in 2017 was an Assistant Professor / Faculty Fellow … More 67 – Rini on Deepfakes and the Epistemic Backstop</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/12/reginarini.jpg">
			<media:title type="html">reginarini</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Dr Regina Rini. Dr Rini currently teaches in the Philosophy Department at York University, Toronto where she holds the Canada Research Chair in Philosophy of Moral and Social Cognition. She has a PhD from NYU and before coming to York in 2017 was an Assistant Professor / Faculty Fellow &amp;#8230; More 67 &amp;#8211; Rini on Deepfakes and the Epistemic&amp;#160;Backstop</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>66 – Wong on Confucianism, Robots and Moral Deskilling</title>
		<link>https://algocracy.wordpress.com/2019/12/06/66-wong-on-confucianism-robots-and-moral-deskillling/</link>
					<comments>https://algocracy.wordpress.com/2019/12/06/66-wong-on-confucianism-robots-and-moral-deskillling/#respond</comments>
		
		
		<pubDate>Fri, 06 Dec 2019 13:07:32 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2741</guid>

					<description><![CDATA[In this episode I talk to Dr Pak-Hang Wong. Pak is a philosopher of technology and works on ethical and political issues of emerging technologies. He is currently a research associate at the Universität Hamburg. He received his PhD in Philosophy from the University of Twente in 2012, and then held academic positions in Oxford &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/12/06/66-wong-on-confucianism-robots-and-moral-deskillling/">More <span class="screen-reader-text">66 &#8211; Wong on Confucianism, Robots and Moral&#160;Deskilling</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2744" data-permalink="https://algocracy.wordpress.com/2019/12/06/66-wong-on-confucianism-robots-and-moral-deskillling/pak-hang-wong/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/12/pak-hang-wong.jpg" data-orig-size="400,400" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Pak-Hang Wong" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/12/pak-hang-wong.jpg?w=400" class="alignnone  wp-image-2744" src="https://algocracy.wordpress.com/wp-content/uploads/2019/12/pak-hang-wong.jpg" alt="Pak-Hang Wong" width="309" height="309" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/12/pak-hang-wong.jpg?w=309&amp;h=309 309w, https://algocracy.wordpress.com/wp-content/uploads/2019/12/pak-hang-wong.jpg?w=150&amp;h=150 150w, https://algocracy.wordpress.com/wp-content/uploads/2019/12/pak-hang-wong.jpg?w=300&amp;h=300 300w, https://algocracy.wordpress.com/wp-content/uploads/2019/12/pak-hang-wong.jpg 400w" sizes="(max-width: 309px) 100vw, 309px" /></p>
<p>In this episode I talk to Dr Pak-Hang Wong. Pak is a philosopher of technology and works on ethical and political issues of emerging technologies. He is currently a research associate at the Universität Hamburg. He received his PhD in Philosophy from the University of Twente in 2012, and then held academic positions in Oxford and Hong Kong. In 2017, he joined the Research Group for Ethics in Information Technology at the Department of Informatics, Universität Hamburg. We talk about the robotic disruption of morality and how it affects our capacity to develop moral virtues. Pak argues for a distinctive Confucian approach to this topic and so provides something of a masterclass on Confucian virtue ethics in the course of our conversation.</p>
<p>You can download the episode <a href="https://ia801505.us.archive.org/3/items/pakhangwong0412201919.14/Pak-Hang%20Wong%20-%2004%3A12%3A2019%2C%2019.14.mp3">here</a> or listen below. You can also subscribe to the podcast on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple</a>, <a href="https://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a range of <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">other podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/pakhangwong0412201919.14" width="640" height="140" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>2:56 &#8211; How do robots disrupt our moral lives?</li>
<li>7:18 &#8211; Robots and Moral Deskilling</li>
<li>12:52 &#8211; The Folk Model of Virtue Acquisition</li>
<li>21:16 &#8211; The Confucian approach to Ethics</li>
<li>24:28 &#8211; Confucianism versus the European approach</li>
<li>29:05 &#8211; Confucianism and situationism</li>
<li>34:00 &#8211; The Importance of Rituals</li>
<li>39:39 &#8211; A Confucian Response to Moral Deskilling</li>
<li>43:37 &#8211; Criticisms (moral silencing)</li>
<li>46:48 &#8211; Generalising the Confucian approach</li>
<li>50:00 &#8211; Do we need new Confucian rituals?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://www.inf.uni-hamburg.de/en/inst/ab/eit/team/wong.html">Pak&#8217;s homepage</a> at the University of Hamburg</li>
<li><a href="https://philpeople.org/profiles/pak-hang-wong">Pak&#8217;s Philpeople Profile</a></li>
<li>&#8220;<a href="https://philpapers.org/rec/WONRAM">Rituals and Machines: A Confucian Response to Technology Driven Moral Deskilling</a>&#8221; by Pak</li>
<li>&#8220;<a href="https://philpapers.org/rec/WONRIF">Responsible Innovation for Decent Nonliberal Peoples: A Dilemma?</a>&#8221; by Pak</li>
<li>&#8220;<a href="https://philpapers.org/rec/WONCTG">Consenting to Geoengineering</a>&#8221; by Pak</li>
<li><a href="https://philosophicaldisquisitions.blogspot.com/2018/09/episode-45-vallor-on-virtue-ethics-and.html">Episode 45 with Shannon Vallor on Technology and the Virtues</a></li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/12/06/66-wong-on-confucianism-robots-and-moral-deskillling/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="76885599" type="audio/mpeg" url="http://ia801505.us.archive.org/3/items/pakhangwong0412201919.14/Pak-Hang%20Wong%20-%2004%3A12%3A2019%2C%2019.14.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2741</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Dr Pak-Hang Wong. Pak is a philosopher of technology and works on ethical and political issues of emerging technologies. He is currently a research associate at the Universität Hamburg. He received his PhD in Philosophy from the University of Twente in 2012, and then held academic positions in Oxford … More 66 – Wong on Confucianism, Robots and Moral Deskilling</itunes:summary>
<googleplay:description>In this episode I talk to Dr Pak-Hang Wong. Pak is a philosopher of technology and works on ethical and political issues of emerging technologies. He is currently a research associate at the Universität Hamburg. He received his PhD in Philosophy from the University of Twente in 2012, and then held academic positions in Oxford … More 66 – Wong on Confucianism, Robots and Moral Deskilling</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/12/pak-hang-wong.jpg">
			<media:title type="html">Pak-Hang Wong</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Dr Pak-Hang Wong. Pak is a philosopher of technology and works on ethical and political issues of emerging technologies. He is currently a research associate at the Universität Hamburg. He received his PhD in Philosophy from the University of Twente in 2012, and then held academic positions in Oxford &amp;#8230; More 66 &amp;#8211; Wong on Confucianism, Robots and Moral&amp;#160;Deskilling</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>65 – Vold on How We Can Extend Our Minds With AI</title>
		<link>https://algocracy.wordpress.com/2019/11/22/65-vold-on-how-we-can-extend-our-minds-with-ai/</link>
					<comments>https://algocracy.wordpress.com/2019/11/22/65-vold-on-how-we-can-extend-our-minds-with-ai/#respond</comments>
		
		
		<pubDate>Fri, 22 Nov 2019 10:11:23 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2733</guid>

					<description><![CDATA[In this episode I talk to Dr Karina Vold. Karina is a philosopher of mind, cognition, and artificial intelligence. She works on the ethical and societal impacts of emerging technologies and their effects on human cognition. Dr Vold is currently a postdoctoral Research Associate at the Leverhulme Centre for the Future of Intelligence, a Research &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/11/22/65-vold-on-how-we-can-extend-our-minds-with-ai/">More <span class="screen-reader-text">65 &#8211; Vold on How We Can Extend Our Minds With&#160;AI</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2736" data-permalink="https://algocracy.wordpress.com/2019/11/22/65-vold-on-how-we-can-extend-our-minds-with-ai/karina-vold/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/11/karina-vold.jpeg" data-orig-size="225,225" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Karina Vold" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/11/karina-vold.jpeg?w=225" class="alignnone  wp-image-2736" src="https://algocracy.wordpress.com/wp-content/uploads/2019/11/karina-vold.jpeg" alt="Karina Vold" width="257" height="257" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/11/karina-vold.jpeg 225w, https://algocracy.wordpress.com/wp-content/uploads/2019/11/karina-vold.jpeg?w=150&amp;h=150 150w" sizes="(max-width: 257px) 100vw, 257px" /></p>
<p>In this episode I talk to Dr Karina Vold. Karina is a philosopher of mind, cognition, and artificial intelligence. She works on the ethical and societal impacts of emerging technologies and their effects on human cognition. Dr Vold is currently a postdoctoral Research Associate at the Leverhulme Centre for the Future of Intelligence, a Research Fellow at the Faculty of Philosophy, and a Digital Charter Fellow at the Alan Turing Institute. We talk about the ethics of extended cognition and how it pertains to the use of artificial intelligence. This is a fascinating topic because it addresses one of the oft-overlooked effects of AI on the human mind.</p>
<p>You can download the episode <a href="https://ia601500.us.archive.org/13/items/karianvoldinterview2111201914.10/Karian%20Vold%20Interview%20-%2021%3A11%3A2019%2C%2014.10.mp3">here</a> or listen below. You can also subscribe to the podcast on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple</a>, <a href="https://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a range of <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">other podcasting services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/karianvoldinterview2111201914.10" width="640" height="140" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:55 &#8211; Some examples of AI cognitive extension</li>
<li>13:07 &#8211; Defining cognitive extension</li>
<li>17:25 &#8211; Extended cognition versus extended mind</li>
<li>19:44 &#8211; The Coupling-Constitution Fallacy</li>
<li>21:50 &#8211; Understanding different theories of situated cognition</li>
<li>27:20 &#8211; The Coupling-Constitution Fallacy Redux</li>
<li>30:20 &#8211; What is distinctive about AI-based cognitive extension?</li>
<li>34:20 &#8211; The three/four different ways of thinking about human interactions with AI</li>
<li>40:04 &#8211; Problems with this framework</li>
<li>49:37 &#8211; The Problem of Cognitive Atrophy</li>
<li>53:31 &#8211; The Moral Status of AI Extenders</li>
<li>57:12 &#8211; The Problem of Autonomy and Manipulation</li>
<li>58:55 &#8211; The policy implications of recognising AI cognitive extension</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://www.kkvd.com/">Karina&#8217;s homepage</a></li>
<li><a href="http://lcfi.ac.uk/team/karina-vold/">Karina at the Leverhulme Centre for the Future of Intelligence</a></li>
<li>&#8220;<a href="http://lcfi.ac.uk/media/uploads/files/AIES-19_paper_Vold_Hernandez_Orallo_zaIgfNF.pdf">AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI</a>&#8221; by José Hernández Orallo and Karina Vold</li>
<li>&#8220;<a href="https://www.ingentaconnect.com/content/imp/jcs/2015/00000022/F0020003/art00002">The Parity Argument for Extended Consciousness</a>&#8221; by Karina</li>
<li>&#8220;<a href="https://aeon.co/ideas/are-you-just-inside-your-skin-or-is-your-smartphone-part-of-you">Are ‘you’ just inside your skin or is your smartphone part of you?</a>&#8221; by Karina</li>
<li><a href="https://philosophicaldisquisitions.blogspot.com/2017/11/episode-32-clark-and-palermos-on.html">Episode 32 &#8211; Carter and Palermos on Extended Cognition and Extended Assault</a></li>
<li>&#8220;<a href="http://www.alice.id.tue.nl/references/clark-chalmers-1998.pdf">The Extended Mind&#8221;</a> by Clark and Chalmers</li>
<li><a href="https://philosophicaldisquisitions.blogspot.com/2015/11/theory-and-application-of-extended-mind.html">Theory and Application of the Extended Mind</a> (series by me)</li>
</ul>
<p>&nbsp;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/11/22/65-vold-on-how-we-can-extend-our-minds-with-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="93236163" type="audio/mpeg" url="http://ia601500.us.archive.org/13/items/karianvoldinterview2111201914.10/Karian%20Vold%20Interview%20-%2021%3A11%3A2019%2C%2014.10.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2733</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Dr Karina Vold. Karina is a philosopher of mind, cognition, and artificial intelligence. She works on the ethical and societal impacts of emerging technologies and their effects on human cognition. Dr Vold is currently a postdoctoral Research Associate at the Leverhulme Centre for the Future of Intelligence, a Research … More 65 – Vold on How We Can Extend Our Minds With AI</itunes:summary>
<googleplay:description>In this episode I talk to Dr Karina Vold. Karina is a philosopher of mind, cognition, and artificial intelligence. She works on the ethical and societal impacts of emerging technologies and their effects on human cognition. Dr Vold is currently a postdoctoral Research Associate at the Leverhulme Centre for the Future of Intelligence, a Research … More 65 – Vold on How We Can Extend Our Minds With AI</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/11/karina-vold.jpeg">
			<media:title type="html">Karina Vold</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Dr Karina Vold. Karina is a philosopher of mind, cognition, and artificial intelligence. She works on the ethical and societal impacts of emerging technologies and their effects on human cognition. Dr Vold is currently a postdoctoral Research Associate at the Leverhulme Centre for the Future of Intelligence, a Research &amp;#8230; More 65 &amp;#8211; Vold on How We Can Extend Our Minds With&amp;#160;AI</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>#64 – Munthe on the Precautionary Principle and Existential Risk</title>
		<link>https://algocracy.wordpress.com/2019/09/19/64-munthe-on-the-precautionary-principle-and-existential-risk/</link>
					<comments>https://algocracy.wordpress.com/2019/09/19/64-munthe-on-the-precautionary-principle-and-existential-risk/#respond</comments>
		
		
		<pubDate>Thu, 19 Sep 2019 15:43:16 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2728</guid>

					<description><![CDATA[In this episode I talk to Christian Munthe. Christian is a Professor of Practical Philosophy at the University of Gothenburg, Sweden. He conducts research and expert consultation on ethics, value and policy issues arising in the intersection of health, science &#38; technology, the environment and society. He is probably best-known for his work on the &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/09/19/64-munthe-on-the-precautionary-principle-and-existential-risk/">More <span class="screen-reader-text">#64 &#8211; Munthe on the Precautionary Principle and Existential&#160;Risk</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2729" data-permalink="https://algocracy.wordpress.com/2019/09/19/64-munthe-on-the-precautionary-principle-and-existential-risk/christian-munthe/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/09/christian-munthe.jpeg" data-orig-size="225,225" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Christian Munthe" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/09/christian-munthe.jpeg?w=225" class="alignnone size-full wp-image-2729" src="https://algocracy.wordpress.com/wp-content/uploads/2019/09/christian-munthe.jpeg" alt="Christian Munthe" width="225" height="225" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/09/christian-munthe.jpeg 225w, https://algocracy.wordpress.com/wp-content/uploads/2019/09/christian-munthe.jpeg?w=150&amp;h=150 150w" sizes="(max-width: 225px) 100vw, 225px" /></p>
<p>In this episode I talk to Christian Munthe. Christian is a Professor of Practical Philosophy at the University of Gothenburg, Sweden. He conducts research and expert consultation on ethics, value and policy issues arising in the intersection of health, science &amp; technology, the environment and society. He is probably best-known for his work on the precautionary principle and its uses in ethical and policy debates. This was the central topic of his 2011 book <em>The Price of Precaution and the Ethics of Risk</em>. We talk about the problems with the practical application of the precautionary principle and how they apply to the debate about existential risk.</p>
<p>You can download the episode <a href="https://ia601507.us.archive.org/34/items/christianmunthe1909201913.58/Christian%20Munthe%20-%2019%3A09%3A2019%2C%2013.58.mp3">here</a> or listen below. You can also subscribe on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a variety of other podcasting services (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/christianmunthe1909201913.58" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:35 &#8211; What is the precautionary principle? Where did it come from?</li>
<li>6:08 &#8211; The key elements of the precautionary principle</li>
<li>9:35 &#8211; Precaution vs. Cost Benefit Analysis</li>
<li>15:40 &#8211; The Problem of the Knowledge Gap in Existential Risk</li>
<li>21:52 &#8211; How do we fill the knowledge gap?</li>
<li>27:04 &#8211; Why can&#8217;t we fill the knowledge gap in the existential risk debate?</li>
<li>30:12 &#8211; Understanding the Black Hole Challenge</li>
<li>35:22 &#8211; Is it a black hole or total decisional paralysis?</li>
<li>39:14 &#8211; Why does precautionary reasoning have a &#8216;price&#8217;?</li>
<li>44:18 &#8211; Can we develop a normative theory of precautionary reasoning? Is there such a thing as a morally good precautionary reasoner?</li>
<li>52:20 &#8211; Are there important practical limits to precautionary reasoning?</li>
<li>1:01:38 &#8211; Existential risk and the conservation of value</li>
</ul>
<p>&nbsp;</p>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://www.gu.se/english/about_the_university/staff/?userId=xmuntc">Christian&#8217;s Academic Homepage</a></li>
<li><a href="https://twitter.com/christianmunthe">Christian&#8217;s Twitter account</a></li>
<li>&#8220;<a href="https://www.researchgate.net/publication/308748395_The_Black_Hole_Challenge_Precaution_Existential_Risks_and_the_Problem_of_Knowledge_Gaps">The Black Hole Challenge: Precaution, Existential Risks and the Problem of Knowledge Gaps&#8221;</a> by Christian</li>
<li><a href="https://www.springer.com/gp/book/9789400713291"><em>The Price of Precaution and the Ethics of Risk</em></a> by Christian</li>
<li>Hans Jonas&#8217;s <em><a href="https://www.press.uchicago.edu/ucp/books/book/chicago/I/bo5953283.html">The Imperative of Responsibility</a></em></li>
<li><a href="http://www.gdrc.org/u-gov/precaution-7.html">The Precautionary Approach</a> from the Rio Declaration</li>
<li><a href="https://philosophicaldisquisitions.blogspot.com/2019/07/62-haggstrom-on-ai-motivations-and-risk.html">Episode 62 with Olle Häggström</a></li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/09/19/64-munthe-on-the-precautionary-principle-and-existential-risk/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="96942625" type="audio/mpeg" url="http://ia601507.us.archive.org/34/items/christianmunthe1909201913.58/Christian%20Munthe%20-%2019%3A09%3A2019%2C%2013.58.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2728</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Christian Munthe. Christian is a Professor of Practical Philosophy at the University of Gothenburg, Sweden. He conducts research and expert consultation on ethics, value and policy issues arising in the intersection of health, science &amp; technology, the environment and society. He is probably best-known for his work on the … More #64 – Munthe on the Precautionary Principle and Existential Risk</itunes:summary>
<googleplay:description>In this episode I talk to Christian Munthe. Christian is a Professor of Practical Philosophy at the University of Gothenburg, Sweden. He conducts research and expert consultation on ethics, value and policy issues arising in the intersection of health, science &amp; technology, the environment and society. He is probably best-known for his work on the … More #64 – Munthe on the Precautionary Principle and Existential Risk</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/09/christian-munthe.jpeg">
			<media:title type="html">Christian Munthe</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Christian Munthe. Christian is a Professor of Practical Philosophy at the University of Gothenburg, Sweden. He conducts research and expert consultation on ethics, value and policy issues arising in the intersection of health, science &amp;#38; technology, the environment and society. He is probably best-known for his work on the &amp;#8230; More #64 &amp;#8211; Munthe on the Precautionary Principle and Existential&amp;#160;Risk</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>#63 – Reagle on the Ethics of Life Hacking</title>
		<link>https://algocracy.wordpress.com/2019/08/28/63-reagle-on-the-ethics-of-life-hacking/</link>
					<comments>https://algocracy.wordpress.com/2019/08/28/63-reagle-on-the-ethics-of-life-hacking/#respond</comments>
		
		
		<pubDate>Wed, 28 Aug 2019 15:59:32 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2720</guid>

					<description><![CDATA[In this episode I talk to Joseph Reagle. Joseph is an Associate Professor of Communication Studies at Northeastern University and a former fellow (in 1998 and 2010) and faculty associate at the Berkman Klein Center for Internet and Society at Harvard. He is the author of several books and papers about digital media and the &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/08/28/63-reagle-on-the-ethics-of-life-hacking/">More <span class="screen-reader-text">#63 &#8211; Reagle on the Ethics of Life&#160;Hacking</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2723" data-permalink="https://algocracy.wordpress.com/2019/08/28/63-reagle-on-the-ethics-of-life-hacking/joseph-reagle/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/08/joseph-reagle.png" data-orig-size="300,300" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Joseph Reagle" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/08/joseph-reagle.png?w=300" class="alignnone size-full wp-image-2723" src="https://algocracy.wordpress.com/wp-content/uploads/2019/08/joseph-reagle.png" alt="Joseph Reagle" width="300" height="300" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/08/joseph-reagle.png 300w, https://algocracy.wordpress.com/wp-content/uploads/2019/08/joseph-reagle.png?w=150&amp;h=150 150w" sizes="(max-width: 300px) 100vw, 300px" /></p>
<p>In this episode I talk to Joseph Reagle. Joseph is an Associate Professor of Communication Studies at Northeastern University and a former fellow (in 1998 and 2010) and faculty associate at the Berkman Klein Center for Internet and Society at Harvard. He is the author of several books and papers about digital media and the social implications of digital technology. Our conversation focuses on his most recent book: <em>Hacking Life: Systematized Living and its Discontents</em> (MIT Press 2019).</p>
<p>You can download the episode <a href="https://ia801400.us.archive.org/16/items/josephreaglemaster2808201913.29/Joseph%20Reagle%20Master%20-%2028%3A08%3A2019%2C%2013.29.mp3">here</a> or listen below. You can also subscribe on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a variety of other podcasting services (the RSS feed <a href="http://feeds.feedburner.com/philosophicaldiscursions">is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/josephreaglemaster2808201913.29" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:52 &#8211; What is life-hacking? The four features of life-hacking</li>
<li>4:20 &#8211; Life Hacking as Self Help for the 21st Century</li>
<li>7:00 &#8211; How does technology facilitate life hacking?</li>
<li>12:12 &#8211; How can we hack time?</li>
<li>20:00 &#8211; How can we hack motivation?</li>
<li>27:00 &#8211; How can we hack our relationships?</li>
<li>31:00 &#8211; The Problem with Pick-Up Artists</li>
<li>34:10 &#8211; Hacking Health and Meaning</li>
<li>39:12 &#8211; The epistemic problems of self-experimentation</li>
<li>49:05 &#8211; The dangers of metric fixation</li>
<li>54:20 &#8211; The social impact of life-hacking</li>
<li>57:35 &#8211; Is life hacking too individualistic? Should we focus more on systemic problems?</li>
<li>1:03:15 &#8211; Does life hacking encourage a less intuitive and less authentic mode of living?</li>
<li>1:08:40 &#8211; Conclusion (with some further thoughts on inequality)</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://reagle.org/joseph/">Joseph&#8217;s Homepage</a></li>
<li><a href="https://reagle.org/joseph/pelican/">Joseph&#8217;s Blog</a></li>
<li><a href="https://mitpress.mit.edu/books/hacking-life">Hacking Life: Systematized Living and Its Discontents</a> (including open access <a href="https://hackinglife.mitpress.mit.edu/">HTML version</a>)</li>
<li><a href="https://lifehacker.com/">The Lifehacker Website</a></li>
<li><a href="https://quantifiedself.com/">The Quantified Self Website</a></li>
<li><a href="https://observer.com/2014/04/seth-roberts-final-column-butter-makes-me-smarter/">Seth Roberts&#8217; first and final column: Butter Makes me Smarter</a></li>
<li><a href="https://www.nbcnews.com/business/consumer/couple-pays-each-other-put-kids-bed-n13021">The Couple that Pays Each Other to Put the Kids to Bed</a> (story about the founders of the Beeminder App)</li>
<li>&#8216;<a href="https://philpapers.org/archive/DANTQR.pdf">The Quantified Relationship</a>&#8216; by Danaher, Nyholm and Earp</li>
<li><a href="https://algocracy.wordpress.com/2016/06/27/episode-6-deborah-lupton-on-the-quantified-self/">Episode 6 &#8211; The Quantified Self with Deborah Lupton</a></li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/08/28/63-reagle-on-the-ethics-of-life-hacking/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="104896597" type="audio/mpeg" url="http://ia801400.us.archive.org/16/items/josephreaglemaster2808201913.29/Joseph%20Reagle%20Master%20-%2028%3A08%3A2019%2C%2013.29.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2720</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Joseph Reagle. Joseph is an Associate Professor of Communication Studies at Northeastern University and a former fellow (in 1998 and 2010) and faculty associate at the Berkman Klein Center for Internet and Society at Harvard. He is the author of several books and papers about digital media and the … More #63 – Reagle on the Ethics of Life Hacking</itunes:summary>
<googleplay:description>In this episode I talk to Joseph Reagle. Joseph is an Associate Professor of Communication Studies at Northeastern University and a former fellow (in 1998 and 2010) and faculty associate at the Berkman Klein Center for Internet and Society at Harvard. He is the author of several books and papers about digital media and the … More #63 – Reagle on the Ethics of Life Hacking</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/08/joseph-reagle.png">
			<media:title type="html">Joseph Reagle</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Joseph Reagle. Joseph is an Associate Professor of Communication Studies at Northeastern University and a former fellow (in 1998 and 2010) and faculty associate at the Berkman Klein Center for Internet and Society at Harvard. He is the author of several books and papers about digital media and the &amp;#8230; More #63 &amp;#8211; Reagle on the Ethics of Life&amp;#160;Hacking</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>#62 – Häggström on AI Motivations and Risk Denialism</title>
		<link>https://algocracy.wordpress.com/2019/07/03/62-haggstrom-on-ai-motivations-and-risk-denialism/</link>
					<comments>https://algocracy.wordpress.com/2019/07/03/62-haggstrom-on-ai-motivations-and-risk-denialism/#respond</comments>
		
		
		<pubDate>Wed, 03 Jul 2019 04:48:00 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2715</guid>

					<description><![CDATA[In this episode I talk to Olle Häggström. Olle is a professor of mathematical statistics at Chalmers University of Technology and a member of the Royal Swedish Academy of Sciences (KVA) and of the Royal Swedish Academy of Engineering Sciences (IVA). Olle’s main research is in probability theory and statistical mechanics, but in recent years &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/07/03/62-haggstrom-on-ai-motivations-and-risk-denialism/">More <span class="screen-reader-text">#62 &#8211; Häggström on AI Motivations and Risk&#160;Denialism</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2716" data-permalink="https://algocracy.wordpress.com/2019/07/03/62-haggstrom-on-ai-motivations-and-risk-denialism/olle_ha%cc%88ggstro%cc%88m/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/07/olle_hacc88ggstrocc88m.jpg" data-orig-size="220,301" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Olle_Häggström" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/07/olle_hacc88ggstrocc88m.jpg?w=220" class="alignnone  wp-image-2716" src="https://algocracy.wordpress.com/wp-content/uploads/2019/07/olle_hacc88ggstrocc88m.jpg" alt="Olle_Häggström" width="254" height="348" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/07/olle_hacc88ggstrocc88m.jpg 220w, https://algocracy.wordpress.com/wp-content/uploads/2019/07/olle_hacc88ggstrocc88m.jpg?w=110&amp;h=150 110w" sizes="(max-width: 254px) 100vw, 254px" /></p>
<p>In this episode I talk to Olle Häggström. Olle is a professor of mathematical statistics at Chalmers University of Technology and a member of the Royal Swedish Academy of Sciences (KVA) and of the Royal Swedish Academy of Engineering Sciences (IVA). Olle’s main research is in probability theory and statistical mechanics, but in recent years he has broadened his research interests to focus on applied statistics, philosophy, climate science, artificial intelligence and the social consequences of future technologies. He is the author of <em><a href="https://global.oup.com/academic/product/here-be-dragons-9780198723547?cc=at&amp;lang=en&amp;">Here be Dragons: Science, Technology and the Future of Humanity</a></em> (OUP 2016). We talk about AI motivations, specifically the Omohundro-Bostrom theory of AI motivation and its weaknesses. We also discuss AI risk denialism.</p>
<p>You can download the episode <a href="https://ia601409.us.archive.org/29/items/OlleHaggstrom2406201920.54/Olle%20Haggstrom%20-%2024%3A06%3A2019%2C%2020.54.mp3">here</a> or listen below. You can also subscribe to the podcast on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a variety of other podcasting services (the RSS feed <a href="http://feeds.feedburner.com/philosophicaldiscursions">is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/OlleHaggstrom2406201920.54" width="640" height="140" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>2:02 &#8211; Do we need to define AI?</li>
<li>4:15 &#8211; The Omohundro-Bostrom theory of AI motivation</li>
<li>7:46 &#8211; Key concepts in the Omohundro-Bostrom Theory: Final Goals vs Instrumental Goals</li>
<li>10:50 &#8211; The Orthogonality Thesis</li>
<li>14:47 &#8211; The Instrumental Convergence Thesis</li>
<li>20:16 &#8211; Resource Acquisition as an Instrumental Goal</li>
<li>22:02 &#8211; The importance of goal-content integrity</li>
<li>25:42 &#8211; Deception as an Instrumental Goal</li>
<li>29:17 &#8211; How the doomsaying argument works</li>
<li>31:46 &#8211; Critiquing the theory: the problem of self-referential final goals</li>
<li>36:20 &#8211; The problem of incoherent goals</li>
<li>42:44 &#8211; Does the truth of moral realism undermine the orthogonality thesis?</li>
<li>50:50 &#8211; Problems with the distinction between instrumental goals and final goals</li>
<li>57:52 &#8211; Why do some people deny the problem of AI risk?</li>
<li>1:04:10 &#8211; Strong versus Weak AI Scepticism</li>
<li>1:09:00 &#8211; Is it difficult to be taken seriously on this topic?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="http://haggstrom.blogspot.com/">Olle&#8217;s Blog </a></li>
<li><a href="http://www.math.chalmers.se/~olleh/">Olle&#8217;s webpage at Chalmers University</a></li>
<li>&#8216;<a href="https://www.emeraldinsight.com/doi/abs/10.1108/FS-04-2018-0039?journalCode=fs">Challenges to the Omohundro-Bostrom framework for AI Motivations</a>&#8216; by Olle (highly recommended)</li>
<li>&#8216;<a href="https://nickbostrom.com/superintelligentwill.pdf">The Superintelligent Will</a>&#8216; by Nick Bostrom</li>
<li>&#8216;<a href="https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf">The Basic AI Drives&#8217;</a> by Stephen Omohundro</li>
<li><a href="https://www.youtube.com/watch?v=xryAN1N0RBg">Olle Häggström: Science, Technology, and the Future of Humanity</a> (video)</li>
<li><a href="http://video.itu.dk/video/51311859/forskningens-dogn-kunstig">Olle Häggström and Thore Husveldt debate AI Risk</a> (video)</li>
<li><a href="https://philosophicaldisquisitions.blogspot.com/2014/07/bostrom-on-superintelligence-0-series.html">Summary of Bostrom&#8217;s theory </a>(by me)</li>
<li>&#8216;<a href="https://philpapers.org/archive/DANTEC-2.pdf">Why AI doomsayers are like sceptical theists and why it matters</a>&#8216; by me</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/07/03/62-haggstrom-on-ai-motivations-and-risk-denialism/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="108427516" type="audio/mpeg" url="http://ia601409.us.archive.org/29/items/OlleHaggstrom2406201920.54/Olle%20Haggstrom%20-%2024%3A06%3A2019%2C%2020.54.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2715</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Olle Häggström. Olle is a professor of mathematical statistics at Chalmers University of Technology and a member of the Royal Swedish Academy of Sciences (KVA) and of the Royal Swedish Academy of Engineering Sciences (IVA). Olle’s main research is in probability theory and statistical mechanics, but in recent years … More #62 – Häggström on AI Motivations and Risk Denialism</itunes:summary>
<googleplay:description>In this episode I talk to Olle Häggström. Olle is a professor of mathematical statistics at Chalmers University of Technology and a member of the Royal Swedish Academy of Sciences (KVA) and of the Royal Swedish Academy of Engineering Sciences (IVA). Olle’s main research is in probability theory and statistical mechanics, but in recent years … More #62 – Häggström on AI Motivations and Risk Denialism</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/07/olle_hacc88ggstrocc88m.jpg">
			<media:title type="html">Olle_Häggström</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Olle Häggström. Olle is a professor of mathematical statistics at Chalmers University of Technology and a member of the Royal Swedish Academy of Sciences (KVA) and of the Royal Swedish Academy of Engineering Sciences (IVA). Olle’s main research is in probability theory and statistical mechanics, but in recent years &amp;#8230; More #62 &amp;#8211; Häggström on AI Motivations and Risk&amp;#160;Denialism</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>#61 – Yampolskiy on Machine Consciousness and AI Welfare</title>
		<link>https://algocracy.wordpress.com/2019/06/20/61-yampolskiy-on-machine-consciousness-and-ai-welfare/</link>
					<comments>https://algocracy.wordpress.com/2019/06/20/61-yampolskiy-on-machine-consciousness-and-ai-welfare/#respond</comments>
		
		
		<pubDate>Thu, 20 Jun 2019 12:33:34 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2711</guid>

					<description><![CDATA[In this episode I talk to Roman Yampolskiy. Roman is a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books and papers on AI security &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/06/20/61-yampolskiy-on-machine-consciousness-and-ai-welfare/">More <span class="screen-reader-text">#61 &#8211; Yampolskiy on Machine Consciousness and AI&#160;Welfare</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2712" data-permalink="https://algocracy.wordpress.com/2019/06/20/61-yampolskiy-on-machine-consciousness-and-ai-welfare/roman-yampolskiy/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/06/roman-yampolskiy.jpg" data-orig-size="264,370" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Roman Yampolskiy" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/06/roman-yampolskiy.jpg?w=264" class="alignnone size-full wp-image-2712" src="https://algocracy.wordpress.com/wp-content/uploads/2019/06/roman-yampolskiy.jpg" alt="Roman Yampolskiy" width="264" height="370" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/06/roman-yampolskiy.jpg 264w, https://algocracy.wordpress.com/wp-content/uploads/2019/06/roman-yampolskiy.jpg?w=107&amp;h=150 107w" sizes="(max-width: 264px) 100vw, 264px" /></p>
<p>In this episode I talk to Roman Yampolskiy. Roman is a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books and papers on AI security and ethics, including <em>Artificial Superintelligence: A Futuristic Approach</em>. We talk about how you might test for machine consciousness and the first steps towards a science of AI welfare.</p>
<p>You can listen below or download <a href="https://ia801507.us.archive.org/11/items/RomanYampolskiy2006201910.52/Roman%20Yampolskiy%20-%2020%3A06%3A2019%2C%2010.52.mp3">here</a>. You can also subscribe to the podcast on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple</a>, <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a variety of <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">other podcasting services</a> (the <a href="http://feeds.feedburner.com/philosophicaldiscursions">RSS feed</a> is here).</p>
<p style="text-align:left;"><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/RomanYampolskiy2006201910.52" width="640" height="140" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>2:30 &#8211; Artificial minds versus Artificial Intelligence</li>
<li>6:35 &#8211; Why talk about machine consciousness now when it seems far-fetched?</li>
<li>8:55 &#8211; What is phenomenal consciousness?</li>
<li>11:04 &#8211; Illusions as an insight into phenomenal consciousness</li>
<li>18:22 &#8211; How to create an illusion-based test for machine consciousness</li>
<li>23:58 &#8211; Challenges with operationalising the test</li>
<li>31:42 &#8211; Does AI already have a minimal form of consciousness?</li>
<li>34:08 &#8211; Objections to the proposed test and next steps</li>
<li>37:12 &#8211; Towards a science of AI welfare</li>
<li>40:30 &#8211; How do we currently test for animal and human welfare?</li>
<li>44:10 &#8211; Dealing with the problem of deception</li>
<li>47:00 &#8211; How could we test for welfare in AI?</li>
<li>52:39 &#8211; If an AI can suffer, do we have a duty not to create it?</li>
<li>56:48 &#8211; Do people take these ideas seriously in computer science?</li>
<li>58:08 &#8211; What next?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="http://cecs.louisville.edu/ry/">Roman&#8217;s homepage</a></li>
<li>&#8216;<a href="https://www.researchgate.net/publication/321761318_Detecting_Qualia_in_Natural_and_Artificial_Agents">Detecting Qualia in Natural and Artificial Agents</a>&#8216; by Roman</li>
<li>&#8216;<a href="https://www.researchgate.net/publication/329960580_Towards_AI_Welfare_Science_and_Policies">Towards AI Welfare Science and Policies</a>&#8216; by Soenke Ziesche and Roman Yampolskiy</li>
<li><a href="https://www.iep.utm.edu/hard-con/">The Hard Problem of Consciousness</a></li>
<li><a href="https://list25.com/25-incredible-optical-illusions/">25 famous optical illusions</a></li>
<li><a href="https://www.sciencemag.org/news/2018/04/could-artificial-intelligence-get-depressed-and-have-hallucinations">Could AI get depressed and have hallucinations?</a></li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/06/20/61-yampolskiy-on-machine-consciousness-and-ai-welfare/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="88347294" type="audio/mpeg" url="http://ia801507.us.archive.org/11/items/RomanYampolskiy2006201910.52/Roman%20Yampolskiy%20-%2020%3A06%3A2019%2C%2010.52.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2711</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Roman Yampolskiy. Roman is a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books and papers on AI security … More #61 – Yampolskiy on Machine Consciousness and AI Welfare</itunes:summary>
<googleplay:description>In this episode I talk to Roman Yampolskiy. Roman is a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books and papers on AI security … More #61 – Yampolskiy on Machine Consciousness and AI Welfare</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/06/roman-yampolskiy.jpg">
			<media:title type="html">Roman Yampolskiy</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Roman Yampolskiy. Roman is a Tenured Associate Professor in the department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books and papers on AI security &amp;#8230; More #61 &amp;#8211; Yampolskiy on Machine Consciousness and AI&amp;#160;Welfare</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>#60 – Véliz on How to Improve Online Speech with Pseudonymity</title>
		<link>https://algocracy.wordpress.com/2019/05/20/60-veliz-on-how-to-improve-online-speech-with-pseudonymity/</link>
					<comments>https://algocracy.wordpress.com/2019/05/20/60-veliz-on-how-to-improve-online-speech-with-pseudonymity/#respond</comments>
		
		
		<pubDate>Mon, 20 May 2019 18:37:15 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2707</guid>

					<description><![CDATA[In this episode I talk to Carissa Véliz. Carissa is a Research Fellow at the Uehiro Centre for Practical Ethics and the Wellcome Centre for Ethics and Humanities at the University of Oxford. She works on digital ethics, practical ethics more generally, political philosophy, and public policy. She is also the Director of the research &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/05/20/60-veliz-on-how-to-improve-online-speech-with-pseudonymity/">More <span class="screen-reader-text">#60 &#8211; Véliz on How to Improve Online Speech with&#160;Pseudonymity</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2708" data-permalink="https://algocracy.wordpress.com/2019/05/20/60-veliz-on-how-to-improve-online-speech-with-pseudonymity/carissa-veliz/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/05/carissa-veliz.jpg" data-orig-size="512,512" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Carissa Veliz" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/05/carissa-veliz.jpg?w=512" class="alignnone  wp-image-2708" src="https://algocracy.wordpress.com/wp-content/uploads/2019/05/carissa-veliz.jpg" alt="Carissa Veliz" width="379" height="379" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/05/carissa-veliz.jpg?w=379&amp;h=379 379w, https://algocracy.wordpress.com/wp-content/uploads/2019/05/carissa-veliz.jpg?w=150&amp;h=150 150w, https://algocracy.wordpress.com/wp-content/uploads/2019/05/carissa-veliz.jpg?w=300&amp;h=300 300w, https://algocracy.wordpress.com/wp-content/uploads/2019/05/carissa-veliz.jpg 512w" sizes="(max-width: 379px) 100vw, 379px" /></p>
<p>In this episode I talk to Carissa Véliz. Carissa is a Research Fellow at the Uehiro Centre for Practical Ethics and the Wellcome Centre for Ethics and Humanities at the University of Oxford. She works on digital ethics, practical ethics more generally, political philosophy, and public policy. She is also the Director of the research programme &#8216;Data, Privacy, and the Individual&#8217; at the IE&#8217;s Center for the Governance of Change. We talk about the problems with online speech and how to use pseudonymity to address them.</p>
<p>You can download the episode <a href="https://archive.org/download/CarissaVelizMaster2005201917.56/Carissa%20Veliz%20Master%20-%2020%3A05%3A2019%2C%2017.56.mp3">here</a> or listen below. You can also subscribe on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a>, and a variety of <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">other podcasting services </a>(the RSS feed <a href="http://feeds.feedburner.com/philosophicaldiscursions">is here</a>).</p>
<p style="text-align:left;"><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/CarissaVelizMaster2005201917.56" width="640" height="140" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:25 &#8211; The problems with online speech</li>
<li>4:55 &#8211; Anonymity vs Identifiability</li>
<li>9:10 &#8211; The benefits of anonymous speech</li>
<li>16:12 &#8211; The costs of anonymous speech &#8211; The online Ring of Gyges</li>
<li>23:20 &#8211; How digital platforms mediate speech and make things worse</li>
<li>28:00 &#8211; Is speech more trustworthy when the speaker is identifiable?</li>
<li>30:50 &#8211; Solutions that don&#8217;t work</li>
<li>35:46 &#8211; How pseudonymity could address the problems with online speech</li>
<li>41:15 &#8211; Three forms of pseudonymity and how they should be used</li>
<li>44:00 &#8211; Do we need an organisation to manage online pseudonyms?</li>
<li>49:00 &#8211; Thoughts on the Journal of Controversial Ideas</li>
<li>54:00 &#8211; Will people use pseudonyms to deceive us?</li>
<li>57:30 &#8211; How pseudonyms could address the issues with un-PC speech</li>
<li>1:02:04 &#8211; Should we be optimistic or pessimistic about the future of online speech?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="http://www.carissaveliz.com/">Carissa&#8217;s Webpage</a></li>
<li>&#8220;<a href="https://onlinelibrary.wiley.com/doi/pdf/10.1111/japp.12342">Online Masquerade: Redesigning the Internet for Free Speech Through the Use of Pseudonyms</a>&#8221; by Carissa</li>
<li>&#8220;<a href="https://www.independent.co.uk/student/student-life/technology-gaming/why-you-might-want-to-think-twice-about-surrendering-online-privacy-for-the-sake-of-convenience-a7529401.html">Why you might want to think twice about surrendering online privacy for the sake of convenience</a>&#8221; by Carissa</li>
<li>&#8220;<a href="https://hbr.org/2018/11/what-if-banks-were-the-main-protectors-of-customers-private-data#">What If Banks Were the Main Protectors of Customers’ Private Data?</a>&#8221; by Carissa</li>
<li><em><a href="https://www.amazon.co.uk/Secret-Barrister-Stories-Law-Broken/dp/1509841105/ref=sr_1_1?adgrpid=65254895504&amp;gclid=Cj0KCQjwoInnBRDDARIsANBVyAToPG2ZXoZjkWExTQsyqWNSc1JD7vVtsn2qtiAzTVmL_bAKcWLC7-caAm2uEALw_wcB&amp;hvadid=291369792209&amp;hvdev=c&amp;hvlocphy=1007876&amp;hvnetw=g&amp;hvpos=1t1&amp;hvqmt=b&amp;hvrand=6557032855947417034&amp;hvtargid=kwd-425075866799&amp;hydadcr=8242_1756996&amp;keywords=secret+barrister+book&amp;qid=1558376998&amp;s=gateway&amp;sr=8-1">The Secret Barrister</a></em></li>
<li><em><a href="https://www.amazon.co.uk/Delete-Virtue-Forgetting-Digital-Age/dp/0691150362/ref=sr_1_6?keywords=delete&amp;qid=1558377020&amp;s=gateway&amp;sr=8-6">Delete: The Virtue of Forgetting in the Digital Age</a></em> by Viktor Mayer-Schönberger</li>
<li><a href="https://philosophicaldisquisitions.blogspot.com/2018/11/mills-argument-for-free-speech-guide.html">Mill&#8217;s Argument for Free Speech: A Guide</a></li>
<li>&#8216;<a href="https://www.chronicle.com/article/Here-Comes-The-Journal-of/245068">Here Comes the Journal of Controversial Ideas. Cue the Outcry</a>&#8216; by Bartlett</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/05/20/60-veliz-on-how-to-improve-online-speech-with-pseudonymity/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="91343434" type="audio/mpeg" url="http://archive.org/download/CarissaVelizMaster2005201917.56/Carissa%20Veliz%20Master%20-%2020%3A05%3A2019%2C%2017.56.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2707</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Carissa Véliz. Carissa is a Research Fellow at the Uehiro Centre for Practical Ethics and the Wellcome Centre for Ethics and Humanities at the University of Oxford. She works on digital ethics, practical ethics more generally, political philosophy, and public policy. She is also the Director of the research … More #60 – Véliz on How to Improve Online Speech with Pseudonymity</itunes:summary>
<googleplay:description>In this episode I talk to Carissa Véliz. Carissa is a Research Fellow at the Uehiro Centre for Practical Ethics and the Wellcome Centre for Ethics and Humanities at the University of Oxford. She works on digital ethics, practical ethics more generally, political philosophy, and public policy. She is also the Director of the research … More #60 – Véliz on How to Improve Online Speech with Pseudonymity</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/05/carissa-veliz.jpg">
			<media:title type="html">Carissa Veliz</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Carissa Véliz. Carissa is a Research Fellow at the Uehiro Centre for Practical Ethics and the Wellcome Centre for Ethics and Humanities at the University of Oxford. She works on digital ethics, practical ethics more generally, political philosophy, and public policy. She is also the Director of the research &amp;#8230; More #60 &amp;#8211; Véliz on How to Improve Online Speech with&amp;#160;Pseudonymity</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>#59 – Torres on Existential Risk, Omnicidal Agents and Superintelligence</title>
		<link>https://algocracy.wordpress.com/2019/05/09/59-torres-on-existential-risk-omnicidal-agents-and-superintelligence/</link>
					<comments>https://algocracy.wordpress.com/2019/05/09/59-torres-on-existential-risk-omnicidal-agents-and-superintelligence/#respond</comments>
		
		
		<pubDate>Thu, 09 May 2019 12:27:37 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2704</guid>

					<description><![CDATA[In this episode I talk to Phil Torres. Phil is an author and researcher who primarily focuses on existential risk. He is currently a visiting researcher at the Centre for the Study of Existential Risk at Cambridge University. He has published widely on emerging technologies, terrorism, and existential risks, with articles appearing in the Bulletin &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/05/09/59-torres-on-existential-risk-omnicidal-agents-and-superintelligence/">More <span class="screen-reader-text">#59 &#8211; Torres on Existential Risk, Omnicidal Agents and Superintelligence</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2705" data-permalink="https://algocracy.wordpress.com/2019/05/09/59-torres-on-existential-risk-omnicidal-agents-and-superintelligence/phil-torres/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/05/phil-torres.jpg" data-orig-size="200,200" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Phil Torres" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/05/phil-torres.jpg?w=200" class="alignnone  wp-image-2705" src="https://algocracy.wordpress.com/wp-content/uploads/2019/05/phil-torres.jpg" alt="Phil Torres" width="258" height="258" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/05/phil-torres.jpg 200w, https://algocracy.wordpress.com/wp-content/uploads/2019/05/phil-torres.jpg?w=150&amp;h=150 150w" sizes="(max-width: 258px) 100vw, 258px" /></p>
<p>In this episode I talk to Phil Torres. Phil is an author and researcher who primarily focuses on existential risk. He is currently a visiting researcher at the Centre for the Study of Existential Risk at Cambridge University. He has published widely on emerging technologies, terrorism, and existential risks, with articles appearing in the Bulletin of the Atomic Scientists, Futures, Erkenntnis, Metaphilosophy, Foresight, Journal of Future Studies, and the Journal of Evolution and Technology. He is the author of several books, including most recently Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks. We talk about the problem of apocalyptic terrorists, the proliferation of dual-use technology and the governance problem that arises as a result. This is both a fascinating and potentially terrifying discussion.</p>
<p>You can download the episode <a href="https://ia601404.us.archive.org/14/items/PhilTorres0905201912.37/Phil%20Torres%20%20-%2009%3A05%3A2019%2C%2012.37.mp3">here</a> or listen below. You can also subscribe on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="https://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">variety of other podcasting services</a> (the <a href="http://feeds.feedburner.com/philosophicaldiscursions">RSS feed is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/PhilTorres0905201912.37" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 – Introduction</li>
<li>3:14 – What is existential risk? Why should we care?</li>
<li>8:34 – The four types of agential risk/omnicidal terrorists</li>
<li>17:51 – Are there really omnicidal terror agents?</li>
<li>20:45 – How dual-use technology gives apocalyptic terror agents the means to their desired ends</li>
<li>27:54 – How technological civilisation is uniquely vulnerable to omnicidal agents</li>
<li>32:00 – Why not just stop creating dangerous technologies?</li>
<li>36:47 – Making the case for mass surveillance</li>
<li>41:08 – Why mass surveillance must be asymmetrical</li>
<li>45:02 – Mass surveillance, the problem of false positives and dystopian governance</li>
<li>56:25 – Making the case for benevolent superintelligent governance</li>
<li>1:02:51 – Why advocate for something so fantastical?</li>
<li>1:06:42 – Is an anti-tech solution any more fantastical than a benevolent AI solution?</li>
<li>1:10:20 – Does it all just come down to values: are you a techno-optimist or a techno-pessimist?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://www.xriskology.com/">Phil’s webpage</a></li>
<li><a href="https://docs.wixstatic.com/ugd/d9aaad_34d10a04399e4547978bb834d65cbcba.pdf">‘Superintelligence and the Future of Governance: On Prioritizing the Control Problem at the End of History</a>’ by Phil</li>
<li><a href="https://www.amazon.com/Morality-Foresight-Human-Flourishing-Introduction/dp/1634311426">Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks</a> by Phil</li>
<li>‘<a href="https://nickbostrom.com/papers/vulnerable.pdf">The Vulnerable World Hypothesis</a>’ by Nick Bostrom</li>
<li><a href="https://www.lesswrong.com/posts/XdgyJ8CchteDLYgPz/the-post-singularity-social-contract-and-bostrom-s">Phil’s comparison of his paper with Bostrom’s paper</a></li>
<li><a href="https://www.theguardian.com/science/2006/jun/23/weaponstechnology.guardianweekly">The Guardian orders the small-pox genome</a></li>
<li><a href="https://www.sciencealert.com/chilling-drone-video-shows-a-disturbing-vision-of-an-ai-controlled-future">Slaughterbots</a></li>
<li><a href="https://www.amazon.com/Future-Violence-Robots-Hackers-Confronting/dp/0465089747">The Future of Violence</a> by Ben Wittes and Gabriela Blum</li>
<li><a href="http://www.futurecrimesbook.com/">Future Crimes</a> by Marc Goodman</li>
<li><a href="https://en.wikipedia.org/wiki/2016_Dyn_cyberattack">The Dyn Cyberattack</a></li>
<li>Autonomous Technology by Langdon Winner</li>
<li><a href="https://arxiv.org/pdf/1709.01149.pdf">&#8216;Biotechnology and the Lifetime of Technological Civilisations&#8217;</a> by JG Sotos</li>
<li>The God Machine Thought Experiment (Persson and Savulescu)</li>
</ul>
<p>&nbsp;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/05/09/59-torres-on-existential-risk-omnicidal-agents-and-superintelligence/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="113527663" type="audio/mpeg" url="http://ia601404.us.archive.org/14/items/PhilTorres0905201912.37/Phil%20Torres%20%20-%2009%3A05%3A2019%2C%2012.37.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2704</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Phil Torres. Phil is an author and researcher who primarily focuses on existential risk. He is currently a visiting researcher at the Centre for the Study of Existential Risk at Cambridge University. He has published widely on emerging technologies, terrorism, and existential risks, with articles appearing in the Bulletin … More #59 – Torres on Existential Risk, Omnicidal Agents and Superintelligence</itunes:summary>
<googleplay:description>In this episode I talk to Phil Torres. Phil is an author and researcher who primarily focuses on existential risk. He is currently a visiting researcher at the Centre for the Study of Existential Risk at Cambridge University. He has published widely on emerging technologies, terrorism, and existential risks, with articles appearing in the Bulletin … More #59 – Torres on Existential Risk, Omnicidal Agents and Superintelligence</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/05/phil-torres.jpg">
			<media:title type="html">Phil Torres</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Phil Torres. Phil is an author and researcher who primarily focuses on existential risk. He is currently a visiting researcher at the Centre for the Study of Existential Risk at Cambridge University. He has published widely on emerging technologies, terrorism, and existential risks, with articles appearing in the Bulletin &amp;#8230; More #59 &amp;#8211; Torres on Existential Risk, Omnicidal Agents and Superintelligence</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>#58 – Neely on Augmented Reality, Ethics and Property Rights</title>
		<link>https://algocracy.wordpress.com/2019/04/25/58-neely-on-augmented-reality-ethics-and-property-rights/</link>
					<comments>https://algocracy.wordpress.com/2019/04/25/58-neely-on-augmented-reality-ethics-and-property-rights/#respond</comments>
		
		
		<pubDate>Thu, 25 Apr 2019 18:20:45 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2699</guid>

					<description><![CDATA[In this episode I talk to Erica Neely. Erica is an Associate Professor of Philosophy at Ohio Northern University specializing in philosophy of technology and computer ethics. Her work focuses on the ethical ramifications of emerging technologies. She has written a number of papers on 3D printing, the ethics of video games, robotics and &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/04/25/58-neely-on-augmented-reality-ethics-and-property-rights/">More <span class="screen-reader-text">#58 &#8211; Neely on Augmented Reality, Ethics and Property&#160;Rights</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2700" data-permalink="https://algocracy.wordpress.com/2019/04/25/58-neely-on-augmented-reality-ethics-and-property-rights/erica-neely/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/04/erica-neely.jpg" data-orig-size="825,809" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;4.2&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;NIKON D80&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1267788046&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;31&quot;,&quot;iso&quot;:&quot;200&quot;,&quot;shutter_speed&quot;:&quot;0.016666666666667&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="erica neely" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/04/erica-neely.jpg?w=748" class="alignnone  wp-image-2700" src="https://algocracy.wordpress.com/wp-content/uploads/2019/04/erica-neely.jpg" alt="erica neely" width="303" height="298" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/04/erica-neely.jpg?w=303&amp;h=297 303w, https://algocracy.wordpress.com/wp-content/uploads/2019/04/erica-neely.jpg?w=606&amp;h=594 606w, https://algocracy.wordpress.com/wp-content/uploads/2019/04/erica-neely.jpg?w=150&amp;h=147 150w, https://algocracy.wordpress.com/wp-content/uploads/2019/04/erica-neely.jpg?w=300&amp;h=294 300w" sizes="(max-width: 303px) 100vw, 303px" /></p>
<p>In this episode I talk to Erica Neely. Erica is an Associate Professor of Philosophy at Ohio Northern University specializing in philosophy of technology and computer ethics. Her work focuses on the ethical ramifications of emerging technologies. She has written a number of papers on 3D printing, the ethics of video games, robotics and augmented reality. We chat about the ethics of augmented reality, with a particular focus on property rights and the problems that arise when we blend virtual and physical reality together in augmented reality platforms.</p>
<p>You can download the episode <a href="https://ia601503.us.archive.org/25/items/EricaNeely2504201911.21/Erica%20Neely%20-%2025%3A04%3A2019%2C%2011.21.mp3">here</a> or listen below. You can also subscribe on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">Apple Podcasts</a>, <a href="https://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a variety of <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">other services</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/EricaNeely2504201911.21" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:00 &#8211; What is augmented reality (AR)?</li>
<li>5:55 &#8211; Is augmented reality overhyped?</li>
<li>10:36 &#8211; What are property rights?</li>
<li>14:22 &#8211; Justice and autonomy in the protection of property rights</li>
<li>16:47 &#8211; Are we comfortable with property rights over virtual spaces/objects?</li>
<li>22:30 &#8211; The blending problem: why augmented reality poses a unique problem for the protection of property rights</li>
<li>27:00 &#8211; The different modalities of augmented reality: single-sphere or multi-sphere?</li>
<li>30:45 &#8211; Scenario 1: Single-sphere AR with private property</li>
<li>34:28 &#8211; Scenario 2: Multi-sphere AR with private property</li>
<li>37:30 &#8211; Other ethical problems in scenario 2</li>
<li>43:25 &#8211; Augmented reality vs imagination</li>
<li>47:15 &#8211; Public property as contested space</li>
<li>49:38 &#8211; Scenario 3: Multi-sphere AR with public property</li>
<li>54:30 &#8211; Scenario 4: Single-sphere AR with public property</li>
<li>1:00:28 &#8211; Must the owner of the single-sphere AR platform be regulated as a public utility/entity?</li>
<li>1:02:25 &#8211; Other important ethical issues that arise from the use of AR</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="http://www.ericaneely.com/">Erica&#8217;s Homepage</a></li>
<li>&#8216;<a href="http://www.ericaneely.com/wp-content/uploads/2018/12/Augmented-Reality-Augmented-Ethics-final.pdf">Augmented Reality, Augmented Ethics: Who Has the Right to Augment a Particular Physical Space?</a>&#8217; by Erica</li>
<li>&#8216;<a href="http://www.ericaneely.com/wp-content/uploads/2018/12/choice-in-vgs-posting-copy.pdf">The Ethics of Choice in Single Player Video Games</a>&#8217; by Erica</li>
<li>&#8216;<a href="http://www.ericaneely.com/wp-content/uploads/2016/08/Risks-of-Revolution-Paper-Erica-Neely-final-version.pdf">The Risks of Revolution: Ethical Dilemmas in 3D Printing from a US Perspective</a>&#8217; by Erica</li>
<li>&#8216;<a href="http://www.ericaneely.com/wp-content/uploads/2016/08/Machines-and-the-Moral-Community-Erica-Neely-final-draft.pdf">Machines and the Moral Community</a>&#8217; by Erica</li>
<li><a href="https://highlights.ikea.com/2017/ikea-place/">IKEA Place augmented </a>reality app</li>
<li><a href="https://www.wearable-technologies.com/2018/06/loreal-trying-augmented-reality-to-bring-the-makeup-counter-into-your-home/">L&#8217;Oreal&#8217;s use of augmented reality make-up apps</a></li>
<li><a href="https://www.thedailybeast.com/holocaust-museum-bans-pokemon-go">Holocaust Museum Bans Pokemon Go</a></li>
</ul>
<p>&nbsp;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/04/25/58-neely-on-augmented-reality-ethics-and-property-rights/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="97049831" type="audio/mpeg" url="http://ia601503.us.archive.org/25/items/EricaNeely2504201911.21/Erica%20Neely%20-%2025%3A04%3A2019%2C%2011.21.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2699</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Erica Neely. Erica is an Associate Professor of Philosophy at Ohio Northern University specializing in philosophy of technology and computer ethics. Her work focuses on the ethical ramifications of emerging technologies. She has written a number of papers on 3D printing, the ethics of video games, robotics and … More #58 – Neely on Augmented Reality, Ethics and Property Rights</itunes:summary>
<googleplay:description>In this episode I talk to Erica Neely. Erica is an Associate Professor of Philosophy at Ohio Northern University specializing in philosophy of technology and computer ethics. Her work focuses on the ethical ramifications of emerging technologies. She has written a number of papers on 3D printing, the ethics of video games, robotics and … More #58 – Neely on Augmented Reality, Ethics and Property Rights</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/04/erica-neely.jpg">
			<media:title type="html">erica neely</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Erica Neely. Erica is an Associate Professor of Philosophy at Ohio Northern University specializing in philosophy of technology and computer ethics. Her work focuses is on the ethical ramifications of emerging technologies. She has written a number of papers on 3D printing, the ethics of video games, robotics and &amp;#8230; More #58 &amp;#8211; Neely on Augmented Reality, Ethics and Property&amp;#160;Rights</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>#57 – Sorgner on Nietzschean Transhumanism</title>
		<link>https://algocracy.wordpress.com/2019/04/10/57-lorenz-sorgner-on-nietzschean-transhumanism/</link>
					<comments>https://algocracy.wordpress.com/2019/04/10/57-lorenz-sorgner-on-nietzschean-transhumanism/#respond</comments>
		
		
		<pubDate>Wed, 10 Apr 2019 11:35:56 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2693</guid>

					<description><![CDATA[In this episode I talk to Stefan Lorenz Sorgner. Stefan teaches philosophy at John Cabot University in Rome. He is director and co-founder of the Beyond Humanism Network, Fellow at the Institute for Ethics and Emerging Technologies (IEET), Research Fellow at the Ewha Institute for the Humanities at Ewha Womans University in Seoul, and Visiting Fellow &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/04/10/57-lorenz-sorgner-on-nietzschean-transhumanism/">More <span class="screen-reader-text">#57 &#8211; Sorgner on Nietzschean&#160;Transhumanism</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2694" data-permalink="https://algocracy.wordpress.com/2019/04/10/57-lorenz-sorgner-on-nietzschean-transhumanism/stefan-lorenz-sorgner/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/04/stefan-lorenz-sorgner.jpg" data-orig-size="407,299" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Stefan Lorenz Sorgner" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/04/stefan-lorenz-sorgner.jpg?w=407" class="alignnone size-full wp-image-2694" src="https://algocracy.wordpress.com/wp-content/uploads/2019/04/stefan-lorenz-sorgner.jpg" alt="Stefan Lorenz Sorgner" width="407" height="299" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/04/stefan-lorenz-sorgner.jpg 407w, https://algocracy.wordpress.com/wp-content/uploads/2019/04/stefan-lorenz-sorgner.jpg?w=150&amp;h=110 150w, https://algocracy.wordpress.com/wp-content/uploads/2019/04/stefan-lorenz-sorgner.jpg?w=300&amp;h=220 300w" sizes="(max-width: 407px) 100vw, 407px" /></p>
<p>In this episode I talk to Stefan Lorenz Sorgner. Stefan teaches philosophy at John Cabot University in Rome. He is director and co-founder of the Beyond Humanism Network, Fellow at the Institute for Ethics and Emerging Technologies (IEET), Research Fellow at the Ewha Institute for the Humanities at Ewha Womans University in Seoul, and Visiting Fellow at the Ethics Centre of the Friedrich-Schiller-University in Jena. His main fields of research are Nietzsche, the philosophy of music, bioethics and meta-, post- and transhumanism. We talk about his case for a Nietzschean form of transhumanism.</p>
<p>You can download the episode <a href="https://ia801405.us.archive.org/16/items/StefanLorenzSorgner1004201910.57/Stefan%20Lorenz-Sorgner%20-%2010%3A04%3A2019%2C%2010.57.mp3">here</a> or listen below. You can also subscribe to the podcast on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">iTunes</a>, <a href="https://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">variety of other podcasting apps</a> (the RSS feed <a href="http://feeds.feedburner.com/philosophicaldiscursions">is here</a>).</p>
<p style="text-align:left;"><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/StefanLorenzSorgner1004201910.57" width="640" height="140" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3 style="text-align:left;">Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>2:12 &#8211; Recent commentary on Stefan&#8217;s book <em>Ubermensch</em></li>
<li>3:41 &#8211; Understanding transhumanism &#8211; getting away from the &#8220;humanism on steroids&#8221; ideal</li>
<li>10:33 &#8211; Transhumanism as an attitude of experimentation and not a destination?</li>
<li>13:34 &#8211; Have we always been transhumanists?</li>
<li>16:51 &#8211; Understanding Nietzsche</li>
<li>22:30 &#8211; The Will to Power in Nietzschean philosophy</li>
<li>26:41 &#8211; How to understand &#8220;power&#8221; in Nietzschean terms</li>
<li>30:40 &#8211; The importance of perspectivalism and the abandonment of universal truth</li>
<li>36:40 &#8211; Is it possible for a Nietzschean to consistently deny absolute truth?</li>
<li>39:55 &#8211; The idea of the Ubermensch (Overhuman)</li>
<li>45:48 &#8211; Making the case for a Nietzschean form of transhumanism</li>
<li>51:00 &#8211; What about the negative associations of Nietzsche?</li>
<li>1:02:17 &#8211; The problem of moral relativism for transhumanists</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="http://www.sorgner.de/">Stefan&#8217;s homepage</a></li>
<li><a href="http://shop.schwabe.ch/buecher/buchdetails/uebermensch-31738/"><em>The Ubermensch: A Plea for a Nietzschean Transhumanism</em></a> &#8211; Stefan&#8217;s new book (in German)</li>
<li><em><a href="https://www.amazon.co.uk/Post-Transhumanism-Introduction-Posthumanism-Posthumanismus-ebook/dp/B076FCQS48/ref=sr_1_6?keywords=stefan+lorenz+sorgner&amp;qid=1554895716&amp;s=gateway&amp;sr=8-6">Posthumanism and Transhumanism: An Introduction</a></em> &#8211; edited by Stefan and Robert Ranisch</li>
<li>&#8220;<a href="https://jetpress.org/v20/sorgner.htm">Nietzsche, the Overhuman and Tranhumanism</a>&#8221; by Stefan (open access)</li>
<li>&#8220;<a href="https://jetpress.org/v20/sorgner.htm">Beyond Humanism: Reflections on Trans and Post-humanism</a>&#8221; by Stefan (a response to critics of the previous article)</li>
<li><a href="https://plato.stanford.edu/entries/nietzsche/">Nietzsche at the Stanford Encyclopedia of Philosophy</a></li>
</ul>
<p>&nbsp;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/04/10/57-lorenz-sorgner-on-nietzschean-transhumanism/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="106608140" type="audio/mpeg" url="http://ia801405.us.archive.org/16/items/StefanLorenzSorgner1004201910.57/Stefan%20Lorenz-Sorgner%20-%2010%3A04%3A2019%2C%2010.57.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2693</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Stefan Lorenz Sorgner. Stefan teaches philosophy at John Cabot University in Rome. He is director and co-founder of the Beyond Humanism Network, Fellow at the Institute for Ethics and Emerging Technologies (IEET), Research Fellow at the Ewha Institute for the Humanities at Ewha Womans University in Seoul, and Visiting Fellow … More #57 – Sorgner on Nietzschean Transhumanism</itunes:summary>
<googleplay:description>In this episode I talk to Stefan Lorenz Sorgner. Stefan teaches philosophy at John Cabot University in Rome. He is director and co-founder of the Beyond Humanism Network, Fellow at the Institute for Ethics and Emerging Technologies (IEET), Research Fellow at the Ewha Institute for the Humanities at Ewha Womans University in Seoul, and Visiting Fellow … More #57 – Sorgner on Nietzschean Transhumanism</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/04/stefan-lorenz-sorgner.jpg">
			<media:title type="html">Stefan Lorenz Sorgner</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Stefan Lorenz Sorgner. Stefan teaches philosophy at John Cabot University in Rome. He is director and co-founder of the Beyond Humanism Network, Fellow at the Institute for Ethics and Emerging Technologies (IEET), Research Fellow at the Ewha Institute for the Humanities at Ewha Womans University in Seoul, and Visiting Fellow &amp;#8230; More #57 &amp;#8211; Sorgner on Nietzschean&amp;#160;Transhumanism</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>#56 – Turner on Rules for Robots</title>
		<link>https://algocracy.wordpress.com/2019/03/30/56-turner-on-rules-for-robots/</link>
					<comments>https://algocracy.wordpress.com/2019/03/30/56-turner-on-rules-for-robots/#respond</comments>
		
		
		<pubDate>Sat, 30 Mar 2019 16:16:19 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2690</guid>

					<description><![CDATA[In this episode I talk to Jacob Turner. Jacob is a barrister and author. We chat about his new book, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan, 2018), which discusses how to address legal responsibility, rights and ethics for AI. You can download here or listen below. You can also subscribe to the show on &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/03/30/56-turner-on-rules-for-robots/">More <span class="screen-reader-text">#56 &#8211; Turner on Rules for&#160;Robots</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2691" data-permalink="https://algocracy.wordpress.com/2019/03/30/56-turner-on-rules-for-robots/jacob-turner/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/03/jacob-turner.jpg" data-orig-size="485,450" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Jacob Turner" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/03/jacob-turner.jpg?w=485" class="alignnone  wp-image-2691" src="https://algocracy.wordpress.com/wp-content/uploads/2019/03/jacob-turner.jpg" alt="Jacob Turner" width="290" height="269" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/03/jacob-turner.jpg?w=290&amp;h=269 290w, https://algocracy.wordpress.com/wp-content/uploads/2019/03/jacob-turner.jpg?w=150&amp;h=139 150w, https://algocracy.wordpress.com/wp-content/uploads/2019/03/jacob-turner.jpg?w=300&amp;h=278 300w, https://algocracy.wordpress.com/wp-content/uploads/2019/03/jacob-turner.jpg 485w" sizes="(max-width: 290px) 100vw, 290px" /></p>
<p>In this episode I talk to Jacob Turner. Jacob is a barrister and author. We chat about his new book, <em>Robot Rules: Regulating Artificial Intelligence</em> (Palgrave Macmillan, 2018), which discusses how to address legal responsibility, rights and ethics for AI.</p>
<p>You can download <a href="https://ia601407.us.archive.org/24/items/JacobTurner3003201915.33/Jacob%20Turner%20-%2030%3A03%3A2019%2C%2015.33.mp3">here</a> or listen below. You can also subscribe to the show <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">on iTunes</a>, <a href="https://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and a variety of<a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html"> other services</a> (the RSS feed <a href="http://feeds.feedburner.com/philosophicaldiscursions">is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/JacobTurner3003201915.33" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:33 &#8211; Why did Jacob write <em>Robot Rules?</em></li>
<li>2:47 &#8211; Do we need special legal rules for AI?</li>
<li>6:34 &#8211; The responsibility &#8216;gap&#8217; problem</li>
<li>11:50 &#8211; Private law vs criminal law: why it&#8217;s important to remember the distinction</li>
<li>14:08 &#8211; Is it easy to plug the responsibility gap in private law?</li>
<li>23:07 &#8211; Do we need to think about the criminal law responsibility gap?</li>
<li>26:14 &#8211; Is it absurd to hold AI criminally responsible?</li>
<li>30:24 &#8211; The problem with holding proximate humans responsible</li>
<li>36:40 &#8211; The positive side of responsibility: lessons from the Monkey selfie case</li>
<li>41:50 &#8211; What is legal personhood and what does it mean to grant it to an AI?</li>
<li>48:57 &#8211; Pragmatic reasons for granting an AI legal personhood</li>
<li>51:48 &#8211; Is this a slippery slope?</li>
<li>56:00 &#8211; Explainability and AI: Why is this important?</li>
<li>1:02:38 &#8211; Is there a right to explanation under EU law?</li>
<li>1:06:16 &#8211; Is explainability something that requires a technical solution rather than a legal solution?</li>
<li>1:08:32 &#8211; The danger of fetishising explainability</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><em><a href="https://www.palgrave.com/us/book/9783319962344">Robot Rules: Regulating Artificial Intelligence</a></em></li>
<li><a href="https://www.robot-rules.com/">Website for the book</a></li>
<li><a href="https://twitter.com/RobotRules">Jacob on Twitter</a></li>
<li><a href="https://www.youtube.com/watch?v=I4pQ9lW3dVw">Jacob giving a lecture about the book at the University of Law</a></li>
<li>&#8220;<a href="https://link.springer.com/article/10.1007/s10676-016-9403-3">Robots, Law and the Retribution Gap</a>&#8221; by John Danaher</li>
<li><a href="https://www.theguardian.com/world/2015/apr/22/swiss-police-release-robot-random-darknet-shopper-ecstasy-deep-web">The Darknet Shopper Case</a></li>
<li><a href="https://en.wikipedia.org/wiki/Monkey_selfie_copyright_dispute">The Monkey Selfie Case</a></li>
<li><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2954173">Algorithmic Entities</a> by Lynn LoPucki (discussing Shawn Bayern&#8217;s argument)</li>
<li><a href="http://www.lawandai.com/2017/05/14/is-ai-personhood-already-possible-under-current-u-s-laws-dont-count-on-it-part-one/">Matthew Scherer&#8217;s critique of Bayern&#8217;s claim that AI&#8217;s can already acquire legal personhood</a></li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/03/30/56-turner-on-rules-for-robots/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="105609426" type="audio/mpeg" url="http://ia601407.us.archive.org/24/items/JacobTurner3003201915.33/Jacob%20Turner%20-%2030%3A03%3A2019%2C%2015.33.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2690</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Jacob Turner. Jacob is a barrister and author. We chat about his new book, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan, 2018), which discusses how to address legal responsibility, rights and ethics for AI. You can download here or listen below. You can also subscribe to the show on … More #56 – Turner on Rules for Robots</itunes:summary>
<googleplay:description>In this episode I talk to Jacob Turner. Jacob is a barrister and author. We chat about his new book, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan, 2018), which discusses how to address legal responsibility, rights and ethics for AI. You can download here or listen below. You can also subscribe to the show on … More #56 – Turner on Rules for Robots</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/03/jacob-turner.jpg">
			<media:title type="html">Jacob Turner</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Jacob Turner. Jacob is a barrister and author. We chat about his new book, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan, 2018), which discusses how to address legal responsibility, rights and ethics for AI. You can download here or listen below. You can also subscribe to the show on &amp;#8230; More #56 &amp;#8211; Turner on Rules for&amp;#160;Robots</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>#55 – Baum on the Long-Term Future of Human Civilisation</title>
		<link>https://algocracy.wordpress.com/2019/03/13/55-baum-on-the-long-term-future-of-human-civilisation/</link>
					<comments>https://algocracy.wordpress.com/2019/03/13/55-baum-on-the-long-term-future-of-human-civilisation/#respond</comments>
		
		
		<pubDate>Wed, 13 Mar 2019 22:31:04 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2687</guid>

					<description><![CDATA[In this episode I talk to Seth Baum. Seth is an interdisciplinary researcher working across a wide range of fields in natural and social science, engineering, philosophy, and policy. His primary research focus is global catastrophic risk. He also works in astrobiology. He is the Co-Founder (with Tony Barrett) and Executive Director of the Global &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/03/13/55-baum-on-the-long-term-future-of-human-civilisation/">More <span class="screen-reader-text">#55 &#8211; Baum on the Long-Term Future of Human&#160;Civilisation</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2688" data-permalink="https://algocracy.wordpress.com/2019/03/13/55-baum-on-the-long-term-future-of-human-civilisation/seth_baum/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/03/seth_baum.jpg" data-orig-size="220,260" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Seth_Baum" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/03/seth_baum.jpg?w=220" class="alignnone size-full wp-image-2688" src="https://algocracy.wordpress.com/wp-content/uploads/2019/03/seth_baum.jpg" alt="Seth_Baum" width="220" height="260" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/03/seth_baum.jpg 220w, https://algocracy.wordpress.com/wp-content/uploads/2019/03/seth_baum.jpg?w=127&amp;h=150 127w" sizes="(max-width: 220px) 100vw, 220px" /></p>
<p>In this episode I talk to Seth Baum. Seth is an interdisciplinary researcher working across a wide range of fields in natural and social science, engineering, philosophy, and policy. His primary research focus is global catastrophic risk. He also works in astrobiology. He is the Co-Founder (with Tony Barrett) and Executive Director of the Global Catastrophic Risk Institute. He is also a Research Affiliate of the University of Cambridge Centre for the Study of Existential Risk. We talk about the importance of studying the long-term future of human civilisation, and map out four possible trajectories for the long-term future.</p>
<p>You can download the episode <a href="https://ia601402.us.archive.org/34/items/SethBaumMaster1303201921.29/Seth%20Baum%20Master%20-%2013%3A03%3A2019%2C%2021.29.mp3">here</a> or listen below. You can also subscribe on a variety of different platforms, including <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">iTunes</a>, <a href="https://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a>, <a href="https://overcast.fm/itunes447661909/philosophical-disquisitions">Overcast</a>, <a href="http://podbay.fm/show/447661909">Podbay</a>, <a href="https://player.fm/series/philosophical-disquisitions">Player FM</a> and <a href="https://philosophicaldisquisitions.blogspot.com/p/podcast.html">more</a>. The RSS feed is available <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>.</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/SethBaumMaster1303201921.29" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:39 &#8211; Why did Seth write about the long-term future of human civilisation?</li>
<li>5:15 &#8211; Why should we care about the long-term future? What is the long-term future?</li>
<li>13:12 &#8211; How can we scientifically and ethically study the long-term future?</li>
<li>16:04 &#8211; Is it all too speculative?</li>
<li>20:48 &#8211; Four possible futures, briefly sketched: (i) status quo; (ii) catastrophe; (iii) technological transformation; and (iv) astronomical</li>
<li>23:08 &#8211; The Status Quo Trajectory &#8211; Keeping things as they are</li>
<li>28:45 &#8211; Should we want to maintain the status quo?</li>
<li>33:50 &#8211; The Catastrophe Trajectory &#8211; Awaiting the likely collapse of civilisation</li>
<li>38:58 &#8211; How could we restore civilisation post-collapse? Should we be working on this now?</li>
<li>44:00 &#8211; Are we under-investing in research into post-collapse restoration?</li>
<li>49:00 &#8211; The Technological Transformation Trajectory &#8211; Radical change through technology</li>
<li>52:35 &#8211; How desirable is radical technological change?</li>
<li>56:00 &#8211; The Astronomical Trajectory &#8211; Colonising the solar system and beyond</li>
<li>58:40 &#8211; Is the colonisation of space the best hope for humankind?</li>
<li>1:07:22 &#8211; How should the study of the long-term future proceed from here?</li>
</ul>
<p> </p>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://sethbaum.com/">Seth&#8217;s homepage</a></li>
<li><a href="https://gcrinstitute.org/">The Global Catastrophic Risk Institute</a></li>
<li>&#8220;<a href="http://gcrinstitute.org/papers/trajectories.pdf">Long-Term Trajectories for Human Civilisation</a>&#8221; by Baum et al</li>
<li>&#8220;<a href="http://www.bbc.com/future/story/20190109-the-perils-of-short-termism-civilisations-greatest-threat">The Perils of Short-Termism: Civilisation&#8217;s Greatest Threat</a>&#8221; by Fisher, <em>BBC News</em></li>
<li><a href="https://www.amazon.com/Knowledge-Rebuild-Civilization-Aftermath-Cataclysm/dp/0143127047">The Knowledge</a> by Lewis Dartnell</li>
<li>&#8220;<a href="http://cosmos.nautil.us/short/87/space-colonization-and-the-meaning-of-life">Space Colonization and the Meaning of Life</a>&#8221; by Baum, <em>Nautilus</em></li>
<li>&#8220;<a href="https://philpapers.org/rec/BOSAWT">Astronomical Waste: The Opportunity Cost of Delayed Technological Development</a>&#8221; by Nick Bostrom</li>
<li>&#8220;<a href="https://foundational-research.org/superintelligence-cause-cure-risks-astronomical-suffering/">Superintelligence as a Cause or Cure for Risks of Astronomical Suffering</a>&#8221; by Kaj Sotala and Lucas Gloor</li>
<li>&#8220;<a href="https://docs.wixstatic.com/ugd/d9aaad_5c9b881731054ee8bca5fd30699e7df9.pdf">Space Colonization and Suffering Risks</a>&#8221; by Phil Torres</li>
<li>&#8220;<a href="https://philosophicaldisquisitions.blogspot.com/2018/08/thomas-hobbes-in-space-problem-of.html">Thomas Hobbes in Space: The Problem of Intergalactic War</a>&#8221; by John Danaher</li>
</ul>
<p> </p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/03/13/55-baum-on-the-long-term-future-of-human-civilisation/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="100687957" type="audio/mpeg" url="http://ia601402.us.archive.org/34/items/SethBaumMaster1303201921.29/Seth%20Baum%20Master%20-%2013%3A03%3A2019%2C%2021.29.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2687</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Seth Baum. Seth is an interdisciplinary researcher working across a wide range of fields in natural and social science, engineering, philosophy, and policy. His primary research focus is global catastrophic risk. He also works in astrobiology. He is the Co-Founder (with Tony Barrett) and Executive Director of the Global … More #55 – Baum on the Long-Term Future of Human Civilisation</itunes:summary>
<googleplay:description>In this episode I talk to Seth Baum. Seth is an interdisciplinary researcher working across a wide range of fields in natural and social science, engineering, philosophy, and policy. His primary research focus is global catastrophic risk. He also works in astrobiology. He is the Co-Founder (with Tony Barrett) and Executive Director of the Global … More #55 – Baum on the Long-Term Future of Human Civilisation</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/03/seth_baum.jpg">
			<media:title type="html">Seth_Baum</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Seth Baum. Seth is an interdisciplinary researcher working across a wide range of fields in natural and social science, engineering, philosophy, and policy. His primary research focus is global catastrophic risk. He also works in astrobiology. He is the Co-Founder (with Tony Barrett) and Executive Director of the Global &amp;#8230; More #55 &amp;#8211; Baum on the Long-Term Future of Human&amp;#160;Civilisation</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #54 – Sebo on the Moral Problem of Other Minds</title>
		<link>https://algocracy.wordpress.com/2019/02/28/episode-54-sebo-on-the-moral-problem-of-other-minds/</link>
					<comments>https://algocracy.wordpress.com/2019/02/28/episode-54-sebo-on-the-moral-problem-of-other-minds/#respond</comments>
		
		
		<pubDate>Thu, 28 Feb 2019 14:43:19 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2684</guid>

					<description><![CDATA[In this episode I talk to Jeff Sebo. Jeff is a Clinical Assistant Professor of Environmental Studies, Affiliated Professor of Bioethics, Medical Ethics, and Philosophy, and Director of the Animal Studies M.A. Program at New York University.  Jeff’s research focuses on bioethics, animal ethics, and environmental ethics. He has two co-authored books Chimpanzee Rights and &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/02/28/episode-54-sebo-on-the-moral-problem-of-other-minds/">More <span class="screen-reader-text">Episode #54 &#8211; Sebo on the Moral Problem of Other&#160;Minds</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2685" data-permalink="https://algocracy.wordpress.com/2019/02/28/episode-54-sebo-on-the-moral-problem-of-other-minds/jeff-sebo/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/02/jeff-sebo.jpg" data-orig-size="202,300" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;5.6&quot;,&quot;credit&quot;:&quot;George Brooks&quot;,&quot;camera&quot;:&quot;Canon EOS 5D Mark III&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1379067690&quot;,&quot;copyright&quot;:&quot;George Brooks 2012&quot;,&quot;focal_length&quot;:&quot;100&quot;,&quot;iso&quot;:&quot;800&quot;,&quot;shutter_speed&quot;:&quot;0.005&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="Jeff Sebo" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/02/jeff-sebo.jpg?w=202" class="alignnone size-full wp-image-2685" src="https://algocracy.wordpress.com/wp-content/uploads/2019/02/jeff-sebo.jpg" alt="Jeff Sebo" width="202" height="300" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/02/jeff-sebo.jpg 202w, https://algocracy.wordpress.com/wp-content/uploads/2019/02/jeff-sebo.jpg?w=101&amp;h=150 101w" sizes="(max-width: 202px) 100vw, 202px" /></p>
<p>In this episode I talk to Jeff Sebo. Jeff is a Clinical Assistant Professor of Environmental Studies, Affiliated Professor of Bioethics, Medical Ethics, and Philosophy, and Director of the Animal Studies M.A. Program at New York University. Jeff&#8217;s research focuses on bioethics, animal ethics, and environmental ethics. He has two co-authored books, <em>Chimpanzee Rights</em> and <em>Food, Animals, and the Environment</em>. We talk about something Jeff calls the &#8216;moral problem of other minds&#8217;, which is roughly the problem of what we should do if we aren&#8217;t sure whether another being is sentient or not.</p>
<p>You can download the episode <a href="https://ia601406.us.archive.org/18/items/JeffSeboMaster2802201913.51/Jeff%20Sebo%20Master%20-%2028%3A02%3A2019%2C%2013.51.mp3">here</a> or listen below. You can also subscribe to the show on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">iTunes</a> and <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the RSS feed<a href="http://feeds.feedburner.com/philosophicaldiscursions"> is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/JeffSeboMaster2802201913.51" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:38 &#8211; What inspired Jeff to think about the moral problem of other minds?</li>
<li>7:55 &#8211; The importance of sentience and our uncertainty about it</li>
<li>12:32 &#8211; The three possible responses to the moral problem of other minds: (i) the incautionary principle; (ii) the precautionary principle; and (iii) the expected value principle</li>
<li>15:26 &#8211; Understanding the Incautionary Principle</li>
<li>20:09 &#8211; Problems with the Incautionary Principle</li>
<li>23:14 &#8211; Understanding the Precautionary Principle: More plausible than the incautionary principle?</li>
<li>29:20 &#8211; Is morality a zero-sum game? Is there a limit to how much we can care about other beings?</li>
<li>35:02 &#8211; The problem of demandingness in moral theory</li>
<li>37:06 &#8211; Other problems with the precautionary principle</li>
<li>41:41 &#8211; The Utilitarian Version of the Expected Value Principle</li>
<li>47:36 &#8211; The problem of anthropocentrism in moral reasoning</li>
<li>53:22 &#8211; The Kantian Version of the Expected Value Principle</li>
<li>59:08 &#8211; Problems with the Kantian principle</li>
<li>1:03:54 &#8211; How does the moral problem of other minds transfer over to other cases, e.g. abortion and uncertainty about the moral status of the foetus?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://jeffsebo.net/">Jeff&#8217;s Homepage</a></li>
<li><a href="https://jeffsebodotnet.files.wordpress.com/2018/09/the-moral-problem-of-other-minds.pdf">&#8216;The Moral Problem of Other Minds&#8217;</a> by Jeff</li>
<li><a href="https://www.routledge.com/Chimpanzee-Rights-The-Philosophers-Brief/Andrews-Comstock-GKD-Donaldson-Fenton-John-Johnson-Jones-Kymlicka-Meynell-Nobis-Pena-Guzman-Sebo-Gruen-Wise/p/book/9781138618664"><em>Chimpanzee Rights</em></a> by Jeff and others</li>
<li><em><a href="https://www.routledge.com/Food-Animals-and-the-Environment-An-Ethical-Approach/Schlottmann-Sebo/p/book/9781138801127">Food, Animals and the Environment</a></em> by Jeff and Christopher Schlottman</li>
<li>&#8216;<a href="http://www.columbia.edu/~col8/lobsterarticle.pdf">Consider the Lobster</a>&#8216; by David Foster Wallace</li>
<li><a href="https://philosophicaldisquisitions.blogspot.com/2017/12/ethical-behaviourism-in-age-of-robot.html">&#8216;Ethical Behaviourism in the Age of the Robot&#8217;</a> by John Danaher</li>
<li><a href="https://philosophicaldisquisitions.blogspot.com/2018/10/episode-48-gunkel-on-robot-rights.html">Episode 48 with David Gunkel on Robot Rights</a></li>
</ul>
<p>&nbsp;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/02/28/episode-54-sebo-on-the-moral-problem-of-other-minds/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="99047258" type="audio/mpeg" url="http://ia601406.us.archive.org/18/items/JeffSeboMaster2802201913.51/Jeff%20Sebo%20Master%20-%2028%3A02%3A2019%2C%2013.51.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2684</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Jeff Sebo. Jeff is a Clinical Assistant Professor of Environmental Studies, Affiliated Professor of Bioethics, Medical Ethics, and Philosophy, and Director of the Animal Studies M.A. Program at New York University.  Jeff’s research focuses on bioethics, animal ethics, and environmental ethics. He has two co-authored books Chimpanzee Rights and … More Episode #54 – Sebo on the Moral Problem of Other Minds</itunes:summary>
<googleplay:description>In this episode I talk to Jeff Sebo. Jeff is a Clinical Assistant Professor of Environmental Studies, Affiliated Professor of Bioethics, Medical Ethics, and Philosophy, and Director of the Animal Studies M.A. Program at New York University.  Jeff’s research focuses on bioethics, animal ethics, and environmental ethics. He has two co-authored books Chimpanzee Rights and … More Episode #54 – Sebo on the Moral Problem of Other Minds</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/02/jeff-sebo.jpg">
			<media:title type="html">Jeff Sebo</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Jeff Sebo. Jeff is a Clinical Assistant Professor of Environmental Studies, Affiliated Professor of Bioethics, Medical Ethics, and Philosophy, and Director of the Animal Studies M.A. Program at New York University.  Jeff’s research focuses on bioethics, animal ethics, and environmental ethics. He has two co-authored books Chimpanzee Rights and &amp;#8230; More Episode #54 &amp;#8211; Sebo on the Moral Problem of Other&amp;#160;Minds</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #53 – Christin on How Algorithms Actually Impact Workers</title>
		<link>https://algocracy.wordpress.com/2019/02/18/episode-53-christin-on-how-algorithms-actually-impact-workers/</link>
					<comments>https://algocracy.wordpress.com/2019/02/18/episode-53-christin-on-how-algorithms-actually-impact-workers/#respond</comments>
		
		
		<pubDate>Mon, 18 Feb 2019 16:32:06 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2679</guid>

					<description><![CDATA[In this episode I talk to Angèle Christin. Angèle is an assistant professor in the Department of Communication at Stanford University, where she is also affiliated with the Sociology Department and Program in Science, Technology, and Society. Her research focuses on how algorithms and analytics transform professional values, expertise, and work practices. She is currently &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/02/18/episode-53-christin-on-how-algorithms-actually-impact-workers/">More <span class="screen-reader-text">Episode #53 &#8211; Christin on How Algorithms Actually Impact&#160;Workers</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2680" data-permalink="https://algocracy.wordpress.com/2019/02/18/episode-53-christin-on-how-algorithms-actually-impact-workers/angele-christin/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/02/angele-christin.jpg" data-orig-size="1280,800" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;4&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;Canon EOS 5D Mark II&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1535605586&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;105&quot;,&quot;iso&quot;:&quot;250&quot;,&quot;shutter_speed&quot;:&quot;0.008&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="Angele Christin" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/02/angele-christin.jpg?w=748" class="alignnone size-full wp-image-2680" src="https://algocracy.wordpress.com/wp-content/uploads/2019/02/angele-christin.jpg" alt="Angele Christin" width="1280" height="800" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/02/angele-christin.jpg 1280w, https://algocracy.wordpress.com/wp-content/uploads/2019/02/angele-christin.jpg?w=150&amp;h=94 150w, https://algocracy.wordpress.com/wp-content/uploads/2019/02/angele-christin.jpg?w=300&amp;h=188 300w, https://algocracy.wordpress.com/wp-content/uploads/2019/02/angele-christin.jpg?w=768&amp;h=480 768w, https://algocracy.wordpress.com/wp-content/uploads/2019/02/angele-christin.jpg?w=1024&amp;h=640 1024w" sizes="(max-width: 1280px) 100vw, 1280px" /></p>
<p>In this episode I talk to Angèle Christin. Angèle is an assistant professor in the Department of Communication at Stanford University, where she is also affiliated with the Sociology Department and Program in Science, Technology, and Society. Her research focuses on how algorithms and analytics transform professional values, expertise, and work practices. She is currently working on a book on the use of audience metrics in web journalism and a project on the use of risk assessment algorithms in criminal justice. We talk about both.</p>
<p>You can download the episode <a href="https://ia601408.us.archive.org/4/items/AngeleChristinV11802201913.48/Angele%20Christin%20V1%20-%2018%3A02%3A2019%2C%2013.48.mp3">here</a> or listen below. You can also subscribe to the show <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">on iTunes</a> or <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the RSS feed <a href="http://feeds.feedburner.com/philosophicaldiscursions">is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/AngeleChristinV11802201913.48" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:30 &#8211; What&#8217;s missing from the current debate about algorithmic governance? What does Angèle&#8217;s ethnographic perspective add?</li>
<li>5:10 &#8211; How does ethnography work? What does an ethnographer do?</li>
<li>8:30 &#8211; What are the limitations of ethnographic studies?</li>
<li>12:33 &#8211; Why did Angèle focus on the use of algorithms in criminal justice and web journalism?</li>
<li>23:06 &#8211; What were Angèle&#8217;s two key research findings? Decoupling and Buffering</li>
<li>24:40 &#8211; What is &#8216;decoupling&#8217; and how does it happen?</li>
<li>30:00 &#8211; Different attitudes to algorithmic tools in the US and France (French journalists, perhaps surprisingly, more obsessed with real time analytics than their American counterparts)</li>
<li>39:20 &#8211; What explains the ambivalent attitude to metrics in different professions?</li>
<li>44:42 &#8211; What is &#8216;buffering&#8217; and how does it arise?</li>
<li>54:30 &#8211; How people who worry about algorithms might misunderstand the practical realities of criminal justice</li>
<li>57:47 &#8211; Does the resistance/acceptance of an algorithmic tool depend on the nature of the tool and the nature of the workplace? What might the relevant variables be?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="http://www.angelechristin.com/">Angèle&#8217;s Homepage</a></li>
<li>&#8220;<a href="https://journals.sagepub.com/doi/pdf/10.1177/2053951717718855">Algorithms in Practice: Comparing Web Journalism and Criminal Justice</a>&#8221; by Angèle</li>
<li>&#8220;<a href="https://www.journals.uchicago.edu/doi/abs/10.1086/696137?journalCode=ajs">Counting Clicks: Quantification and Variation in Web Journalism in the United States and France</a>&#8221; by Angèle</li>
<li>&#8220;<a href="http://www.datacivilrights.org/pubs/2015-1027/Courts_and_Predictive_Algorithms.pdf">Courts and Predictive Algorithms</a>&#8221; by Christin, Rosenblat and Boyd</li>
<li>&#8220;<a href="https://logicmag.io/03-the-mistrials-of-algorithmic-sentencing/">The Mistrials of Algorithmic Sentencing</a>&#8221; by Angèle</li>
<li><a href="https://algocracy.wordpress.com/2018/07/12/episode-41-binns-on-fairness-in-algorithmic-decision-making/">Episode 41 with Reuben Binns</a> (covering the debate about the Compas algorithm and bias)</li>
<li><a href="https://algocracy.wordpress.com/2017/02/19/episode-19-andrew-g-ferguson-on-predictive-policing/">Episode 19 with Andrew Ferguson</a> on big data and policing</li>
</ul>
<p>&nbsp;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/02/18/episode-53-christin-on-how-algorithms-actually-impact-workers/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="103505420" type="audio/mpeg" url="http://ia601408.us.archive.org/4/items/AngeleChristinV11802201913.48/Angele%20Christin%20V1%20-%2018%3A02%3A2019%2C%2013.48.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2679</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Angèle Christin. Angèle is an assistant professor in the Department of Communication at Stanford University, where she is also affiliated with the Sociology Department and Program in Science, Technology, and Society. Her research focuses on how algorithms and analytics transform professional values, expertise, and work practices. She is currently … More Episode #53 – Christin on How Algorithms Actually Impact Workers</itunes:summary>
<googleplay:description>In this episode I talk to Angèle Christin. Angèle is an assistant professor in the Department of Communication at Stanford University, where she is also affiliated with the Sociology Department and Program in Science, Technology, and Society. Her research focuses on how algorithms and analytics transform professional values, expertise, and work practices. She is currently … More Episode #53 – Christin on How Algorithms Actually Impact Workers</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/02/angele-christin.jpg">
			<media:title type="html">Angele Christin</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Angèle Christin. Angèle is an assistant professor in the Department of Communication at Stanford University, where she is also affiliated with the Sociology Department and Program in Science, Technology, and Society. Her research focuses on how algorithms and analytics transform professional values, expertise, and work practices. She is currently &amp;#8230; More Episode #53 &amp;#8211; Christin on How Algorithms Actually Impact&amp;#160;Workers</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #52 – Devlin on Sex Robots and Moral Panics</title>
		<link>https://algocracy.wordpress.com/2019/01/30/episode-52-devlin-on-sex-robots-and-moral-panics/</link>
					<comments>https://algocracy.wordpress.com/2019/01/30/episode-52-devlin-on-sex-robots-and-moral-panics/#respond</comments>
		
		
		<pubDate>Wed, 30 Jan 2019 11:04:38 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2675</guid>

					<description><![CDATA[In this episode I talk to Kate Devlin. Kate is a Senior Lecturer in the Department of Digital Humanities at King&#8217;s College London. Kate&#8217;s research is in the fields of Human Computer Interaction (HCI) and Artificial Intelligence (AI), investigating how people interact with and react to technology in order to understand how emerging and future &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/01/30/episode-52-devlin-on-sex-robots-and-moral-panics/">More <span class="screen-reader-text">Episode #52 &#8211; Devlin on Sex Robots and Moral&#160;Panics</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2676" data-permalink="https://algocracy.wordpress.com/2019/01/30/episode-52-devlin-on-sex-robots-and-moral-panics/kate-devlin-001/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/01/kate-devlin.001.jpeg" data-orig-size="1024,768" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="kate devlin.001" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/01/kate-devlin.001.jpeg?w=748" class="alignnone size-full wp-image-2676" src="https://algocracy.wordpress.com/wp-content/uploads/2019/01/kate-devlin.001.jpeg" alt="Kate Devlin.001.jpeg" width="1024" height="768" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/01/kate-devlin.001.jpeg 1024w, https://algocracy.wordpress.com/wp-content/uploads/2019/01/kate-devlin.001.jpeg?w=150&amp;h=113 150w, https://algocracy.wordpress.com/wp-content/uploads/2019/01/kate-devlin.001.jpeg?w=300&amp;h=225 300w, https://algocracy.wordpress.com/wp-content/uploads/2019/01/kate-devlin.001.jpeg?w=768&amp;h=576 768w" sizes="(max-width: 1024px) 100vw, 1024px" /></p>
<p>In this episode I talk to Kate Devlin. Kate is a Senior Lecturer in the Department of Digital Humanities at King&#8217;s College London. Kate&#8217;s research is in the fields of Human Computer Interaction (HCI) and Artificial Intelligence (AI), investigating how people interact with and react to technology in order to understand how emerging and future technologies will affect us and the society in which we live. Kate has become a driving force in the field of intimacy and technology, running the UK&#8217;s first sex tech hackathon in 2016. She has also become the face of sex robots – quite literally in the case of one mis-captioned tabloid photograph. We talk about her recent, excellent book <em>Turned On: Science, Sex and Robots</em>, which covers the past, present and future of sex technology.</p>
<p>You can download the episode <a href="https://ia601500.us.archive.org/9/items/KateDevlinMaster/Kate%20Devlin%20Master%20.mp3">here</a> or listen below. You can also subscribe <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">via iTunes</a> or <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the RSS feed <a href="http://feeds.feedburner.com/philosophicaldiscursions">is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/KateDevlinMaster" width="640" height="140" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>2:08 &#8211; Why did Kate talk about sex robots in the House of Lords?</li>
<li>3:01 &#8211; How did Kate become the face of sex robots?</li>
<li>5:34 &#8211; Are sex robots really a thing? Should academics be researching them?</li>
<li>11:10 &#8211; The important link between archaeology and sex technology</li>
<li>15:00 &#8211; The myth of hysteria and the origin of the vibrator</li>
<li>17:36 &#8211; What was the most interesting thing Kate learned while researching this book? (Ans: owners of sex dolls are not creepy isolationists)</li>
<li>23:03 &#8211; Is there are moral panic about sex robots? And are we talking about robots or dolls?</li>
<li>30:41 &#8211; What are the arguments made by defenders of the &#8216;moral panic&#8217; view?</li>
<li>38:05 &#8211; What could be the social benefits of sex robots? Do men and women want different things from sex tech?</li>
<li>47:57 &#8211; Why is Kate so interested in &#8216;non-anthropomorphic&#8217; sex robots?</li>
<li>55:15 &#8211; Is the media fascination with this topic destructive or helpful?</li>
<li>57:32 &#8211; What question does Kate get asked most often and what does she say in response?</li>
</ul>
<p>&nbsp;</p>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://www.drkatedevlin.com/">Kate&#8217;s Webpage</a></li>
<li><a href="https://www.kcl.ac.uk/people/person.aspx?id=e90f8477-e4ce-4dc5-811c-b76b29b3bdee">Kate&#8217;s Academic Homepage</a></li>
<li><em><a href="https://www.bloomsbury.com/us/turned-on-9781472950871/">Turned On: Science, Sex and Robots</a></em> by Kate Devlin</li>
<li><a href="https://www.youtube.com/watch?v=yE8Z1VwPGnQ">Kate and I in conversation</a> at the <em>Virtual Futures</em> Salon in London</li>
<li><a href="http://journalofpositivesexuality.org/wp-content/uploads/2018/08/Failure-of-Academic-Quality-Control-Technology-of-Orgasm-Lieberman-Schatzberg.pdf">&#8216;A Failure of Academic Quality Control: <em>The Technology of the Orgasm</em></a>&#8217; by Hallie Lieberman and Eric Schatzberg (on the myth that vibrators were used to treat hysteria)</li>
<li><a href="https://en.wikipedia.org/wiki/Protesilaus#Laodamia">Laodamia</a> &#8211; Owner of the world&#8217;s first sex doll?</li>
<li>&#8216;<a href="https://theconversation.com/in-defence-of-sex-machines-why-trying-to-ban-sex-robots-is-wrong-47641">In Defence of Sex Machines: Why trying to ban sex robots is wrong?</a>&#8216; by Kate</li>
<li>&#8216;<a href="https://www.huffingtonpost.com/entry/samantha-sex-robot-molested_us_59cec9f9e4b06791bb10a268">Sex robot molested at electronics festival&#8217; </a>at <em>Huffington Post</em></li>
<li>&#8216;<a href="https://metro.co.uk/2018/10/01/first-tester-made-love-to-sex-robot-so-furiously-it-actually-broke-7994164/">First tester made love to sex robot so furiously it actually broke</a>&#8216; at <em>Metro.co.uk</em></li>
<li><a href="https://goldsmiths.tech/sex">The 2nd London Sex Tech Hackathon</a></li>
<li><a href="https://mitpress.mit.edu/books/robot-sex"><em>Robot Sex: Social and Ethical</em> </a><em><a href="https://mitpress.mit.edu/books/robot-sex">Implications</a> </em>edited by Danaher and McArthur</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/01/30/episode-52-devlin-on-sex-robots-and-moral-panics/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="87562994" type="audio/mpeg" url="http://ia601500.us.archive.org/9/items/KateDevlinMaster/Kate%20Devlin%20Master%20.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2675</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Kate Devlin. Kate is a Senior Lecturer in the Department of Digital Humanities at King’s College London. Kate’s research is in the fields of Human Computer Interaction (HCI) and Artificial Intelligence (AI), investigating how people interact with and react to technology in order to understand how emerging and future … More Episode #52 – Devlin on Sex Robots and Moral Panics</itunes:summary>
<googleplay:description>In this episode I talk to Kate Devlin. Kate is a Senior Lecturer in the Department of Digital Humanities at King’s College London. Kate’s research is in the fields of Human Computer Interaction (HCI) and Artificial Intelligence (AI), investigating how people interact with and react to technology in order to understand how emerging and future … More Episode #52 – Devlin on Sex Robots and Moral Panics</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/01/kate-devlin.001.jpeg">
			<media:title type="html">Kate Devlin.001.jpeg</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Kate Devlin. Kate is a Senior Lecturer in the Department of Digital Humanities at King&amp;#8217;s College London. Kate&amp;#8217;s research is in the fields of Human Computer Interaction (HCI) and Artificial Intelligence (AI), investigating how people interact with and react to technology in order to understand how emerging and future &amp;#8230; More Episode #52 &amp;#8211; Devlin on Sex Robots and Moral&amp;#160;Panics</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #51 – Moen on the Unabomber’s Ethics</title>
		<link>https://algocracy.wordpress.com/2019/01/15/episode-51-moen-on-the-unabombers-ethics/</link>
					<comments>https://algocracy.wordpress.com/2019/01/15/episode-51-moen-on-the-unabombers-ethics/#respond</comments>
		
		
		<pubDate>Tue, 15 Jan 2019 23:18:29 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2672</guid>

					<description><![CDATA[In this episode I talk to Ole Martin Moen. Ole Martin is a Research Fellow in Philosophy at the University of Oslo. He works on how to think straight about thorny issues in applied ethics. He is the Principal Investigator of “What should not be bought and sold?”, a $1 million research project funded by &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2019/01/15/episode-51-moen-on-the-unabombers-ethics/">More <span class="screen-reader-text">Episode #51 &#8211; Moen on the Unabomber&#8217;s Ethics</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2673" data-permalink="https://algocracy.wordpress.com/2019/01/15/episode-51-moen-on-the-unabombers-ethics/ole-martin-moen/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2019/01/ole-martin-moen.jpg" data-orig-size="255,300" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="ole martin moen" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2019/01/ole-martin-moen.jpg?w=255" class="alignnone size-full wp-image-2673" src="https://algocracy.wordpress.com/wp-content/uploads/2019/01/ole-martin-moen.jpg" alt="ole martin moen" width="255" height="300" srcset="https://algocracy.wordpress.com/wp-content/uploads/2019/01/ole-martin-moen.jpg 255w, https://algocracy.wordpress.com/wp-content/uploads/2019/01/ole-martin-moen.jpg?w=128&amp;h=150 128w" sizes="(max-width: 255px) 100vw, 255px" /></p>
<p>In this episode I talk to Ole Martin Moen. Ole Martin is a Research Fellow in Philosophy at the University of Oslo. He works on how to think straight about thorny issues in applied ethics. He is the Principal Investigator of &#8220;What should not be bought and sold?&#8221;, a $1 million research project funded by the Research Council of Norway. In the past, he has written articles about the ethics of prostitution, the desirability of cryonics, the problem of wild animal suffering and the case for philosophical hedonism. Along with his collaborator, Aksel Braanen Sterri, he runs a podcast, <a href="https://moralistene.no/">Moralistene</a> (in Norwegian), and he regularly discusses moral issues behind the news on Norwegian national radio. We talk about a potentially controversial topic: the anti-tech philosophy of the Unabomber, Ted Kaczynski, and what&#8217;s wrong with it.</p>
<p>You can download the episode <a href="https://ia801502.us.archive.org/35/items/OleMartinMoenMaster1501201922.29/Ole%20Martin%20Moen%20Master%20-%2015%3A01%3A2019%2C%2022.29.mp3">here</a> or listen below. You can also subscribe <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">via iTunes</a> or <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the <a href="http://feeds.feedburner.com/philosophicaldiscursions">RSS feed is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/OleMartinMoenMaster1501201922.29" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>2:05 &#8211; Should we even be talking about Ted Kaczynski&#8217;s ethics? Does it not lend legitimacy to his views?</li>
<li>6:32 &#8211; Are we unnecessarily anti-rational when it comes to discussing dangerous ideas?</li>
<li>8:32 &#8211; The Evolutionary Mismatch Argument</li>
<li>12:43 &#8211; The Surrogate Activities Argument</li>
<li>20:20 &#8211; The Helplessness/Complexity Argument</li>
<li>23:08 &#8211; The Unstoppability Argument</li>
<li>26:45 &#8211; The Domesticated Animals Argument</li>
<li>30:45 &#8211; Why does Ole Martin overlook Kaczynski&#8217;s criticisms of &#8216;leftists&#8217; in his analysis?</li>
<li>34:03 &#8211; What&#8217;s original in Kaczynski&#8217;s arguments?</li>
<li>36:31 &#8211; Are philosophers who write about Kaczynski engaging in a motte and bailey fallacy?</li>
<li>38:36 &#8211; Ole Martin&#8217;s main critique of Kaczynski: the evaluative double standard</li>
<li>42:20 &#8211; How this double standard works in practice</li>
<li>47:27 &#8211; Why not just drop out of industrial society instead of trying to overthrow it?</li>
<li>55:04 &#8211; Is Kaczynski a revolutionary nihilist?</li>
<li>58:59 &#8211; Similarities and differences between Kaczynski&#8217;s argument and the work of Nick Bostrom, Ingmar Persson and Julian Savulescu</li>
<li>1:04:21 &#8211; Where should we go from here? Should there be more papers on this topic?</li>
</ul>
<p>&nbsp;</p>
<h3>Relevant Links</h3>
<ul>
<li><a href="http://www.olemartinmoen.com/">Ole Martin&#8217;s Homepage</a></li>
<li>&#8216;<a href="http://theanarchistlibrary.org/library/ole-martin-moen-the-unabomber-s-ethics">The Unabomber&#8217;s Ethics&#8217;</a> by Ole Martin Moen</li>
<li><a href="http://www.olemartinmoen.com/wp-content/uploads/BrightNewWorld.pdf">&#8220;Bright New World</a>&#8221; and &#8220;<a href="http://www.olemartinmoen.com/wp-content/uploads/BrightNewWorld.pdf">Smarter Babies&#8221;</a> by Ole Martin Moen</li>
<li>&#8220;<a href="http://www.olemartinmoen.com/wp-content/uploads/TheCaseForCryonics.pdf">The Case for Cryonics</a>&#8221; by Ole Martin Moen</li>
<li><a href="https://en.wikipedia.org/wiki/Ted_Kaczynski">Ted Kaczynski</a> on Wikipedia (includes links to relevant writings)</li>
<li>&#8220;<a href="https://www.chronicle.com/article/The-Unabombers-Pen-Pal/131892">The Unabomber&#8217;s Penpal</a>&#8221; &#8211; article about the philosopher David Skrbina who has corresponded with Kaczynski for some time</li>
<li>&#8220;<a href="http://www.oxfordscholarship.com/view/10.1093/oso/9780190652951.001.0001/oso-9780190652951-chapter-24">The Unabomber on Robots</a>&#8221; &#8211; by Jai Galliott (article appearing in <a href="https://global.oup.com/academic/product/robot-ethics-20-9780190652951?cc=ie&amp;lang=en&amp;">Robot Ethics 2.0</a> edited by Lin et al)</li>
<li><a href="https://philpapers.org/rec/PERUFT-3"><em>Unfit for the</em> <em>Future</em></a> by Ingmar Persson and Julian Savulescu</li>
<li><a href="https://nickbostrom.com/">Nick Bostrom&#8217;s Homepage</a> (check out his recent paper &#8216;<a href="https://nickbostrom.com/papers/vulnerable.pdf">The Vulnerable World Hypothesis</a>&#8217;)</li>
</ul>
<p>&nbsp;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2019/01/15/episode-51-moen-on-the-unabombers-ethics/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="97442295" type="audio/mpeg" url="http://ia801502.us.archive.org/35/items/OleMartinMoenMaster1501201922.29/Ole%20Martin%20Moen%20Master%20-%2015%3A01%3A2019%2C%2022.29.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2672</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Ole Martin Moen. Ole Martin is a Research Fellow in Philosophy at the University of Oslo. He works on how to think straight about thorny issues in applied ethics. He is the Principal Investigator of “What should not be bought and sold?”, a $1 million research project funded by … More Episode #51 – Moen on the Unabomber’s Ethics</itunes:summary>
<googleplay:description>In this episode I talk to Ole Martin Moen. Ole Martin is a Research Fellow in Philosophy at the University of Oslo. He works on how to think straight about thorny issues in applied ethics. He is the Principal Investigator of “What should not be bought and sold?”, a $1 million research project funded by … More Episode #51 – Moen on the Unabomber’s Ethics</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2019/01/ole-martin-moen.jpg">
			<media:title type="html">ole martin moen</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Ole Martin Moen. Ole Martin is a Research Fellow in Philosophy at the University of Oslo. He works on how to think straight about thorny issues in applied ethics. He is the Principal Investigator of “What should not be bought and sold?”, a $1 million research project funded by &amp;#8230; More Episode #51 &amp;#8211; Moen on the Unabomber&amp;#8217;s Ethics</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #50 – Loi on Facebook, Justice and Data as the New Oil</title>
		<link>https://algocracy.wordpress.com/2018/12/21/episode-50-loi-on-facebook-justice-and-data-as-the-new-oil/</link>
					<comments>https://algocracy.wordpress.com/2018/12/21/episode-50-loi-on-facebook-justice-and-data-as-the-new-oil/#respond</comments>
		
		
		<pubDate>Fri, 21 Dec 2018 16:34:04 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2667</guid>

					<description><![CDATA[In this episode I talk to Michele Loi. Michele is a political philosopher turned bioethicist turned digital ethicist. He is currently (2017-2020) working on two interdisciplinary projects, one of which is about the ethical implications of big data at the University of Zurich. In the past, he developed an ethical framework of governance for the &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2018/12/21/episode-50-loi-on-facebook-justice-and-data-as-the-new-oil/">More <span class="screen-reader-text">Episode #50 &#8211; Loi on Facebook, Justice and Data as the New&#160;Oil</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2669" data-permalink="https://algocracy.wordpress.com/2018/12/21/episode-50-loi-on-facebook-justice-and-data-as-the-new-oil/michele_loi-2/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/12/Michele_Loi-1.jpg" data-orig-size="512,512" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Michele_Loi" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/12/Michele_Loi-1.jpg?w=512" class="alignnone  wp-image-2669" src="https://algocracy.wordpress.com/wp-content/uploads/2018/12/Michele_Loi-1.jpg" alt="Michele_Loi.jpg" width="284" height="284" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/12/Michele_Loi-1.jpg?w=284&amp;h=284 284w, https://algocracy.wordpress.com/wp-content/uploads/2018/12/Michele_Loi-1.jpg?w=150&amp;h=150 150w, https://algocracy.wordpress.com/wp-content/uploads/2018/12/Michele_Loi-1.jpg?w=300&amp;h=300 300w, https://algocracy.wordpress.com/wp-content/uploads/2018/12/Michele_Loi-1.jpg 512w" sizes="(max-width: 284px) 100vw, 284px" /></p>
<p>In this episode I talk to Michele Loi. Michele is a political philosopher turned bioethicist turned digital ethicist. He is currently (2017-2020) working on two interdisciplinary projects, one of which is about the ethical implications of big data at the University of Zurich. In the past, he developed an ethical framework of governance for the Swiss MIDATA cooperative (2016). He is interested in bringing insights from ethics and political philosophy to bear on big data, proposing more ethical forms of institutional organization, firm behavior, and legal-political arrangements concerning data. We talk about how you can use Rawls&#8217;s theory of justice to evaluate the role of dominant tech platforms (particularly Facebook) in modern life.</p>
<p>You can download the show <a href="https://ia601505.us.archive.org/33/items/MicheleLoi2112201815.40/Michele%20Loi%20-%2021%3A12%3A2018%2C%2015.40.mp3">here</a> or listen below. You can also subscribe <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">on iTunes</a> or <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the RSS feed <a href="http://feeds.feedburner.com/philosophicaldiscursions">is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/MicheleLoi2112201815.40" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:29 &#8211; Why use Rawls to assess data platforms?</li>
<li>2:58 &#8211; Does the analogy between data and oil hold up to scrutiny?</li>
<li>7:04 &#8211; The First Key Idea: Rawls&#8217;s Basic Social Structures</li>
<li>11:20 &#8211; The Second Key Idea: Dominant Tech Platforms as Basic Social Structures</li>
<li>15:02 &#8211; Is Facebook a Dominant Tech Platform?</li>
<li>19:58 &#8211; How Zuckerberg&#8217;s recent memo highlights Facebook&#8217;s status as a basic social structure</li>
<li>23:10 &#8211; A brief primer on Rawls&#8217;s two principles of justice</li>
<li>29:18 &#8211; Dominant tech platforms and respect for the basic liberties (particularly free speech)</li>
<li>36:48 &#8211; Facebook: Media Company or Nudging Platform? Does it matter from the perspective of justice?</li>
<li>41:43 &#8211; Why Facebook might have a duty to ensure that we don&#8217;t get trapped in a filter bubble</li>
<li>44:32 &#8211; Is it fair to impose such a duty on Facebook as a private enterprise?</li>
<li>51:18 &#8211; Would it be practically difficult for Facebook to fulfil this duty?</li>
<li>53:02 &#8211; Is data-mining and monetisation exploitative?</li>
<li>56:14 &#8211; Is it possible to explore other economic models for the data economy?</li>
<li>59:44 &#8211; Can regulatory frameworks (e.g. the GDPR) incentivise alternative business models?</li>
<li>1:01:50 &#8211; Is there hope for the future?</li>
</ul>
<p>&nbsp;</p>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://twitter.com/Michele_Loi_UZH">Michele on Twitter</a></li>
<li><a href="https://www.researchgate.net/profile/Michele_Loi">Michele on Research Gate</a></li>
<li>&#8216;<a href="http://fqp.luiss.it/files/2018/10/PPI_06_Loi-Dehaye_vol7_n2_2017def.pdf">If data is the new oil, when is the extraction of value from data unjust?</a>&#8216; by Loi and Dehaye</li>
<li>&#8216;<a href="https://www.researchgate.net/publication/281087903_Technological_unemployment_and_human_disenhancement">Technological Unemployment and Human Disenhancement&#8217;</a> by Michele Loi</li>
<li>&#8216;<a href="https://www.researchgate.net/publication/325704224_The_Digital_Phenotype_a_Philosophical_and_Ethical_Exploration">The Digital Phenotype: A Philosophical and Ethical Exploration</a>&#8216; by Michele Loi</li>
<li>&#8216;<a href="https://m.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/">A Blueprint for content governance and enforcement</a>&#8216; by Mark Zuckerberg</li>
<li>&#8216;<a href="http://philosophicaldisquisitions.blogspot.com/2015/04/should-libertarians-hate-internet.html">Should libertarians hate the internet? A Nozickian Argument Against Social Networks</a>&#8216; by John Danaher</li>
<li><a href="https://plato.stanford.edu/entries/rawls/#TwoPriJusFai">John Rawls&#8217;s Two Principles of Justice, explained</a></li>
</ul>
<p>&nbsp;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2018/12/21/episode-50-loi-on-facebook-justice-and-data-as-the-new-oil/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="91592956" type="audio/mpeg" url="http://ia601505.us.archive.org/33/items/MicheleLoi2112201815.40/Michele%20Loi%20-%2021%3A12%3A2018%2C%2015.40.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2667</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Michele Loi. Michele is a political philosopher turned bioethicist turned digital ethicist. He is currently (2017-2020) working on two interdisciplinary projects, one of which is about the ethical implications of big data at the University of Zurich. In the past, he developed an ethical framework of governance for the … More Episode #50 – Loi on Facebook, Justice and Data as the New Oil</itunes:summary>
<googleplay:description>In this episode I talk to Michele Loi. Michele is a political philosopher turned bioethicist turned digital ethicist. He is currently (2017-2020) working on two interdisciplinary projects, one of which is about the ethical implications of big data at the University of Zurich. In the past, he developed an ethical framework of governance for the … More Episode #50 – Loi on Facebook, Justice and Data as the New Oil</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/12/Michele_Loi-1.jpg">
			<media:title type="html">Michele_Loi.jpg</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Michele Loi. Michele is a political philosopher turned bioethicist turned digital ethicist. He is currently (2017-2020) working on two interdisciplinary projects, one of which is about the ethical implications of big data at the University of Zurich. In the past, he developed an ethical framework of governance for the &amp;#8230; More Episode #50 &amp;#8211; Loi on Facebook, Justice and Data as the New&amp;#160;Oil</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #49 – Maas on AI and the Future of International Law</title>
		<link>https://algocracy.wordpress.com/2018/12/02/episode-49-maas-on-ai-and-the-future-of-international-law/</link>
					<comments>https://algocracy.wordpress.com/2018/12/02/episode-49-maas-on-ai-and-the-future-of-international-law/#respond</comments>
		
		
		<pubDate>Sun, 02 Dec 2018 16:36:40 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2660</guid>

					<description><![CDATA[In this episode I talk to Matthijs Maas. Matthijs is a doctoral researcher at the University of Copenhagen&#8217;s &#8216;AI and Legal Disruption&#8217; research unit, and a research affiliate with the Governance of AI Program at Oxford University&#8217;s Future of Humanity Institute. His research focuses on safe and beneficial global governance strategies for emerging, transformative AI &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2018/12/02/episode-49-maas-on-ai-and-the-future-of-international-law/">More <span class="screen-reader-text">Episode #49 &#8211; Maas on AI and the Future of International&#160;Law</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2665" data-permalink="https://algocracy.wordpress.com/2018/12/02/episode-49-maas-on-ai-and-the-future-of-international-law/img_6583-1/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/12/img_6583-1.jpg" data-orig-size="2848,4272" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;8&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;Canon EOS 1100D&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1533734072&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;29&quot;,&quot;iso&quot;:&quot;100&quot;,&quot;shutter_speed&quot;:&quot;0.0125&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="IMG_6583 (1)" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/12/img_6583-1.jpg?w=683" class="alignnone  wp-image-2665" src="https://algocracy.wordpress.com/wp-content/uploads/2018/12/img_6583-1.jpg?w=4272" alt="IMG_6583 (1).JPG" width="258" height="387" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/12/img_6583-1.jpg?w=258 258w, https://algocracy.wordpress.com/wp-content/uploads/2018/12/img_6583-1.jpg?w=516 516w, https://algocracy.wordpress.com/wp-content/uploads/2018/12/img_6583-1.jpg?w=100 100w, https://algocracy.wordpress.com/wp-content/uploads/2018/12/img_6583-1.jpg?w=200 200w" sizes="(max-width: 258px) 100vw, 258px" /></p>
<p>In this episode I talk to Matthijs Maas. Matthijs is a doctoral researcher at the University of Copenhagen&#8217;s &#8216;AI and Legal Disruption&#8217; research unit, and a research affiliate with the Governance of AI Program at Oxford University&#8217;s Future of Humanity Institute. His research focuses on safe and beneficial global governance strategies for emerging, transformative AI systems. This involves, in part, a study of the requirements and pitfalls of international regimes for technology arms control, non-proliferation and the conditions under which these are legitimate and effective. We talk about the phenomenon of &#8216;globally disruptive AI&#8217; and the effect it will have on the international legal order.</p>
<p>You can download the episode <a href="https://ia601500.us.archive.org/27/items/MatthisMaas0212201811.19/Matthis%20Maas%20-%2002%3A12%3A2018%2C%2011.19.mp3">here</a> or listen below. You can also subscribe <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">via iTunes</a> or <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the <a href="http://feeds.feedburner.com/philosophicaldiscursions">RSS feed is here)</a>.</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/MatthisMaas0212201811.19" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>2:11 &#8211; International Law 101</li>
<li>6:38 &#8211; How technology has repeatedly shaped the content of international law</li>
<li>10:43 &#8211; The phenomenon of &#8216;globally disruptive artificial intelligence&#8217; (GDAI)</li>
<li>15:20 &#8211; GDAI and the development of international law</li>
<li>18:05 &#8211; Will we need new laws?</li>
<li>19:50 &#8211; Will GDAI result in lots of legal uncertainty?</li>
<li>21:57 &#8211; Will the law be under/over-inclusive of GDAI?</li>
<li>25:21 &#8211; Will GDAI render international law obsolete?</li>
<li>31:00 &#8211; Could we have a tech-neutral international law?</li>
<li>34:10 &#8211; Could we automate the monitoring and enforcement of international law?</li>
<li>44:35 &#8211; Could we replace international legal institutions with technological systems of management?</li>
<li>47:35 &#8211; Could GDAI lead to the end of the international legal order?</li>
<li>57:23 &#8211; Could GDAI result in more isolationism and less multi-lateralism?</li>
<li>1:00:40 &#8211; So what will the future be?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://twitter.com/matthijsMmaas">Follow Matthijs on Twitter</a></li>
<li><a href="https://jura.ku.dk/ai-led/">Artificial Intelligence and Legal Disruption research group</a> (University of Copenhagen)</li>
<li><a href="http://governance.ai">Governance of AI Program</a> (University of Oxford)</li>
<li>Dafoe, Allan. “<a href="https://www.fhi.ox.ac.uk/govaiagenda/">AI Governance: A Research Agenda</a>.” Oxford: Governance of AI Program, Future of Humanity Institute, 2018.</li>
<li>On history of technology and international law: Picker, Colin B. “<a href="https://papers.ssrn.com/abstract=987524">A View from 40,000 Feet: International Law and the Invisible Hand of Technology.</a>” Cardozo Law Review 23 (2001): 151–219.</li>
<li>Brownsword, Roger. “<a href="https://doi.org/10.1080/17579961.2015.1052642">In the Year 2061: From Law to Technological Management</a>.” Law, Innovation and Technology 7, no. 1 (January 2, 2015): 1–51.</li>
<li>Boutin, Berenice. “<a href="https://grojil.org/2018/10/22/technologies-for-international-law-international-law-for-technologies/">Technologies for International Law &amp; International Law for Technologies</a>.” Groningen Journal of International Law (blog), October 22, 2018.</li>
<li>Moses, Lyria Bennett. “<a href="http://www.austlii.edu.au/au/journals/UNSWLRS/2007/21.html">Recurring Dilemmas: The Law’s Race to Keep Up With Technological Change</a>.” SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, April 11, 2007.</li>
<li>On establishing legal &#8216;artificially intelligent entities&#8217;, etc:<br />
Burri, Thomas. “<a href="https://doi.org/10.2139/ssrn.3060191">International Law and Artificial Intelligence.</a>” SSRN Electronic Journal, 2017.</li>
</ul>
<p>&nbsp;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2018/12/02/episode-49-maas-on-ai-and-the-future-of-international-law/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="90451928" type="audio/mpeg" url="http://ia601500.us.archive.org/27/items/MatthisMaas0212201811.19/Matthis%20Maas%20-%2002%3A12%3A2018%2C%2011.19.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2660</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Matthijs Maas. Matthijs is a doctoral researcher at the University of Copenhagen’s ‘AI and Legal Disruption’ research unit, and a research affiliate with the Governance of AI Program at Oxford University’s Future of Humanity Institute. His research focuses on safe and beneficial global governance strategies for emerging, transformative AI … More Episode #49 – Maas on AI and the Future of International Law</itunes:summary>
<googleplay:description>In this episode I talk to Matthijs Maas. Matthijs is a doctoral researcher at the University of Copenhagen’s ‘AI and Legal Disruption’ research unit, and a research affiliate with the Governance of AI Program at Oxford University’s Future of Humanity Institute. His research focuses on safe and beneficial global governance strategies for emerging, transformative AI … More Episode #49 – Maas on AI and the Future of International Law</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/12/img_6583-1.jpg?w=4272">
			<media:title type="html">IMG_6583 (1).JPG</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Matthijs Maas. Matthijs is a doctoral researcher at the University of Copenhagen&amp;#8217;s &amp;#8216;AI and Legal Disruption&amp;#8217; research unit, and a research affiliate with the Governance of AI Program at Oxford University&amp;#8217;s Future of Humanity Institute. His research focuses on safe and beneficial global governance strategies for emerging, transformative AI &amp;#8230; More Episode #49 &amp;#8211; Maas on AI and the Future of International&amp;#160;Law</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #48 – Gunkel on Robot Rights</title>
		<link>https://algocracy.wordpress.com/2018/10/31/episode-48-gunkel-on-robot-rights/</link>
					<comments>https://algocracy.wordpress.com/2018/10/31/episode-48-gunkel-on-robot-rights/#respond</comments>
		
		
		<pubDate>Wed, 31 Oct 2018 21:41:57 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2655</guid>

					<description><![CDATA[In this episode I talk to David Gunkel. David is a repeat guest, having first appeared on the show in Episode 10. David is a Professor of Communication Studies at Northern Illinois University. He is a leading scholar in the philosophy of technology, having written extensively about cyborgification, robot rights and responsibilities, remix cultures, new political &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2018/10/31/episode-48-gunkel-on-robot-rights/">More <span class="screen-reader-text">Episode #48 &#8211; Gunkel on Robot&#160;Rights</span></a>]]></description>
										<content:encoded><![CDATA[<p>
<a href='https://algocracy.wordpress.com/2018/10/31/episode-48-gunkel-on-robot-rights/gunkel_headshot1/'><img width="100" height="150" src="https://algocracy.wordpress.com/wp-content/uploads/2018/10/gunkel_headshot1.jpg?w=100" class="attachment-thumbnail size-thumbnail" alt="" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/10/gunkel_headshot1.jpg?w=100 100w, https://algocracy.wordpress.com/wp-content/uploads/2018/10/gunkel_headshot1.jpg?w=200 200w" sizes="(max-width: 100px) 100vw, 100px" data-attachment-id="2656" data-permalink="https://algocracy.wordpress.com/2018/10/31/episode-48-gunkel-on-robot-rights/gunkel_headshot1/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/10/gunkel_headshot1.jpg" data-orig-size="300,450" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;5.6&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;NIKON D3100&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1448730163&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;32&quot;,&quot;iso&quot;:&quot;3200&quot;,&quot;shutter_speed&quot;:&quot;0.033333333333333&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="gunkel_headshot1" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/10/gunkel_headshot1.jpg?w=300" /></a>
<a href='https://algocracy.wordpress.com/2018/10/31/episode-48-gunkel-on-robot-rights/_collidbooks_covers_0isbn9780262038621type/'><img width="101" height="150" src="https://algocracy.wordpress.com/wp-content/uploads/2018/10/collidbooks_covers_0isbn9780262038621type.jpg?w=101" class="attachment-thumbnail size-thumbnail" alt="" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/10/collidbooks_covers_0isbn9780262038621type.jpg?w=101 101w, https://algocracy.wordpress.com/wp-content/uploads/2018/10/collidbooks_covers_0isbn9780262038621type.jpg?w=202 202w" sizes="(max-width: 101px) 100vw, 101px" data-attachment-id="2657" data-permalink="https://algocracy.wordpress.com/2018/10/31/episode-48-gunkel-on-robot-rights/_collidbooks_covers_0isbn9780262038621type/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/10/collidbooks_covers_0isbn9780262038621type.jpg" data-orig-size="550,814" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="_collid=books_covers_0&amp;amp;isbn=9780262038621&amp;amp;type=" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/10/collidbooks_covers_0isbn9780262038621type.jpg?w=550" /></a>
</p>
<p>In this episode I talk to David Gunkel. David is a repeat guest, having first appeared on the show in Episode 10. David is a Professor of Communication Studies at Northern Illinois University. He is a leading scholar in the philosophy of technology, having written extensively about cyborgification, robot rights and responsibilities, remix cultures, new political structures in the information age and much much more. He is the author of several books, including Hacking Cyberspace, The Machine Question, Of Remixology, Gaming the System and, most recently, Robot Rights. We have a long debate/conversation about whether or not robots should/could have rights.</p>
<p> </p>
<p>You can download the episode <a href="https://ia601503.us.archive.org/25/items/GunkelRobotRights3110201821.23/Gunkel%20Robot%20Rights%20-%2031%3A10%3A2018%2C%2021.23.mp3">here</a> or listen below. You can also subscribe to the show <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">on iTunes</a> or <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the<a href="http://feeds.feedburner.com/philosophicaldiscursions"> RSS feed is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/GunkelRobotRights3110201821.23" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:52 &#8211; Isn&#8217;t the idea of robot rights ridiculous?</li>
<li>3:37 &#8211; What is a robot anyway? Is the concept too nebulous/diverse?</li>
<li>7:43 &#8211; Has science fiction undermined our ability to think about robots clearly?</li>
<li>11:01 &#8211; What would it mean to grant a robot rights? (A precis of Hohfeld&#8217;s theory of rights)</li>
<li>18:32 &#8211; The four positions/modalities one could take on the idea of robot rights</li>
<li>21:32 &#8211; The First Modality: Robots Can&#8217;t Have Rights therefore Shouldn&#8217;t</li>
<li>23:37 &#8211; The EPSRC guidelines on robotics as an example of this modality</li>
<li>26:04 &#8211; Criticisms of the EPSRC approach</li>
<li>28:27 &#8211; Other problems with the first modality</li>
<li>31:32 &#8211; Europe vs Japan: why the Japanese might be more open to robot &#8216;others&#8217;</li>
<li>34:00 &#8211; The Second Modality: Robots Can Have Rights therefore Should (some day)</li>
<li>39:53 &#8211; A debate between myself and David about the second modality (why I&#8217;m in favour of it and he&#8217;s against it)</li>
<li>47:17 &#8211; The Third Modality: Robots Can Have Rights but Shouldn&#8217;t (Bryson&#8217;s view)</li>
<li>53:48 &#8211; Can we dehumanise/depersonalise robots?</li>
<li>58:10 &#8211; The Robot-Slave Metaphor and its Discontents</li>
<li>1:04:30 &#8211; The Fourth Modality: Robots Cannot Have Rights but Should (Darling&#8217;s view)</li>
<li>1:07:53 &#8211; Criticisms of the fourth modality</li>
<li>1:12:05 &#8211; The &#8216;Thinking Otherwise&#8217; Approach (David&#8217;s preferred approach)</li>
<li>1:16:23 &#8211; When can robots take on a face?</li>
<li>1:19:44 &#8211; Is there any possibility of reconciling my view with David&#8217;s?</li>
<li>1:24:42 &#8211; So did David waste his time writing this book?</li>
</ul>
<p> </p>
<h3>Relevant Links</h3>
<ul>
<li><a href="http://gunkelweb.com/">David&#8217;s Homepage</a></li>
<li><em><a href="https://mitpress.mit.edu/books/robot-rights">Robot Rights</a></em> from MIT Press, 2018 (and <a href="https://www.amazon.com/Robot-Rights-Press-David-Gunkel/dp/0262038625">on Amazon</a>)</li>
<li><a href="https://algocracy.wordpress.com/2016/08/27/episode-10-david-gunkel-on-robots-and-cyborgs/">Episode 10 &#8211; Gunkel on Robots and Cyborgs</a></li>
<li>&#8216;<a href="https://link.springer.com/article/10.1007%2Fs10676-017-9442-4">The other question: can and should robots have rights?</a>&#8216; by David Gunkel</li>
<li>&#8216;<a href="http://gunkelweb.com/articles/facing_animals.pdf">Facing Animals: A Relational Other-Oriented Approach to Moral Standing</a>&#8216; by Gunkel and Coeckelbergh</li>
<li><a href="http://philosophicaldisquisitions.blogspot.com/2018/09/the-robot-rights-debate-index.html">The Robot Rights Debate (Index)</a> &#8211; everything I&#8217;ve written or said on the topic of robot rights</li>
<li><a href="https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/">EPSRC Principles of Robotics</a></li>
<li><a href="https://algocracy.wordpress.com/2017/06/07/episode-24-bryson-on-why-robots-should-be-slaves/">Episode 24 &#8211; Joanna Bryson on Why Robots Should be Slaves</a></li>
<li>&#8216;<a href="https://link.springer.com/article/10.1007/s10676-018-9448-6">Patiency is not a virtue: the design of intelligent systems and systems of ethics</a>&#8216; by Joanna Bryson</li>
<li><a href="https://www.ucpress.edu/book/9780520283206/robo-sapiens-japanicus">Robo Sapiens Japanicus</a> &#8211; by Jennifer Robertson</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2018/10/31/episode-48-gunkel-on-robot-rights/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="125939170" type="audio/mpeg" url="http://ia601503.us.archive.org/25/items/GunkelRobotRights3110201821.23/Gunkel%20Robot%20Rights%20-%2031%3A10%3A2018%2C%2021.23.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2655</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to David Gunkel. David is a repeat guest, having first appeared on the show in Episode 10. David is a Professor of Communication Studies at Northern Illinois University. He is a leading scholar in the philosophy of technology, having written extensively about cyborgification, robot rights and responsibilities, remix cultures, new political … More Episode #48 – Gunkel on Robot Rights</itunes:summary>
<googleplay:description>In this episode I talk to David Gunkel. David is a repeat guest, having first appeared on the show in Episode 10. David is a Professor of Communication Studies at Northern Illinois University. He is a leading scholar in the philosophy of technology, having written extensively about cyborgification, robot rights and responsibilities, remix cultures, new political … More Episode #48 – Gunkel on Robot Rights</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/10/gunkel_headshot1.jpg?w=100"/>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/10/collidbooks_covers_0isbn9780262038621type.jpg?w=101"/>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to David Gunkel. David is a repeat guest, having first appeared on the show in Episode 10. David is a Professor of Communication Studies at Northern Illinois University. He is a leading scholar in the philosophy of technology, having written extensively about cyborgification, robot rights and responsibilities, remix cultures, new political &amp;#8230; More Episode #48 &amp;#8211; Gunkel on Robot&amp;#160;Rights</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #47 – Eubanks on Automating Inequality</title>
		<link>https://algocracy.wordpress.com/2018/10/20/episode-47-eubanks-on-automating-inequality/</link>
					<comments>https://algocracy.wordpress.com/2018/10/20/episode-47-eubanks-on-automating-inequality/#respond</comments>
		
		
		<pubDate>Sat, 20 Oct 2018 13:44:27 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2651</guid>

					<description><![CDATA[In this episode I talk to Virginia Eubanks. Virginia is an Associate Professor of Political Science at the University at Albany, SUNY. She is the author of several books, including Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor and Digital Dead End: Fighting for Social Justice in the Information Age. Her writing &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2018/10/20/episode-47-eubanks-on-automating-inequality/">More <span class="screen-reader-text">Episode #47 &#8211; Eubanks on Automating&#160;Inequality</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2653" data-permalink="https://algocracy.wordpress.com/2018/10/20/episode-47-eubanks-on-automating-inequality/malo2e3h_400x400-2/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/10/malo2e3h_400x4001.jpg" data-orig-size="400,400" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="MalO2E3H_400x400" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/10/malo2e3h_400x4001.jpg?w=400" class="alignnone  wp-image-2653" src="https://algocracy.wordpress.com/wp-content/uploads/2018/10/malo2e3h_400x4001.jpg" alt="MalO2E3H_400x400.jpg" width="374" height="374" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/10/malo2e3h_400x4001.jpg?w=374&amp;h=374 374w, https://algocracy.wordpress.com/wp-content/uploads/2018/10/malo2e3h_400x4001.jpg?w=150&amp;h=150 150w, https://algocracy.wordpress.com/wp-content/uploads/2018/10/malo2e3h_400x4001.jpg?w=300&amp;h=300 300w, https://algocracy.wordpress.com/wp-content/uploads/2018/10/malo2e3h_400x4001.jpg 400w" sizes="(max-width: 374px) 100vw, 374px" /></p>
<p>In this episode I talk to Virginia Eubanks. Virginia is an Associate Professor of Political Science at the University at Albany, SUNY. She is the author of several books, including <em><a href="https://www.amazon.com/Automating-Inequality-High-Tech-Profile-Police-ebook/dp/B0739MF8VF">Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor</a></em> and <em>Digital Dead End: Fighting for Social Justice in the Information Age</em>. Her writing about technology and social justice has appeared in The American Prospect, The Nation, Harper’s and Wired. She has worked for two decades in community technology and economic justice movements. We talk about the history of poverty management in the US and how it is now being infiltrated and affected by tools for algorithmic governance.</p>
<p>You can download the episode <a href="https://ia801503.us.archive.org/34/items/VirginiaEubanks2010201813.45/Virginia%20Eubanks%20-%2020%3A10%3A2018%2C%2013.45.mp3">here</a> or listen below. You can also subscribe to the show on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">iTunes</a> or <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the <a href="http://feeds.feedburner.com/philosophicaldiscursions">RSS feed is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/VirginiaEubanks2010201813.45" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:39 &#8211; The future is unevenly distributed but not in the way you might think</li>
<li>7:05 &#8211; Virginia&#8217;s personal encounter with the tools for automating inequality</li>
<li>12:33 &#8211; Automated helplessness?</li>
<li>14:11 &#8211; The history of poverty management: denial and moralisation</li>
<li>22:40 &#8211; Technology doesn&#8217;t disrupt our ideology of poverty; it amplifies it</li>
<li>24:16 &#8211; The problem of poverty myths: it&#8217;s not just something that happens to other people</li>
<li>28:23 &#8211; The Indiana Case Study: Automating the system for claiming benefits</li>
<li>33:15 &#8211; The problem of automated defaults in the Indiana Case</li>
<li>37:32 &#8211; What happened in the end?</li>
<li>41:38 &#8211; The L.A. Case Study: A &#8220;match.com&#8221; for the homeless</li>
<li>45:40 &#8211; The Allegheny County Case Study: Managing At-Risk Children</li>
<li>52:46 &#8211; Doing the right things but still getting it wrong?</li>
<li>58:44 &#8211; The need to design an automated system that addresses institutional bias</li>
<li>1:07:45 &#8211; The problem of technological solutions in search of a problem</li>
<li>1:10:46 &#8211; The key features of the digital poorhouse</li>
</ul>
<p> </p>
<h3><strong>Relevant Links</strong></h3>
<ul>
<li><a href="https://virginia-eubanks.com">Virginia&#8217;s Homepage</a></li>
<li><a href="https://twitter.com/PopTechWorks?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor">Virginia on Twitter</a></li>
<li><em><a href="https://www.amazon.com/Automating-Inequality-High-Tech-Profile-Police-ebook/dp/B0739MF8VF">Automating Inequality</a></em></li>
<li>&#8216;<a href="https://www.wired.com/story/excerpt-from-automating-inequality/">A Child Abuse Prediction Model Fails Poor Families&#8217;</a> by Virginia in <em>Wired</em></li>
<li><a href="https://www.alleghenycounty.us/Human-Services/News-Events/Accomplishments/Allegheny-Family-Screening-Tool.aspx">The Allegheny County Family Screening Tool</a> (official webpage &#8211; includes a critical response to Virginia&#8217;s Wired article)</li>
<li><a href="https://www.nytimes.com/2018/01/02/magazine/can-an-algorithm-tell-when-kids-are-in-danger.html">&#8216;Can an Algorithm Tell when Kids Are in Danger?&#8217;</a> by Dan Hurley (generally positive story about the family screening tool in the New York Times).</li>
<li><a href="https://virginia-eubanks.com/2018/02/16/a-response-to-allegheny-county-dhs/">&#8216;A Response to Allegheny County DHS&#8217;</a> by Virginia (a response to Allegheny County&#8217;s defence of the family screening tool)</li>
<li><a href="https://philosophicaldisquisitions.blogspot.com/2018/07/episode-41-binns-on-fairness-in.html">Episode 41 with Reuben Binns on Fairness in Algorithmic Decision-Making</a></li>
<li><a href="https://algocracy.wordpress.com/2017/02/19/episode-19-andrew-g-ferguson-on-predictive-policing/">Episode 19 with Andrew Ferguson about Predictive Policing</a></li>
</ul>
<p> </p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2018/10/20/episode-47-eubanks-on-automating-inequality/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="152177189" type="audio/mpeg" url="http://ia801503.us.archive.org/34/items/VirginiaEubanks2010201813.45/Virginia%20Eubanks%20-%2020%3A10%3A2018%2C%2013.45.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2651</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Virginia Eubanks. Virginia is an Associate Professor of Political Science at the University at Albany, SUNY. She is the author of several books, including Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor and Digital Dead End: Fighting for Social Justice in the Information Age. Her writing … More Episode #47 – Eubanks on Automating Inequality</itunes:summary>
<googleplay:description>In this episode I talk to Virginia Eubanks. Virginia is an Associate Professor of Political Science at the University at Albany, SUNY. She is the author of several books, including Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor and Digital Dead End: Fighting for Social Justice in the Information Age. Her writing … More Episode #47 – Eubanks on Automating Inequality</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/10/malo2e3h_400x4001.jpg">
			<media:title type="html">MalO2E3H_400x400.jpg</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Virginia Eubanks. Virginia is an Associate Professor of Political Science at the University at Albany, SUNY. She is the author of several books, including Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor and Digital Dead End: Fighting for Social Justice in the Information Age. Her writing &amp;#8230; More Episode #47 &amp;#8211; Eubanks on Automating&amp;#160;Inequality</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #46 – Minerva on the Ethics of Cryonics</title>
		<link>https://algocracy.wordpress.com/2018/10/05/episode-46-minerva-on-the-ethics-of-cryonics/</link>
					<comments>https://algocracy.wordpress.com/2018/10/05/episode-46-minerva-on-the-ethics-of-cryonics/#respond</comments>
		
		
		<pubDate>Fri, 05 Oct 2018 22:36:44 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2646</guid>

					<description><![CDATA[In this episode I talk to Francesca Minerva. Francesca is a postdoctoral fellow at the University of Ghent. Her research focuses on applied philosophy, specifically lookism, conscientious objection, abortion, academic freedom, and cryonics. She has published many articles on these topics in some of the leading academic journals in ethics and philosophy, including the Journal &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2018/10/05/episode-46-minerva-on-the-ethics-of-cryonics/">More <span class="screen-reader-text">Episode #46 &#8211; Minerva on the Ethics of&#160;Cryonics</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2647" data-permalink="https://algocracy.wordpress.com/2018/10/05/episode-46-minerva-on-the-ethics-of-cryonics/francesca_bw_square_small/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/10/francesca_bw_square_small.jpg" data-orig-size="640,640" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Francesca_BW_square_small" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/10/francesca_bw_square_small.jpg?w=640" class="alignnone  wp-image-2647" src="https://algocracy.wordpress.com/wp-content/uploads/2018/10/francesca_bw_square_small.jpg" alt="Francesca_BW_square_small.jpg" width="289" height="289" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/10/francesca_bw_square_small.jpg?w=289&amp;h=289 289w, https://algocracy.wordpress.com/wp-content/uploads/2018/10/francesca_bw_square_small.jpg?w=578&amp;h=578 578w, https://algocracy.wordpress.com/wp-content/uploads/2018/10/francesca_bw_square_small.jpg?w=150&amp;h=150 150w, https://algocracy.wordpress.com/wp-content/uploads/2018/10/francesca_bw_square_small.jpg?w=300&amp;h=300 300w" sizes="(max-width: 289px) 100vw, 289px" /></p>
<p>In this episode I talk to Francesca Minerva. Francesca is a postdoctoral fellow at the University of Ghent. Her research focuses on applied philosophy, specifically lookism, conscientious objection, abortion, academic freedom, and cryonics. She has published many articles on these topics in some of the leading academic journals in ethics and philosophy, including the <em>Journal of Medical Ethics, Bioethics, Cambridge Quarterly Review of Ethics</em> and the <em>Hastings Centre Report</em>. We talk about life, death and the wisdom and ethics of cryonics.</p>
<p>You can download the episode <a href="https://ia801500.us.archive.org/10/items/FrancescaMinerva0410201815.59/FrancescaMinerva-0510201809.15.mp3">here</a> or listen below. You can also subscribe on<a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2"> iTunes</a> or <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the RSS feed <a href="http://feeds.feedburner.com/philosophicaldiscursions">is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/FrancescaMinerva0410201815.59" width="640" height="140" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes:</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:34 &#8211; What is cryonics anyway?</li>
<li>6:54 &#8211; The tricky logistics of cryonics: you need to die in the right way</li>
<li>10:30 &#8211; Is cryonics too weird/absurd to take seriously? Analogies with IVF and frozen embryos</li>
<li>16:04 &#8211; The opportunity cost of cryonics</li>
<li>18:18 &#8211; Is death bad? Why?</li>
<li>22:51 &#8211; Is life worth living at all? Is it better never to have been born?</li>
<li>24:44 &#8211; What happens when life is no longer worth living? The attraction of cryothanasia</li>
<li>30:28 &#8211; Should we want to live forever? Existential tiredness and existential boredom</li>
<li>37:20 &#8211; Is immortality irrelevant to the debate about cryonics?</li>
<li>41:42 &#8211; Even if cryonics is good for me might it be the unethical choice?</li>
<li>45:00 (ish) &#8211; Egalitarianism and the distribution of life years</li>
<li>49:39 &#8211; Would future generations want to revive us?</li>
<li>52:34 &#8211; Would we feel out of place in the distant future?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="http://www.francescaminerva.com">Francesca&#8217;s webpage</a></li>
<li><em><a href="https://www.palgrave.com/la/book/9783319785981">The Ethics of Cryonics: Is it immoral to be immortal?</a></em> by Francesca</li>
<li>&#8216;<a href="https://jetpress.org/v25.1/minerva.htm">Cryopreservation of Embryos and Fetuses as a Future Option for Family Planning Purposes</a>&#8216; by Francesca and Anders Sandberg</li>
<li>&#8216;<a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/bioe.12368">Euthanasia and Cryothanasia</a>&#8216; by Francesca and Anders Sandberg</li>
<li><a href="http://philosophicaldisquisitions.blogspot.com/2014/04/the-badness-of-death-and-meaning-of.html">&#8216;The Badness of Death and the Meaning of Life</a>&#8216; (Series) &#8211; pretty much everything I&#8217;ve ever written about the philosophy of life and death</li>
<li><a href="https://alcor.org">Alcor Life Extension Foundation</a></li>
<li><a href="http://www.cryonics.org">Cryonics Institute</a></li>
<li><em><a href="https://www.penguinrandomhouse.com/books/252017/to-be-a-machine-by-mark-oconnell/9781101911594/">To be a Machine</a></em> by Mark O&#8217;Connell</li>
</ul>
<p> </p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2018/10/05/episode-46-minerva-on-the-ethics-of-cryonics/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="109378167" type="audio/mpeg" url="http://ia801500.us.archive.org/10/items/FrancescaMinerva0410201815.59/FrancescaMinerva-0510201809.15.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2646</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Francesca Minerva. Francesca is a postdoctoral fellow at the University of Ghent. Her research focuses on applied philosophy, specifically lookism, conscientious objection, abortion, academic freedom, and cryonics. She has published many articles on these topics in some of the leading academic journals in ethics and philosophy, including the Journal … More Episode #46 – Minerva on the Ethics of Cryonics</itunes:summary>
<googleplay:description>In this episode I talk to Francesca Minerva. Francesca is a postdoctoral fellow at the University of Ghent. Her research focuses on applied philosophy, specifically lookism, conscientious objection, abortion, academic freedom, and cryonics. She has published many articles on these topics in some of the leading academic journals in ethics and philosophy, including the Journal … More Episode #46 – Minerva on the Ethics of Cryonics</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/10/francesca_bw_square_small.jpg">
			<media:title type="html">Francesca_BW_square_small.jpg</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Francesca Minerva. Francesca is a postdoctoral fellow at the University of Ghent. Her research focuses on applied philosophy, specifically lookism, conscientious objection, abortion, academic freedom, and cryonics. She has published many articles on these topics in some of the leading academic journals in ethics and philosophy, including the Journal &amp;#8230; More Episode #46 &amp;#8211; Minerva on the Ethics of&amp;#160;Cryonics</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #45 – Vallor on Virtue Ethics and Technology</title>
		<link>https://algocracy.wordpress.com/2018/09/18/episode-45-vallor-on-virtue-ethics-and-technology/</link>
					<comments>https://algocracy.wordpress.com/2018/09/18/episode-45-vallor-on-virtue-ethics-and-technology/#respond</comments>
		
		
		<pubDate>Tue, 18 Sep 2018 14:47:34 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2641</guid>

					<description><![CDATA[In this episode I talk to Shannon Vallor. Shannon is the Regis and Diane McKenna Professor in the Department of Philosophy at Santa Clara University, where her research addresses the ethical implications of emerging science and technology, especially AI, robotics and new media. Professor Vallor received the 2015 World Technology Award in Ethics from the World &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2018/09/18/episode-45-vallor-on-virtue-ethics-and-technology/">More <span class="screen-reader-text">Episode #45 &#8211; Vallor on Virtue Ethics and&#160;Technology</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2642" data-permalink="https://algocracy.wordpress.com/2018/09/18/episode-45-vallor-on-virtue-ethics-and-technology/1450560361-jpg/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/09/1450560361-jpg.png" data-orig-size="400,300" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="1450560361.jpg" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/09/1450560361-jpg.png?w=400" class="alignnone  wp-image-2642" src="https://algocracy.wordpress.com/wp-content/uploads/2018/09/1450560361-jpg.png" alt="1450560361.jpg.png" width="328" height="246" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/09/1450560361-jpg.png?w=328&amp;h=246 328w, https://algocracy.wordpress.com/wp-content/uploads/2018/09/1450560361-jpg.png?w=150&amp;h=113 150w, https://algocracy.wordpress.com/wp-content/uploads/2018/09/1450560361-jpg.png?w=300&amp;h=225 300w, https://algocracy.wordpress.com/wp-content/uploads/2018/09/1450560361-jpg.png 400w" sizes="(max-width: 328px) 100vw, 328px" /></p>
<p>In this episode I talk to Shannon Vallor. Shannon is the Regis and Diane McKenna Professor in the Department of Philosophy at Santa Clara University, where her research addresses the ethical implications of emerging science and technology, especially AI, robotics and new media. Professor Vallor received the 2015 World Technology Award in Ethics from the World Technology Network. She has served as President of the Society for Philosophy and Technology, sits on the Board of Directors of the Foundation for Responsible Robotics, and is a member of the IEEE Standards Association&#8217;s Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. We talk about the problem of techno-social opacity and the value of virtue ethics in an era of rapid technological change.</p>
<p>You can download the episode <a href="https://ia601509.us.archive.org/13/items/ShannonVallor1809201815.16/Shannon%20Vallor%20-%2018%3A09%3A2018%2C%2015.16.mp3">here</a> or listen below. You can also subscribe to the podcast on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">iTunes</a> or <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the <a href="http://feeds.feedburner.com/philosophicaldiscursions">RSS feed is here)</a>.</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/ShannonVallor1809201815.16" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:39 &#8211; How students encouraged Shannon to write <em>Technology and the Virtues</em></li>
<li>6:30 &#8211; The problem of acute techno-moral opacity</li>
<li>12:34 &#8211; Is this just the problem of morality in a time of accelerating change?</li>
<li>17:16 &#8211; Why can&#8217;t we use abstract moral principles to guide us in a time of rapid technological change? What&#8217;s wrong with utilitarianism or Kantianism?</li>
<li>23:40 &#8211; Making the case for technologically-sensitive virtue ethics</li>
<li>27:27 &#8211; The analogy with education: teaching critical thinking skills vs providing students with information</li>
<li>31:19 &#8211; Aren&#8217;t most virtue ethical traditions too antiquated? Aren&#8217;t they rooted in outdated historical contexts?</li>
<li>37:54 &#8211; Doesn&#8217;t virtue ethics assume a relatively fixed human nature? What if human nature is one of the things that is changed by technology?</li>
<li>42:34 &#8211; Case study on Social Media: Defending Mark Zuckerberg</li>
<li>46:54 &#8211; The Dark Side of Social Media</li>
<li>52:48 &#8211; Are we trapped in an immoral equilibrium? How can we escape?</li>
<li>57:17 &#8211; What would the virtuous person do right now? Would he/she delete Facebook?</li>
<li>1:00:23 &#8211; Can we use technology to solve problems created by technology? Will this help to cultivate the virtues?</li>
<li>1:05:00 &#8211; The virtue of self-regard and the problem of narcissism in a digital age</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://www.shannonvallor.net/#">Shannon&#8217;s Homepage</a></li>
<li><a href="https://www.scu.edu/cas/philosophy/faculty-and-staff/shannon-vallor/">Shannon&#8217;s profile at Santa Clara University</a></li>
<li>Shannon&#8217;s Twitter profile</li>
<li><a href="https://global.oup.com/academic/product/technology-and-the-virtues-9780190905286?lang=en&amp;cc=ie"><em>Technology and the Virtues </em>(Now in Paperback!)</a> &#8211; by Shannon</li>
<li>&#8216;<a href="https://link.springer.com/article/10.1007/s10676-009-9202-1">Social Networking Technology and the Virtues</a>&#8216; by Shannon</li>
<li>&#8216;<a href="https://link.springer.com/article/10.1007/s13347-014-0156-9">Moral Deskilling and Upskilling in a New Machine Age</a>&#8216; by Shannon</li>
<li>&#8216;<a href="http://philosophicaldisquisitions.blogspot.com/2017/09/the-moral-problem-of-accelerating-change.html">The Moral Problem of Accelerating Change&#8217; </a>by John Danaher</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2018/09/18/episode-45-vallor-on-virtue-ethics-and-technology/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="101757515" type="audio/mpeg" url="http://ia601509.us.archive.org/13/items/ShannonVallor1809201815.16/Shannon%20Vallor%20-%2018%3A09%3A2018%2C%2015.16.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2641</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Shannon Vallor. Shannon is the Regis and Diane McKenna Professor in the Department of Philosophy at Santa Clara University, where her research addresses the ethical implications of emerging science and technology, especially AI, robotics and new media. Professor Vallor received the 2015 World Technology Award in Ethics from the World … More Episode #45 – Vallor on Virtue Ethics and Technology</itunes:summary>
<googleplay:description>In this episode I talk to Shannon Vallor. Shannon is the Regis and Diane McKenna Professor in the Department of Philosophy at Santa Clara University, where her research addresses the ethical implications of emerging science and technology, especially AI, robotics and new media. Professor Vallor received the 2015 World Technology Award in Ethics from the World … More Episode #45 – Vallor on Virtue Ethics and Technology</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/09/1450560361-jpg.png">
			<media:title type="html">1450560361.jpg.png</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Shannon Vallor. Shannon is the Regis and Diane McKenna Professor in the Department of Philosophy at Santa Clara University, where her research addresses the ethical implications of emerging science and technology, especially AI, robotics and new media. Professor Vallor received the 2015 World Technology Award in Ethics from the World &amp;#8230; More Episode #45 &amp;#8211; Vallor on Virtue Ethics and&amp;#160;Technology</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #44 – Fleischman on Evolutionary Psychology and Sex Robots</title>
		<link>https://algocracy.wordpress.com/2018/08/29/episode-44-fleischman-on-evolutionary-psychology-and-sex-robots/</link>
					<comments>https://algocracy.wordpress.com/2018/08/29/episode-44-fleischman-on-evolutionary-psychology-and-sex-robots/#respond</comments>
		
		
		<pubDate>Wed, 29 Aug 2018 14:50:16 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2634</guid>

					<description><![CDATA[In this episode I chat to Diana Fleischman. Diana is a senior lecturer in evolutionary psychology at the University of Portsmouth. Her research focuses on hormonal influences on behavior, human sexuality, disgust and, recently, the interface of evolutionary psychology and behaviorism. She is a utilitarian, a promoter of effective altruism, and a bivalvegan. We have &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2018/08/29/episode-44-fleischman-on-evolutionary-psychology-and-sex-robots/">More <span class="screen-reader-text">Episode #44 &#8211; Fleischman on Evolutionary Psychology and Sex&#160;Robots</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2636" data-permalink="https://algocracy.wordpress.com/2018/08/29/episode-44-fleischman-on-evolutionary-psychology-and-sex-robots/cb1e47_ca5fd41d8dc64eae8a81d0de7b108e77mv2-2/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/08/cb1e47_ca5fd41d8dc64eae8a81d0de7b108e77mv21.jpg" data-orig-size="275,228" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="cb1e47_ca5fd41d8dc64eae8a81d0de7b108e77~mv2" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/08/cb1e47_ca5fd41d8dc64eae8a81d0de7b108e77mv21.jpg?w=275" class="alignnone size-full wp-image-2636" src="https://algocracy.wordpress.com/wp-content/uploads/2018/08/cb1e47_ca5fd41d8dc64eae8a81d0de7b108e77mv21.jpg" alt="cb1e47_ca5fd41d8dc64eae8a81d0de7b108e77~mv2.jpg" width="275" height="228" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/08/cb1e47_ca5fd41d8dc64eae8a81d0de7b108e77mv21.jpg 275w, https://algocracy.wordpress.com/wp-content/uploads/2018/08/cb1e47_ca5fd41d8dc64eae8a81d0de7b108e77mv21.jpg?w=150&amp;h=124 150w" sizes="(max-width: 275px) 100vw, 275px" /></p>
<p>In this episode I chat to Diana Fleischman. Diana is a senior lecturer in evolutionary psychology at the University of Portsmouth. Her research focuses on hormonal influences on behavior, human sexuality, disgust and, recently, the interface of evolutionary psychology and behaviorism. She is a utilitarian, a promoter of effective altruism, and a bivalvegan. We have a long and detailed chat about the evolved psychology of sex and how it may affect the social acceptance and use of sex robots. Along the way we talk about Mills and Boon novels, the connection between sexual stimulation and the brain, and other, no doubt controversial, topics.</p>
<p>You can download the episode <a href="https://ia801502.us.archive.org/18/items/DianaFleischmann2908201814.38/Diana%20Fleischmann%20-%2029%3A08%3A2018%2C%2014.38.mp3">here</a> or listen below. You can also subscribe on<a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2"> iTunes</a> or <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the RSS feed <a href="http://feeds.feedburner.com/philosophicaldiscursions">is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/DianaFleischmann2908201814.38" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:42 &#8211; Evolutionary Psychology and the Investment Theory of Sex</li>
<li>5:54 &#8211; What&#8217;s the evidence for the investment theory in humans?</li>
<li>8:40 &#8211; Does the evidence for the theory hold up?</li>
<li>11:45 &#8211; Studies on the willingness to engage in casual sex: do men and women really differ?</li>
<li>18:33 &#8211; The ecological validity of these studies</li>
<li>20:20 &#8211; Evolutionary psychology and the replication crisis</li>
<li>23:29 &#8211; Are there better alternative explanations for sex differences?</li>
<li>26:25 &#8211; Ethical criticisms of evolutionary psychology</li>
<li>28:14 &#8211; Sex robots and evolutionary psychology</li>
<li>29:33 &#8211; Argument 1: The rising costs of courtship will drive men into the arms of sexbots</li>
<li>34:12 &#8211; Not all men&#8230;</li>
<li>39:08 &#8211; Couldn&#8217;t something similar be true for women?</li>
<li>46:00 &#8211; Aren&#8217;t the costs of courtship much higher for women?</li>
<li>48:27 &#8211; Argument 2: Sex robots could be used as treatment for dangerous men</li>
<li>51:50 &#8211; Would this stigmatise other sexbot users?</li>
<li>53:31 &#8211; Would this embolden rather than satiate?</li>
<li>55:53 &#8211; Could the logic of this argument be flipped, e.g. the Futurama argument?</li>
<li>58:05 &#8211; Isn&#8217;t this an ethically sub-optimal solution to the problem?</li>
<li>1:00:42 &#8211; Argument 3: This will also impact on women&#8217;s sexual behaviour</li>
<li>1:07:01 &#8211; Do ethical objectors to sex robots underestimate the constraints of our evolved psychology?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="http://www.dianafleischman.com">Diana&#8217;s personal webpage</a></li>
<li><a href="https://twitter.com/sentientist">Diana on Twitter</a></li>
<li><a href="http://www2.port.ac.uk/department-of-psychology/staff/dr-diana-fleischman-.html">Diana&#8217;s academic homepage</a></li>
<li>&#8216;<a href="https://jacobitemag.com/2018/04/24/uncanny-vulvas/">Uncanny Vulvas&#8217; </a>in <em>Jacobite Magazine</em> &#8211; this is the basis for much of our discussion in the podcast</li>
<li>&#8216;<a href="http://docs.wixstatic.com/ugd/cb1e47_a58884cba546426b84d339bc84a71168.pdf">Disgust Trumps Lust: Women’s Disgust and Attraction Towards Men Is Unaffected by Sexual Arousal</a>&#8216; by Zsok, Fleischman, Borg and Morrison</li>
<li><em><a href="https://www.amazon.com/Beyond-Human-Nature-Culture-Experience/dp/0393347893">Beyond Human Nature</a> </em>by Jesse Prinz</li>
<li>&#8216;<a href="https://www.psychologytoday.com/us/blog/sexual-personalities/201706/which-people-would-agree-have-sex-stranger">Which people would agree to have sex with a stranger?</a>&#8216; by David Schmitt</li>
<li>&#8216;<a href="https://jetpress.org/v24/danaher.htm">Sex Work, Technological Unemployment and the Basic Income Guarantee&#8217;</a> by John Danaher</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2018/08/29/episode-44-fleischman-on-evolutionary-psychology-and-sex-robots/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="103541155" type="audio/mpeg" url="http://ia801502.us.archive.org/18/items/DianaFleischmann2908201814.38/Diana%20Fleischmann%20-%2029%3A08%3A2018%2C%2014.38.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2634</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I chat to Diana Fleischman. Diana is a senior lecturer in evolutionary psychology at the University of Portsmouth. Her research focuses on hormonal influences on behavior, human sexuality, disgust and, recently, the interface of evolutionary psychology and behaviorism. She is a utilitarian, a promoter of effective altruism, and a bivalvegan. We have … More Episode #44 – Fleischman on Evolutionary Psychology and Sex Robots</itunes:summary>
<googleplay:description>In this episode I chat to Diana Fleischman. Diana is a senior lecturer in evolutionary psychology at the University of Portsmouth. Her research focuses on hormonal influences on behavior, human sexuality, disgust and, recently, the interface of evolutionary psychology and behaviorism. She is a utilitarian, a promoter of effective altruism, and a bivalvegan. We have … More Episode #44 – Fleischman on Evolutionary Psychology and Sex Robots</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/08/cb1e47_ca5fd41d8dc64eae8a81d0de7b108e77mv21.jpg">
			<media:title type="html">cb1e47_ca5fd41d8dc64eae8a81d0de7b108e77~mv2.jpg</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I chat to Diana Fleischman. Diana is a senior lecturer in evolutionary psychology at the University of Portsmouth. Her research focuses on hormonal influences on behavior, human sexuality, disgust and, recently, the interface of evolutionary psychology and behaviorism. She is a utilitarian, a promoter of effective altruism, and a bivalvegan. We have &amp;#8230; More Episode #44 &amp;#8211; Fleischman on Evolutionary Psychology and Sex&amp;#160;Robots</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #43 – Elder on Friendship, Robots and Social Media</title>
		<link>https://algocracy.wordpress.com/2018/08/08/episode-43-elder-on-friendship-robots-and-social-media/</link>
					<comments>https://algocracy.wordpress.com/2018/08/08/episode-43-elder-on-friendship-robots-and-social-media/#respond</comments>
		
		
		<pubDate>Wed, 08 Aug 2018 06:00:14 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2630</guid>

					<description><![CDATA[In this episode I talk to Alexis Elder. Alexis is an Assistant Professor of Philosophy at the University of Minnesota Duluth. Her research focuses on ethics, emerging technologies, social philosophy, metaphysics (especially social ontology), and philosophy of mind. She draws on ancient philosophy &#8211; primarily Chinese and Greek &#8211; in order to think about current problems. &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2018/08/08/episode-43-elder-on-friendship-robots-and-social-media/">More <span class="screen-reader-text">Episode #43 &#8211; Elder on Friendship, Robots and Social&#160;Media</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2631" data-permalink="https://algocracy.wordpress.com/2018/08/08/episode-43-elder-on-friendship-robots-and-social-media/alexiselder01222018-2-web/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/08/alexiselder01222018-2-web.jpg" data-orig-size="213,319" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="AlexisElder01222018-2 web" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/08/alexiselder01222018-2-web.jpg?w=213" class="alignnone size-full wp-image-2631" src="https://algocracy.wordpress.com/wp-content/uploads/2018/08/alexiselder01222018-2-web.jpg" alt="AlexisElder01222018-2 web.jpg" width="213" height="319" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/08/alexiselder01222018-2-web.jpg 213w, https://algocracy.wordpress.com/wp-content/uploads/2018/08/alexiselder01222018-2-web.jpg?w=100&amp;h=150 100w" sizes="(max-width: 213px) 100vw, 213px" /></p>
<p>In this episode I talk to Alexis Elder. Alexis is an Assistant Professor of Philosophy at the University of Minnesota Duluth. Her research focuses on ethics, emerging technologies, social philosophy, metaphysics (especially social ontology), and philosophy of mind. She draws on ancient philosophy &#8211; primarily Chinese and Greek &#8211; in order to think about current problems. She is the author of a number of articles on the philosophy of friendship, and her book <em><a href="https://www.routledge.com/Friendship-Robots-and-Social-Media-False-Friends-and-Second-Selves/Elder/p/book/9781138065666">Friendship, Robots, and Social Media: False Friends and Second Selves</a></em>, came out in January 2018. We talk about all things to do with friendship, social media and social robots.</p>
<p>You can download the episode <a href="https://ia801504.us.archive.org/32/items/AlexisElder0508201814.34/Alexis%20Elder%20-%2005%3A08%3A2018%2C%2014.34.mp3">here</a> or listen below. You can also subscribe <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">on iTunes</a> or <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the <a href="http://feeds.feedburner.com/philosophicaldiscursions">RSS feed is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/AlexisElder0508201814.34" width="640" height="140" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:37 &#8211; Aristotle&#8217;s theory of friendship</li>
<li>5:00 &#8211; The idea of virtue/character friendship</li>
<li>10:14 &#8211; The enduring appeal of Aristotle&#8217;s account of friendship</li>
<li>12:30 &#8211; Does social media corrode friendship?</li>
<li>16:35 &#8211; The Publicity Objection to online friendships</li>
<li>20:40 &#8211; The Superficiality Objection to online friendships</li>
<li>25:23 &#8211; The Commercialisation/Contamination Objection to online friendships</li>
<li>30:34 &#8211; Deception in online friendships</li>
<li>35:18 &#8211; Must we physically interact with our friends?</li>
<li>39:25 &#8211; Social robots as friends (with a specific focus on elderly populations and those on the autism spectrum)</li>
<li>46:50 &#8211; Can you be friends with a robot? The counterfeit currency analogy</li>
<li>50:55 &#8211; Does the analogy hold up?</li>
<li>56:13 &#8211; Why are robotic friends assumed to be fake?</li>
<li>1:03:50 &#8211; Does the &#8216;falseness&#8217; of robotic friends depend on the type of friendship we are interested in?</li>
<li>1:06:38 &#8211; What about companion animals?</li>
<li>1:08:35 &#8211; Where is this debate going?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://sites.google.com/site/alexiselder/">Alexis Elder&#8217;s webpage</a></li>
<li>&#8216;<a href="https://link.springer.com/article/10.1007/s10676-014-9354-5">Excellent Online Friendships: An Aristotelian Defence of Social Media</a>&#8216; by Alexis</li>
<li>&#8216;<a href="https://dl.acm.org/citation.cfm?id=2874274">False Friends and False Coinage: a tool for navigating the ethics of sociable robots</a>&#8221; by Alexis</li>
<li><a href="https://www.routledge.com/Friendship-Robots-and-Social-Media-False-Friends-and-Second-Selves/Elder/p/book/9781138065666"><em>Friendship, Robots and Social Media</em> </a>by Alexis</li>
<li>&#8216;<a href="http://philosophicaldisquisitions.blogspot.com/2017/02/can-you-be-friends-with-robot.html">Can you be friends with a robot? Aristotelian Friendship and Robotics</a>&#8216; by John Danaher</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2018/08/08/episode-43-elder-on-friendship-robots-and-social-media/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="100830272" type="audio/mpeg" url="http://ia801504.us.archive.org/32/items/AlexisElder0508201814.34/Alexis%20Elder%20-%2005%3A08%3A2018%2C%2014.34.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2630</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Alexis Elder. Alexis is an Assistant Professor of Philosophy at the University of Minnesota Duluth. Her research focuses on ethics, emerging technologies, social philosophy, metaphysics (especially social ontology), and philosophy of mind. She draws on ancient philosophy – primarily Chinese and Greek – in order to think about current problems. … More Episode #43 – Elder on Friendship, Robots and Social Media</itunes:summary>
<googleplay:description>In this episode I talk to Alexis Elder. Alexis is an Assistant Professor of Philosophy at the University of Minnesota Duluth. Her research focuses on ethics, emerging technologies, social philosophy, metaphysics (especially social ontology), and philosophy of mind. She draws on ancient philosophy – primarily Chinese and Greek – in order to think about current problems. … More Episode #43 – Elder on Friendship, Robots and Social Media</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/08/alexiselder01222018-2-web.jpg">
			<media:title type="html">AlexisElder01222018-2 web.jpg</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Alexis Elder. Alexis is an Assistant Professor of Philosophy at the University of Minnesota Duluth. Her research focuses on ethics, emerging technologies, social philosophy, metaphysics (especially social ontology), and philosophy of mind. She draws on ancient philosophy &amp;#8211; primarily Chinese and Greek &amp;#8211; in order to think about current problems. &amp;#8230; More Episode #43 &amp;#8211; Elder on Friendship, Robots and Social&amp;#160;Media</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #42 – Earp on Psychedelics and Moral Enhancement</title>
		<link>https://algocracy.wordpress.com/2018/07/25/episode-42-earp-on-psychedelics-and-moral-enhancement/</link>
					<comments>https://algocracy.wordpress.com/2018/07/25/episode-42-earp-on-psychedelics-and-moral-enhancement/#respond</comments>
		
		
		<pubDate>Wed, 25 Jul 2018 19:05:49 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2625</guid>

					<description><![CDATA[  In this episode I talk to Brian Earp. Brian is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center, and a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. Brian has diverse research interests in ethics, psychology, and the &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2018/07/25/episode-42-earp-on-psychedelics-and-moral-enhancement/">More <span class="screen-reader-text">Episode #42 &#8211; Earp on Psychedelics and Moral&#160;Enhancement</span></a>]]></description>
										<content:encoded><![CDATA[
<p><img loading="lazy" data-attachment-id="2626" data-permalink="https://algocracy.wordpress.com/2018/07/25/episode-42-earp-on-psychedelics-and-moral-enhancement/brian-earp/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg" data-orig-size="1142,853" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;2.8&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;Canon EOS 5D Mark III&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1421458316&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;70&quot;,&quot;iso&quot;:&quot;800&quot;,&quot;shutter_speed&quot;:&quot;0.004&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="Brian Earp" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg?w=748" class="alignnone  wp-image-2626" src="https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg" alt="Brian Earp.jpg" width="404" height="302" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg?w=404&amp;h=302 404w, https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg?w=808&amp;h=604 808w, https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg?w=150&amp;h=112 150w, https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg?w=300&amp;h=224 300w, https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg?w=768&amp;h=574 768w" sizes="(max-width: 404px) 100vw, 404px" /></p>
<p>In this episode I talk to Brian Earp. Brian is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center, and a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. Brian has diverse research interests in ethics, psychology, and the philosophy of science. His research has been covered in Nature, Popular Science, The Chronicle of Higher Education, The Atlantic, New Scientist, and other major outlets. We talk about moral enhancement and the potential use of psychedelics as a form of moral enhancement.</p>
<p>You can download the episode <a href="https://ia601502.us.archive.org/18/items/BrianEarp2507201818.03/Brian%20Earp%20-%2025%3A07%3A2018%2C%2018.03.mp3">here</a> or listen below. You can also subscribe to the podcast on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">iTunes</a> and <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/BrianEarp2507201818.03" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:53 &#8211; Why psychedelics and moral enhancement?</li>
<li>5:07 &#8211; What is moral enhancement anyway? Why are people excited about it?</li>
<li>7:12 &#8211; What are the methods of moral enhancement?</li>
<li>10:18 &#8211; Why is Brian sceptical about the possibility of moral enhancement?</li>
<li>14:16 &#8211; So is it an empty idea?</li>
<li>17:58 &#8211; What if we adopt an &#8216;extended&#8217; concept of enhancement, i.e. beyond the biomedical?</li>
<li>26:12 &#8211; Can we use psychedelics to overcome the dilemma facing the proponent of moral enhancement?</li>
<li>29:07 &#8211; What are psychedelic drugs? How do they work on the brain?</li>
<li>34:26 &#8211; Are your experiences whilst on psychedelic drugs conditional on your cultural background?</li>
<li>37:39 &#8211; Dissolving the ego and the feeling of oneness</li>
<li>41:36 &#8211; Are psychedelics the new productivity hack?</li>
<li>43:48 &#8211; How can psychedelics enhance moral behaviour?</li>
<li>47:36 &#8211; How can a moral philosopher make sense of these effects?</li>
<li>51:12 &#8211; The MDMA case study</li>
<li>58:38 &#8211; How about MDMA assisted political negotiations?</li>
<li>1:02:11 &#8211; Could we achieve the same outcomes without drugs?</li>
<li>1:06:52 &#8211; Where should the research go from here?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://oxford.academia.edu/BrianDEarp">Brian&#8217;s academia.edu page</a></li>
<li><a href="https://www.researchgate.net/profile/Brian_Earp">Brian&#8217;s researchgate page</a></li>
<li><a href="https://www.youtube.com/watch?v=E4rMtaSoLPQ">Brian as Rob Walker</a> (and his <a href="https://www.youtube.com/watch?v=GrB1TgSYP5c">theatre reel</a>)</li>
<li>&#8216;<a href="https://www.academia.edu/33771413/Psychedelic_moral_enhancement">Psychedelic moral enhancement</a>&#8216; by Brian Earp</li>
<li>&#8216;<a href="https://www.academia.edu/27484573/Moral_neuroenhancement">Moral Neuroenhancement</a>&#8216; by Earp, Douglas and Savulescu</li>
<li><a href="https://www.amazon.com/Change-Your-Mind-Consciousness-Transcendence/dp/1594204225"><em>How to Change Your</em> <em>Mind </em></a>by Michael Pollan</li>
<li><a href="http://philosophy247.org/podcasts/psychedelic/">Interview with Ole Martin Moen on the ethics of psychedelics</a></li>
<li><em><a href="https://www.maps.org/images/pdf/books/HuxleyA1954TheDoorsOfPerception.pdf">The Doors of Perception</a></em> by Aldous Huxley</li>
<li><a href="https://www.hopkinsmedicine.org/research/labs/roland-griffiths-laboratory">Roland Griffiths Laboratory </a>at Johns Hopkins</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2018/07/25/episode-42-earp-on-psychedelics-and-moral-enhancement/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="101579464" type="audio/mpeg" url="http://ia601502.us.archive.org/18/items/BrianEarp2507201818.03/Brian%20Earp%20-%2025%3A07%3A2018%2C%2018.03.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2625</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>  In this episode I talk to Brian Earp. Brian is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center, and a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. Brian has diverse research interests in ethics, psychology, and the … More Episode #42 – Earp on Psychedelics and Moral Enhancement</itunes:summary>
<googleplay:description>  In this episode I talk to Brian Earp. Brian is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center, and a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. Brian has diverse research interests in ethics, psychology, and the … More Episode #42 – Earp on Psychedelics and Moral Enhancement</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/07/brian-earp.jpg">
			<media:title type="html">Brian Earp.jpg</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>  In this episode I talk to Brian Earp. Brian is Associate Director of the Yale-Hastings Program in Ethics and Health Policy at Yale University and The Hastings Center, and a Research Fellow in the Uehiro Centre for Practical Ethics at the University of Oxford. Brian has diverse research interests in ethics, psychology, and the &amp;#8230; More Episode #42 &amp;#8211; Earp on Psychedelics and Moral&amp;#160;Enhancement</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #41 – Binns on Fairness in Algorithmic Decision-Making</title>
		<link>https://algocracy.wordpress.com/2018/07/12/episode-41-binns-on-fairness-in-algorithmic-decision-making/</link>
					<comments>https://algocracy.wordpress.com/2018/07/12/episode-41-binns-on-fairness-in-algorithmic-decision-making/#respond</comments>
		
		
		<pubDate>Thu, 12 Jul 2018 21:24:53 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2622</guid>

					<description><![CDATA[In this episode I talk to Reuben Binns. Reuben is a post-doctoral researcher at the Department of Computer Science in Oxford University. His research focuses on both the technical, ethical and legal aspects of privacy, machine learning and algorithmic decision-making. We have a detailed and informative discussion (for me at any rate!) about recent debates &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2018/07/12/episode-41-binns-on-fairness-in-algorithmic-decision-making/">More <span class="screen-reader-text">Episode #41 &#8211; Binns on Fairness in Algorithmic&#160;Decision-Making</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2623" data-permalink="https://algocracy.wordpress.com/2018/07/12/episode-41-binns-on-fairness-in-algorithmic-decision-making/reuben-binns/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/07/reuben-binns.jpg" data-orig-size="329,361" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Reuben Binns" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/07/reuben-binns.jpg?w=329" class="  wp-image-2623 alignleft" src="https://algocracy.wordpress.com/wp-content/uploads/2018/07/reuben-binns.jpg" alt="Reuben Binns.jpg" width="217" height="238" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/07/reuben-binns.jpg?w=217&amp;h=238 217w, https://algocracy.wordpress.com/wp-content/uploads/2018/07/reuben-binns.jpg?w=137&amp;h=150 137w, https://algocracy.wordpress.com/wp-content/uploads/2018/07/reuben-binns.jpg?w=273&amp;h=300 273w, https://algocracy.wordpress.com/wp-content/uploads/2018/07/reuben-binns.jpg 329w" sizes="(max-width: 217px) 100vw, 217px" />In this episode I talk to Reuben Binns. Reuben is a post-doctoral researcher at the Department of Computer Science at Oxford University. His research focuses on the technical, ethical, and legal aspects of privacy, machine learning, and algorithmic decision-making. We have a detailed and informative discussion (for me at any rate!) about recent debates over algorithmic bias and discrimination, and how they could be informed by the philosophy of egalitarianism.</p>
<p> </p>
<p>You can download the episode <a href="https://ia801504.us.archive.org/31/items/ReubenBinnsV1/Reuben%20Binns%20V1.mp3">here</a> or listen below. You can also subscribe on <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">iTunes</a> (the <a href="http://feeds.feedburner.com/philosophicaldiscursions">RSS feed is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/ReubenBinnsV1" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:46 &#8211; What is algorithmic decision-making?</li>
<li>4:20 &#8211; Isn&#8217;t all decision-making algorithmic?</li>
<li>6:10 &#8211; Examples of unfairness in algorithmic decision-making: The COMPAS debate</li>
<li>12:02 &#8211; Limitations of the COMPAS debate</li>
<li>15:22 &#8211; Other examples of unfairness in algorithmic decision-making</li>
<li>17:00 &#8211; What is discrimination in decision-making?</li>
<li>19:45 &#8211; The mental state theory of discrimination</li>
<li>25:20 &#8211; Statistical discrimination and the problem of generalisation</li>
<li>29:10 &#8211; Defending algorithmic decision-making from the charge of statistical discrimination</li>
<li>34:40 &#8211; Algorithmic typecasting: Could we all end up like William Shatner?</li>
<li>39:02 &#8211; Egalitarianism and algorithmic decision-making</li>
<li>43:07 &#8211; The role that luck and desert play in our understanding of fairness</li>
<li>49:38 &#8211; Deontic justice and historical discrimination in algorithmic decision-making</li>
<li>53:36 &#8211; Fair distribution vs Fair recognition</li>
<li>59:03 &#8211; Should we be enthusiastic about the fairness of future algorithmic decision-making?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="http://www.reubenbinns.com/">Reuben&#8217;s homepage</a></li>
<li><a href="http://www.cs.ox.ac.uk/people/reuben.binns/">Reuben&#8217;s institutional page </a></li>
<li>&#8216;<a href="https://arxiv.org/abs/1712.03586">Fairness in Machine Learning: Lessons from Political Philosophy</a>&#8216; by Reuben Binns</li>
<li>&#8216;<a href="https://link.springer.com/article/10.1007%2Fs13347-017-0263-5">Algorithmic Accountability and Public Reason</a>&#8216; by Reuben Binns</li>
<li>&#8216;<a href="https://arxiv.org/pdf/1801.10408.pdf">It&#8217;s Reducing a Human Being to a Percentage: Perceptions of Justice in Algorithmic Decision-Making</a>&#8216; by Binns et al</li>
<li>&#8216;<a href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">Machine Bias</a>&#8216; &#8211; the ProPublica story on unfairness in the COMPAS recidivism algorithm</li>
<li>&#8216;<a href="https://arxiv.org/abs/1609.05807">Inherent Tradeoffs in the Fair Determination of Risk Scores</a>&#8216; by Kleinberg et al &#8212; an impossibility proof showing that you cannot minimise false positive rates and equalise accuracy rates across two populations at the same time (except in the rare case that the base rate for both populations is the same)</li>
</ul>
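<p>A back-of-the-envelope illustration of the Kleinberg et al. tradeoff (my own sketch with made-up numbers, not figures from the paper or the episode): if a classifier flags the same fraction of two groups and is equally precise among the flagged in both (one reading of calibration), then groups with different base rates must end up with different false positive rates.</p>

```python
# Hypothetical numbers illustrating the calibration vs. false-positive-rate
# tension described in Kleinberg et al. (2016). Not data from the paper.

def false_positive_rate(base_rate, flag_rate, ppv):
    """False positive rate implied by a classifier's flag rate and precision.

    base_rate: fraction of the group that is truly positive (e.g. reoffends)
    flag_rate: fraction of the group the classifier flags as high-risk
    ppv:       fraction of flagged individuals who are truly positive
               (equal PPV across groups is one reading of 'calibrated')
    """
    false_positives = flag_rate * (1.0 - ppv)  # flagged but truly negative
    negatives = 1.0 - base_rate                # all truly negative individuals
    return false_positives / negatives

# Same flag rate, same precision (calibration), different base rates:
fpr_a = false_positive_rate(base_rate=0.5, flag_rate=0.4, ppv=0.7)  # ~0.24
fpr_b = false_positive_rate(base_rate=0.3, flag_rate=0.4, ppv=0.7)  # ~0.17

print(fpr_a, fpr_b)  # unequal whenever the base rates differ
```

<p>With the flag rate and precision held fixed, the number of false positives is the same in both groups, but the pool of truly negative people it is divided by is not, so the rates can only coincide when the base rates do.</p>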
<p> </p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2018/07/12/episode-41-binns-on-fairness-in-algorithmic-decision-making/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="91842478" type="audio/mpeg" url="http://ia801504.us.archive.org/31/items/ReubenBinnsV1/Reuben%20Binns%20V1.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2622</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Reuben Binns. Reuben is a post-doctoral researcher at the Department of Computer Science in Oxford University. His research focuses on both the technical, ethical and legal aspects of privacy, machine learning and algorithmic decision-making. We have a detailed and informative discussion (for me at any rate!) about recent debates … More Episode #41 – Binns on Fairness in Algorithmic Decision-Making</itunes:summary>
<googleplay:description>In this episode I talk to Reuben Binns. Reuben is a post-doctoral researcher at the Department of Computer Science in Oxford University. His research focuses on both the technical, ethical and legal aspects of privacy, machine learning and algorithmic decision-making. We have a detailed and informative discussion (for me at any rate!) about recent debates … More Episode #41 – Binns on Fairness in Algorithmic Decision-Making</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/07/reuben-binns.jpg">
			<media:title type="html">Reuben Binns.jpg</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Reuben Binns. Reuben is a post-doctoral researcher at the Department of Computer Science in Oxford University. His research focuses on both the technical, ethical and legal aspects of privacy, machine learning and algorithmic decision-making. We have a detailed and informative discussion (for me at any rate!) about recent debates &amp;#8230; More Episode #41 &amp;#8211; Binns on Fairness in Algorithmic&amp;#160;Decision-Making</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #40 – Nyholm on Accident Algorithms and the Ethics of Self-Driving Cars</title>
		<link>https://algocracy.wordpress.com/2018/06/29/episode-40-nyholm-on-accident-algorithms-and-the-ethics-of-self-driving-cars/</link>
					<comments>https://algocracy.wordpress.com/2018/06/29/episode-40-nyholm-on-accident-algorithms-and-the-ethics-of-self-driving-cars/#respond</comments>
		
		
		<pubDate>Fri, 29 Jun 2018 17:06:59 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2619</guid>

					<description><![CDATA[In this episode I talk to Sven Nyholm about self-driving cars. Sven is an Assistant Professor of Philosophy at TU Eindhoven with an interest in moral philosophy and the ethics of technology. Recently, Sven has been working on the ethics of self-driving cars, focusing in particular on the ethical rules such cars should follow and &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2018/06/29/episode-40-nyholm-on-accident-algorithms-and-the-ethics-of-self-driving-cars/">More <span class="screen-reader-text">Episode #40 &#8211; Nyholm on Accident Algorithms and the Ethics of Self-Driving&#160;Cars</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2620" data-permalink="https://algocracy.wordpress.com/2018/06/29/episode-40-nyholm-on-accident-algorithms-and-the-ethics-of-self-driving-cars/sven-nyholm/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/06/sven-nyholm.jpg" data-orig-size="1000,1500" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;Angeline_Swinkels&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;1485871704&quot;,&quot;copyright&quot;:&quot;Angeline Swinkels | fotograaf&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Sven-Nyholm" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/06/sven-nyholm.jpg?w=683" class="alignnone  wp-image-2620" src="https://algocracy.wordpress.com/wp-content/uploads/2018/06/sven-nyholm.jpg" alt="Sven-Nyholm.jpg" width="232" height="348" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/06/sven-nyholm.jpg?w=232&amp;h=348 232w, https://algocracy.wordpress.com/wp-content/uploads/2018/06/sven-nyholm.jpg?w=464&amp;h=696 464w, https://algocracy.wordpress.com/wp-content/uploads/2018/06/sven-nyholm.jpg?w=100&amp;h=150 100w, https://algocracy.wordpress.com/wp-content/uploads/2018/06/sven-nyholm.jpg?w=200&amp;h=300 200w" sizes="(max-width: 232px) 100vw, 232px" /></p>
<p>In this episode I talk to Sven Nyholm about self-driving cars. Sven is an Assistant Professor of Philosophy at TU Eindhoven with an interest in moral philosophy and the ethics of technology. Recently, Sven has been working on the ethics of self-driving cars, focusing in particular on the ethical rules such cars should follow and who should be held responsible for them if something goes wrong. We chat about these issues and more.</p>
<p>You can download the podcast <a href="https://ia601505.us.archive.org/21/items/SvenInterview22906201817.21/Sven%20interview%202%20-%2029%3A06%3A2018%2C%2017.21.mp3">here</a> or listen below. You can also subscribe <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">on iTunes</a> and <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the <a href="http://feeds.feedburner.com/philosophicaldiscursions">RSS feed is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/SvenInterview22906201817.21" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<p>&nbsp;</p>
<h3>Show Notes:</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:22 &#8211; What is a self-driving car?</li>
<li>3:00 &#8211; Fatal crashes involving self-driving cars</li>
<li>5:10 &#8211; Could self-driving cars ever be completely safe?</li>
<li>8:14 &#8211; Limitations of the Trolley Problem</li>
<li>11:22 &#8211; What kinds of accident scenarios do we need to plan for?</li>
<li>17:18 &#8211; Who should decide which ethical rules a self-driving car follows?</li>
<li>23:47 &#8211; Why not randomise the ethical rules?</li>
<li>25:18 &#8211; Experimental findings on people&#8217;s preferences with self-driving cars</li>
<li>29:16 &#8211; Is this just another typical applied ethical debate?</li>
<li>31:27 &#8211; What would a utilitarian self-driving car do?</li>
<li>36:30 &#8211; What would a Kantian self-driving car do?</li>
<li>39:33 &#8211; A contractualist approach to the ethics of self-driving cars</li>
<li>43:54 &#8211; The responsibility gap problem</li>
<li>46:12 &#8211; Scepticism of the responsibility gap: can self-driving cars be agents?</li>
<li>53:17 &#8211; A collaborative agency approach to self-driving cars</li>
<li>58:18 &#8211; So who should we blame if something goes wrong?</li>
<li>1:03:40 &#8211; Is there a duty to hand over driving to machines?</li>
<li>1:07:30 &#8211; Must self-driving cars be programmed to kill?</li>
</ul>
<p>&nbsp;</p>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://www.tue.nl/en/research/researchers/sven-nyholm/">Sven&#8217;s faculty webpage</a></li>
<li>&#8216;<a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/phc3.12507">The Ethics of Crashes with Self-Driving Cars, A Roadmap I</a>&#8216; by Sven</li>
<li>&#8216;<a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/phc3.12506">The Ethics of Crashes with Self-Driving Cars, A Roadmap II</a>&#8216; by Sven</li>
<li>&#8216;<a href="https://link.springer.com/article/10.1007%2Fs11948-017-9943-x">Attributing Responsibility to Automated Systems: Reflections on Human-Robot Collaborations and Responsibility Loci</a>&#8216; by Sven</li>
<li>&#8216;<a href="https://link.springer.com/article/10.1007%2Fs10677-016-9745-2">The Ethics of Accident Algorithms for Self-Driving Cars: An Applied Trolley Problem</a>&#8216; by Nyholm and Smids</li>
<li>&#8216;<a href="https://link.springer.com/article/10.1007%2Fs10677-016-9745-2">Automated Cars meet Human Drivers: responsible human-robot coordination and the ethics of mixed traffic&#8217;</a> by Nyhom and Smids</li>
<li><a href="https://algocracy.wordpress.com/2016/05/12/episode-3-sven-nyholm-on-love-enhancement-deep-brain-stimulation-and-ethics-of-self-driving-cars/">Episode #3 with Sven on Love Drugs, DBS and Self-Driving Cars</a></li>
<li><a href="https://algocracy.wordpress.com/2017/05/22/episode-23-liu-on-responsibility-and-discrimination-in-autonomous-weapons-and-self-driving-cars/">Episode #23 with Liu on Responsibility and Discrimination in Self-Driving Cars</a></li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2018/06/29/episode-40-nyholm-on-accident-algorithms-and-the-ethics-of-self-driving-cars/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="105252698" type="audio/mpeg" url="http://ia601505.us.archive.org/21/items/SvenInterview22906201817.21/Sven%20interview%202%20-%2029%3A06%3A2018%2C%2017.21.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2619</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Sven Nyholm about self-driving cars. Sven is an Assistant Professor of Philosophy at TU Eindhoven with an interest in moral philosophy and the ethics of technology. Recently, Sven has been working on the ethics of self-driving cars, focusing in particular on the ethical rules such cars should follow and … More Episode #40 – Nyholm on Accident Algorithms and the Ethics of Self-Driving Cars</itunes:summary>
<googleplay:description>In this episode I talk to Sven Nyholm about self-driving cars. Sven is an Assistant Professor of Philosophy at TU Eindhoven with an interest in moral philosophy and the ethics of technology. Recently, Sven has been working on the ethics of self-driving cars, focusing in particular on the ethical rules such cars should follow and … More Episode #40 – Nyholm on Accident Algorithms and the Ethics of Self-Driving Cars</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/06/sven-nyholm.jpg">
			<media:title type="html">Sven-Nyholm.jpg</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Sven Nyholm about self-driving cars. Sven is an Assistant Professor of Philosophy at TU Eindhoven with an interest in moral philosophy and the ethics of technology. Recently, Sven has been working on the ethics of self-driving cars, focusing in particular on the ethical rules such cars should follow and &amp;#8230; More Episode #40 &amp;#8211; Nyholm on Accident Algorithms and the Ethics of Self-Driving&amp;#160;Cars</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #39 – Re-engineering Humanity with Frischmann and Selinger</title>
		<link>https://algocracy.wordpress.com/2018/06/04/episode-39-re-engineering-humanity-with-frischmann-and-selinger/</link>
					<comments>https://algocracy.wordpress.com/2018/06/04/episode-39-re-engineering-humanity-with-frischmann-and-selinger/#respond</comments>
		
		
		<pubDate>Mon, 04 Jun 2018 16:56:38 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2614</guid>

					<description><![CDATA[In this episode I talk to Brett Frischmann and Evan Selinger about their book Re-engineering Humanity (Cambridge University Press, 2018). Brett and Evan are both former guests on the podcast. Brett is a Professor of Law, Business and Economics at Villanova University and Evan is  Professor of Philosophy at the Rochester Institute of Technology. Their &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2018/06/04/episode-39-re-engineering-humanity-with-frischmann-and-selinger/">More <span class="screen-reader-text">Episode #39 &#8211; Re-engineering Humanity with Frischmann and&#160;Selinger</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2615" data-permalink="https://algocracy.wordpress.com/2018/06/04/episode-39-re-engineering-humanity-with-frischmann-and-selinger/51kgsokv4el-_sx329_bo1204203200_/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/06/51kgsokv4el-_sx329_bo1204203200_.jpg" data-orig-size="331,499" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="51kGSOkv4EL._SX329_BO1,204,203,200_" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/06/51kgsokv4el-_sx329_bo1204203200_.jpg?w=331" class="alignnone size-full wp-image-2615" src="https://algocracy.wordpress.com/wp-content/uploads/2018/06/51kgsokv4el-_sx329_bo1204203200_.jpg" alt="51kGSOkv4EL._SX329_BO1,204,203,200_.jpg" width="331" height="499" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/06/51kgsokv4el-_sx329_bo1204203200_.jpg 331w, https://algocracy.wordpress.com/wp-content/uploads/2018/06/51kgsokv4el-_sx329_bo1204203200_.jpg?w=99&amp;h=150 99w, https://algocracy.wordpress.com/wp-content/uploads/2018/06/51kgsokv4el-_sx329_bo1204203200_.jpg?w=199&amp;h=300 199w" sizes="(max-width: 331px) 100vw, 331px" /></p>
<p>In this episode I talk to Brett Frischmann and Evan Selinger about their book <i><a href="https://www.amazon.com/Re-Engineering-Humanity-Brett-Frischmann/dp/1107147093/ref=asap_bc?ie=UTF8">Re-engineering Humanity</a></i> (Cambridge University Press, 2018). Brett and Evan are both former guests on the podcast. Brett is a Professor of Law, Business and Economics at Villanova University and Evan is a Professor of Philosophy at the Rochester Institute of Technology. Their book looks at how modern techno-social engineering is affecting humanity. We have a wide-ranging conversation about the main arguments and ideas from the book. The book features lots of interesting thought experiments and provocative claims. I recommend checking it out. A highlight of this conversation for me was our discussion of the &#8216;Free Will Wager&#8217; and how it pertains to debates about technology and social engineering.</p>
<p>You can listen to the episode below or download it <a href="https://ia601503.us.archive.org/30/items/ReEngineeringHumanity0406201817.11/Re-engineering%20humanity%20-%2004%3A06%3A2018%2C%2017.11.mp3">here</a>. You can also subscribe on <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher </a>and <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">iTunes</a> (the RSS feed <a href="http://feeds.feedburner.com/philosophicaldiscursions">is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/ReEngineeringHumanity0406201817.11" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:33 &#8211; What is techno-social engineering?</li>
<li>7:55 &#8211; Is techno-social engineering turning us into simple machines?</li>
<li>14:11 &#8211; Digital contracting as an example of techno-social engineering</li>
<li>22:17 &#8211; The three important ingredients of modern techno-social engineering</li>
<li>29:17 &#8211; The Digital Tragedy of the Commons</li>
<li>34:09 &#8211; Must we wait for a Leviathan to save us?</li>
<li>44:03 &#8211; The Free Will Wager</li>
<li>55:00 &#8211; The problem of Engineered Determinism</li>
<li>1:00:03 &#8211; What does it mean to be self-determined?</li>
<li>1:12:03 &#8211; Solving the problem? The freedom to be off</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="http://eselinger.org/">Evan Selinger&#8217;s homepage</a></li>
<li><a href="http://www.brettfrischmann.com/">Brett Frischmann&#8217;s homepage</a></li>
<li><a href="https://www.reengineeringhumanity.com">Re-engineering Humanity &#8211; website</a></li>
<li><a href="http://philosophicaldisquisitions.blogspot.com/2016/07/reverse-turing-tests-are-humans.html">&#8216;Reverse Turing Tests: Are humans becoming more machine-like?&#8217;</a> by me</li>
<li><a href="https://algocracy.wordpress.com/2016/06/10/episode-4-evan-selinger-on-algorithmic-outsourcing-and-the-value-of-privacy/">Episode 4 with Evan Selinger on Privacy and Algorithmic Outsourcing</a></li>
<li><a href="https://algocracy.wordpress.com/2016/07/15/episode-7-brett-frischmann-on-reverse-turing-tests-and-machine-like-humans/">Episode 7 with Brett Frischmann on Human-Focused Turing Tests</a></li>
<li>Gregg Caruso on <a href="https://philpapers.org/rec/CARFWS-2">&#8216;Free Will Skepticism and Its Implications: An Argument for Optimism&#8217;</a></li>
<li><a href="http://philosophicaldisquisitions.blogspot.com/2017/05/free-will-skepticism-and-meaningful.html">Derk Pereboom on Relationships and Free Will </a></li>
</ul>
<p>&nbsp;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2018/06/04/episode-39-re-engineering-humanity-with-frischmann-and-selinger/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="123371229" type="audio/mpeg" url="http://ia601503.us.archive.org/30/items/ReEngineeringHumanity0406201817.11/Re-engineering%20humanity%20-%2004%3A06%3A2018%2C%2017.11.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2614</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Brett Frischmann and Evan Selinger about their book Re-engineering Humanity (Cambridge University Press, 2018). Brett and Evan are both former guests on the podcast. Brett is a Professor of Law, Business and Economics at Villanova University and Evan is a Professor of Philosophy at the Rochester Institute of Technology. Their … More Episode #39 – Re-engineering Humanity with Frischmann and Selinger</itunes:summary>
<googleplay:description>In this episode I talk to Brett Frischmann and Evan Selinger about their book Re-engineering Humanity (Cambridge University Press, 2018). Brett and Evan are both former guests on the podcast. Brett is a Professor of Law, Business and Economics at Villanova University and Evan is a Professor of Philosophy at the Rochester Institute of Technology. Their … More Episode #39 – Re-engineering Humanity with Frischmann and Selinger</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/06/51kgsokv4el-_sx329_bo1204203200_.jpg">
			<media:title type="html">51kGSOkv4EL._SX329_BO1,204,203,200_.jpg</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Brett Frischmann and Evan Selinger about their book Re-engineering Humanity (Cambridge University Press, 2018). Brett and Evan are both former guests on the podcast. Brett is a Professor of Law, Business and Economics at Villanova University and Evan is a Professor of Philosophy at the Rochester Institute of Technology. Their &amp;#8230; More Episode #39 &amp;#8211; Re-engineering Humanity with Frischmann and&amp;#160;Selinger</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #38 – Schwartz on the Ethics of Space Exploration</title>
		<link>https://algocracy.wordpress.com/2018/03/27/episode-38-schwartz-on-the-ethics-of-space-exploration/</link>
					<comments>https://algocracy.wordpress.com/2018/03/27/episode-38-schwartz-on-the-ethics-of-space-exploration/#respond</comments>
		
		
		<pubDate>Tue, 27 Mar 2018 17:06:21 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2610</guid>

					<description><![CDATA[  In this episode I talk to Dr James Schwartz. James teaches philosophy at Wichita State University.  His primary area of research is philosophy and ethics of space exploration, where he defends a position according to which space exploration derives its value primarily from the importance of the scientific study of the Solar System.  He is &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2018/03/27/episode-38-schwartz-on-the-ethics-of-space-exploration/">More <span class="screen-reader-text">Episode #38 &#8211; Schwartz on the Ethics of Space&#160;Exploration</span></a>]]></description>
										<content:encoded><![CDATA[<p> </p>
<p><img loading="lazy" data-attachment-id="2612" data-permalink="https://algocracy.wordpress.com/2018/03/27/episode-38-schwartz-on-the-ethics-of-space-exploration/use-jim-schwartz-2/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/03/use-jim-schwartz1.jpg" data-orig-size="220,147" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="use Jim schwartz" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/03/use-jim-schwartz1.jpg?w=220" class="  wp-image-2612 alignleft" src="https://algocracy.wordpress.com/wp-content/uploads/2018/03/use-jim-schwartz1.jpg" alt="use Jim schwartz.jpg" width="281" height="188" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/03/use-jim-schwartz1.jpg 220w, https://algocracy.wordpress.com/wp-content/uploads/2018/03/use-jim-schwartz1.jpg?w=150&amp;h=100 150w" sizes="(max-width: 281px) 100vw, 281px" />In this episode I talk to Dr James Schwartz. James teaches philosophy at Wichita State University.  His primary area of research is philosophy and ethics of space exploration, where he defends a position according to which space exploration derives its value primarily from the importance of the scientific study of the Solar System.  
He is editor (with Tony Milligan) of <i><a href="https://www.amazon.co.uk/Ethics-Space-Exploration-Society/dp/3319398253/ref=sr_1_1?ie=UTF8&amp;qid=1522169587&amp;sr=8-1&amp;keywords=the+ethics+of+space+exploration">The Ethics of Space Exploration</a></i> (Springer 2016) and his publications have appeared in <i>Advances in Space Research</i>, <i>Space Policy</i>, <i>Acta Astronautica</i>, <i>Astropolitics</i>, <i>Environmental Ethics</i>, <i>Ethics &amp; the Environment</i>, and <i>Philosophia Mathematica</i>.  He has also contributed chapters to <i>The Meaning of Liberty Beyond Earth</i>, <i>Human Governance Beyond Earth</i>, <i>Dissent, Revolution and Liberty Beyond Earth</i> (each edited by Charles Cockell), and to <i>Yearbook on Space Policy 2015</i>.  He is currently working on a book project, <i>The Value of Space Science</i>.  We talk about all things space-related, including the scientific case for space exploration and the myths that befuddle space advocacy.</p>
<p>You can download the episode <a href="https://ia601509.us.archive.org/12/items/JamesSchwartz12703201817.39/James%20Schwartz%20-%201%20-%2027%3A03%3A2018%2C%2017.39.mp3">here</a> or listen below. You can also subscribe on <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> and <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">iTunes</a> (the <a href="http://feeds.feedburner.com/philosophicaldiscursions">RSS feed is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/JamesSchwartz12703201817.39" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:40 &#8211; Why did James get interested in the philosophy of space?</li>
<li>3:17 &#8211; Is interest in the philosophy and ethics of space exploration on the rise?</li>
<li>6:05 &#8211; Do space ethicists always say &#8220;no&#8221;?</li>
<li>8:20 &#8211; Do we have a duty to explore space? If so, what kind of duty is this?</li>
<li>10:30 &#8211; Space exploration and the duty to ensure species survival</li>
<li>16:16 &#8211; The link between space ethics and environmental ethics: between misanthropy and anthropocentrism</li>
<li>19:33 &#8211; How would space exploration help human survival?</li>
<li>23:20 &#8211; The scientific value of space exploration: manned or unmanned?</li>
<li>28:30 &#8211; Why does the scientific case for space exploration take priority?</li>
<li>35:40 &#8211; Is it our destiny to explore space?</li>
<li>38:46 &#8211; Thoughts on Elon Musk and the Colonisation Project</li>
<li>44:34 &#8211; The Myths of Space Advocacy</li>
<li>51:40 &#8211; From space philosophy to space policy: getting rid of the myths</li>
<li>58:55 &#8211; The future of space philosophy</li>
</ul>
<p> </p>
<h3>Relevant Links</h3>
<ul>
<li>Dr Schwartz&#8217;s website &#8211; <a href="http://www.thespacephilosopher.space">The Space Philosopher</a> (with links to papers and works in progress)</li>
<li>&#8216;<a href="https://www.academia.edu/36136155/Space_Settlement_Whats_the_Rush">Space Settlement: What&#8217;s the rush?&#8217;</a> &#8211; by James Schwartz</li>
<li><a href="https://www.academia.edu/29707320/Myth-Free_Space_Advocacy_Part_I_The_Myth_of_Innate_Exploratory_and_Migratory_Urges">Myth-Free Space Advocacy Part I</a>, <a href="https://www.academia.edu/29331057/Myth-Free_Space_Advocacy_Part_II_The_Myth_of_the_Space_Frontier">Part II</a>, <a href="https://www.academia.edu/28378165/Myth-Free_Space_Advocacy_Part_III_The_Myth_of_Educational_Inspiration">Part III</a>, <a href="https://www.academia.edu/36037575/Myth-Free_Space_Advocacy_Part_IV_The_Myth_of_Public_Support_for_Astrobiology">Part IV </a>-by James Schwartz</li>
<li>Video of <a href="https://www.youtube.com/watch?v=5pfZkGSE1WM">James&#8217;s lecture on Worldship Ethics</a></li>
<li>&#8216;<a href="https://www.sciencedirect.com/science/article/abs/pii/S0265964614000782">Prioritizing Scientific Exploration: A Comparison of Ethical Justifications for Space Development and Space Science&#8217; </a>&#8211; by James Schwartz</li>
<li><a href="https://algocracy.wordpress.com/2018/03/03/episode-37-yorke-on-the-philosophy-of-utopianism/">Episode 37 with Christopher Yorke</a> (middle section deals with the prospects for a utopia in space).</li>
</ul>
<p><img loading="lazy" data-attachment-id="2611" data-permalink="https://algocracy.wordpress.com/2018/03/27/episode-38-schwartz-on-the-ethics-of-space-exploration/use-jim-schwartz/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/03/use-jim-schwartz.jpg" data-orig-size="220,147" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="use Jim schwartz" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/03/use-jim-schwartz.jpg?w=220" class="alignnone size-full wp-image-2611" src="https://algocracy.wordpress.com/wp-content/uploads/2018/03/use-jim-schwartz.jpg" alt="use Jim schwartz.jpg" width="220" height="147" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/03/use-jim-schwartz.jpg 220w, https://algocracy.wordpress.com/wp-content/uploads/2018/03/use-jim-schwartz.jpg?w=150&amp;h=100 150w" sizes="(max-width: 220px) 100vw, 220px" /></p>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2018/03/27/episode-38-schwartz-on-the-ethics-of-space-exploration/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="91949684" type="audio/mpeg" url="http://ia601509.us.archive.org/12/items/JamesSchwartz12703201817.39/James%20Schwartz%20-%201%20-%2027%3A03%3A2018%2C%2017.39.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2610</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>  In this episode I talk to Dr James Schwartz. James teaches philosophy at Wichita State University.  His primary area of research is philosophy and ethics of space exploration, where he defends a position according to which space exploration derives its value primarily from the importance of the scientific study of the Solar System.  He is … More Episode #38 – Schwartz on the Ethics of Space Exploration</itunes:summary>
<googleplay:description>  In this episode I talk to Dr James Schwartz. James teaches philosophy at Wichita State University.  His primary area of research is philosophy and ethics of space exploration, where he defends a position according to which space exploration derives its value primarily from the importance of the scientific study of the Solar System.  He is … More Episode #38 – Schwartz on the Ethics of Space Exploration</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/03/use-jim-schwartz1.jpg">
			<media:title type="html">use Jim schwartz.jpg</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/03/use-jim-schwartz.jpg">
			<media:title type="html">use Jim schwartz.jpg</media:title>
		</media:content>

	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>  In this episode I talk to Dr James Schwartz. James teaches philosophy at Wichita State University.  His primary area of research is philosophy and ethics of space exploration, where he defends a position according to which space exploration derives its value primarily from the importance of the scientific study of the Solar System.  He is &amp;#8230; More Episode #38 &amp;#8211; Schwartz on the Ethics of Space&amp;#160;Exploration</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #37 – Yorke on the Philosophy of Utopianism</title>
		<link>https://algocracy.wordpress.com/2018/03/03/episode-37-yorke-on-the-philosophy-of-utopianism/</link>
					<comments>https://algocracy.wordpress.com/2018/03/03/episode-37-yorke-on-the-philosophy-of-utopianism/#respond</comments>
		
		
		<pubDate>Sat, 03 Mar 2018 17:43:25 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2606</guid>

					<description><![CDATA[In this episode I talk to Christopher Yorke. Christopher is a PhD candidate at The Open University. He specialises in the philosophical study of utopianism and is currently completing a dissertation titled ‘Bernard Suits’ Utopia of Gameplay: A Critical Analysis’. We talk about all things utopian, including what a &#8216;utopia&#8217; is, why space exploration is associated &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2018/03/03/episode-37-yorke-on-the-philosophy-of-utopianism/">More <span class="screen-reader-text">Episode #37 &#8211; Yorke on the Philosophy of&#160;Utopianism</span></a>]]></description>
					<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2607" data-permalink="https://algocracy.wordpress.com/2018/03/03/episode-37-yorke-on-the-philosophy-of-utopianism/s200_christopher-yorke/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/03/s200_christopher-yorke.jpg" data-orig-size="200,200" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="s200_christopher.yorke" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/03/s200_christopher-yorke.jpg?w=200" class="alignleft size-full wp-image-2607" src="https://algocracy.wordpress.com/wp-content/uploads/2018/03/s200_christopher-yorke.jpg" alt="s200_christopher.yorke.jpg" width="200" height="200" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/03/s200_christopher-yorke.jpg 200w, https://algocracy.wordpress.com/wp-content/uploads/2018/03/s200_christopher-yorke.jpg?w=150&amp;h=150 150w" sizes="(max-width: 200px) 100vw, 200px" />In this episode I talk to Christopher Yorke. Christopher is a PhD candidate at The Open University. He specialises in the philosophical study of utopianism and is currently completing a dissertation titled ‘Bernard Suits’ Utopia of Gameplay: A Critical Analysis’. We talk about all things utopian, including what a &#8216;utopia&#8217; is, why space exploration is associated with utopian thinking, and whether Bernard Suits is correct to say that games are the highest ideal of human existence.</p>
<p>&nbsp;</p>
<p>You can download the episode <a href="https://ia601501.us.archive.org/31/items/ChristopherYorkeV10303201817.08/Christopher%20Yorke%20V1%20-%2003%3A03%3A2018%2C%2017.08.mp3">here</a> or listen below. You can also subscribe <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">on iTunes</a> or <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the <a href="http://feeds.feedburner.com/philosophicaldiscursions">RSS feed is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/ChristopherYorkeV10303201817.08" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>2:00 &#8211; Why did Christopher choose to study utopianism?</li>
<li>6:44 &#8211; What is a &#8216;utopia&#8217;? Defining the ideal society</li>
<li>14:00 &#8211; Is utopia practically achievable?</li>
<li>19:34 &#8211; Why are dystopias easier to imagine than utopias?</li>
<li>23:00 &#8211; Blueprints vs Horizons &#8211; different understandings of the utopian project</li>
<li>26:40 &#8211; What do philosophers bring to the study of utopia?</li>
<li>30:40 &#8211; Why is space exploration associated with utopianism?</li>
<li>39:20 &#8211; Kant&#8217;s Perpetual Peace vs the Final Frontier</li>
<li>47:09 &#8211; Suits&#8217;s Utopia of Games: What is a game?</li>
<li>53:16 &#8211; Is game-playing the highest ideal of human existence?</li>
<li>1:01:15 &#8211; What kinds of games will Suits&#8217;s utopians play?</li>
<li>1:14:41 &#8211; Is a post-instrumentalist society really intelligible?</li>
</ul>
<p>&nbsp;</p>
<h3>Relevant Links</h3>
<ul>
<li>Christopher Yorke&#8217;s <a href="https://open.academia.edu/ChristopherYorke">Academia.edu page</a></li>
<li>&#8216;<a href="https://www.academia.edu/33456417/Prospects_for_Utopia_in_Space">Prospects for Utopia in Space&#8217;</a> by Christopher Yorke</li>
<li>&#8216;<a href="https://www.academia.edu/33456678/Endless_Summer_What_Kinds_of_Games_Will_Suits_Utopians_Play">Endless Summer: What kinds of games will Suits&#8217;s Utopians Play</a>?&#8217; by Christopher Yorke</li>
<li>&#8216;<a href="https://philosophicaldisquisitions.blogspot.ie/2018/01/the-final-frontier-space-exploration-as.html">The Final Frontier: Space Exploration as Utopia Project</a>&#8216; by John Danaher</li>
<li>&#8216;<a href="https://philosophicaldisquisitions.blogspot.ie/2018/01/the-utopia-of-games-intelligible-or.html">The Utopia of Games: Intelligible or Unintelligible</a>&#8216; by John Danaher</li>
<li><a href="https://philosophicaldisquisitions.blogspot.ie/2014/04/the-badness-of-death-and-meaning-of.html">Other posts on utopianism and the good life</a></li>
<li><a href="https://broadviewpress.com/product/the-grasshopper-third-edition/">The Grasshopper</a> by Bernard Suits</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2018/03/03/episode-37-yorke-on-the-philosophy-of-utopianism/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="117058582" type="audio/mpeg" url="http://ia601501.us.archive.org/31/items/ChristopherYorkeV10303201817.08/Christopher%20Yorke%20V1%20-%2003%3A03%3A2018%2C%2017.08.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2606</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Christopher Yorke. Christopher is a PhD candidate at The Open University. He specialises in the philosophical study of utopianism and is currently completing a dissertation titled ‘Bernard Suits’ Utopia of Gameplay: A Critical Analysis’. We talk about all things utopian, including what a ‘utopia’ is, why space exploration is associated … More Episode #37 – Yorke on the Philosophy of Utopianism</itunes:summary>
<googleplay:description>In this episode I talk to Christopher Yorke. Christopher is a PhD candidate at The Open University. He specialises in the philosophical study of utopianism and is currently completing a dissertation titled ‘Bernard Suits’ Utopia of Gameplay: A Critical Analysis’. We talk about all things utopian, including what a ‘utopia’ is, why space exploration is associated … More Episode #37 – Yorke on the Philosophy of Utopianism</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/03/s200_christopher-yorke.jpg">
			<media:title type="html">s200_christopher.yorke.jpg</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Christopher Yorke. Christopher is a PhD candidate at The Open University. He specialises in the philosophical study of utopianism and is currently completing a dissertation titled ‘Bernard Suits’ Utopia of Gameplay: A Critical Analysis’. We talk about all things utopian, including what a &amp;#8216;utopia&amp;#8217; is, why space exploration is associated &amp;#8230; More Episode #37 &amp;#8211; Yorke on the Philosophy of&amp;#160;Utopianism</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #36 – Wachter on Algorithms, Explanations and the GDPR</title>
		<link>https://algocracy.wordpress.com/2018/01/27/episode-36-wachter-on-algorithms-explanations-and-the-gdpr/</link>
					<comments>https://algocracy.wordpress.com/2018/01/27/episode-36-wachter-on-algorithms-explanations-and-the-gdpr/#respond</comments>
		
		
		<pubDate>Sat, 27 Jan 2018 00:45:09 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2600</guid>

					<description><![CDATA[In this episode I talk to Sandra Wachter about the right to explanation for algorithmic decision-making under the GDPR. Sandra is a lawyer and Research Fellow in Data Ethics and Algorithms at the Oxford Internet Institute. She is also a Research Fellow at the Alan Turing Institute in London. Sandra’s research focuses on the legal &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2018/01/27/episode-36-wachter-on-algorithms-explanations-and-the-gdpr/">More <span class="screen-reader-text">Episode #36 &#8211; Wachter on Algorithms, Explanations and the&#160;GDPR</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2601" data-permalink="https://algocracy.wordpress.com/2018/01/27/episode-36-wachter-on-algorithms-explanations-and-the-gdpr/s200_sandra-wachter/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/01/s200_sandra-wachter.jpg" data-orig-size="200,200" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="s200_sandra.wachter" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/01/s200_sandra-wachter.jpg?w=200" class="  wp-image-2601 alignleft" src="https://algocracy.wordpress.com/wp-content/uploads/2018/01/s200_sandra-wachter.jpg" alt="s200_sandra.wachter.jpg" width="214" height="214" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/01/s200_sandra-wachter.jpg 200w, https://algocracy.wordpress.com/wp-content/uploads/2018/01/s200_sandra-wachter.jpg?w=150&amp;h=150 150w" sizes="(max-width: 214px) 100vw, 214px" />In this episode I talk to Sandra Wachter about the right to explanation for algorithmic decision-making under the GDPR. Sandra is a lawyer and Research Fellow in Data Ethics and Algorithms at the Oxford Internet Institute. She is also a Research Fellow at the Alan Turing Institute in London. Sandra’s research focuses on the legal and ethical implications of Big Data, AI, and robotics as well as governmental surveillance, predictive policing, and human rights online. 
Her current work deals with the ethical design of algorithms, including the development of standards and methods to ensure fairness, accountability, transparency, interpretability, and group privacy in complex algorithmic systems.</p>
<p>You can download the episode <a href="https://ia601501.us.archive.org/0/items/SandraWachterMaster2701201800.18/Sandra%20Wachter%20-Master%20-%2027%3A01%3A2018%2C%2000.18.mp3">here</a> or listen below. You can also subscribe <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">on iTunes</a> and <a href="http://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the RSS feed <a href="http://feeds.feedburner.com/philosophicaldiscursions">is here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/SandraWachterMaster2701201800.18" width="640" height="30" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>2:05 &#8211; The rise of algorithmic/automated decision-making</li>
<li>3:40 &#8211; Why are algorithmic decisions so opaque? Why is this such a concern?</li>
<li>5:25 &#8211; What are the benefits of algorithmic decisions?</li>
<li>7:43 &#8211; Why might we want a &#8216;right to explanation&#8217; of algorithmic decisions?</li>
<li>11:05 &#8211; Explaining specific decisions vs. explaining decision-making systems</li>
<li>15:48 &#8211; Introducing the GDPR &#8211; What is it and why does it matter?</li>
<li>19:29 &#8211; Is there a right to explanation embedded in Article 22 of the GDPR?</li>
<li>23:30 &#8211; The limitations of Article 22</li>
<li>27:40 &#8211; When do algorithmic decisions have &#8216;significant effects&#8217;?</li>
<li>29:30 &#8211; Is there a right to explanation in Articles 13 and 14 of the GDPR (the &#8216;notification duties&#8217; provisions)?</li>
<li>33:33 &#8211; Is there a right to explanation in Article 15 (the access right provision)?</li>
<li>37:45 &#8211; Is there any hope that a right to explanation might be interpreted into the GDPR?</li>
<li>43:04 &#8211; How could we explain algorithmic decisions? Introducing counterfactual explanations</li>
<li>47:55 &#8211; Clarifying the concept of a counterfactual explanation</li>
<li>51:00 &#8211; Criticisms and limitations of counterfactual explanations</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="https://www.oii.ox.ac.uk/people/sandra-wachter/">Sandra&#8217;s profile page at the Oxford Internet Institute</a></li>
<li><a href="https://oxford.academia.edu/SandraWachter">Sandra&#8217;s academia.edu page</a></li>
<li>&#8216;<a href="https://www.researchgate.net/profile/Sandra_Wachter2/publication/312597416_Why_a_Right_to_Explanation_of_Automated_Decision-Making_Does_Not_Exist_in_the_General_Data_Protection_Regulation/links/5a4cf690a6fdcc3e99d133d4/Why-a-Right-to-Explanation-of-Automated-Decision-Making-Does-Not-Exist-in-the-General-Data-Protection-Regulation.pdf">Why a right to explanation does not exist in the General Data Protection Regulation&#8217;</a> by Wachter, Mittelstadt and Floridi</li>
<li><a href="https://arxiv.org/pdf/1711.00399.pdf">&#8216;Counterfactual explanations without opening the black box: Automated decisions and the GDPR&#8217; </a>by Wachter, Mittelstadt and Russell</li>
<li><a href="https://www.eugdpr.org">The General Data Protection Regulation</a></li>
<li><a href="http://ec.europa.eu/newsroom/article29/news.cfm?item_type=1358">Article 29 working party guidance on the GDPR</a></li>
<li><a href="http://www.pnas.org/content/108/42/E833.long?utm_content=bufferee22c&amp;utm_medium=social&amp;utm_source=twitter.com&amp;utm_campaign=buffer">Do judges make stricter sentencing decisions when they are hungry?</a> and <a href="http://www.pnas.org/content/108/42/E834.full">a Reply</a></li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2018/01/27/episode-36-wachter-on-algorithms-explanations-and-the-gdpr/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="84816375" type="audio/mpeg" url="http://ia601501.us.archive.org/0/items/SandraWachterMaster2701201800.18/Sandra%20Wachter%20-Master%20-%2027%3A01%3A2018%2C%2000.18.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2600</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Sandra Wachter about the right to explanation for algorithmic decision-making under the GDPR. Sandra is a lawyer and Research Fellow in Data Ethics and Algorithms at the Oxford Internet Institute. She is also a Research Fellow at the Alan Turing Institute in London. Sandra’s research focuses on the legal … More Episode #36 – Wachter on Algorithms, Explanations and the GDPR</itunes:summary>
<googleplay:description>In this episode I talk to Sandra Wachter about the right to explanation for algorithmic decision-making under the GDPR. Sandra is a lawyer and Research Fellow in Data Ethics and Algorithms at the Oxford Internet Institute. She is also a Research Fellow at the Alan Turing Institute in London. Sandra’s research focuses on the legal … More Episode #36 – Wachter on Algorithms, Explanations and the GDPR</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/01/s200_sandra-wachter.jpg">
			<media:title type="html">s200_sandra.wachter.jpg</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Sandra Wachter about the right to explanation for algorithmic decision-making under the GDPR. Sandra is a lawyer and Research Fellow in Data Ethics and Algorithms at the Oxford Internet Institute. She is also a Research Fellow at the Alan Turing Institute in London. Sandra’s research focuses on the legal &amp;#8230; More Episode #36 &amp;#8211; Wachter on Algorithms, Explanations and the&amp;#160;GDPR</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
		<item>
		<title>Episode #35 – Brundage on the Case for Conditional Optimism about AI</title>
		<link>https://algocracy.wordpress.com/2018/01/15/episode-35-brundage-on-conditional-optimism-about-ai/</link>
					<comments>https://algocracy.wordpress.com/2018/01/15/episode-35-brundage-on-conditional-optimism-about-ai/#respond</comments>
		
		
		<pubDate>Mon, 15 Jan 2018 14:13:42 +0000</pubDate>
				<category><![CDATA[Podcast]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://algocracy.wordpress.com/?p=2594</guid>

					<description><![CDATA[In this episode I talk to Miles Brundage. Miles is a Research Fellow at the University of Oxford&#8217;s Future of Humanity Institute and a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University. He is also affiliated with the Consortium for Science, Policy, and Outcomes (CSPO), the Virtual Institute of Responsible Innovation (VIRI), and the Journal &#8230; <a class="more-link" href="https://algocracy.wordpress.com/2018/01/15/episode-35-brundage-on-conditional-optimism-about-ai/">More <span class="screen-reader-text">Episode #35 &#8211; Brundage on the Case for Conditional Optimism about&#160;AI</span></a>]]></description>
										<content:encoded><![CDATA[<p><img loading="lazy" data-attachment-id="2595" data-permalink="https://algocracy.wordpress.com/2018/01/15/episode-35-brundage-on-conditional-optimism-about-ai/izbnin-z/" data-orig-file="https://algocracy.wordpress.com/wp-content/uploads/2018/01/izbnin-z.jpg" data-orig-size="512,512" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="IzBNIN-z" data-image-description="" data-image-caption="" data-large-file="https://algocracy.wordpress.com/wp-content/uploads/2018/01/izbnin-z.jpg?w=512" class="  wp-image-2595 alignleft" src="https://algocracy.wordpress.com/wp-content/uploads/2018/01/izbnin-z.jpg" alt="IzBNIN-z.jpg" width="222" height="222" srcset="https://algocracy.wordpress.com/wp-content/uploads/2018/01/izbnin-z.jpg?w=222&amp;h=222 222w, https://algocracy.wordpress.com/wp-content/uploads/2018/01/izbnin-z.jpg?w=444&amp;h=444 444w, https://algocracy.wordpress.com/wp-content/uploads/2018/01/izbnin-z.jpg?w=150&amp;h=150 150w, https://algocracy.wordpress.com/wp-content/uploads/2018/01/izbnin-z.jpg?w=300&amp;h=300 300w" sizes="(max-width: 222px) 100vw, 222px" />In this episode I talk to Miles Brundage. Miles is a Research Fellow at the University of Oxford&#8217;s Future of Humanity Institute and a PhD candidate in <a href="http://hsd.asu.edu/">Human and Social Dimensions of Science and Technology</a> at Arizona State University. 
He is also affiliated with the <a href="http://www.cspo.org/">Consortium for Science, Policy, and Outcomes (CSPO)</a>, the <a href="http://cns.asu.edu/viri">Virtual Institute of Responsible Innovation (VIRI)</a>, and the <a href="http://www.tandfonline.com/action/journalInformation?show=aimsScope&amp;journalCode=tjri20#.VoM_1vE7TGs">Journal of Responsible Innovation (JRI).</a> His research focuses on the societal implications of artificial intelligence. We discuss the case for conditional optimism about AI.</p>
<p>You can download the episode <a href="https://ia801502.us.archive.org/20/items/MilesBrundageV1/Miles%20Brundage%20V1.mp3">here</a> or listen below. You can also subscribe on <a href="https://itunes.apple.com/ie/podcast/philosophical-disquisitions/id447661909?mt=2">iTunes</a> or <a href="https://www.stitcher.com/podcast/algocracy-and-transhumanism-podcast?refid=stpr">Stitcher</a> (the RSS feed is <a href="http://feeds.feedburner.com/philosophicaldiscursions">here</a>).</p>
<p><div class="embed-archiveorg" style="text-align:center;"><iframe title="Archive.org" src="https://archive.org/embed/MilesBrundageV1" width="640" height="140" style="border:0;" webkitallowfullscreen="true" mozallowfullscreen="true" allowfullscreen></iframe></div></p>
<h3>Show Notes</h3>
<ul>
<li>0:00 &#8211; Introduction</li>
<li>1:00 &#8211; Why did Miles write the conditional case for AI optimism?</li>
<li>5:07 &#8211; What is AI anyway?</li>
<li>8:26 &#8211; The difference between broad and narrow forms of AI</li>
<li>12:00 &#8211; Is the current excitement around AI hype or reality?</li>
<li>16:13 &#8211; What is the conditional case for AI conditional upon?</li>
<li>22:00 &#8211; The First Argument: The Value of Task Expedition</li>
<li>29:30 &#8211; The downsides of task expedition and the problem of speed mismatches</li>
<li>33:28 &#8211; How AI changes our cognitive ecology</li>
<li>36:00 &#8211; The Second Argument: The Value of Improved Coordination</li>
<li>40:50 &#8211; Wouldn&#8217;t AI be used for malicious purposes too?</li>
<li>45:00 &#8211; Can we create safe AI in the absence of global coordination?</li>
<li>48:03 &#8211; The Third Argument: The Value of a Leisure Society</li>
<li>52:30 &#8211; Would a leisure society really be utopian?</li>
<li>56:24 &#8211; How were Miles&#8217;s arguments received when presented at the EU parliament?</li>
</ul>
<h3>Relevant Links</h3>
<ul>
<li><a href="http://www.milesbrundage.com">Miles&#8217;s Homepage</a></li>
<li><a href="http://www.milesbrundage.com/publications.html">Miles&#8217;s past publications</a></li>
<li><a href="https://www.fhi.ox.ac.uk/team/miles-brundage/">Miles at the Future of Humanity Institute</a></li>
<li><a href="https://web.ep.streamovations.be/index.php/event/stream/171019-1000-committee-stoa/embed">Video of Miles&#8217;s presentation to the EU Parliament (starts at approx 10:05:19 or 1 hour and 1 minute into the video)</a></li>
<li><a href="http://haggstrom.blogspot.ie/2017/10/the-ai-meeting-in-brussels-last-week.html">Olle Haggstrom&#8217;s write-up about the EU parliament event</a></li>
<li>&#8216;<a href="http://philosophicaldisquisitions.blogspot.ie/2017/05/cognitive-scarcity-and-artificial.html">Cognitive Scarcity and Artificial Intelligence</a>&#8216; by Miles Brundage and John Danaher</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://algocracy.wordpress.com/2018/01/15/episode-35-brundage-on-conditional-optimism-about-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure length="83924868" type="audio/mpeg" url="http://ia801502.us.archive.org/20/items/MilesBrundageV1/Miles%20Brundage%20V1.mp3"/>

		<post-id xmlns="com-wordpress:feed-additions:1">2594</post-id><itunes:author>John Danaher</itunes:author>
<googleplay:author>John Danaher</googleplay:author>
<itunes:explicit>false</itunes:explicit>
<googleplay:explicit>false</googleplay:explicit>
<itunes:summary>In this episode I talk to Miles Brundage. Miles is a Research Fellow at the University of Oxford’s Future of Humanity Institute and a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University. He is also affiliated with the Consortium for Science, Policy, and Outcomes (CSPO), the Virtual Institute of Responsible Innovation (VIRI), and the Journal … More Episode #35 – Brundage on the Case for Conditional Optimism about AI</itunes:summary>
<googleplay:description>In this episode I talk to Miles Brundage. Miles is a Research Fellow at the University of Oxford’s Future of Humanity Institute and a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University. He is also affiliated with the Consortium for Science, Policy, and Outcomes (CSPO), the Virtual Institute of Responsible Innovation (VIRI), and the Journal … More Episode #35 – Brundage on the Case for Conditional Optimism about AI</googleplay:description>

		<media:content medium="image" url="https://0.gravatar.com/avatar/62f7084c79317cab1cd60f49b97b8e83a4d321ec7f200952e0fce4b1943b8d86?s=96&amp;d=identicon&amp;r=G">
			<media:title type="html">algocracy</media:title>
		</media:content>

		<media:content medium="image" url="https://algocracy.wordpress.com/wp-content/uploads/2018/01/izbnin-z.jpg">
			<media:title type="html">IzBNIN-z.jpg</media:title>
		</media:content>
	<dc:creator>john.danaher@nuigalway.ie (John Danaher)</dc:creator><itunes:subtitle>In this episode I talk to Miles Brundage. Miles is a Research Fellow at the University of Oxford&amp;#8217;s Future of Humanity Institute and a PhD candidate in Human and Social Dimensions of Science and Technology at Arizona State University. He is also affiliated with the Consortium for Science, Policy, and Outcomes (CSPO), the Virtual Institute of Responsible Innovation (VIRI), and the Journal &amp;#8230; More Episode #35 &amp;#8211; Brundage on the Case for Conditional Optimism about&amp;#160;AI</itunes:subtitle><itunes:keywords>Algocracy,Transhumanism,Algorithmic,Governance,Politics</itunes:keywords></item>
	</channel>
</rss>