<?xml version="1.0" encoding="UTF-8" standalone="no"?><!-- generator="podbean/5.5" --><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:media="http://search.yahoo.com/mrss/" xmlns:podcast="https://podcastindex.org/namespace/1.0" xmlns:spotify="http://www.spotify.com/ns/rss" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0">

<channel>
    <title>The Technically Human Podcast</title>
    <atom:link href="https://feed.podbean.com/dmdonig/feed.xml" rel="self" type="application/rss+xml"/>
    <link>https://dmdonig.podbean.com</link>
    <description>Technically Human is a podcast about ethics and technology where I ask what it means to be human in the age of tech. Each week, I interview industry leaders, thinkers, writers, and technologists and I ask them about how they understand the relationship between humans and the technologies we create. We discuss how we can build a better vision for technology, one that represents the best of our human values.</description>
    <pubDate>Fri, 01 Nov 2024 05:00:00 -0700</pubDate>
    <generator>https://podbean.com/?v=5.5</generator>
    <language>en</language>
        <copyright>Copyright 2020. All rights reserved.</copyright>
    <category>Technology</category>
    <ttl>1440</ttl>
    <itunes:type>episodic</itunes:type>
          <itunes:summary>Technically Human is a podcast about ethics and technology that asks what it means to be human in the age of tech. Each week, host Professor Deb Donig interviews industry leaders, thinkers, writers, and technologists, asking them how they understand the relationship between humans and the technologies we create. We discuss how we can build a better vision for technology, one that represents the best of our human values.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>



    
    	<itunes:block>No</itunes:block>
	<itunes:explicit>no</itunes:explicit>
    <itunes:image href="https://lh3.googleusercontent.com/BkQLgZXVejA5cTAGKsycpNqw7QY3wk-lra0KWhIEke-bBmGBT5idQHSmJl5j=s180"/>
    <image>
        <url>https://pbcdn1.podbean.com/imglogo/image-logo/6889828/Tech_Human_podcast_thumbnail.jpg</url>
        <title>The Technically Human Podcast</title>
        <link>https://dmdonig.podbean.com</link>
        <width>144</width>
        <height>144</height>
    </image>
    <itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords><itunes:subtitle>Technically Human is a podcast about ethics and technology where I ask what it means to be human in the age of tech. Each week, host Professor Deb Donig interviews industry leaders, thinkers, writers, and technologists and I ask them about how they unders</itunes:subtitle><itunes:category text="Technology"/><itunes:owner><itunes:email>Deb Donig, ddonig@calpoly.edu</itunes:email><itunes:name>Deb Donig</itunes:name></itunes:owner><item>
        <title>High Tech Society: IEEE's vision for ethical technological advancement</title>
        <itunes:title>High Tech Society: IEEE's vision for ethical technological advancement</itunes:title>
        <link>https://dmdonig.podbean.com/e/high-tech-society-ieees-vision-for-ethical-technological-advancement/</link>
                    <comments>https://dmdonig.podbean.com/e/high-tech-society-ieees-vision-for-ethical-technological-advancement/#comments</comments>        <pubDate>Fri, 01 Nov 2024 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/7215d9d5-ef4b-3767-b1d3-d4e2b013551a</guid>
                                    <description><![CDATA[<p>In this episode of the show, I speak with Tom Coughlin, the current President and CEO of <a href='https://www.ieee.org/'>IEEE</a>, the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. We discuss the IEEE's vision of technological innovation, what it really means to "benefit humanity" through tech, and how the tech sector can, and should, move toward a values-driven approach to innovation.</p>
<p><a href='https://spectrum.ieee.org/2023-ieee-president-elect'>Tom Coughlin</a> is an IEEE Life Fellow, past president of IEEE-USA, past director of IEEE Region 6, past chair of the Santa Clara Valley IEEE Section, past <a href='https://urldefense.com/v3/__https:/californiaconsultants.org/members/thomas-coughlin/__;!!DlCMXiNAtWOc!1-5PQ6HaX2Yqd68NT7XsxoI46BkLoDhexyKT20lly_lQ42bVL5rX8DZCZpIjsoCTn5LmN0TJcqt4ZS1vbRv-x36h0wKglQ$'>chair of the Consultants Network of Silicon Valley</a>, and is also active with the Storage Networking Industry Association and the Society of Motion Picture and Television Engineers. Coughlin is also president of Coughlin Associates, a digital storage analysis and business and technology consulting firm. He has over 40 years of experience in the data storage industry, with engineering and senior management positions at several companies. Coughlin Associates consults, publishes books and market and technology reports (including The Media and Entertainment Storage Report and an Emerging Memory Report), and puts on digital storage-oriented events. He is a regular storage and memory contributor for <a href='https://urldefense.com/v3/__http:/forbes.com/__;!!DlCMXiNAtWOc!1-5PQ6HaX2Yqd68NT7XsxoI46BkLoDhexyKT20lly_lQ42bVL5rX8DZCZpIjsoCTn5LmN0TJcqt4ZS1vbRv-x342tsn_fQ$'>Forbes.com</a> and media and entertainment organization websites.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of the show, I speak with Tom Coughlin, the current President and CEO of <a href='https://www.ieee.org/'>IEEE</a>, the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. We discuss the IEEE's vision of technological innovation, what it really means to "benefit humanity" through tech, and how the tech sector can, and should, move toward a values-driven approach to innovation.</p>
<p><a href='https://spectrum.ieee.org/2023-ieee-president-elect'>Tom Coughlin</a> is an IEEE Life Fellow, past president of IEEE-USA, past director of IEEE Region 6, past chair of the Santa Clara Valley IEEE Section, past <a href='https://urldefense.com/v3/__https:/californiaconsultants.org/members/thomas-coughlin/__;!!DlCMXiNAtWOc!1-5PQ6HaX2Yqd68NT7XsxoI46BkLoDhexyKT20lly_lQ42bVL5rX8DZCZpIjsoCTn5LmN0TJcqt4ZS1vbRv-x36h0wKglQ$'>chair of the Consultants Network of Silicon Valley</a>, and is also active with the Storage Networking Industry Association and the Society of Motion Picture and Television Engineers. Coughlin is also president of Coughlin Associates, a digital storage analysis and business and technology consulting firm. He has over 40 years of experience in the data storage industry, with engineering and senior management positions at several companies. Coughlin Associates consults, publishes books and market and technology reports (including The Media and Entertainment Storage Report and an Emerging Memory Report), and puts on digital storage-oriented events. He is a regular storage and memory contributor for <a href='https://urldefense.com/v3/__http:/forbes.com/__;!!DlCMXiNAtWOc!1-5PQ6HaX2Yqd68NT7XsxoI46BkLoDhexyKT20lly_lQ42bVL5rX8DZCZpIjsoCTn5LmN0TJcqt4ZS1vbRv-x342tsn_fQ$'>Forbes.com</a> and media and entertainment organization websites.</p>
]]></content:encoded>
                                    
        <enclosure length="82095301" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/us748q2aqkq7ttcf/Tom_Coughlin_mixdown7ow70.mp3"/>
        <itunes:summary>In this episode of the show, I speak with Tom Coughlin, the current President and CEO of IEEE, the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity. We discuss the IEEE’s vision of technological innovation, what it really means to ”benefit humanity” through tech, and how the tech sector can, and should, move toward a values-driven approach to innovation.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3420</itunes:duration>
        <itunes:season>13</itunes:season>
        <itunes:episode>139</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of the show, I speak with Tom Coughlin, the current President and CEO of IEEE, the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. We discuss the IEEE's vision of technological innovation, what it really means to "benefit humanity" through tech, and how the tech sector can, and should, move toward a values-driven approach to innovation. Tom Coughlin is an IEEE Life Fellow, past president of IEEE-USA, past director of IEEE Region 6, past chair of the Santa Clara Valley IEEE Section, past chair of the Consultants Network of Silicon Valley, and is also active with the Storage Networking Industry Association and the Society of Motion Picture and Television Engineers. Coughlin is also president of Coughlin Associates, a digital storage analysis and business and technology consulting firm. He has over 40 years of experience in the data storage industry, with engineering and senior management positions at several companies. Coughlin Associates consults, publishes books and market and technology reports (including The Media and Entertainment Storage Report and an Emerging Memory Report), and puts on digital storage-oriented events. He is a regular storage and memory contributor for Forbes.com and media and entertainment organization websites.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Debugging Division: The Architecture of Bridge-Building Social Media</title>
        <itunes:title>Debugging Division: The Architecture of Bridge-Building Social Media</itunes:title>
        <link>https://dmdonig.podbean.com/e/debugging-division-the-architecture-of-bridge-building-social-media/</link>
                    <comments>https://dmdonig.podbean.com/e/debugging-division-the-architecture-of-bridge-building-social-media/#comments</comments>        <pubDate>Fri, 25 Oct 2024 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/d3f915d2-d470-3cc5-86e5-f875e7f87901</guid>
                                    <description><![CDATA[<p>Today we are bringing you a conversation featuring one technologist who is rethinking and reshaping social media—to build platforms that spark empathy and joy, not division and hate.</p>
<p><a href='https://linktr.ee/vardon.world'>Vardon Hamdiu</a> is the co-founder and head of <a href='https://www.sparkable.cc/'>Sparkable</a>, a young nonprofit organization that builds a social media platform aimed at bridging divides.</p>
<p>Growing up immersed in diverse cultures, Vardon has always been a bridge-builder who navigates between worlds. His family history has exposed him to the devastating consequences of communication breakdowns between ethnic communities and the outbreak of war. These experiences have profoundly shaped his understanding of the importance of empathy and social cohesion.</p>
<p>Over the past decade, Vardon has worked on the communications team of a Swiss President, studied to become a teacher, spent an exchange semester in South Africa, and engaged with refugees facing often traumatic circumstances. These experiences made him acutely aware of the enormous disconnect between the information we consume online and the lived realities of many people around the globe. He became deeply passionate about exploring why today’s social media platforms are often dysfunctional and how these powerful systems, which govern our collective attention, could be constructed differently. Driven by this vision, he made the pivotal decision to quit his job, drop out of his studies, and launch <a href='https://www.sparkable.cc/'>Sparkable</a>, aiming to foster a healthier online environment.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Today we are bringing you a conversation featuring one technologist who is rethinking and reshaping social media—to build platforms that spark empathy and joy, not division and hate.</p>
<p><a href='https://linktr.ee/vardon.world'>Vardon Hamdiu</a> is the co-founder and head of <a href='https://www.sparkable.cc/'>Sparkable</a>, a young nonprofit organization that builds a social media platform aimed at bridging divides.</p>
<p>Growing up immersed in diverse cultures, Vardon has always been a bridge-builder who navigates between worlds. His family history has exposed him to the devastating consequences of communication breakdowns between ethnic communities and the outbreak of war. These experiences have profoundly shaped his understanding of the importance of empathy and social cohesion.</p>
<p>Over the past decade, Vardon has worked on the communications team of a Swiss President, studied to become a teacher, spent an exchange semester in South Africa, and engaged with refugees facing often traumatic circumstances. These experiences made him acutely aware of the enormous disconnect between the information we consume online and the lived realities of many people around the globe. He became deeply passionate about exploring why today’s social media platforms are often dysfunctional and how these powerful systems, which govern our collective attention, could be constructed differently. Driven by this vision, he made the pivotal decision to quit his job, drop out of his studies, and launch <a href='https://www.sparkable.cc/'>Sparkable</a>, aiming to foster a healthier online environment.</p>
]]></content:encoded>
                                    
        <enclosure length="76387468" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/p78rhqbacrpthw8n/Vardon_Hamdiu_mixdownbts4s.mp3"/>
        <itunes:summary>Today we are bringing you a conversation featuring one technologist who is rethinking and reshaping social media—to build platforms that spark empathy and joy, not division and hate.

Vardon Hamdiu is the co-founder and head of Sparkable, a young nonprofit organization that builds a social media platform aimed at bridging divides.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3181</itunes:duration>
        <itunes:season>13</itunes:season>
        <itunes:episode>138</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Today we are bringing you a conversation featuring one technologist who is rethinking and reshaping social media—to build platforms that spark empathy and joy, not division and hate. Vardon Hamdiu is the co-founder and head of Sparkable, a young nonprofit organization that builds a social media platform aimed at bridging divides. Growing up immersed in diverse cultures, Vardon has always been a bridge-builder who navigates between worlds. His family history has exposed him to the devastating consequences of communication breakdowns between ethnic communities and the outbreak of war. These experiences have profoundly shaped his understanding of the importance of empathy and social cohesion. Over the past decade, Vardon has worked on the communications team of a Swiss President, studied to become a teacher, spent an exchange semester in South Africa, and engaged with refugees facing often traumatic circumstances. These experiences made him acutely aware of the enormous disconnect between the information we consume online and the lived realities of many people around the globe. He became deeply passionate about exploring why today’s social media platforms are often dysfunctional and how these powerful systems, which govern our collective attention, could be constructed differently. Driven by this vision, he made the pivotal decision to quit his job, drop out of his studies, and launch Sparkable, aiming to foster a healthier online environment.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Algorithm as Witness: Reimagining Holocaust Memory in the Digital Age</title>
        <itunes:title>The Algorithm as Witness: Reimagining Holocaust Memory in the Digital Age</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-algorithm-as-witness-reimagining-holocaust-memory-in-the-digital-age/</link>
                    <comments>https://dmdonig.podbean.com/e/the-algorithm-as-witness-reimagining-holocaust-memory-in-the-digital-age/#comments</comments>        <pubDate>Fri, 18 Oct 2024 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/c8c7a578-7f80-3ce9-a36f-af87979420cb</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I bring you a conversation with one of the great thinkers working at the intersection of ethics and technology, Professor Todd Presner, for an episode about his new book, Ethics of the Algorithm: Digital Humanities and Holocaust Memory. In the conversation, we talk about new directions in Holocaust memory and scholarship, how technologies are enabling new approaches, questions, and interpretations of major historical events, and how digital technologies might help us imagine a new ethics of interpretation of history and memory.</p>
<p><a href='https://elts.ucla.edu/person/todd-presner/'>Dr. Todd Presner</a> is Chair of UCLA’s Department of European Languages and Transcultural Studies. Previously, he was the chair of UCLA’s <a href='http://dh.ucla.edu/'>Digital Humanities Program</a> (2011-21), and from 2011-2018, he served as the Sady and Ludwig Kahn Director of the <a href='https://levecenter.ucla.edu/'>Alan D. Leve Center for Jewish Studies</a>. He holds the Michael and Irene Ross Chair in the UCLA Division of the Humanities. His research focuses on European intellectual and cultural history, Holocaust studies, visual culture, and digital humanities. Dr. Presner’s newest book was published with Princeton University Press: <a href='https://press.princeton.edu/books/hardcover/9780691258966/ethics-of-the-algorithm'>Ethics of the Algorithm: Digital Humanities and Holocaust Memory </a> (Fall 2024). </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I bring you a conversation with one of the great thinkers working at the intersection of ethics and technology, Professor Todd Presner, for an episode about his new book, <em>Ethics of the Algorithm: Digital Humanities and Holocaust Memory</em>. In the conversation, we talk about new directions in Holocaust memory and scholarship, how technologies are enabling new approaches, questions, and interpretations of major historical events, and how digital technologies might help us imagine a new ethics of interpretation of history and memory.</p>
<p><a href='https://elts.ucla.edu/person/todd-presner/'>Dr. Todd Presner</a> is Chair of UCLA’s Department of European Languages and Transcultural Studies. Previously, he was the chair of UCLA’s <a href='http://dh.ucla.edu/'>Digital Humanities Program</a> (2011-21), and from 2011-2018, he served as the Sady and Ludwig Kahn Director of the <a href='https://levecenter.ucla.edu/'>Alan D. Leve Center for Jewish Studies</a>. He holds the Michael and Irene Ross Chair in the UCLA Division of the Humanities. His research focuses on European intellectual and cultural history, Holocaust studies, visual culture, and digital humanities. Dr. Presner’s newest book was published with Princeton University Press: <a href='https://press.princeton.edu/books/hardcover/9780691258966/ethics-of-the-algorithm'><em>Ethics of the Algorithm: Digital Humanities and Holocaust Memory</em></a> (Fall 2024).</p>
]]></content:encoded>
                                    
        <enclosure length="113783657" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/gb2c45dc5db678zs/Presner_mixdown9rpio.mp3"/>
        <itunes:summary>In this episode of ”Technically Human,” I bring you a conversation with one of the great thinkers working at the intersection of ethics and technology, Professor Todd Presner, for an episode about his new book, Ethics of the Algorithm: Digital Humanities and Holocaust Memory. In the conversation, we talk about new directions in Holocaust memory and scholarship, how technologies are enabling new approaches, questions, and interpretations of major historical events, and how digital technologies might help us imagine a new ethics of interpretation of history and memory.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4740</itunes:duration>
        <itunes:season>13</itunes:season>
        <itunes:episode>137</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I bring you a conversation with one of the great thinkers working at the intersection of ethics and technology, Professor Todd Presner, for an episode about his new book, Ethics of the Algorithm: Digital Humanities and Holocaust Memory. In the conversation, we talk about new directions in Holocaust memory and scholarship, how technologies are enabling new approaches, questions, and interpretations of major historical events, and how digital technologies might help us imagine a new ethics of interpretation of history and memory. Dr. Todd Presner is Chair of UCLA’s Department of European Languages and Transcultural Studies. Previously, he was the chair of UCLA’s Digital Humanities Program (2011-21), and from 2011-2018, he served as the Sady and Ludwig Kahn Director of the Alan D. Leve Center for Jewish Studies. He holds the Michael and Irene Ross Chair in the UCLA Division of the Humanities. His research focuses on European intellectual and cultural history, Holocaust studies, visual culture, and digital humanities. Dr. Presner’s newest book was published with Princeton University Press: Ethics of the Algorithm: Digital Humanities and Holocaust Memory (Fall 2024).</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Game On, Hate Off: Navigating the Virtual Frontier</title>
        <itunes:title>Game On, Hate Off: Navigating the Virtual Frontier</itunes:title>
        <link>https://dmdonig.podbean.com/e/game-on-hate-off-navigating-the-virtual-frontier/</link>
                    <comments>https://dmdonig.podbean.com/e/game-on-hate-off-navigating-the-virtual-frontier/#comments</comments>        <pubDate>Fri, 11 Oct 2024 08:57:02 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/1b397648-7d13-38c5-a8a5-d47819ee3708</guid>
                                    <description><![CDATA[<p>In this week's episode of the show, I speak with Daniel Kelley about the culture of online gaming and the unique set of challenges related to hate, harassment, and extremism in the gaming space. We talk about the possibilities and limitations of regulating that space, and what the landscape of gaming might foretell about the future of the increasingly online lives we live, as more and more of our social interactions take place virtually. We talk about how to make those spaces safer and more inclusive, whether moderation is the right tack to take in developing that more inclusive future, and what other strategies for cultivating such spaces might be possible.</p>
<p><a href='https://vnls.adl.org/speaker/daniel-kelley/'>Daniel Kelley</a> is the Director of Strategy and Operations of the <a href='https://www.adl.org/research-centers/center-technology-society?gad_source=1&amp;gclid=CjwKCAjwmaO4BhAhEiwA5p4YL2zqWYhbt1pxe-PJTBFFLCbU6AsEW3J8INHyWSknLA52QN-7ZGCh4xoC71cQAvD_BwE&amp;gclsrc=aw.ds'>Anti-Defamation League (ADL) Center for Technology and Society (CTS</a>). CTS works through research and advocacy to fight for justice and fair treatment for all in digital social spaces from social media to online games and beyond. For the last five years, Daniel has been the lead author of the first nationally representative survey of hate, harassment, and positive social experiences in online games. He is also the co-author of the <a href='https://thrivingingames.org/framework/'>Disruption and Harms in Online Games Framework</a>, a resource to define harms in online multiplayer games together with members of the game industry coalition the Fair Play Alliance. He also leads CTS’ tech accountability research efforts, such as its <a href='https://www.adl.org/resources/report/2023-online-holocaust-denial-report-card'>Antisemitism and Holocaust Denial Report Card</a>, which looks at ways to create research-grounded advocacy products to inform the public about the nature of hate and harassment online and to hold tech companies accountable.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this week's episode of the show, I speak with Daniel Kelley about the culture of online gaming and the unique set of challenges related to hate, harassment, and extremism in the gaming space. We talk about the possibilities and limitations of regulating that space, and what the landscape of gaming might foretell about the future of the increasingly online lives we live, as more and more of our social interactions take place virtually. We talk about how to make those spaces safer and more inclusive, whether moderation is the right tack to take in developing that more inclusive future, and what other strategies for cultivating such spaces might be possible.</p>
<p><a href='https://vnls.adl.org/speaker/daniel-kelley/'>Daniel Kelley</a> is the Director of Strategy and Operations of the <a href='https://www.adl.org/research-centers/center-technology-society?gad_source=1&amp;gclid=CjwKCAjwmaO4BhAhEiwA5p4YL2zqWYhbt1pxe-PJTBFFLCbU6AsEW3J8INHyWSknLA52QN-7ZGCh4xoC71cQAvD_BwE&amp;gclsrc=aw.ds'>Anti-Defamation League (ADL) Center for Technology and Society (CTS</a>). CTS works through research and advocacy to fight for justice and fair treatment for all in digital social spaces from social media to online games and beyond. For the last five years, Daniel has been the lead author of the first nationally representative survey of hate, harassment, and positive social experiences in online games. He is also the co-author of the <a href='https://thrivingingames.org/framework/'>Disruption and Harms in Online Games Framework</a>, a resource to define harms in online multiplayer games together with members of the game industry coalition the Fair Play Alliance. He also leads CTS’ tech accountability research efforts, such as its <a href='https://www.adl.org/resources/report/2023-online-holocaust-denial-report-card'>Antisemitism and Holocaust Denial Report Card</a>, which looks at ways to create research-grounded advocacy products to inform the public about the nature of hate and harassment online and to hold tech companies accountable.</p>
]]></content:encoded>
                                    
        <enclosure length="85238902" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/iqwceavxaxbiz3fj/Daniel_Kelley_mixdownbcfhe.mp3"/>
        <itunes:summary>In this week’s episode of the show, I speak with Daniel Kelley, the Director of Strategy and Operations of the Anti-Defamation League (ADL) Center for Technology and Society (CTS), about the culture of online gaming, and the unique set of challenges in the gaming space, related to hate, harassment, and extremism.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3550</itunes:duration>
        <itunes:season>13</itunes:season>
        <itunes:episode>136</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this week's episode of the show, I speak with Daniel Kelley about the culture of online gaming, and the unique set of challenges in the gaming space, related to hate, harassment, and extremism. We talk about the possibilities, and limitations, of regulating that space, and what the landscape of gaming might foretell about the future of increasingly online lives that we live, as more and more of our social interactions take place virtually. We talk about how to make those spaces safer and more inclusive, and whether moderation is the right tack to take in developing that more inclusive future, as well as what other strategies for cultivating such spaces might be possible.   Daniel Kelley is the Director of Strategy and Operations of the Anti-Defamation League (ADL) Center for Technology and Society (CTS). CTS works through research and advocacy to fight for justice and fair treatment for all in digital social spaces from social media to online games and beyond. For the last five years, Daniel has been the lead author of the first nationally representative survey of hate, harassment, and positive social experiences in online games. He is also the co-author of the Disruption and Harms in Online Games Framework, a resource to define harms in online multiplayer games together with members of the game industry coalition the Fair Play Alliance. He also leads CTS’ tech accountability research efforts, such as its Antisemitism and Holocaust Denial Report Card, which looks at ways to create research-grounded advocacy products to inform the public about the nature of hate and harassment online and to hold tech companies accountable.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Art, Tech, Self: Untangling the Human Algorithm</title>
        <itunes:title>Art, Tech, Self: Untangling the Human Algorithm</itunes:title>
        <link>https://dmdonig.podbean.com/e/art-tech-self-untangling-the-human-algorithm/</link>
                    <comments>https://dmdonig.podbean.com/e/art-tech-self-untangling-the-human-algorithm/#comments</comments>        <pubDate>Fri, 04 Oct 2024 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/ca652c31-38e8-3a4a-a64c-53c0ec6d553c</guid>
                                    <description><![CDATA[<p>Hi Technically Human listeners! Welcome back to another episode of the show. Today I'm sitting down with <a href='https://philosophy.berkeley.edu/noe/'>Alva Noë</a>. We talk about his new book, The Entanglement, and the relationship between technology, philosophy, and art.</p>
<p>In <a href='https://press.princeton.edu/books/hardcover/9780691188812/the-entanglement?srsltid=AfmBOopISrjZDX4ILlJ0hB-wsFnhXOVzSCbRn75F1F8aEbdwxMOQv4vC'>The Entanglement</a>, Professor Noë explores the inseparability of life, art, and philosophy, arguing that we have greatly underestimated what this entangled reality means for understanding human nature. Neither biology, cognitive science, nor AI can tell a complete story of us, and we can no more pin ourselves down than we can fix or settle on the meaning of an artwork. Even more, art and philosophy are the means to set ourselves free, at least to some degree, from convention, habit, technology, culture, and even biology.</p>
<p>Dr. Alva Noë is a philosopher of mind whose research and teaching focus on perception, consciousness, and the philosophy of art. He is the author of Action in Perception (MIT, 2004); Out of Our Heads: Why You Are Not Your Brain and Other Lessons from the Biology of Consciousness (Farrar Straus and Giroux, 2009); Varieties of Presence (Harvard, 2012); Strange Tools: Art and Human Nature (Farrar Straus and Giroux, 2015); Infinite Baseball: Notes from a Philosopher at the Ballpark (Oxford, 2019); and, most recently, Learning to Look: Dispatches from the Art World (Oxford, 2021). He holds a Bachelor of Arts from Columbia University, a Bachelor of Philosophy from the University of Oxford, and a Ph.D. from Harvard University. He teaches in the philosophy department at UC Berkeley.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Hi Technically Human listeners! Welcome back to another episode of the show. Today I'm sitting down with <a href='https://philosophy.berkeley.edu/noe/'>Alva Noë</a>. We talk about his new book, The Entanglement, and the relationship between technology, philosophy, and art.</p>
<p>In <a href='https://press.princeton.edu/books/hardcover/9780691188812/the-entanglement?srsltid=AfmBOopISrjZDX4ILlJ0hB-wsFnhXOVzSCbRn75F1F8aEbdwxMOQv4vC'><em>The Entanglement</em></a>, Professor Noë explores the inseparability of life, art, and philosophy, arguing that we have greatly underestimated what this entangled reality means for understanding human nature. Neither biology, cognitive science, nor AI can tell a complete story of us, and we can no more pin ourselves down than we can fix or settle on the meaning of an artwork. Even more, art and philosophy are the means to set ourselves free, at least to some degree, from convention, habit, technology, culture, and even biology.</p>
<p>Dr. Alva Noë is a philosopher of mind whose research and teaching focus on perception, consciousness, and the philosophy of art. He is the author of <em>Action in Perception</em> (MIT, 2004); <em>Out of Our Heads: Why You Are Not Your Brain and Other Lessons from the Biology of Consciousness</em> (Farrar Straus and Giroux, 2009); <em>Varieties of Presence</em> (Harvard, 2012); <em>Strange Tools: Art and Human Nature</em> (Farrar Straus and Giroux, 2015); <em>Infinite Baseball: Notes from a Philosopher at the Ballpark</em> (Oxford, 2019); and, most recently, <em>Learning to Look: Dispatches from the Art World</em> (Oxford, 2021). He holds a Bachelor of Arts from Columbia University, a Bachelor of Philosophy from the University of Oxford, and a Ph.D. from Harvard University. He teaches in the philosophy department at UC Berkeley.</p>
]]></content:encoded>
                                    
        <enclosure length="92689150" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/qjyggrbqn9as8q8u/Noe_mixdown.mp3"/>
        <itunes:summary><![CDATA[Hi Technically Human listeners! Welcome back to another episode of the show. Today I'm sitting down with Alva Noë. We talk about his new book, The Entanglement, and the relationship between technology, philosophy, and art.
In The Entanglement, Professor Noë explores the inseparability of life, art, and philosophy, arguing that we have greatly underestimated what this entangled reality means for understanding human nature. Neither biology, cognitive science, nor AI can tell a complete story of us, and we can no more pin ourselves down than we can fix or settle on the meaning of an artwork. Even more, art and philosophy are the means to set ourselves free, at least to some degree, from convention, habit, technology, culture, and even biology.
Dr. Alva Noë is a philosopher of mind whose research and teaching focus on perception, consciousness, and the philosophy of art. He is the author of Action in Perception (MIT, 2004); Out of Our Heads: Why You Are Not Your Brain and Other Lessons from the Biology of Consciousness (Farrar Straus and Giroux, 2009); Varieties of Presence (Harvard, 2012); Strange Tools: Art and Human Nature (Farrar Straus and Giroux, 2015); Infinite Baseball: Notes from a Philosopher at the Ballpark (Oxford, 2019); and, most recently, Learning to Look: Dispatches from the Art World (Oxford, 2021). He holds a Bachelor of Arts from Columbia University, a Bachelor of Philosophy from the University of Oxford, and a Ph.D. from Harvard University. He teaches in the philosophy department at UC Berkeley.]]></itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3860</itunes:duration>
                <itunes:episode>134</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Hi Technically Human listeners! Welcome back to another episode of the show. Today I'm sitting down with Alva Noë. We talk about his new book, The Entanglement, and the relationship between technology, philosophy, and art. In The Entanglement, Professor Noë explores the inseparability of life, art, and philosophy, arguing that we have greatly underestimated what this entangled reality means for understanding human nature. Neither biology, cognitive science, nor AI can tell a complete story of us, and we can no more pin ourselves down than we can fix or settle on the meaning of an artwork. Even more, art and philosophy are the means to set ourselves free, at least to some degree, from convention, habit, technology, culture, and even biology. Dr. Alva Noë is a philosopher of mind whose research and teaching focus on perception, consciousness, and the philosophy of art. He is the author of Action in Perception (MIT, 2004); Out of Our Heads: Why You Are Not Your Brain and Other Lessons from the Biology of Consciousness (Farrar Straus and Giroux, 2009); Varieties of Presence (Harvard, 2012); Strange Tools: Art and Human Nature (Farrar Straus and Giroux, 2015); Infinite Baseball: Notes from a Philosopher at the Ballpark (Oxford, 2019); and, most recently, Learning to Look: Dispatches from the Art World (Oxford, 2021). He holds a Bachelor of Arts from Columbia University, a Bachelor of Philosophy from the University of Oxford, and a Ph.D. from Harvard University. He teaches in the philosophy department at UC Berkeley.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The QWERTY Keyboard and the Chinese Computer</title>
        <itunes:title>The QWERTY Keyboard and the Chinese Computer</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-qwerty-keyboard-and-the-chinese-computer/</link>
                    <comments>https://dmdonig.podbean.com/e/the-qwerty-keyboard-and-the-chinese-computer/#comments</comments>        <pubDate>Fri, 27 Sep 2024 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/dfab4678-f673-3332-a420-0e759dfb010f</guid>
                                    <description><![CDATA[<p>In this episode of the show, I speak with <a href='https://history.stanford.edu/people/thomas-mullaney'>Dr. Thomas Mullaney</a> about his new book, <a href='https://mitpress.mit.edu/9780262047517/the-chinese-computer/'>The Chinese Computer</a>. In the book, Dr. Mullaney outlines the history and evolution of Chinese-language computing technology, and explores how the technology of the QWERTY keyboard changed this history of computing. We talk about how the structure of language has shaped the history of digital technologies, and Dr. Mullaney explains how China and the non-Western world—because of the “hypographic” technologies they had to invent in order to join the personal computing revolution—help us understand the relationship between the human mind and the technologies it creates.</p>
<p>Thomas S. Mullaney is Professor of History and Professor of East Asian Languages and Cultures, by courtesy, at Stanford University. He is also the Kluge Chair in Technology and Society at the Library of Congress, and a Guggenheim Fellow. </p>
<p id="yui_3_17_2_1_1727217751094_449">He is the author or lead editor of seven books, including The Chinese Typewriter (winner of the Fairbank Prize), Your Computer is on Fire, Coming to Terms with the Nation: Ethnic Classification in Modern China, and The Chinese Computer—the first comprehensive history of Chinese-language computing. 

His writings have appeared in the Journal of Asian Studies, Technology &amp; Culture, Aeon, Foreign Affairs, and Foreign Policy, and his work has been featured in the LA Times, The Atlantic, the BBC, and in invited lectures at Google, Microsoft, Adobe, and more. He holds a PhD from Columbia University.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of the show, I speak with <a href='https://history.stanford.edu/people/thomas-mullaney'>Dr. Thomas Mullaney</a> about his new book, <a href='https://mitpress.mit.edu/9780262047517/the-chinese-computer/'><em>The Chinese Computer</em></a>. In the book, Dr. Mullaney outlines the history and evolution of Chinese-language computing technology, and explores how the technology of the QWERTY keyboard changed this history of computing. We talk about how the structure of language has shaped the history of digital technologies, and Dr. Mullaney explains how China and the non-Western world—because of the “hypographic” technologies they had to invent in order to join the personal computing revolution—help us understand the relationship between the human mind and the technologies it creates.</p>
<p>Thomas S. Mullaney is Professor of History and Professor of East Asian Languages and Cultures, by courtesy, at Stanford University. He is also the Kluge Chair in Technology and Society at the Library of Congress, and a Guggenheim Fellow. </p>
<p id="yui_3_17_2_1_1727217751094_449">He is the author or lead editor of seven books, including <em>The Chinese Typewriter </em>(winner of the Fairbank Prize), <em>Your Computer is on Fire</em>, <em>Coming to Terms with the Nation: Ethnic Classification in Modern China</em>, and <em>The Chinese Computer</em>—the first comprehensive history of Chinese-language computing. <br>
<br>
His writings have appeared in the Journal of Asian Studies, Technology &amp; Culture, Aeon, Foreign Affairs, and Foreign Policy, and his work has been featured in the LA Times, The Atlantic, the BBC, and in invited lectures at Google, Microsoft, Adobe, and more. He holds a PhD from Columbia University.</p>
]]></content:encoded>
                                    
        <enclosure length="95420973" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/vc3wpf2ba7nny5jv/Mullaney_mixdown.mp3"/>
        <itunes:summary>In this episode of the show, I speak with Dr. Thomas Mullaney about his new book, The Chinese Computer. In the book, Dr. Mullaney outlines the history and evolution of Chinese-language computing technology, and explores how the technology of the QWERTY keyboard changed this history of computing. We talk about how the structure of language has shaped the history of digital technologies, and Dr. Mullaney explains how China and the non-Western world—because of the “hypographic” technologies they had to invent in order to join the personal computing revolution—help us understand the relationship between the human mind and the technologies it creates.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3975</itunes:duration>
        <itunes:season>13</itunes:season>
        <itunes:episode>135</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of the show, I speak with Dr. Thomas Mullaney about his new book, The Chinese Computer. In the book, Dr. Mullaney outlines the history and evolution of Chinese-language computing technology, and explores how the technology of the QWERTY keyboard changed this history of computing. We talk about how the structure of language has shaped the history of digital technologies, and Dr. Mullaney explains how China and the non-Western world—because of the “hypographic” technologies they had to invent in order to join the personal computing revolution—help us understand the relationship between the human mind and the technologies it creates. Thomas S. Mullaney is Professor of History and Professor of East Asian Languages and Cultures, by courtesy, at Stanford University. He is also the Kluge Chair in Technology and Society at the Library of Congress, and a Guggenheim Fellow. He is the author or lead editor of seven books, including The Chinese Typewriter (winner of the Fairbank Prize), Your Computer is on Fire, Coming to Terms with the Nation: Ethnic Classification in Modern China, and The Chinese Computer—the first comprehensive history of Chinese-language computing. His writings have appeared in the Journal of Asian Studies, Technology &amp; Culture, Aeon, Foreign Affairs, and Foreign Policy, and his work has been featured in the LA Times, The Atlantic, the BBC, and in invited lectures at Google, Microsoft, Adobe, and more. He holds a PhD from Columbia University.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Agree to Disagree: Are we living in an age of techno-pessimism?</title>
        <itunes:title>Agree to Disagree: Are we living in an age of techno-pessimism?</itunes:title>
        <link>https://dmdonig.podbean.com/e/agree-to-disagree-are-we-living-in-an-age-of-techno-pessimism/</link>
                    <comments>https://dmdonig.podbean.com/e/agree-to-disagree-are-we-living-in-an-age-of-techno-pessimism/#comments</comments>        <pubDate>Fri, 20 Sep 2024 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/eaf3cfab-65f3-3944-a26f-55f5423acd3e</guid>
                                    <description><![CDATA[<p>Hi Technically Human Listeners!</p>
<p>After a long summer break we are back with a brand new season and brand new episodes of the show! To kick off the season, we are bringing you an episode that I’m calling “agree to disagree,” with two guests, Robert D. Atkinson and David Moschella, who join me to argue that the critiques of tech circulating in our environment are full of “myths and scapegoats.” That’s the title of their new book, “Technology Fears and Scapegoats: 40 Myths About Privacy, Jobs, AI, and Today’s Innovation Economy,” published this year by Palgrave Macmillan. The book argues that our era of tech critique, and the impetus for regulation that many critics advocate for, is misguided, and that ours is an era of general pessimism toward AI, in which our society largely overlooks the benefits of this technology. In their words: “These attitudes both reduce the enthusiasm for innovation and the efforts by government needed to spur it.”</p>
<p>Well, as the title of the episode suggests, agree to disagree, both on the facts and the merits of the argument! A key component of this show is my commitment to talking to people with whom I disagree, and foregrounding civil discourse with people whose ideas differ from my own. My hope is that you, the listeners, can weigh out their arguments against my own and see where you land. As always, if you have thoughts about the show, please get in touch!</p>
<p><a href='https://itif.org/person/robert-d-atkinson/'>Robert D. Atkinson</a> is the founder and president of the Information Technology and Innovation Foundation (ITIF). He is an internationally recognized scholar and a widely published author whom The New Republic has named one of the “three most important thinkers about innovation,” Washingtonian Magazine has called a “tech titan,” Government Technology Magazine has judged to be one of the 25 top “doers, dreamers and drivers of information technology,” and the Wharton Business School has given the “Wharton Infosys Business Transformation Award.”</p>
<p>A sought-after speaker and valued adviser to policymakers around the world, Atkinson’s books include <a href='https://itif.org/publications/2024/05/07/technology-fears-and-scapegoats/'>Technology Fears and Scapegoats: 40 Myths about Privacy, Jobs, AI, and Today’s Innovation Economy</a> (Palgrave Macmillan, 2024); <a href='https://mitpress.mit.edu/books/big-beautiful'>Big is Beautiful: Debunking the Mythology of Small Business</a> (MIT Press, 2018); <a href='http://www.amazon.com/gp/product/0300168993/'>Innovation Economics: The Race for Global Advantage</a> (Yale, 2012); <a href='https://www.amazon.com/Supply-Side-Follies-Conservative-Economics-Innovation/dp/0742551075'>Supply-Side Follies: Why Conservative Economics Fails, Liberal Economics Falters, and Innovation Economics is the Answer</a> (Rowman &amp; Littlefield, 2006); and <a href='http://www.amazon.com/gp/product/1845425766'>The Past And Future Of America’s Economy: Long Waves Of Innovation That Power Cycles Of Growth</a> (Edward Elgar, 2005).</p>
<p>President Clinton appointed Atkinson to the Commission on Workers, Communities, and Economic Change in the New Economy; the Bush administration appointed him chair of the congressionally created National Surface Transportation Infrastructure Financing Commission; the Obama administration appointed him to the National Innovation and Competitiveness Strategy Advisory Board; as co-chair of the White House Office of Science and Technology Policy’s China-U.S. Innovation Policy Experts Group; to the U.S. Department of Commerce’s National Advisory Council on Innovation and Entrepreneurship; and the Trump administration appointed him to the G7 Global Partnership on Artificial Intelligence. The Biden administration appointed him as a member of the U.S. State Department’s Advisory Committee on International Communications and Information, and a member of the Export-Import Bank of the United States' Council on China Competition.</p>
<p>Atkinson holds a Ph.D. in city and regional planning from the University of North Carolina, Chapel Hill.</p>
<p><a href='https://www.google.com/search?q=David+Moschella&amp;rlz=1C5GCEM_enUS1069US1071&amp;oq=David+Moschella&amp;gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIGCAEQRRg8MgYIAhBFGDwyBggDEEUYPNIBBzMzNGowajSoAgCwAgA&amp;sourceid=chrome&amp;ie=UTF-8'>David Moschella</a> is a nonresident senior fellow at ITIF. Previously, he was a research fellow at Leading Edge Forum (LEF), where he explored the global business impact of digital technologies, with a particular focus on disruptive business models, industry restructuring and machine intelligence. For more than a decade before LEF, David was in charge of worldwide research for IDC, the largest market analysis firm in the information technology industry, responsible for the company’s global technology industry forecasts and insights.</p>
<p>A well-known international speaker, writer, and thought leader, David’s books include <a href='https://itif.org/publications/2024/05/07/technology-fears-and-scapegoats/'>Technology Fears and Scapegoats: 40 Myths about Privacy, Jobs, AI, and Today’s Innovation Economy</a> (Palgrave Macmillan, 2024), <a href='https://www.amazon.com/Seeing-Digital-Industries-Organizations-Careers/dp/0692113444/'>Seeing Digital—A Visual Guide to the Industries, Organizations, and Careers of the 2020s</a> (DXC Technology, 2018), <a href='https://www.amazon.com/Customer-Driven-Shaping-Technology-Industry-Growth/dp/1578518652'>Customer-Driven IT</a> (Harvard Business School Press, 2003), and <a href='https://www.amazon.com/Waves-Power-Technology-Leadership-1964-2010/dp/0814403794/'>Waves of Power</a> (Amacom, 1997). He has lectured and consulted on digital trends and strategies in more than 30 countries, working with leading customers and suppliers alike.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Hi Technically Human Listeners!</p>
<p>After a long summer break we are back with a brand new season and brand new episodes of the show! To kick off the season, we are bringing you an episode that I’m calling “agree to disagree,” with two guests, Robert D. Atkinson and David Moschella, who join me to argue that the critiques of tech circulating in our environment are full of “myths and scapegoats.” That’s the title of their new book, “Technology Fears and Scapegoats: 40 Myths About Privacy, Jobs, AI, and Today’s Innovation Economy,” published this year by Palgrave Macmillan. The book argues that our era of tech critique, and the impetus for regulation that many critics advocate for, is misguided, and that ours is an era of general pessimism toward AI, in which our society largely overlooks the benefits of this technology. In their words: “These attitudes both reduce the enthusiasm for innovation and the efforts by government needed to spur it.”</p>
<p>Well, as the title of the episode suggests, agree to disagree, both on the facts and the merits of the argument! A key component of this show is my commitment to talking to people with whom I disagree, and foregrounding civil discourse with people whose ideas differ from my own. My hope is that you, the listeners, can weigh out their arguments against my own and see where you land. As always, if you have thoughts about the show, please get in touch!</p>
<p><a href='https://itif.org/person/robert-d-atkinson/'>Robert D. Atkinson</a> is the founder and president of the Information Technology and Innovation Foundation (ITIF). He is an internationally recognized scholar and a widely published author whom <em>The New Republic </em>has named one of the “three most important thinkers about innovation,” <em>Washingtonian Magazine </em>has called a “tech titan,” <em>Government Technology Magazine</em> has judged to be one of the 25 top “doers, dreamers and drivers of information technology,” and the Wharton Business School has given the “Wharton Infosys Business Transformation Award.”</p>
<p>A sought-after speaker and valued adviser to policymakers around the world, Atkinson’s books include <em><a href='https://itif.org/publications/2024/05/07/technology-fears-and-scapegoats/'>Technology Fears and Scapegoats: 40 Myths about Privacy, Jobs, AI, and Today’s Innovation Economy</a> </em>(Palgrave Macmillan, 2024);<em> </em><em><a href='https://mitpress.mit.edu/books/big-beautiful'>Big is Beautiful: Debunking the Mythology of Small Business</a></em> (MIT Press, 2018); <a href='http://www.amazon.com/gp/product/0300168993/'><em>Innovation Economics: The Race for Global Advantage</em></a> (Yale, 2012); <a href='https://www.amazon.com/Supply-Side-Follies-Conservative-Economics-Innovation/dp/0742551075'><em>Supply-Side Follies: Why Conservative Economics Fails, Liberal Economics Falters, and Innovation Economics is the Answer</em></a> (Rowman &amp; Littlefield, 2006); and <a href='http://www.amazon.com/gp/product/1845425766'><em>The Past And Future Of America’s Economy: Long Waves Of Innovation That Power Cycles Of Growth</em></a> (Edward Elgar, 2005).</p>
<p>President Clinton appointed Atkinson to the Commission on Workers, Communities, and Economic Change in the New Economy; the Bush administration appointed him chair of the congressionally created National Surface Transportation Infrastructure Financing Commission; the Obama administration appointed him to the National Innovation and Competitiveness Strategy Advisory Board; as co-chair of the White House Office of Science and Technology Policy’s China-U.S. Innovation Policy Experts Group; to the U.S. Department of Commerce’s National Advisory Council on Innovation and Entrepreneurship; and the Trump administration appointed him to the G7 Global Partnership on Artificial Intelligence. The Biden administration appointed him as a member of the U.S. State Department’s Advisory Committee on International Communications and Information, and a member of the Export-Import Bank of the United States' Council on China Competition.</p>
<p>Atkinson holds a Ph.D. in city and regional planning from the University of North Carolina, Chapel Hill.</p>
<p><a href='https://www.google.com/search?q=David+Moschella&amp;rlz=1C5GCEM_enUS1069US1071&amp;oq=David+Moschella&amp;gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIGCAEQRRg8MgYIAhBFGDwyBggDEEUYPNIBBzMzNGowajSoAgCwAgA&amp;sourceid=chrome&amp;ie=UTF-8'>David Moschella</a> is a nonresident senior fellow at ITIF. Previously, he was a research fellow at Leading Edge Forum (LEF), where he explored the global business impact of digital technologies, with a particular focus on disruptive business models, industry restructuring and machine intelligence. For more than a decade before LEF, David was in charge of worldwide research for IDC, the largest market analysis firm in the information technology industry, responsible for the company’s global technology industry forecasts and insights.</p>
<p>A well-known international speaker, writer, and thought leader, David’s books include <a href='https://itif.org/publications/2024/05/07/technology-fears-and-scapegoats/'><em>Technology Fears and Scapegoats: 40 Myths about Privacy, Jobs, AI, and Today’s Innovation Economy</em></a> (Palgrave Macmillan, 2024), <a href='https://www.amazon.com/Seeing-Digital-Industries-Organizations-Careers/dp/0692113444/'><em>Seeing Digital—A Visual Guide to the Industries, Organizations, and Careers of the 2020s</em></a> (DXC Technology, 2018), <a href='https://www.amazon.com/Customer-Driven-Shaping-Technology-Industry-Growth/dp/1578518652'><em>Customer-Driven IT</em></a> (Harvard Business School Press, 2003), and <a href='https://www.amazon.com/Waves-Power-Technology-Leadership-1964-2010/dp/0814403794/'><em>Waves of Power</em></a> (Amacom, 1997). He has lectured and consulted on digital trends and strategies in more than 30 countries, working with leading customers and suppliers alike.</p>
]]></content:encoded>
                                    
        <enclosure length="111612415" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/j5ywbeiermz8stkh/Moschino_2_mixdown90n7k.mp3"/>
        <itunes:summary>To kick off the season, we are bringing you an episode that I’m calling “agree to disagree,” with two guests, Robert D. Atkinson and David Moschella, who join me to argue that the critiques of tech circulating in our environment are full of “myths and scapegoats,” the subject of their new book. We debate the role of regulation, the idea that our age is one of techno-pessimism, and, well, everything else!</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4650</itunes:duration>
        <itunes:season>13</itunes:season>
        <itunes:episode>133</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Hi Technically Human Listeners! After a long summer break we are back with a brand new season and brand new episodes of the show! To kick off the season, we are bringing you an episode that I’m calling “agree to disagree,” with two guests, Robert D. Atkinson and David Moschella, who join me to argue that the critiques of tech circulating in our environment are full of “myths and scapegoats.” That’s the title of their new book, “Technology Fears and Scapegoats: 40 Myths About Privacy, Jobs, AI, and Today’s Innovation Economy,” published this year by Palgrave Macmillan. The book argues that our era of tech critique, and the impetus for regulation that many critics advocate for, is misguided, and that ours is an era of general pessimism toward AI, in which our society largely overlooks the benefits of this technology. In their words: “These attitudes both reduce the enthusiasm for innovation and the efforts by government needed to spur it.” Well, as the title of the episode suggests, agree to disagree, both on the facts and the merits of the argument! A key component of this show is my commitment to talking to people with whom I disagree, and foregrounding civil discourse with people whose ideas differ from my own. My hope is that you, the listeners, can weigh out their arguments against my own and see where you land. As always, if you have thoughts about the show, please get in touch! Robert D. Atkinson is the founder and president of the Information Technology and Innovation Foundation (ITIF). 
He is an internationally recognized scholar and a widely published author whom The New Republic has named one of the “three most important thinkers about innovation,” Washingtonian Magazine has called a “tech titan,” Government Technology Magazine has judged to be one of the 25 top “doers, dreamers and drivers of information technology,” and the Wharton Business School has given the “Wharton Infosys Business Transformation Award.” A sought-after speaker and valued adviser to policymakers around the world, Atkinson’s books include Technology Fears and Scapegoats: 40 Myths about Privacy, Jobs, AI, and Today’s Innovation Economy (Palgrave Macmillan, 2024); Big is Beautiful: Debunking the Mythology of Small Business (MIT Press, 2018); Innovation Economics: The Race for Global Advantage (Yale, 2012); Supply-Side Follies: Why Conservative Economics Fails, Liberal Economics Falters, and Innovation Economics is the Answer (Rowman &amp; Littlefield, 2006); and The Past And Future Of America’s Economy: Long Waves Of Innovation That Power Cycles Of Growth (Edward Elgar, 2005). President Clinton appointed Atkinson to the Commission on Workers, Communities, and Economic Change in the New Economy; the Bush administration appointed him chair of the congressionally created National Surface Transportation Infrastructure Financing Commission; the Obama administration appointed him to the National Innovation and Competitiveness Strategy Advisory Board; as co-chair of the White House Office of Science and Technology Policy’s China-U.S. Innovation Policy Experts Group; to the U.S. Department of Commerce’s National Advisory Council on Innovation and Entrepreneurship; and the Trump administration appointed him to the G7 Global Partnership on Artificial Intelligence. The Biden administration appointed him as a member of the U.S. 
State Department’s Advisory Committee on International Communications and Information, and a member of the Export-Import Bank of the United States' Council on China Competition. Atkinson holds a Ph.D. in city and regional planning from the University of North Carolina, Chapel Hill. David Moschella is a nonresident senior fellow at ITIF. Previously, he was a research fellow at Leading Edge Forum (LEF), where he explored the global business impact of digital technologies, with a particular focus on disruptive business models, industry restructuring and machine intelligence. For more than a decade before LEF, David was in charge of worldwide research for IDC, the largest market analysis firm in the information technology industry, responsible for the company’s global technology industry forecasts and insights. A well-known international speaker, writer, and thought leader, David’s books include Technology Fears and Scapegoats: 40 Myths about Privacy, Jobs, AI, and Today’s Innovation Economy (Palgrave Macmillan, 2024), Seeing Digital—A Visual Guide to the Industries, Organizations, and Careers of the 2020s (DXC Technology, 2018), Customer-Driven IT (Harvard Business School Press, 2003), and Waves of Power (Amacom, 1997). He has lectured and consulted on digital trends and strategies in more than 30 countries, working with leading customers and suppliers alike.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Ethics and Technology of Teams in the Age of AI</title>
        <itunes:title>The Ethics and Technology of Teams in the Age of AI</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-ethics-and-technology-of-teams-in-the-age-of-ai/</link>
                    <comments>https://dmdonig.podbean.com/e/the-ethics-and-technology-of-teams-in-the-age-of-ai/#comments</comments>        <pubDate>Fri, 10 May 2024 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/e4ab0c7a-5012-3062-88bb-3cbffd2d7e0b</guid>
                                    <description><![CDATA[<p>Today I’m speaking with <a href='https://www.linkedin.com/in/pgonloop'>Projjal Ghatak</a>, CEO &amp; Co-Founder at OnLoop, about the ethics of teamwork, collaboration, and providing constructive feedback.</p>
<p>Projjal founded OnLoop in 2020 to create a category called Collaborative Team Development (CTD) and fundamentally reinvent how hybrid teams are assessed and developed. The company grew out of more than a decade of frustration with clunky, traditional enterprise performance management and learning processes and tools that were either hated or ignored by his teams at companies like Uber and Accenture, where he spent many years.</p>
<p>Prior to founding OnLoop, Projjal spent three and a half years at Uber in a variety of roles, including leading Strategy &amp; Operations for Business Development globally, leading Strategy &amp; Planning for the APAC rides business, and serving as GM of the Philippines rides business. Before Uber, he raised debt and equity from New York hedge funds for an industrial conglomerate (Essar), worked in strategy consulting in Southeast Asia (Accenture), and worked with early-stage companies in Latin America (BlueKite, El Market). He holds an MBA from Stanford University.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Today I’m speaking with <a href='https://www.linkedin.com/in/pgonloop'>Projjal Ghatak</a>, CEO &amp; Co-Founder at OnLoop, about the ethics of teamwork, collaboration, and providing constructive feedback.</p>
<p>Projjal founded OnLoop in 2020 to create a category called Collaborative Team Development (CTD) and fundamentally reinvent how hybrid teams are assessed and developed. The company grew out of more than a decade of frustration with clunky, traditional enterprise performance management and learning processes and tools that were either hated or ignored by his teams at companies like Uber and Accenture, where he spent many years.</p>
<p>Prior to founding OnLoop, Projjal spent three and a half years at Uber in a variety of roles, including leading Strategy &amp; Operations for Business Development globally, leading Strategy &amp; Planning for the APAC rides business, and serving as GM of the Philippines rides business. Before Uber, he raised debt and equity from New York hedge funds for an industrial conglomerate (Essar), worked in strategy consulting in Southeast Asia (Accenture), and worked with early-stage companies in Latin America (BlueKite, El Market). He holds an MBA from Stanford University.</p>
]]></content:encoded>
                                    
        <enclosure length="73462441" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/zjrjjf38fq96t86u/Projjal_mixdown.mp3"/>
        <itunes:summary>Today I’m speaking with Projjal Ghatak, CEO &amp; Co-Founder at OnLoop, about the ethics of teamwork, collaboration, and providing constructive feedback.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3060</itunes:duration>
        <itunes:season>14</itunes:season>
        <itunes:episode>132</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Today I’m speaking with Projjal Ghatak, CEO &amp;amp; Co-Founder at OnLoop, about the ethics of teamwork, collaboration, and providing constructive feedback. Projjal founded OnLoop in 2020 to create a category called Collaborative Team Development (CTD) and fundamentally reinvent how hybrid teams are assessed and developed. The company grew out of more than a decade of frustration with clunky, traditional enterprise performance management and learning processes and tools that were either hated or ignored by his teams at companies like Uber and Accenture, where he spent many years. Prior to founding OnLoop, Projjal spent three and a half years at Uber in a variety of roles, including leading Strategy &amp;amp; Operations for Business Development globally, leading Strategy &amp;amp; Planning for the APAC rides business, and serving as GM of the Philippines rides business. Before Uber, he raised debt and equity from New York hedge funds for an industrial conglomerate (Essar), worked in strategy consulting in Southeast Asia (Accenture), and worked with early-stage companies in Latin America (BlueKite, El Market). He holds an MBA from Stanford University.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Ethics Works: A day in the life of an ethics worker in tech</title>
        <itunes:title>Ethics Works: A day in the life of an ethics worker in tech</itunes:title>
        <link>https://dmdonig.podbean.com/e/ethics-works-a-day-in-the-life-of-an-ethics-worker-in-tech/</link>
                    <comments>https://dmdonig.podbean.com/e/ethics-works-a-day-in-the-life-of-an-ethics-worker-in-tech/#comments</comments>        <pubDate>Fri, 26 Apr 2024 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/e148cf70-66c9-3a57-8ed5-721696f4dc38</guid>
                                    <description><![CDATA[<p>In this episode of the show, I speak with <a href='https://www.linkedin.com/in/sarahfairweather/'>Sarah Fairweather</a> about what it is like to be an ethics worker. We talk about how ethical work can sync up with business practices, how to develop a culture of ethics in industry, and Sarah talks me through what it is like to practice ethics as a day job.</p>
<p>Sarah Fairweather is the Senior Program Manager of Ethics at <a href='https://wellsaidlabs.com/'>WellSaid Labs</a>, shaping Responsible AI for synthetic voice technology and designing policies for WellSaid Labs’ ethical AI deployment. She leads the effort at WellSaid to ensure that every team in the organization is equipped with the tools and skills they need to make ethics-informed designs and decisions in support of responsible innovation. Before WellSaid Labs, she was the Director of Professional Learning at Code.org where she designed equity-focused K-12 professional development experiences and co-led the company’s first Equity Working Group.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of the show, I speak with <a href='https://www.linkedin.com/in/sarahfairweather/'>Sarah Fairweather</a> about what it is like to be an ethics worker. We talk about how ethical work can sync up with business practices, how to develop a culture of ethics in industry, and Sarah talks me through what it is like to practice ethics as a day job.</p>
<p>Sarah Fairweather is the Senior Program Manager of Ethics at <a href='https://wellsaidlabs.com/'>WellSaid Labs</a>, shaping Responsible AI for synthetic voice technology and designing policies for WellSaid Labs’ ethical AI deployment. She leads the effort at WellSaid to ensure that every team in the organization is equipped with the tools and skills they need to make ethics-informed designs and decisions in support of responsible innovation. Before WellSaid Labs, she was the Director of Professional Learning at Code.org where she designed equity-focused K-12 professional development experiences and co-led the company’s first Equity Working Group.</p>
]]></content:encoded>
                                    
        <enclosure length="64825652" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/a7qit7n79tky7fhc/Fairweather_mixdown.mp3"/>
        <itunes:summary>In this episode of the show, I speak with Sarah Fairweather about what it is like to be an ethics worker. We talk about how ethical work can sync up with business practices, how to develop a culture of ethics in industry, and Sarah talks me through what it is like to practice ethics as a day job.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2700</itunes:duration>
        <itunes:season>9</itunes:season>
        <itunes:episode>131</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of the show, I speak with Sarah Fairweather about what it is like to be an ethics worker. We talk about how ethical work can sync up with business practices, how to develop a culture of ethics in industry, and Sarah talks me through what it is like to practice ethics as a day job. Sarah Fairweather is the Senior Program Manager of Ethics at WellSaid Labs, shaping Responsible AI for synthetic voice technology and designing policies for WellSaid Labs’ ethical AI deployment. She leads the effort at WellSaid to ensure that every team in the organization is equipped with the tools and skills they need to make ethics-informed designs and decisions in support of responsible innovation. Before WellSaid Labs, she was the Director of Professional Learning at Code.org where she designed equity-focused K-12 professional development experiences and co-led the company’s first Equity Working Group.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Feel the Burn: A new novel explores the financial crisis in tech</title>
        <itunes:title>Feel the Burn: A new novel explores the financial crisis in tech</itunes:title>
        <link>https://dmdonig.podbean.com/e/feel-the-burn-a-new-novel-explores-the-financial-crisis-in-tech/</link>
                    <comments>https://dmdonig.podbean.com/e/feel-the-burn-a-new-novel-explores-the-financial-crisis-in-tech/#comments</comments>        <pubDate>Fri, 19 Apr 2024 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/fb5a0b88-e89e-3d75-a64f-d28b0da9f560</guid>
                                    <description><![CDATA[<p>In this episode of the show, I sit down with author <a href='https://www.miketrigg.com'>Mike Trigg</a> about his new novel, <a href='https://www.barnesandnoble.com/w/burner-mike-trigg/1143708287'>Burner</a>. Mike Trigg is a novelist, tech executive, founder, and an investor in dozens of technology start-up companies over the past twenty-five years. His first novel, Bit Flip, was released in August 2022 to critical acclaim, lauded by the San Francisco Chronicle as a “twisty, acerbic corporate thriller.” His work has been featured in Publishers Weekly, Kirkus, and Literary Hub. He has been a contributor to TechCrunch, Entrepreneur, and Fast Company, and frequently posts on his author site, <a href='http://www.miketrigg.com'>www.miketrigg.com</a>.</p>
<p> </p>
<p><a href='https://www.miketrigg.com/books'>Burner</a> is a mind-bending thriller that dives headfirst into our modern online zeitgeist of social media disinformation, toxic internet subcultures, and the human need for belonging, purpose, and love in an age of distorted electronic personas. The story confronts the loss of the American dream and the societal factors behind it, including wealth inequality, lack of opportunity, and cultural prejudices. At the same time, it is a tragic love story, asking the question of whether real human connection is inherently incompatible with our addiction to online esteem.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of the show, I sit down with author <a href='https://www.miketrigg.com'>Mike Trigg</a> about his new novel, <a href='https://www.barnesandnoble.com/w/burner-mike-trigg/1143708287'><em>Burner</em></a>. Mike Trigg is a novelist, tech executive, founder, and an investor in dozens of technology start-up companies over the past twenty-five years. His first novel, <em>Bit Flip</em>, was released in August 2022 to critical acclaim, lauded by the San Francisco Chronicle as a “twisty, acerbic corporate thriller.” His work has been featured in <em>Publishers Weekly</em>, <em>Kirkus</em>, and <em>Literary Hub</em>. He has been a contributor to <em>TechCrunch</em>, <em>Entrepreneur</em>, and <em>Fast Company</em>, and frequently posts on his author site, <a href='http://www.miketrigg.com'>www.miketrigg.com</a>.</p>
<p> </p>
<p><a href='https://www.miketrigg.com/books'><em>Burner</em></a> is a mind-bending thriller that dives headfirst into our modern online zeitgeist of social media disinformation, toxic internet subcultures, and the human need for belonging, purpose, and love in an age of distorted electronic personas. The story confronts the loss of the American dream and the societal factors behind it, including wealth inequality, lack of opportunity, and cultural prejudices. At the same time, it is a tragic love story, asking the question of whether real human connection is inherently incompatible with our addiction to online esteem.</p>
]]></content:encoded>
                                    
        <enclosure length="92071569" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/2sbx95ybmpcjkvzh/Trigg_mixdown.mp3"/>
        <itunes:summary>In this episode of the show, I sit down with author Mike Trigg about his new novel, Burner. Mike Trigg is an author, a novelist, a tech executive, a tech founder, and an investor in dozens of technology start-up companies for over twenty-five years. His first novel, Bit Flip, was released in August 2022 to critical acclaim, lauded by the San Francisco Chronicle as a “twisty, acerbic corporate thriller.”</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3834</itunes:duration>
        <itunes:season>10</itunes:season>
        <itunes:episode>130</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of the show, I sit down with author Mike Trigg about his new novel, Burner. Mike Trigg is a novelist, tech executive, founder, and an investor in dozens of technology start-up companies over the past twenty-five years. His first novel, Bit Flip, was released in August 2022 to critical acclaim, lauded by the San Francisco Chronicle as a “twisty, acerbic corporate thriller.” His work has been featured in Publishers Weekly, Kirkus, and Literary Hub. He has been a contributor to TechCrunch, Entrepreneur, and Fast Company, and frequently posts on his author site, www.miketrigg.com. Burner is a mind-bending thriller that dives headfirst into our modern online zeitgeist of social media disinformation, toxic internet subcultures, and the human need for belonging, purpose, and love in an age of distorted electronic personas. The story confronts the loss of the American dream and the societal factors behind it, including wealth inequality, lack of opportunity, and cultural prejudices. At the same time, it is a tragic love story, asking whether real human connection is inherently incompatible with our addiction to online esteem.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Dr. Strangelanguage: How I Learned to Stop Worrying and Love Generative AI in Medicine</title>
        <itunes:title>Dr. Strangelanguage: How I Learned to Stop Worrying and Love Generative AI in Medicine</itunes:title>
        <link>https://dmdonig.podbean.com/e/dr-strangelanguage-how-i-learned-to-stop-worrying-and-love-generative-ai-in-medicine/</link>
                    <comments>https://dmdonig.podbean.com/e/dr-strangelanguage-how-i-learned-to-stop-worrying-and-love-generative-ai-in-medicine/#comments</comments>        <pubDate>Fri, 12 Apr 2024 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/cfb01247-d73d-35cc-a5cc-88e02167a7d9</guid>
                                    <description><![CDATA[<p>In this episode of the show, I sit down with Dr. Robert Pearl to talk about his new book, <a href='https://www.amazon.com/ChatGPT-MD-AI-Empowered-Patients-American-ebook/dp/B0CWCV9DVZ'>ChatGPT, MD: How AI-Empowered Patients &amp; Doctors Can Take Back Control of American Medicine</a>, a book he co-authored with...ChatGPT! We talk about the deep fractures and problems in American health care that Generative AI may be positioned to solve, the changing landscape of health care, and  the possibility that Amazon, Google, or OpenAI may become the nation's latest healthcare providers.</p>
<p> </p>
<p>For 18 years, <a href='https://robertpearlmd.com/'>Robert Pearl, MD,</a> served as CEO of The Permanente Medical Group (Kaiser Permanente). He is also former president of The Mid-Atlantic Permanente Medical Group. In these roles he led 10,000 physicians and 38,000 staff, and was responsible for the nationally recognized medical care of 5 million Kaiser Permanente members on the west and east coasts.</p>
<p>He is a clinical professor of plastic surgery at Stanford University School of Medicine and on the faculty at the Stanford Graduate School of Business, where he teaches courses on healthcare strategy, technology, and leadership. Pearl is board-certified in plastic and reconstructive surgery, receiving his medical degree from Yale, followed by a residency in plastic and reconstructive surgery at Stanford University.</p>
<p>He’s the author of three books: Mistreated: Why We Think We’re Getting Good Healthcare—And Why We’re Usually Wrong, a Washington Post bestseller (2017); Uncaring: How the Culture of Medicine Kills Doctors &amp; Patients, a Kirkus star recipient (2021); and his newest book ChatGPT, MD: How AI-Empowered Patients &amp; Doctors Can Take Back Control of American Medicine (April 2024). All profits from sales of his books go to Doctors Without Borders.</p>
<p>Dr. Pearl is a LinkedIn “Top Voice” in healthcare and host of the popular podcasts Fixing Healthcare and Medicine: The Truth. He publishes two monthly healthcare newsletters reaching 50,000+ combined subscribers. A frequent keynote speaker, Pearl has presented at The World Healthcare Congress, the Commonwealth Club, TEDx, HLTH, NCQA Quality Talks, the National Primary Care Transformation Summit, American Society of Plastic Surgeons, and international conferences in Brazil, Australia, India, and beyond.</p>
<p>Pearl’s insights on generative AI in healthcare have been featured in Associated Press, USA Today, MSN, FOX Business, Forbes, Fast Company, WIRED, Global News, Modern Healthcare, Medscape, Medpage Today, AI in Healthcare, Doximity, Becker’s Hospital Review, the Advisory Board, the Journal of AHIMA, and more.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of the show, I sit down with Dr. Robert Pearl to talk about his new book, <em><a href='https://www.amazon.com/ChatGPT-MD-AI-Empowered-Patients-American-ebook/dp/B0CWCV9DVZ'>ChatGPT, MD: How AI-Empowered Patients &amp; Doctors Can Take Back Control of American Medicine</a>, </em>a book he co-authored with...ChatGPT! We talk about the deep fractures and problems in American health care that Generative AI may be positioned to solve, the changing landscape of health care, and  the possibility that Amazon, Google, or OpenAI may become the nation's latest healthcare providers.</p>
<p> </p>
<p>For 18 years, <a href='https://robertpearlmd.com/'>Robert Pearl, MD,</a> served as CEO of The Permanente Medical Group (Kaiser Permanente). He is also former president of The Mid-Atlantic Permanente Medical Group. In these roles he led 10,000 physicians and 38,000 staff, and was responsible for the nationally recognized medical care of 5 million Kaiser Permanente members on the west and east coasts.</p>
<p>He is a clinical professor of plastic surgery at Stanford University School of Medicine and on the faculty at the Stanford Graduate School of Business, where he teaches courses on healthcare strategy, technology, and leadership. Pearl is board-certified in plastic and reconstructive surgery, receiving his medical degree from Yale, followed by a residency in plastic and reconstructive surgery at Stanford University.</p>
<p>He’s the author of three books: <em>Mistreated: Why We Think We’re Getting Good Healthcare—And Why We’re Usually Wrong</em>, a Washington Post bestseller (2017); <em>Uncaring: How the Culture of Medicine Kills Doctors &amp; Patients</em>, a Kirkus star recipient (2021); and his newest book, <em>ChatGPT, MD: How AI-Empowered Patients &amp; Doctors Can Take Back Control of American Medicine</em> (April 2024). All profits from sales of his books go to Doctors Without Borders.</p>
<p>Dr. Pearl is a LinkedIn “Top Voice” in healthcare and host of the popular podcasts <em>Fixing Healthcare</em> and <em>Medicine: The Truth</em>. He publishes two monthly healthcare newsletters reaching 50,000+ combined subscribers. A frequent keynote speaker, Pearl has presented at The World Healthcare Congress, the Commonwealth Club, TEDx, HLTH, NCQA Quality Talks, the National Primary Care Transformation Summit, American Society of Plastic Surgeons, and international conferences in Brazil, Australia, India, and beyond.</p>
<p>Pearl’s insights on generative AI in healthcare have been featured in Associated Press, USA Today, MSN, FOX Business, Forbes, Fast Company, WIRED, Global News, Modern Healthcare, Medscape, Medpage Today, AI in Healthcare, Doximity, Becker’s Hospital Review, the Advisory Board, the Journal of AHIMA, and more.</p>
]]></content:encoded>
                                    
        <enclosure length="95060804" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/utfzerqukrzzbrkk/R_Pearl_mixdown_2b5kk6.mp3"/>
        <itunes:summary>In this episode of the show, I sit down with Dr. Robert Pearl to talk about his new book, ChatGPT, MD: How AI-Empowered Patients &amp; Doctors Can Take Back Control of American Medicine, a book he co-authored with...ChatGPT! We talk about the deep fractures and problems in American health care that Generative AI may be positioned to solve, the changing landscape of health care, and  the possibility that Amazon, Google, or OpenAI may become the nation’s latest healthcare providers.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3959</itunes:duration>
        <itunes:season>10</itunes:season>
        <itunes:episode>129</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of the show, I sit down with Dr. Robert Pearl to talk about his new book, ChatGPT, MD: How AI-Empowered Patients &amp;amp; Doctors Can Take Back Control of American Medicine, a book he co-authored with...ChatGPT! We talk about the deep fractures and problems in American health care that Generative AI may be positioned to solve, the changing landscape of health care, and  the possibility that Amazon, Google, or OpenAI may become the nation's latest healthcare providers.   For 18 years, Dr. Robert Pearl, MD served as CEO of The Permanente Medical Group (Kaiser Permanente). He is also former president of The Mid-Atlantic Permanente Medical Group. In these roles he led 10,000 physicians, 38,000 staff and was responsible for the nationally recognized medical care of 5 million Kaiser Permanente members on the west and east coasts.  He is a clinical professor of plastic surgery at Stanford University School of Medicine and on the faculty at the Stanford Graduate School of Business, where he teaches courses on healthcare strategy, technology, and leadership. Pearl is board-certified in plastic and reconstructive surgery, receiving his medical degree from Yale, followed by a residency in plastic and reconstructive surgery at Stanford University. He’s the author of three books: Mistreated: Why We Think We’re Getting Good Healthcare—And Why We’re Usually Wrong, a Washington Post bestseller (2017); Uncaring: How the Culture of Medicine Kills Doctors &amp;amp; Patients, a Kirkus star recipient (2021); and his newest book ChatGPT, MD: How AI-Empowered Patients &amp;amp; Doctors Can Take Back Control of American Medicine (April 2024). All profits from sales of his books go to Doctors Without Borders. Dr. Pearl is a LinkedIn “Top Voice” in healthcare and host of the popular podcasts Fixing Healthcare and Medicine: The Truth. 
He publishes two monthly healthcare newsletters reaching 50,000+ combined subscribers. A frequent keynote speaker, Pearl has presented at The World Healthcare Congress, the Commonwealth Club, TEDx, HLTH, NCQA Quality Talks, the National Primary Care Transformation Summit, American Society of Plastic Surgeons, and international conferences in Brazil, Australia, India, and beyond. Pearl’s insights on generative AI in healthcare have been featured in Associated Press, USA Today, MSN, FOX Business, Forbes, Fast Company, WIRED, Global News, Modern Healthcare, Medscape, Medpage Today, AI in Healthcare, Doximity, Becker’s Hospital Review, the Advisory Board, the Journal of AHIMA, and more.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Taking the Temperature of AI: Measuring AI's Environmental Impact</title>
        <itunes:title>Taking the Temperature of AI: Measuring AI's Environmental Impact</itunes:title>
        <link>https://dmdonig.podbean.com/e/kneese/</link>
                    <comments>https://dmdonig.podbean.com/e/kneese/#comments</comments>        <pubDate>Fri, 16 Feb 2024 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/9c6a3dd8-4fa2-34aa-b9a4-b0bd48f6d738</guid>
                                    <description><![CDATA[<p>In this episode of the show, I talk to Dr. Tamara Kneese about <a href='https://datasociety.net/'>Data and Society's</a> initiative to develop standards and ways to <a href='https://www.techpolicy.press/measuring-ais-environmental-impacts-requires-empirical-research-and-standards/'>measure the environmental impact of AI</a>. I talk to Dr. Kneese about her work at the <a href='https://datasociety.net/algorithmic-impact-methods-lab/'>Algorithmic Impact Methods Lab</a> (AIMLab), we talk about the links and frictions between tech and climate change, and we consider how AI may be changing how we experience not only life, but also our experience of death.</p>
<p>Dr. Tamara Kneese is Project Director of Data &amp; Society’s Algorithmic Impact Methods Lab, where she is also a Senior Researcher. For the 2023-2024 academic year, she's a Visiting Scholar at UC Berkeley's Center for Science, Technology, Medicine &amp; Society. Before joining D&amp;S, she was Lead Researcher at Green Software Foundation, Director of Developer Engagement on the Green Software team at Intel, and Assistant Professor of Media Studies and Director of Gender and Sexualities Studies at the University of San Francisco.</p>
<p>Dr. Kneese holds a PhD in Media, Culture and Communication from NYU and is the author of <a href='https://yalebooks.yale.edu/book/9780300248272/death-glitch/'>Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond</a>. In her spare time, she is a volunteer with Tech Workers Coalition.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of the show, I talk to Dr. Tamara Kneese about <a href='https://datasociety.net/'>Data and Society's</a> initiative to develop standards and ways to <a href='https://www.techpolicy.press/measuring-ais-environmental-impacts-requires-empirical-research-and-standards/'>measure the environmental impact of AI</a>. I talk to Dr. Kneese about her work at the <a href='https://datasociety.net/algorithmic-impact-methods-lab/'>Algorithmic Impact Methods Lab</a> (AIMLab), we talk about the links and frictions between tech and climate change, and we consider how AI may be changing how we experience not only life, but also our experience of death.</p>
<p>Dr. Tamara Kneese is Project Director of Data &amp; Society’s Algorithmic Impact Methods Lab, where she is also a Senior Researcher. For the 2023-2024 academic year, she's a Visiting Scholar at UC Berkeley's Center for Science, Technology, Medicine &amp; Society. Before joining D&amp;S, she was Lead Researcher at Green Software Foundation, Director of Developer Engagement on the Green Software team at Intel, and Assistant Professor of Media Studies and Director of Gender and Sexualities Studies at the University of San Francisco.</p>
<p>Dr. Kneese holds a PhD in Media, Culture and Communication from NYU and is the author of <a href='https://yalebooks.yale.edu/book/9780300248272/death-glitch/'><em>Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond</em></a>. In her spare time, she is a volunteer with Tech Workers Coalition.</p>
]]></content:encoded>
                                    
        <enclosure length="94736340" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/mgm7g5/Kneese_mixdown.mp3"/>
        <itunes:summary>In this episode of the show, I talk to Dr. Tamara Kneese about Data and Society’s initiative to develop standards and ways to measure the environmental impact of AI. I talk to Dr. Kneese about her work at the Algorithmic Impact Methods Lab (AIMLab), we talk about the links and frictions between tech and climate change, and we consider how AI may be changing how we experience not only life, but also our experience of death.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3945</itunes:duration>
        <itunes:season>12</itunes:season>
        <itunes:episode>128</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of the show, I talk to Dr. Tamara Kneese about Data and Society's initiative to develop standards and ways to measure the environmental impact of AI. I talk to Dr. Kneese about her work at the Algorithmic Impact Methods Lab (AIMLab), we talk about the links and frictions between tech and climate change, and we consider how AI may be changing how we experience not only life, but also our experience of death. Dr. Tamara Kneese is Project Director of Data &amp;amp; Society’s Algorithmic Impact Methods Lab, where she is also a Senior Researcher. For the 2023-2024 academic year, she's a Visiting Scholar at UC Berkeley's Center for Science, Technology, Medicine &amp;amp; Society. Before joining D&amp;amp;S, she was Lead Researcher at Green Software Foundation, Director of Developer Engagement on the Green Software team at Intel, and Assistant Professor of Media Studies and Director of Gender and Sexualities Studies at the University of San Francisco. Dr. Kneese holds a PhD in Media, Culture and Communication from NYU and is the author of Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond. In her spare time, she is a volunteer with Tech Workers Coalition.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Brain Storm: The new technologies that are changing how we think about brain function</title>
        <itunes:title>Brain Storm: The new technologies that are changing how we think about brain function</itunes:title>
        <link>https://dmdonig.podbean.com/e/brain-storm-the-new-technologies-that-are-changing-how-we-think-about-brain-function/</link>
                    <comments>https://dmdonig.podbean.com/e/brain-storm-the-new-technologies-that-are-changing-how-we-think-about-brain-function/#comments</comments>        <pubDate>Fri, 26 Jan 2024 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/76bcfa8e-c10a-3d26-a92d-72b4c3258e6c</guid>
                                    <description><![CDATA[<p>In today's episode, I sit down with Dr. Peter Bonutti to talk about the ways in which technologies are revolutionizing our understanding of the brain, and how they may be used to treat crippling brain disorders such as stroke and seizures.</p>
<p>Dr. Peter Bonutti, M.D. is a surgeon, inventor, author, professor, consultant, and entrepreneur. He is the founder of <a href='https://www.bonuttitechnologies.com/'>Bonutti Research</a>, a medical device incubator that has developed products and technology used around the world. He maintains his clinical and surgical practice, focusing on the integration of robotics into surgical procedures. He is the founder and president of <a href='https://www.realeve.net/'>Realeve</a>, a company whose technology has already been <a href='https://www.businesswire.com/news/home/20230920079061/en/Realeve-Unveils-Breakthrough-Solution-to-Treat-Central-Nervous-System-Disorders-Including-Stroke-and-Cluster-Headache-Bypasses-Brain%E2%80%99s-Natural-Barrier-to-Deliver-Therapeutics'>clinically proven in more than 700 patients</a> for the treatment of a brain-related disorder. Realeve’s ultimate goal is to solve one of the critical remaining barriers in brain health: the ability to bypass the brain's natural barrier preventing the delivery of effective drugs for stroke, cancer treatment, and other degenerative disorders. Dr. Bonutti is a pioneer in Minimally Invasive Surgery, has over 500 patents and applications, more than 700 licenses and multiple FDA-approved products to date. Major corporations leveraging his technology include Hitachi, Kyphon, Covidien, US Surgical, Biomet, Arthrocare, Synthes, Zimmer/Biomet and Stryker. He is a prolific speaker, lecturing internationally, and has trained over 100 surgeons on his surgical techniques. In his career, Dr. Bonutti has received more than a dozen industry honors and awards for his achievements. Dr. Bonutti earned his medical degree at the University of Cincinnati College of Medicine and completed his Orthopaedic Surgery Residency at the Cleveland Clinic Foundation with international fellowships in Canada, Australia, and Austria.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In today's episode, I sit down with Dr. Peter Bonutti to talk about the ways in which technologies are revolutionizing our understanding of the brain, and how they may be used to treat crippling brain disorders such as stroke and seizures.</p>
<p>Dr. Peter Bonutti, M.D. is a surgeon, inventor, author, professor, consultant, and entrepreneur. He is the founder of <a href='https://www.bonuttitechnologies.com/'>Bonutti Research</a>, a medical device incubator that has developed products and technology used around the world. He maintains his clinical and surgical practice, focusing on the integration of robotics into surgical procedures. He is the founder and president of <a href='https://www.realeve.net/'>Realeve</a>, a company whose technology has already been <a href='https://www.businesswire.com/news/home/20230920079061/en/Realeve-Unveils-Breakthrough-Solution-to-Treat-Central-Nervous-System-Disorders-Including-Stroke-and-Cluster-Headache-Bypasses-Brain%E2%80%99s-Natural-Barrier-to-Deliver-Therapeutics'>clinically proven in more than 700 patients</a> for the treatment of a brain-related disorder. Realeve’s ultimate goal is to solve one of the critical remaining barriers in brain health: the ability to bypass the brain's natural barrier preventing the delivery of effective drugs for stroke, cancer treatment, and other degenerative disorders. Dr. Bonutti is a pioneer in Minimally Invasive Surgery, has over 500 patents and applications, more than 700 licenses and multiple FDA-approved products to date. Major corporations leveraging his technology include Hitachi, Kyphon, Covidien, US Surgical, Biomet, Arthrocare, Synthes, Zimmer/Biomet and Stryker. He is a prolific speaker, lecturing internationally, and has trained over 100 surgeons on his surgical techniques. In his career, Dr. Bonutti has received more than a dozen industry honors and awards for his achievements. Dr. Bonutti earned his medical degree at the University of Cincinnati College of Medicine and completed his Orthopaedic Surgery Residency at the Cleveland Clinic Foundation with international fellowships in Canada, Australia, and Austria.</p>
]]></content:encoded>
                                    
        <enclosure length="90050364" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/9cp44r/Bonutti_mixdown_2bd55l.mp3"/>
        <itunes:summary>In today’s episode, I sit down with Dr. Peter Bonutti to talk about the ways in which technologies are revolutionizing our understanding of the brain, and how they may be used to treat crippling brain disorders such as stroke and seizures.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3750</itunes:duration>
        <itunes:season>12</itunes:season>
        <itunes:episode>127</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In today's episode, I sit down with Dr. Peter Bonutti to talk about the ways in which technologies are revolutionizing our understanding of the brain, and how they may be used to treat crippling brain disorders such as stroke and seizures. Dr. Peter Bonutti, M.D. is a surgeon, inventor, author, professor, consultant, and entrepreneur. He is the founder of Bonutti Research, a medical device incubator that has developed products and technology used around the world. He maintains his clinical and surgical practice, focusing on the integration of robotics into surgical procedures. He is the founder and president of Realeve, a company whose technology has already been clinically proven in more than 700 patients for the treatment of a brain-related disorder. Realeve’s ultimate goal is to solve one of the critical remaining barriers in brain health: the ability to bypass the brain's natural barrier preventing the delivery of effective drugs for stroke, cancer treatment, and other degenerative disorders. Dr. Bonutti is a pioneer in Minimally Invasive Surgery, has over 500 patents and applications, more than 700 licenses and multiple FDA-approved products to date. Major corporations leveraging his technology include Hitachi, Kyphon, Covidien, US Surgical, Biomet, Arthrocare, Synthes, Zimmer/Biomet and Stryker. He is a prolific speaker, lecturing internationally, and has trained over 100 surgeons on his surgical techniques. In his career, Dr. Bonutti has received more than a dozen industry honors and awards for his achievements. Dr. Bonutti earned his medical degree at the University of Cincinnati College of Medicine and completed his Orthopaedic Surgery Residency at the Cleveland Clinic Foundation with international fellowships in Canada, Australia, and Austria.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Singularity of Hope: The case for AI optimism</title>
        <itunes:title>The Singularity of Hope: The case for AI optimism</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-singularity-of-hope-the-case-for-ai-optimism/</link>
                    <comments>https://dmdonig.podbean.com/e/the-singularity-of-hope-the-case-for-ai-optimism/#comments</comments>        <pubDate>Fri, 19 Jan 2024 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/7bc7ddca-a570-305d-8973-bfbe9c9f7e99</guid>
                                    <description><![CDATA[<p>Today I am interviewing <a href='http://www.sammane.com'>Dr. Sam Sammane</a> about his forthcoming book, “The Singularity of Hope,” which aims to guide readers through the challenges and opportunities of the AI era, advocating for a harmonious fusion of human intelligence and machine capabilities.</p>
<p>Dr. Sammane envisions a world where the rapid advancements in AI and technology are harnessed for the greater good, leading to a new age of global prosperity. He is a seasoned entrepreneur with multiple successful exits, and an academic with a rich blend of expertise in applied physics, digital circuit design, nanotechnology, formal methods, life science, and business. Holding Bachelor's and Master's degrees in Applied Physics, a Master's degree in Digital Circuit Design, and a Ph.D. in Nanotechnology, Dr. Sammane has authored several articles on higher-order logic, symbolic simulation, and automatic theorem proving.</p>
<p>Beyond the academic realm, Dr. Sammane has co-founded and led multiple successful companies in the life sciences, IT and real estate industries. He resides in southern California with his wife and three daughters.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Today I am interviewing <a href='http://www.sammane.com'>Dr. Sam Sammane</a> about his forthcoming book, “The Singularity of Hope,” which aims to guide readers through the challenges and opportunities of the AI era, advocating for a harmonious fusion of human intelligence and machine capabilities.</p>
<p>Dr. Sammane envisions a world where the rapid advancements in AI and technology are harnessed for the greater good, leading to a new age of global prosperity. He is a seasoned entrepreneur with multiple successful exits, and an academic with a rich blend of expertise in applied physics, digital circuit design, nanotechnology, formal methods, life science, and business. Holding Bachelor's and Master's degrees in Applied Physics, a Master's degree in Digital Circuit Design, and a Ph.D. in Nanotechnology, Dr. Sammane has authored several articles on higher-order logic, symbolic simulation, and automatic theorem proving.</p>
<p>Beyond the academic realm, Dr. Sammane has co-founded and led multiple successful companies in the life sciences, IT and real estate industries. He resides in southern California with his wife and three daughters.</p>
]]></content:encoded>
                                    
        <enclosure length="101431266" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/nesvmu/sammane_mixdown_27f36t.mp3"/>
        <itunes:summary>Today I am interviewing Dr. Sam Sammane about his forthcoming book, “The Singularity of Hope,” which aims to guide readers through the challenges and opportunities of the AI era, advocating for a harmonious fusion of human intelligence and machine capabilities.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4225</itunes:duration>
        <itunes:season>8</itunes:season>
        <itunes:episode>126</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Today I am interviewing Dr. Sam Sammane about his forthcoming book, “The Singularity of Hope,” which aims to guide readers through the challenges and opportunities of the AI era, advocating for a harmonious fusion of human intelligence and machine capabilities. Dr. Sammane envisions a world where the rapid advancements in AI and technology are harnessed for the greater good, leading to a new age of global prosperity. He is a seasoned entrepreneur with multiple successful exits, and an academic with a rich blend of expertise in applied physics, digital circuit design, nanotechnology, formal methods, life science, and business. Holding Bachelor's and Master's degrees in Applied Physics, a Master's degree in Digital Circuit Design, and a Ph.D. in Nanotechnology, Dr. Sammane has authored several articles on higher-order logic, symbolic simulation, and automatic theorem proving. Beyond the academic realm, Dr. Sammane has co-founded and led multiple successful companies in the life sciences, IT and real estate industries. He resides in southern California with his wife and three daughters.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Count: The politics of data science</title>
        <itunes:title>The Count: The politics of data science</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-count-the-politics-of-data-science/</link>
                    <comments>https://dmdonig.podbean.com/e/the-count-the-politics-of-data-science/#comments</comments>        <pubDate>Fri, 12 Jan 2024 11:11:31 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/f4a5746b-e4b2-3675-98eb-8d317a90362b</guid>
                                    <description><![CDATA[<p>Welcome back to a brand-new season of Technically Human! We’re thrilled to be back with new episodes of the show. We are kicking off the new season, and the new year, with an episode featuring one of my favorite thinkers, <a href='https://scholarworks.brandeis.edu/esploro/profile/deborah_stone'>Dr. Deborah Stone</a>, to talk about what it means to count—that is to say, what it means to measure, and what it means to matter.</p>
<p>Dr. Deborah Stone is currently a Lecturer in Public Policy in the Department of Urban Studies and Planning at MIT. She is also an Honorary Professor of Political Science at Aarhus University in Denmark, where she occasionally teaches as a visiting professor. She has taught at Duke University in the Institute of Policy Sciences (1974-77); MIT Department of Political Science (1977-86); Brandeis University Heller School, where she held the David R. Pokross Chair of Law and Social Policy (1986-99); and Dartmouth College Government Department, where she was Research Professor of Government (1999-2014). She has taught as a visitor at Yale, Tulane, University of Bremen, Germany, and National Chung Cheng University in Taiwan. She is a graduate of the University of Michigan and holds a Ph.D. in Political Science from MIT.</p>
<p>Stone is the author of <a href='https://wwnorton.com/books/Policy-Paradox'>Policy Paradox: The Art of Political Decision-Making</a>, which has been published in multiple editions (W.W. Norton), translated into five languages, and won the Aaron Wildavsky Award from the American Political Science Association for its enduring contribution to policy studies. She has also authored three other books: <a href='https://www.amazon.com/Samaritans-Dilemma-Should-Government-Neighbor/dp/1568583540'>The Samaritan’s Dilemma</a> (Nation Books, 2008), <a href='https://tupress.temple.edu/books/the-disabled-state'>The Disabled State</a> (Temple University Press, 1984), and <a href='https://www.amazon.com/Limits-Professional-Power-National-Republic/dp/0226775534'>The Limits of Professional Power</a> (University of Chicago Press, 1980). She serves on the editorial boards of the Journal of Health Politics, Policy and Law (of which she was a founder); Women, Politics and Public Policy, and Critical Policy Studies. In addition to numerous articles in academic journals and book chapters, she writes for general audiences. She was the founding senior editor of The American Prospect and her articles have appeared there as well as in Nation, New Republic, Boston Review, Civilization, Natural History, and Natural New England.</p>
<p>Stone has held fellowships from the Guggenheim Foundation, Harvard Law School, German Marshall Fund, Open Society Institute and Robert Wood Johnson Foundation. She was a Phi Beta Kappa Society Visiting Scholar in 2005-2006, and a Senior Fellow at Demos from 2008-2012. She has served as a consultant to the Social Security Administration, the Institute of Medicine, the Office of Technology Assessment, and the Human Genome Project. Stone is also the recipient of numerous professional awards, including the 2013 Charles M. McCoy Career Achievement Award for a progressive political scientist who has had a long successful career as a writer, teacher, and activist (American Political Science Association).</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Welcome back to a brand-new season of Technically Human! We’re thrilled to be back with new episodes of the show. We are kicking off the new season, and the new year, with an episode featuring one of my favorite thinkers, <a href='https://scholarworks.brandeis.edu/esploro/profile/deborah_stone'>Dr. Deborah Stone</a>, to talk about what it means to count—that is to say, what it means to measure, and what it means to matter.</p>
<p>Dr. Deborah Stone is currently a Lecturer in Public Policy in the Department of Urban Studies and Planning at MIT. She is also an Honorary Professor of Political Science at Aarhus University in Denmark, where she occasionally teaches as a visiting professor. She has taught at Duke University in the Institute of Policy Sciences (1974-77); MIT Department of Political Science (1977-86); Brandeis University Heller School, where she held the David R. Pokross Chair of Law and Social Policy (1986-99); and Dartmouth College Government Department, where she was Research Professor of Government (1999-2014). She has taught as a visitor at Yale, Tulane, University of Bremen, Germany, and National Chung Cheng University in Taiwan. She is a graduate of the University of Michigan and holds a Ph.D. in Political Science from MIT.</p>
<p>Stone is the author of <a href='https://wwnorton.com/books/Policy-Paradox'><em>Policy Paradox: The Art of Political Decision-Making</em></a>, which has been published in multiple editions (W.W. Norton), translated into five languages, and won the Aaron Wildavsky Award from the American Political Science Association for its enduring contribution to policy studies. She has also authored three other books: <a href='https://www.amazon.com/Samaritans-Dilemma-Should-Government-Neighbor/dp/1568583540'><em>The Samaritan’s Dilemma</em></a> (Nation Books, 2008), <a href='https://tupress.temple.edu/books/the-disabled-state'><em>The Disabled State</em></a> (Temple University Press, 1984), and <a href='https://www.amazon.com/Limits-Professional-Power-National-Republic/dp/0226775534'><em>The Limits of Professional Power</em></a> (University of Chicago Press, 1980). She serves on the editorial boards of the <em>Journal of Health Politics, Policy and Law</em> (of which she was a founder); <em>Women, Politics and Public Policy</em>, and <em>Critical Policy Studies</em>. In addition to numerous articles in academic journals and book chapters, she writes for general audiences. She was the founding senior editor of <em>The American Prospect</em> and her articles have appeared there as well as in <em>Nation</em>, <em>New Republic</em>, <em>Boston Review</em>, <em>Civilization</em>, <em>Natural History</em>, and <em>Natural New England</em>.</p>
<p>Stone has held fellowships from the Guggenheim Foundation, Harvard Law School, German Marshall Fund, Open Society Institute and Robert Wood Johnson Foundation. She was a Phi Beta Kappa Society Visiting Scholar in 2005-2006, and a Senior Fellow at Demos from 2008-2012. She has served as a consultant to the Social Security Administration, the Institute of Medicine, the Office of Technology Assessment, and the Human Genome Project. Stone is also the recipient of numerous professional awards, including the 2013 Charles M. McCoy Career Achievement Award for a progressive political scientist who has had a long successful career as a writer, teacher, and activist (American Political Science Association).</p>
]]></content:encoded>
                                    
        <enclosure length="106940199" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/y6dune/D_Stone_4_mixdown7kfdk.mp3"/>
        <itunes:summary>Welcome back to a brand-new season of Technically Human! We’re thrilled to be back with new episodes of the show. We are kicking off the new season, and the new year, with an episode featuring one of my favorite thinkers, Dr. Deborah Stone, to talk about what it means to count—that is to say, what it means to measure, and what it means to matter.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4453</itunes:duration>
        <itunes:season>12</itunes:season>
        <itunes:episode>125</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Welcome back to a brand-new season of Technically Human! We’re thrilled to be back with new episodes of the show. We are kicking off the new season, and the new year, with an episode featuring one of my favorite thinkers, Dr. Deborah Stone, to talk about what it means to count—that is to say, what it means to measure, and what it means to matter. Dr. Deborah Stone is currently a Lecturer in Public Policy in the Department of Urban Studies and Planning at MIT. She is also an Honorary Professor of Political Science at Aarhus University in Denmark, where she occasionally teaches as a visiting professor. She has taught at Duke University in the Institute of Policy Sciences (1974-77); MIT Department of Political Science (1977-86); Brandeis University Heller School, where she held the David R. Pokross Chair of Law and Social Policy (1986-99); and Dartmouth College Government Department, where she was Research Professor of Government (1999-2014). She has taught as a visitor at Yale, Tulane, University of Bremen, Germany, and National Chung Cheng University in Taiwan. She is a graduate of the University of Michigan and holds a Ph.D. in Political Science from MIT. Stone is the author of Policy Paradox: The Art of Political Decision-Making, which has been published in multiple editions (W.W. Norton), translated into five languages, and won the Aaron Wildavsky Award from the American Political Science Association for its enduring contribution to policy studies. She has also authored three other books: The Samaritan’s Dilemma (Nation Books, 2008), The Disabled State (Temple University Press, 1984), and The Limits of Professional Power (University of Chicago Press, 1980). She serves on the editorial boards of the Journal of Health Politics, Policy and Law (of which she was a founder); Women, Politics and Public Policy, and Critical Policy Studies. 
In addition to numerous articles in academic journals and book chapters, she writes for general audiences. She was the founding senior editor of The American Prospect and her articles have appeared there as well as in Nation, New Republic, Boston Review, Civilization, Natural History, and Natural New England. Stone has held fellowships from the Guggenheim Foundation, Harvard Law School, German Marshall Fund, Open Society Institute and Robert Wood Johnson Foundation. She was a Phi Beta Kappa Society Visiting Scholar in 2005-2006, and a Senior Fellow at Demos from 2008-2012. She has served as a consultant to the Social Security Administration, the Institute of Medicine, the Office of Technology Assessment, and the Human Genome Project. Stone is also the recipient of numerous professional awards, including the 2013 Charles M. McCoy Career Achievement Award for a progressive political scientist who has had a long successful career as a writer, teacher, and activist (American Political Science Association).</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Getting Public About Privacy: Understanding data privacy in the digital age</title>
        <itunes:title>Getting Public About Privacy: Understanding data privacy in the digital age</itunes:title>
        <link>https://dmdonig.podbean.com/e/private-parts-understanding-data-privacy-in-the-digital-age/</link>
                    <comments>https://dmdonig.podbean.com/e/private-parts-understanding-data-privacy-in-the-digital-age/#comments</comments>        <pubDate>Wed, 22 Nov 2023 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/c8312dea-22ce-39de-854e-b95ec19c9264</guid>
                                    <description><![CDATA[<p>In this episode of the show, I talk with <a href='https://www.linkedin.com/in/jared-maslin-6315934b/'>Jared Maslin</a> about what it means to have privacy on the internet. We talk about the difference between privacy and secrecy, the benefits and limitations of GDPR and the possibility of privacy regulation coming to the US, and we explore the biggest challenges facing data privacy today.</p>
<p>His recent work, including his latest publication, "<a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4545137'>Learning From the Past: Applying Concepts of the Sarbanes-Oxley Act to Restore Consumer Trust in Global Data Privacy</a>," involves the design and testing of a more holistic data privacy risk model, using some of the key tenets of independent auditing structures and oversight functions seen after the investor crises of Enron, Tyco, and other financial reporting fraud. His goal is to leverage the same concepts that were once applied to restore investor trust in businesses, and to extend those concepts to data privacy in order to restore consumer trust in the businesses processing their personal data.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of the show, I talk with <a href='https://www.linkedin.com/in/jared-maslin-6315934b/'>Jared Maslin</a> about what it means to have privacy on the internet. We talk about the difference between privacy and secrecy, the benefits and limitations of GDPR and the possibility of privacy regulation coming to the US, and we explore the biggest challenges facing data privacy today.</p>
<p>His recent work, including his latest publication, "<a href='https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4545137'>Learning From the Past: Applying Concepts of the Sarbanes-Oxley Act to Restore Consumer Trust in Global Data Privacy</a>," involves the design and testing of a more holistic data privacy risk model, using some of the key tenets of independent auditing structures and oversight functions seen after the investor crises of Enron, Tyco, and other financial reporting fraud. His goal is to leverage the same concepts that were once applied to restore investor trust in businesses, and to extend those concepts to data privacy in order to restore consumer trust in the businesses processing their personal data.</p>
]]></content:encoded>
                                    
        <enclosure length="111791786" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/ejn4de/Jared_Maslin_mixdown8eqfc.mp3"/>
        <itunes:summary>In this episode of the show, I talk with Jared Maslin about what it means to have privacy on the internet. We talk about the difference between privacy and secrecy, the benefits and limitations of GDPR and the possibility of privacy regulation coming to the US, and we explore the biggest challenges facing data privacy today.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4657</itunes:duration>
        <itunes:season>12</itunes:season>
        <itunes:episode>124</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of the show, I talk with Jared Maslin about what it means to have privacy on the internet. We talk about the difference between privacy and secrecy, the benefits and limitations of GDPR and the possibility of privacy regulation coming to the US, and we explore the biggest challenges facing data privacy today. His recent work, including his latest publication, "Learning From the Past: Applying Concepts of the Sarbanes-Oxley Act to Restore Consumer Trust in Global Data Privacy," involves the design and testing of a more holistic data privacy risk model, using some of the key tenets of independent auditing structures and oversight functions seen after the investor crises of Enron, Tyco, and other financial reporting fraud. His goal is to leverage the same concepts that were once applied to restore investor trust in businesses, and to extend those concepts to data privacy in order to restore consumer trust in the businesses processing their personal data.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Case for Cryptocurrency: The future of digital assets post Sam Bankman-Fried</title>
        <itunes:title>The Case for Cryptocurrency: The future of digital assets post Sam Bankman-Fried</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-case-for-cryptocurrency-the-future-of-digital-assets-post-sam-bankman-fried/</link>
                    <comments>https://dmdonig.podbean.com/e/the-case-for-cryptocurrency-the-future-of-digital-assets-post-sam-bankman-fried/#comments</comments>        <pubDate>Fri, 10 Nov 2023 12:50:10 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/1c6e506b-57a7-3c87-bce5-38aaea4bf84a</guid>
                                    <description><![CDATA[<p>In this week’s episode of the show I sit down with Dr. Tonya Evans to talk about the state of crypto in the wake of last week’s landmark criminal fraud conviction of the former CEO of FTX and the former prophet of crypto, Sam Bankman-Fried. Dr. Evans and I discuss what new crypto economy might emerge in the wake of his conviction. We discuss the principles and the possibilities of new digital assets, and we talk about the challenges of regulating new financial technologies.</p>
<p><a href='https://dickinsonlaw.psu.edu/tonya-evans'>Dr. Tonya M. Evans</a> is a distinguished professor at Penn State Dickinson Law and a leading expert in intellectual property and new technologies. With a prestigious 2023 EDGE in Tech Athena Award, she is highly sought after as a keynote speaker and consultant. Her expertise spans blockchain, entrepreneurship, entertainment law, and more.</p>
<p>As a member of international boards and committees, including the World Economic Forum/Wharton DAO Project Series, Dr. Evans remains at the forefront of cutting-edge research. She recently <a href='https://docs.house.gov/meetings/BA/BA21/20230309/115389/HHRG-118-BA21-Wstate-EvansT-20230309.pdf'>testified before the House Financial Services Committee</a> and the Copyright Office and USPTO to advise on the intellectual property law issues related to NFTs and blockchain technology.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this week’s episode of the show I sit down with Dr. Tonya Evans to talk about the state of crypto in the wake of last week’s landmark criminal fraud conviction of the former CEO of FTX and the former prophet of crypto, Sam Bankman-Fried. Dr. Evans and I discuss what new crypto economy might emerge in the wake of his conviction. We discuss the principles and the possibilities of new digital assets, and we talk about the challenges of regulating new financial technologies.</p>
<p><a href='https://dickinsonlaw.psu.edu/tonya-evans'>Dr. Tonya M. Evans</a> is a distinguished professor at Penn State Dickinson Law and a leading expert in intellectual property and new technologies. The recipient of the prestigious 2023 EDGE in Tech Athena Award, she is highly sought after as a keynote speaker and consultant. Her expertise spans blockchain, entrepreneurship, entertainment law, and more.</p>
<p>As a member of international boards and committees, including the World Economic Forum/Wharton DAO Project Series, Dr. Evans remains at the forefront of cutting-edge research. She recently <a href='https://docs.house.gov/meetings/BA/BA21/20230309/115389/HHRG-118-BA21-Wstate-EvansT-20230309.pdf'>testified before the House Financial Services Committee</a> and the Copyright Office and USPTO to advise on the intellectual property law issues related to NFTs and blockchain technology.</p>
]]></content:encoded>
                                    
        <enclosure length="75131940" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/8f9nir/Tonya_Evans_mixdown9etj9.mp3"/>
        <itunes:summary>In this week’s episode of the show I sit down with Dr. Tonya Evans to talk about the state of crypto in the wake of last week’s landmark criminal fraud conviction of the former CEO of FTX and the former prophet of crypto, Sam Bankman-Fried. Dr. Evans and I discuss what new crypto economy might emerge in the wake of his conviction. We discuss the principles and the possibilities of new digital assets, and we talk about the challenges of regulating new financial technologies.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3130</itunes:duration>
        <itunes:season>12</itunes:season>
        <itunes:episode>123</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this week’s episode of the show I sit down with Dr. Tonya Evans to talk about the state of crypto in the wake of last week’s landmark criminal fraud conviction of the former CEO of FTX and the former prophet of crypto, Sam Bankman-Fried. Dr. Evans and I discuss what new crypto economy might emerge in the wake of his conviction. We discuss the principles and the possibilities of new digital assets, and we talk about the challenges of regulating new financial technologies. Dr. Tonya M. Evans is a distinguished professor at Penn State Dickinson Law and a leading expert in intellectual property and new technologies. With a prestigious 2023 EDGE in Tech Athena Award, she is highly sought-after as a keynote speaker and consultant. Her expertise spans blockchain, entrepreneurship, entertainment law, and more. As a member of international boards and committees, including the World Economic Forum/Wharton DAO Project Series, Dr. Evans remains at the forefront of cutting-edge research. She recently testified before the House Financial Services Committee and the Copyright Office and USPTO to advise on the intellectual property law issues related to NFTs and blockchain technology.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The New Rules: challenging Big Tech’s reign over legal reform</title>
        <itunes:title>The New Rules: challenging Big Tech’s reign over legal reform</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-new-rules-challenging-big-tech-s-reign-over-legal-reform/</link>
                    <comments>https://dmdonig.podbean.com/e/the-new-rules-challenging-big-tech-s-reign-over-legal-reform/#comments</comments>        <pubDate>Fri, 03 Nov 2023 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/445a6ab8-e367-39e1-aa75-dc27592412c3</guid>
                                    <description><![CDATA[<p>In today’s episode, I talk about how to create new legal rules to guide tech toward reflecting human values with Brian Beckcom, one of the leading lawyers of his generation.</p>
<p><a href='https://www.vbattorneys.com/attorneys/brian-beckcom/'>Brian Beckcom</a> is a Texas Super Lawyer, a designation that recognizes him as one of the top legal experts and practitioners in his field. In addition to his work as a lawyer, he is also a computer scientist and a philosopher. He created and hosts the popular podcast "<a href='http://www.brianbeckcom.org/'>Lessons from Leaders with Brian Beckcom</a>."</p>
<p>Brian is an honors graduate of the University of Texas School of Law. He is the author of 6 books and hundreds of articles on a wide variety of topics.</p>
<p>He successfully prosecuted many high-profile cases, including the case that emerged in the aftermath of the <a href='https://www.vbattorneys.com/blog/maersk-alabama-and-somali-pirates-suit'>Somali pirate attack on the Maersk Alabama</a>, which made headlines around the world and was made into a Hollywood blockbuster starring Tom Hanks as Captain Phillips. Representing many members of the crew, Brian and his firm took on one of the largest shipping companies in the world, while his investigative efforts simultaneously ensured that the true story was told. He also represented <a href='https://www.vbattorneys.com/blog/pirates-hold-us-mariner-hostage?hsLang=en'>Captain Wren Thomas, who was kidnapped by Nigerian mercenaries while operating off the coast of West Africa</a>. Captain Thomas’ story has been featured in national and international media. The case received international attention from the media and maritime shipping companies because of the heroic acts of Captain Thomas during the attack and hostage situation, and also because of connections to Boko Haram and corruption in West Africa.</p>
<p>In the conversation we talk about the way that case law formed to treat <em>piracy</em>, that is to say, the practice of attacking and robbing ships at sea, and <em>piracy in our digital age</em>, that is to say, the unauthorized duplication of copyrighted content that is then sold at substantially lower prices in the 'grey' market. We talk about the possibilities for, and the obstructions to, creating legislation that would stop some of the worst consequences and tendencies of big tech. And Brian makes the case for what law, at its most ethical and generative potential, might do to guide tech toward protecting and elevating human values.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In today’s episode, I talk about how to create new legal rules to guide tech toward reflecting human values with Brian Beckcom, one of the leading lawyers of his generation.</p>
<p><a href='https://www.vbattorneys.com/attorneys/brian-beckcom/'>Brian Beckcom</a> is a Texas Super Lawyer, a designation that recognizes him as one of the top legal experts and practitioners in his field. In addition to his work as a lawyer, he is also a computer scientist and a philosopher. He created and hosts the popular podcast "<a href='http://www.brianbeckcom.org/'>Lessons from Leaders with Brian Beckcom</a>."</p>
<p>Brian is an honors graduate of the University of Texas School of Law. He is the author of 6 books and hundreds of articles on a wide variety of topics.</p>
<p>He successfully prosecuted many high-profile cases, including the case that emerged in the aftermath of the <a href='https://www.vbattorneys.com/blog/maersk-alabama-and-somali-pirates-suit'>Somali pirate attack on the Maersk Alabama</a>, which made headlines around the world and was made into a Hollywood blockbuster starring Tom Hanks as Captain Phillips. Representing many members of the crew, Brian and his firm took on one of the largest shipping companies in the world, while his investigative efforts simultaneously ensured that the true story was told. He also represented <a href='https://www.vbattorneys.com/blog/pirates-hold-us-mariner-hostage?hsLang=en'>Captain Wren Thomas, who was kidnapped by Nigerian mercenaries while operating off the coast of West Africa</a>. Captain Thomas’ story has been featured in national and international media. The case received international attention from the media and maritime shipping companies because of the heroic acts of Captain Thomas during the attack and hostage situation, and also because of connections to Boko Haram and corruption in West Africa.</p>
<p>In the conversation we talk about the way that case law formed to treat <em>piracy</em>, that is to say, the practice of attacking and robbing ships at sea, and <em>piracy in our digital age</em>, that is to say, the unauthorized duplication of copyrighted content that is then sold at substantially lower prices in the 'grey' market. We talk about the possibilities for, and the obstructions to, creating legislation that would stop some of the worst consequences and tendencies of big tech. And Brian makes the case for what law, at its most ethical and generative potential, might do to guide tech toward protecting and elevating human values.</p>
]]></content:encoded>
                                    
        <enclosure length="124572990" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/zsjv4t/Brian_Beckcom_Interview_mixdownan77p.mp3"/>
        <itunes:summary>In today’s episode, I talk about how to create new legal rules to guide tech toward reflecting human values with Brian Beckcom, one of the leading lawyers of his generation. We talk about the way that case law formed to treat piracy, that is to say, the practice of attacking and robbing ships at sea, and piracy in our digital age, that is to say, the unauthorized duplication of copyrighted content that is then sold at substantially lower prices in the ’grey’ market. We talk about the possibilities for, and the obstructions to, creating legislation that would stop some of the worst consequences and tendencies of big tech. And Brian makes the case for what law, at its most ethical and generative potential, might do to guide tech toward protecting and elevating human values.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>true</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>5190</itunes:duration>
        <itunes:season>11</itunes:season>
        <itunes:episode>122</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In today’s episode, I talk about how to create new legal rules to guide tech toward reflecting human values with Brian Beckcom, one of the leading lawyers of his generation. Brian Beckcom is a Texas Super Lawyer, a designation that recognizes him as one of the top legal experts and practitioners in his field. In addition to his work as a lawyer, he is also a computer scientist and a philosopher. He created and hosts the popular podcast "Lessons from Leaders with Brian Beckcom." Brian is an honors graduate of the University of Texas School of Law. He is the author of 6 books and hundreds of articles on a wide variety of topics. He successfully prosecuted many high-profile cases, including the case that emerged in the aftermath of the Somali pirate attack on the Maersk Alabama, which made headlines around the world and was made into a Hollywood blockbuster starring Tom Hanks as Captain Phillips. Representing many members of the crew, Brian and his firm took on one of the largest shipping companies in the world, while his investigative efforts simultaneously ensured that the true story was told. He also represented Captain Wren Thomas, who was kidnapped by Nigerian mercenaries while operating off the coast of West Africa. Captain Thomas’ story has been featured in national and international media. The case received international attention from the media and maritime shipping companies because of the heroic acts of Captain Thomas during the attack and hostage situation, and also because of connections to Boko Haram and corruption in West Africa. In the conversation we talk about the way that case law formed to treat piracy, that is to say, the practice of attacking and robbing ships at sea, and piracy in our digital age, that is to say, the unauthorized duplication of copyrighted content that is then sold at substantially lower prices in the 'grey' market. We talk about the possibilities for, and the obstructions to, creating legislation that would stop some of the worst consequences and tendencies of big tech. And Brian makes the case for what law, at its most ethical and generative potential, might do to guide tech toward protecting and elevating human values.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Soul Machines: Can AI have a body?</title>
        <itunes:title>Soul Machines: Can AI have a body?</itunes:title>
        <link>https://dmdonig.podbean.com/e/soul-machines-can-ai-have-a-body/</link>
                    <comments>https://dmdonig.podbean.com/e/soul-machines-can-ai-have-a-body/#comments</comments>        <pubDate>Fri, 27 Oct 2023 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/9072cb1b-83b2-3327-93fd-8ae8f0f5a861</guid>
                                    <description><![CDATA[<p>In this episode of the show, I sit down with <a href='https://www.auckland.ac.nz/en/engineering/community-engagement/alumni/alumni-stories/mark-sagar.html'>Dr. Mark Sagar</a> to talk about his vision of an embodied form of AI.</p>
<p>Dr. Sagar is the co-founder and Chief Science Officer at <a href='http://www.soulmachines.com/'>Soul Machines</a>, a company investigating how to use natural language processing with hyper-realistic visuals to create autonomously animated, emotionally dynamic Digital People. In addition to developing new technologies, the research seeks answers to big questions: should we be humanizing AI? How does feeding AI socio-emotional context help create rich, multimodal humanlike experiences, and at what point are we teetering on sentience? And what is really at stake at the intersection of human cooperation with intelligent machines?</p>
<p>Dr. Mark Sagar is currently Director for the Auckland Bioengineering Institute's Laboratory for Animate Technologies. He is a two-time Oscar winner, in the categories of scientific and engineering awards, for his work creating realistic digital characters for the screen. The technology has been used in Spider-Man 2, Superman Returns, The Curious Case of Benjamin Button and Avatar. The technology he created emerged out of his research, completed in the late 1990s, in a landmark study that explored how to develop an anatomically correct virtual eye and realistic models of biomechanically-simulated anatomy. It was one of the first examples of how believable human features could be created on a screen by combining computer graphics with mathematics and human physiology.</p>
<p>He is also the founder of <a href='https://www.soulmachines.com/resources/research/baby-x/'>BabyX</a>, a pioneering research initiative that seeks to combine models of physiology, cognition, and emotion with advanced lifelike CGI in an attempt to create a new form of biologically inspired AI.</p>
<p>Dr. Sagar received his Ph.D. in Engineering from the University of Auckland, and was a post-doctoral fellow at M.I.T. In addition to his recognition by the Academy Awards, Dr. Sagar was elected as a fellow of the Royal Society of New Zealand in 2019.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of the show, I sit down with <a href='https://www.auckland.ac.nz/en/engineering/community-engagement/alumni/alumni-stories/mark-sagar.html'>Dr. Mark Sagar</a> to talk about his vision of an embodied form of AI.</p>
<p>Dr. Sagar is the co-founder and Chief Science Officer at <a href='http://www.soulmachines.com/'>Soul Machines</a>, a company investigating how to use natural language processing with hyper-realistic visuals to create autonomously animated, emotionally dynamic Digital People. In addition to developing new technologies, the research seeks answers to big questions: should we be humanizing AI? How does feeding AI socio-emotional context help create rich, multimodal humanlike experiences, and at what point are we teetering on sentience? And what is really at stake at the intersection of human cooperation with intelligent machines?</p>
<p>Dr. Mark Sagar is currently Director for the Auckland Bioengineering Institute's Laboratory for Animate Technologies. He is a two-time Oscar winner, in the categories of scientific and engineering awards, for his work creating realistic digital characters for the screen. The technology has been used in <em>Spider-Man 2</em>, <em>Superman Returns</em>, <em>The Curious Case of Benjamin Button</em> and <em>Avatar</em>. The technology he created emerged out of his research, completed in the late 1990s, in a landmark study that explored how to develop an anatomically correct virtual eye and realistic models of biomechanically-simulated anatomy. It was one of the first examples of how believable human features could be created on a screen by combining computer graphics with mathematics and human physiology.</p>
<p>He is also the founder of <a href='https://www.soulmachines.com/resources/research/baby-x/'>BabyX</a>, a pioneering research initiative that seeks to combine models of physiology, cognition, and emotion with advanced lifelike CGI in an attempt to create a new form of biologically inspired AI.</p>
<p>Dr. Sagar received his Ph.D. in Engineering from the University of Auckland, and was a post-doctoral fellow at M.I.T. In addition to his recognition by the Academy Awards, Dr. Sagar was elected as a fellow of the Royal Society of New Zealand in 2019.</p>
]]></content:encoded>
                                    
        <enclosure length="100812646" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/jvrapg/Mark_Sagar_mixdownag0ct.mp3"/>
        <itunes:summary>In this episode of the show, I sit down with Dr. Mark Sagar to talk about his vision of an embodied form of AI.

Dr. Sagar is the co-founder and Chief Science Officer at Soul Machines, a company investigating how to use natural language processing with hyper-realistic visuals to create autonomously animated, emotionally dynamic Digital People. In addition to developing new technologies, the research seeks answers to big questions: should we be humanizing AI? How does feeding AI socio-emotional context help create rich, multimodal humanlike experiences, and at what point are we teetering on sentience? And what is really at stake at the intersection of human cooperation with intelligent machines?</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4200</itunes:duration>
        <itunes:season>11</itunes:season>
        <itunes:episode>121</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of the show, I sit down with Dr. Mark Sagar to talk about his vision of an embodied form of AI. Dr. Sagar is the co-founder and Chief Science Officer at Soul Machines, a company investigating how to use natural language processing with hyper-realistic visuals to create autonomously animated, emotionally dynamic Digital People. In addition to developing new technologies, the research seeks answers to big questions: should we be humanizing AI? How does feeding AI socio-emotional context help create rich, multimodal humanlike experiences, and at what point are we teetering on sentience? And what is really at stake at the intersection of human cooperation with intelligent machines? Dr. Mark Sagar is currently Director for the Auckland Bioengineering Institute's Laboratory for Animate Technologies. He is a two-time Oscar winner, in the categories of scientific and engineering awards, for his work creating realistic digital characters for the screen. The technology has been used in Spider-Man 2, Superman Returns, The Curious Case of Benjamin Button and Avatar. The technology he created emerged out of his research, completed in the late 1990s, in a landmark study that explored how to develop an anatomically correct virtual eye and realistic models of biomechanically-simulated anatomy. It was one of the first examples of how believable human features could be created on a screen by combining computer graphics with mathematics and human physiology. He is also the founder of BabyX, a pioneering research initiative that seeks to combine models of physiology, cognition, and emotion with advanced lifelike CGI in an attempt to create a new form of biologically inspired AI. Dr. Sagar received his Ph.D. in Engineering from the University of Auckland, and was a post-doctoral fellow at M.I.T. In addition to his recognition by the Academy Awards, Dr. Sagar was elected as a fellow of the Royal Society of New Zealand in 2019.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Saving Israeli and Palestinian Lives: Technology For Life: Disaster relief and life-saving tech *From the Archives*</title>
        <itunes:title>Saving Israeli and Palestinian Lives: Technology For Life: Disaster relief and life-saving tech *From the Archives*</itunes:title>
        <link>https://dmdonig.podbean.com/e/from-the-archives/</link>
                    <comments>https://dmdonig.podbean.com/e/from-the-archives/#comments</comments>        <pubDate>Mon, 23 Oct 2023 09:46:45 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/7b86c59a-bd5f-3b0b-8de4-85987b2f88a5</guid>
                                    <description><![CDATA[<p>Hi Technically Human listeners. This is a show about ethics and tech, but it’s also a show about what it means to be human. There is no area of being human in this moment that technology does not touch.</p>
<p>I know that many members of this listening community have been deeply affected by the loss of life and the brutality that began with the Hamas attack on Israel and is ongoing in Israel and in Gaza. This is not a show about my politics. But it is a show that strives toward the ideals of diverse representation and bipartisan collaboration toward ethical and humanistic ends.</p>
<p>In this dark moment, I wanted to elevate one of our previous episodes featuring <a href='https://israelrescue.org/'>United Hatzalah</a>. United Hatzalah is a volunteer-based organization that provides free emergency medical response within minutes of any medical emergency. They are committed to saving human lives independent of race, religion, ethnicity, or national identity. They are non-political and non-religious. United Hatzalah volunteers respond to more than 675,000 calls per year throughout Israel and beyond its borders, saving lives every day.</p>
<p>There are a lot of people in the region in need right now, and United Hatzalah is on the front lines. If you have the means, and if you want to support an organization that is working to save civilian lives, no matter what their religion, race, ethnic identity, or national identity might be, please consider supporting United Hatzalah.</p>
<p><a href='https://www.insightpartners.com/'>Insight Partners</a>, a global software investor partnering with high-growth technology, software, and internet startups, is currently matching donations to United Hatzalah, up to $1,000,000. <a href='https://hedado.com/c/SoftwareinService'>Please consider supporting this effort, if you have the means</a>. The ideal of universal human rights is central to this show, and when I see the tech community driving toward that effort, I think it’s worth highlighting. That ideal is, and always has been, that human lives are human lives anywhere and everywhere, no matter which tribe they belong to, and that the global community has an obligation to protect those lives.</p>
<p>Link to the donation site here: <a href='https://hedado.com/c/SoftwareinService'>https://hedado.com/c/SoftwareinService</a> </p>
<p>And now, here is my episode featuring United Hatzalah, whose volunteers have been on the ground saving lives, as they have been doing since the organization was founded.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Hi Technically Human listeners. This is a show about ethics and tech, but it’s also a show about what it means to be human. There is no area of being human in this moment that technology does not touch.</p>
<p>I know that many members of this listening community have been deeply affected by the loss of life and the brutality that began with the Hamas attack on Israel and is ongoing in Israel and in Gaza. This is not a show about my politics. But it is a show that strives toward the ideals of diverse representation and bipartisan collaboration toward ethical and humanistic ends.</p>
<p>In this dark moment, I wanted to elevate one of our previous episodes featuring <a href='https://israelrescue.org/'>United Hatzalah</a>. United Hatzalah is a volunteer-based organization that provides free emergency medical response within minutes of any medical emergency. They are committed to saving human lives independent of race, religion, ethnicity, or national identity. They are non-political and non-religious. United Hatzalah volunteers respond to more than 675,000 calls per year throughout Israel and beyond its borders, saving lives every day.</p>
<p>There are a lot of people in the region in need right now, and United Hatzalah is on the front lines. If you have the means, and if you want to support an organization that is working to save civilian lives, no matter what their religion, race, ethnic identity, or national identity might be, please consider supporting United Hatzalah.</p>
<p><a href='https://www.insightpartners.com/'>Insight Partners</a>, a global software investor partnering with high-growth technology, software, and internet startups, is currently matching donations to United Hatzalah, up to $1,000,000. <a href='https://hedado.com/c/SoftwareinService'>Please consider supporting this effort, if you have the means</a>. The ideal of universal human rights is central to this show, and when I see the tech community driving toward that effort, I think it’s worth highlighting. That ideal is, and always has been, that human lives are human lives anywhere and everywhere, no matter which tribe they belong to, and that the global community has an obligation to protect those lives.</p>
<p>Link to the donation site here: <a href='https://hedado.com/c/SoftwareinService'>https://hedado.com/c/SoftwareinService</a> </p>
<p>And now, here is my episode featuring United Hatzalah, whose volunteers have been on the ground saving lives, as they have been doing since the organization was founded.</p>
]]></content:encoded>
                                    
        <enclosure length="74841338" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/dvmdk4/Untitled_Session_5_mixdown93bmo.mp3"/>
        <itunes:summary>In this dark moment, I wanted to elevate one of our previous episodes featuring United Hatzalah. United Hatzalah is a volunteer-based organization that provides free emergency medical response within minutes of any medical emergency. They are committed to saving human lives independent of race, religion, ethnicity, or national identity. They are non-political and non-religious. United Hatzalah volunteers respond to more than 675,000 calls per year throughout Israel and beyond its borders, saving lives every day.

Insight Partners, a global software investor partnering with high-growth technology, software, and internet startups, is currently matching donations to United Hatzalah, up to $1,000,000. Please consider supporting this effort, if you have the means.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3118</itunes:duration>
        <itunes:season>7</itunes:season>
        <itunes:episode>120</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Hi Technically Human listeners. This is a show about ethics and tech, but it’s also a show about what it means to be human. There is no area of being human in this moment that technology does not touch. I know that many members of this listening community have been deeply affected by the loss of life and the brutality that began with the Hamas attack on Israel and is ongoing in Israel and in Gaza. This is not a show about my politics. But it is a show that strives toward the ideals of diverse representation and bipartisan collaboration toward ethical and humanistic ends. In this dark moment, I wanted to elevate one of our previous episodes featuring United Hatzalah. United Hatzalah is a volunteer-based organization that provides free emergency medical response within minutes of any medical emergency. They are committed to saving human lives independent of race, religion, ethnicity, or national identity. They are non-political and non-religious. United Hatzalah volunteers respond to more than 675,000 calls per year throughout Israel and beyond its borders, saving lives every day. There are a lot of people in the region in need right now, and United Hatzalah is on the front lines. If you have the means, and if you want to support an organization that is working to save civilian lives, no matter what their religion, race, ethnic identity, or national identity might be, please consider supporting United Hatzalah. Insight Partners, a global software investor partnering with high-growth technology, software, and internet startups, is currently matching donations to United Hatzalah, up to $1,000,000. Please consider supporting this effort, if you have the means. The ideal of universal human rights is central to this show, and when I see the tech community driving toward that effort, I think it’s worth highlighting. That ideal is, and always has been, that human lives are human lives anywhere and everywhere, no matter which tribe they belong to, and that the global community has an obligation to protect those lives. Link to the donation site here: https://hedado.com/c/SoftwareinService And now, here is my episode featuring United Hatzalah, whose volunteers have been on the ground saving lives, as they have been doing since the organization was founded.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Romance of AI: Discussing Love and Artificial Intelligence with Amy Kurzweil</title>
        <itunes:title>The Romance of AI: Discussing Love and Artificial Intelligence with Amy Kurzweil</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-romance-of-ai-discussing-love-and-artificial-intelligence-with-amy-kurzweil/</link>
                    <comments>https://dmdonig.podbean.com/e/the-romance-of-ai-discussing-love-and-artificial-intelligence-with-amy-kurzweil/#comments</comments>        <pubDate>Fri, 13 Oct 2023 08:39:37 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/47051374-5e79-3a12-a9e8-8bd9441d8875</guid>
                                    <description><![CDATA[<p>In today’s conversation, I sit down with <a href='https://www.google.com/search?gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIGCAEQRRg7MgYIAhBFGDsyBggDEEUYPDIGCAQQRRg8MgYIBRBFGDzSAQczMjJqMGo3qAIAsAIA&amp;ie=UTF-8&amp;oq=Amy%20Kurzweil&amp;q=Amy%20Kurzweil&amp;rlz=1C5GCEM_enUS1069US1071&amp;sourceid=chrome'>Amy Kurzweil</a>, the author of the new graphic memoir, <a href='https://books.catapult.co/books/artificial/'>Artificial: A Love Story</a>.</p>
<p>Artificial: A Love Story tells the story of three generations of artists whose search for meaning and connection transcends the limits of life. The story begins with the LLM-generated chatbot that Amy’s father, the futurist <a href='https://en.wikipedia.org/wiki/Ray_Kurzweil'>Ray Kurzweil</a>, created out of his father’s archive, but the story doesn’t start and end there. Instead, the story takes us on a journey through new questions that technologies are asking about what it means to be human. How do we relate to—and hold—our family’s past? And how is technology changing what it means to remember the past? And what does it mean to know--and to love--in the age of AI?</p>
<p><a href='https://en.wikipedia.org/wiki/Amy_Kurzweil'>Amy Kurzweil</a> is a New Yorker cartoonist and the author of two graphic memoirs: Flying Couch, a New York Times Editors’ Choice and a Kirkus “Best Memoir” of 2016, and Artificial: A Love Story, forthcoming October 2023. She was a 2021 Berlin Prize Fellow with the American Academy in Berlin, a 2019 Shearing Fellow with the Black Mountain Institute, and she’s received fellowships from MacDowell, Djerassi, and elsewhere. Her work has been nominated for a Reuben Award and an Ignatz Award for “Technofeelia,” a four-part series with The Believer Magazine. Her writing, comics, and cartoons have also been published in The Verge, The New York Times Book Review, Longreads, Literary Hub, WIRED, and many other places. She’s taught writing and comics at Parsons The New School for Design, The Fashion Institute of Technology, Center for Talented Youth, Interlochen Center for the Arts, in New York City Public Schools, and in many other venues, and she currently teaches a monthly cartooning class to a growing community of virtual students all over the world.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In today’s conversation, I sit down with <a href='https://www.google.com/search?gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIGCAEQRRg7MgYIAhBFGDsyBggDEEUYPDIGCAQQRRg8MgYIBRBFGDzSAQczMjJqMGo3qAIAsAIA&amp;ie=UTF-8&amp;oq=Amy%20Kurzweil&amp;q=Amy%20Kurzweil&amp;rlz=1C5GCEM_enUS1069US1071&amp;sourceid=chrome'>Amy Kurzweil</a>, the author of the new graphic memoir, <em><a href='https://books.catapult.co/books/artificial/'>Artificial: A Love Story</a>.</em></p>
<p><em>Artificial: A Love Story</em> tells the story of three generations of artists whose search for meaning and connection transcends the limits of life. The story begins with the LLM-generated chatbot that Amy’s father, the futurist <a href='https://en.wikipedia.org/wiki/Ray_Kurzweil'>Ray Kurzweil</a>, created out of his father’s archive, but the story doesn’t start and end there. Instead, the story takes us on a journey through new questions that technologies are asking about what it means to be human. How do we relate to—and hold—our family’s past? And how is technology changing what it means to remember the past? And what does it mean to know--and to love--in the age of AI?</p>
<p><a href='https://en.wikipedia.org/wiki/Amy_Kurzweil'>Amy Kurzweil</a> is a <em>New Yorker</em> cartoonist and the author of two graphic memoirs: <em>Flying Couch</em>, a <em>New York Times</em> Editors’ Choice and a Kirkus “Best Memoir” of 2016, and <em>Artificial: A Love Story</em>, forthcoming October 2023. She was a 2021 Berlin Prize Fellow with the American Academy in Berlin, a 2019 Shearing Fellow with the Black Mountain Institute, and she’s received fellowships from MacDowell, Djerassi, and elsewhere. Her work has been nominated for a Reuben Award and an Ignatz Award for “Technofeelia,” a four-part series with The Believer Magazine. Her writing, comics, and cartoons have also been published in The Verge, The New York Times Book Review, Longreads, Literary Hub, WIRED, and many other places. She’s taught writing and comics at Parsons The New School for Design, The Fashion Institute of Technology, Center for Talented Youth, Interlochen Center for the Arts, in New York City Public Schools, and in many other venues, and she currently teaches a monthly cartooning class to a growing community of virtual students all over the world.</p>
]]></content:encoded>
                                    
        <enclosure length="102965662" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/7iuba9/Amy_Kurzweil_20231005T232343340967_mixdown63wwu.mp3"/>
        <itunes:summary>In today’s conversation, I sit down with Amy Kurzweil, the author of the new graphic memoir, Artificial: A Love Story. Artificial: A Love Story tells the story of three generations of artists whose search for meaning and connection transcends the limits of life. The story begins with the LLM-generated chatbot that Amy’s father, the futurist Ray Kurzweil, created out of his father’s archive, but the story doesn’t start and end there. Instead, the story takes us on a journey through new questions that technologies are asking about what it means to be human. How do we relate to—and hold—our family’s past? And how is technology changing what it means to remember the past? And what does it mean to know--and to love--in the age of AI?</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4289</itunes:duration>
        <itunes:season>11</itunes:season>
        <itunes:episode>119</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In today’s conversation, I sit down with Amy Kurzweil, the author of the new graphic memoir, Artificial: A Love Story. Artificial: A Love Story tells the story of three generations of artists whose search for meaning and connection transcends the limits of life. The story begins with the LLM-generated chatbot that Amy’s father, the futurist Ray Kurzweil, created out of his father’s archive, but the story doesn’t start and end there. Instead, the story takes us on a journey through new questions that technologies are asking about what it means to be human. How do we relate to—and hold—our family’s past? And how is technology changing what it means to remember the past? And what does it mean to know--and to love--in the age of AI? Amy Kurzweil is a New Yorker cartoonist and the author of two graphic memoirs: Flying Couch, a New York Times Editors’ Choice and a Kirkus “Best Memoir” of 2016, and Artificial: A Love Story, forthcoming October 2023. She was a 2021 Berlin Prize Fellow with the American Academy in Berlin, a 2019 Shearing Fellow with the Black Mountain Institute, and she’s received fellowships from MacDowell, Djerassi, and elsewhere. Her work has been nominated for a Reuben Award and an Ignatz Award for “Technofeelia,” a four-part series with The Believer Magazine. Her writing, comics, and cartoons have also been published in The Verge, The New York Times Book Review, Longreads, Literary Hub, WIRED, and many other places. She’s taught writing and comics at Parsons The New School for Design, The Fashion Institute of Technology, Center for Talented Youth, Interlochen Center for the Arts, in New York City Public Schools, and in many other venues, and she currently teaches a monthly cartooning class to a growing community of virtual students all over the world.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Funny Business: ”Silicon Valley” writer and co-producer Dan Lyons explains what‘s funny about tech culture *From the Archives*</title>
        <itunes:title>Funny Business: ”Silicon Valley” writer and co-producer Dan Lyons explains what‘s funny about tech culture *From the Archives*</itunes:title>
        <link>https://dmdonig.podbean.com/e/funny-business-silicon-valley-writer-and-co-producer-dan-lyons-explains-what-s-funny-about-tech-culture-from-the-archives/</link>
                    <comments>https://dmdonig.podbean.com/e/funny-business-silicon-valley-writer-and-co-producer-dan-lyons-explains-what-s-funny-about-tech-culture-from-the-archives/#comments</comments>        <pubDate>Fri, 06 Oct 2023 12:54:18 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/30509d35-27e0-321f-a359-38aec7cba8c3</guid>
                                    <description><![CDATA[<p>Hi Technically Human listeners! I’m on vacation this week, so our team has pulled one of our favorite interviews from our archives, hands down our funniest, to share with you—an episode with Dan Lyons on what makes Silicon Valley funny--and how that humor gets at some of the deeply sobering realities of Silicon Valley culture. If you haven’t had a chance to listen to the interview yet, I think you’ll enjoy it! I’ll be back next week with a brand-new episode of the show!</p>
<p>In this episode, I sit down with a personal hero, the iconic literary giant <a href='https://danlyons.io/'>Dan Lyons</a>. We discuss Dan's experience writing about tech culture for the hit HBO show "<a href='https://www.hbo.com/silicon-valley'>Silicon Valley</a>," and Dan's own experience working in tech. We talk about what makes Silicon Valley funny--and how that humor gets at some of the deeply sobering realities of Silicon Valley culture. </p>
<p>Dan Lyons is one of the best-known science and technology journalists in the United States. He was the technology editor at Newsweek, a staff writer at Forbes, and a columnist for Fortune magazine, while also contributing op-ed columns to the New York Times about the economics and culture of Silicon Valley. </p>
<p>Dan is the author of two of the most important recent books about Silicon Valley: <a href='https://www.amazon.com/Disrupted-My-Misadventure-Start-Up-Bubble/dp/0316306096'>Disrupted: My Misadventure in the Startup Bubble</a>, an international best-seller, and <a href='https://www.amazon.com/Lab-Rats-Silicon-Valley-Miserable/dp/031656186X'>Lab Rats: How Silicon Valley Made Work Miserable for the Rest of Us</a>, which was chosen by The Guardian as one of the best business books of 2018. He is also the mastermind of the epic parody blog <a href='https://www.fakesteve.net/2010/04/an-open-letter-to-the-people-of-the-world.html'>“The Fake Steve Jobs Blog.”</a> </p>
<p>Dan has been a consistent and vocal critic of racial, gender, and age bias in the technology industry, penning articles about "bro culture," worker exploitation, and the "hustle" mentality that leads to employee burnout. He has become a leading advocate for greater diversity in the technology industry and an early critic of the gig economy for its abuse of workers. His work helped draw attention to the brutal working conditions in Amazon warehouses. He has earned a reputation as a fearless critic of powerful interests in Silicon Valley, with a voice that sets him apart from the often fawning journalism that comes out of the technology space.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Hi Technically Human listeners! I’m on vacation this week, so our team has pulled one of our favorite interviews from our archives, hands down our funniest, to share with you—an episode with Dan Lyons on what makes Silicon Valley funny--and how that humor gets at some of the deeply sobering realities of Silicon Valley culture. If you haven’t had a chance to listen to the interview yet, I think you’ll enjoy it! I’ll be back next week with a brand-new episode of the show!</p>
<p>In this episode, I sit down with a personal hero, the iconic literary giant <a href='https://danlyons.io/'>Dan Lyons</a>. We discuss Dan's experience writing about tech culture for the hit HBO show "<a href='https://www.hbo.com/silicon-valley'>Silicon Valley</a>," and Dan's own experience working in tech. We talk about what makes Silicon Valley funny--and how that humor gets at some of the deeply sobering realities of Silicon Valley culture. </p>
<p>Dan Lyons is one of the best-known science and technology journalists in the United States. He was the technology editor at <em>Newsweek</em>, a staff writer at <em>Forbes</em>, and a columnist for <em>Fortune</em> magazine, while also contributing op-ed columns to the <em>New York Times</em> about the economics and culture of Silicon Valley. </p>
<p>Dan is the author of two of the most important recent books about Silicon Valley: <a href='https://www.amazon.com/Disrupted-My-Misadventure-Start-Up-Bubble/dp/0316306096'><em>Disrupted: My Misadventure in the Startup Bubble</em></a>, an international best-seller, and <em><a href='https://www.amazon.com/Lab-Rats-Silicon-Valley-Miserable/dp/031656186X'>Lab Rats: How Silicon Valley Made Work Miserable for the Rest of Us</a>, </em>which was chosen by <em>The Guardian</em> as one of the best business books of 2018. He is also the mastermind of the epic parody blog <a href='https://www.fakesteve.net/2010/04/an-open-letter-to-the-people-of-the-world.html'>“The Fake Steve Jobs Blog.”</a> </p>
<p>Dan has been a consistent and vocal critic of racial, gender, and age bias in the technology industry, penning articles about "bro culture," worker exploitation, and the "hustle" mentality that leads to employee burnout. He has become a leading advocate for greater diversity in the technology industry and an early critic of the gig economy for its abuse of workers. His work helped draw attention to the brutal working conditions in Amazon warehouses. He has earned a reputation as a fearless critic of powerful interests in Silicon Valley, with a voice that sets him apart from the often fawning journalism that comes out of the technology space.</p>
]]></content:encoded>
                                    
        <enclosure length="123076054" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/ch4qye/Lyons_rerun_mixdown6bwhp.mp3"/>
        <itunes:summary>In this episode, I sit down with a personal hero, the iconic literary giant Dan Lyons. We discuss Dan’s experience writing about tech culture for the hit HBO show ”Silicon Valley,” and Dan’s own experience working in tech. We talk about what makes Silicon Valley funny--and how that humor gets at some of the deeply sobering realities of Silicon Valley culture.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>5127</itunes:duration>
        <itunes:season>9</itunes:season>
        <itunes:episode>118</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Hi Technically Human listeners! I’m on vacation this week, so our team has pulled one of our favorite interviews from our archives, hands down our funniest, to share with you—an episode with Dan Lyons on what makes Silicon Valley funny--and how that humor gets at some of the deeply sobering realities of Silicon Valley culture. If you haven’t had a chance to listen to the interview yet, I think you’ll enjoy it! I’ll be back next week with a brand-new episode of the show! In this episode, I sit down with a personal hero, the iconic literary giant Dan Lyons. We discuss Dan's experience writing about tech culture for the hit HBO show "Silicon Valley," and Dan's own experience working in tech. We talk about what makes Silicon Valley funny--and how that humor gets at some of the deeply sobering realities of Silicon Valley culture. Dan Lyons is one of the best-known science and technology journalists in the United States. He was the technology editor at Newsweek, a staff writer at Forbes, and a columnist for Fortune magazine, while also contributing op-ed columns to the New York Times about the economics and culture of Silicon Valley. Dan is the author of two of the most important recent books about Silicon Valley: Disrupted: My Misadventure in the Startup Bubble, an international best-seller, and Lab Rats: How Silicon Valley Made Work Miserable for the Rest of Us, which was chosen by The Guardian as one of the best business books of 2018. He is also the mastermind of the epic parody blog “The Fake Steve Jobs Blog.” Dan has been a consistent and vocal critic of racial, gender, and age bias in the technology industry, penning articles about "bro culture," worker exploitation, and the "hustle" mentality that leads to employee burnout. 
He has become a leading advocate for greater diversity in the technology industry and an early critic of the gig economy for its abuse of workers. His work helped draw attention to the brutal working conditions in Amazon warehouses. He has earned a reputation as a fearless critic of powerful interests in Silicon Valley, with a voice that sets him apart from the often fawning journalism that comes out of the technology space.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The American Dream Goes Digital: The myths and technologies that bind us with Dr. Julie Albright *From the Archives*</title>
        <itunes:title>The American Dream Goes Digital: The myths and technologies that bind us with Dr. Julie Albright *From the Archives*</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-american-dream-goes-digital-the-myths-and-technologies-that-bind-us-with-dr-julie-albright-from-the-archives/</link>
                    <comments>https://dmdonig.podbean.com/e/the-american-dream-goes-digital-the-myths-and-technologies-that-bind-us-with-dr-julie-albright-from-the-archives/#comments</comments>        <pubDate>Fri, 29 Sep 2023 11:15:17 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/a37fa703-2bf4-3f49-9bbc-20e8adc684e9</guid>
                                    <description><![CDATA[<p>In this episode of Technically Human, I sit down with Dr. Julie Albright to talk about her new book: Left to Their Own Devices: How Digital Natives Are Reshaping the American Dream. We talk about the way that digital culture is changing the American Dream for the next generation, we discuss how the internet is changing political culture, and Julie explains how our connections to our devices are changing the way we seek partnerships, form relationships, and how romance has been gamified in our world of online dating.</p>
<p><a href='http://www.drjuliealbright.com/'>Dr. Julie Albright</a> is a sociologist specializing in digital culture and communications. She has a master’s degree in Social and Systemic Studies and a dual doctorate in Sociology and Marriage and Family Therapy from the University of Southern California. Dr. Albright is currently a lecturer in the departments of Applied Psychology and Engineering at USC, where she teaches master’s-level courses on the Psychology of Interactive Technologies and Sustainable Infrastructure.</p>
<p>Dr. Albright’s research has focused on the growing intersection of technology and social / behavioral systems.</p>
<p>She is also a sought-after keynote speaker and has given talks for major data center and energy conferences including SAP for Utilities, IBM Global, Data Center Dynamics, and the Dept. of Defense. She has appeared as an expert on national media including the Today Show, CNN, NBC Nightly News, CBS, the Wall Street Journal, the New York Times, NPR, and many others.</p>
<p>Her new book, <a href='https://www.amazon.com/Left-Their-Own-Devices-Reshaping/dp/1633884449'>Left to Their Own Devices: How Digital Natives Are Reshaping the American Dream</a> (Random House/Prometheus Press), investigates the impacts of mobile, social, and digital technologies on society.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of Technically Human, I sit down with Dr. Julie Albright to talk about her new book: <em>Left to Their Own Devices: How Digital Natives Are Reshaping the American Dream. </em>We talk about the way that digital culture is changing the American Dream for the next generation, we discuss how the internet is changing political culture, and Julie explains how our connections to our devices are changing the way we seek partnerships, form relationships, and how romance has been gamified in our world of online dating.</p>
<p><a href='http://www.drjuliealbright.com/'>Dr. Julie Albright</a> is a sociologist specializing in digital culture and communications. She has a master’s degree in Social and Systemic Studies and a dual doctorate in Sociology and Marriage and Family Therapy from the University of Southern California. Dr. Albright is currently a lecturer in the departments of Applied Psychology and Engineering at USC, where she teaches master’s-level courses on the Psychology of Interactive Technologies and Sustainable Infrastructure.</p>
<p>Dr. Albright’s research has focused on the growing intersection of technology and social / behavioral systems.</p>
<p>She is also a sought-after keynote speaker and has given talks for major data center and energy conferences including SAP for Utilities, IBM Global, Data Center Dynamics, and the Dept. of Defense. She has appeared as an expert on national media including the Today Show, CNN, NBC Nightly News, CBS, the Wall Street Journal, the New York Times, NPR, and many others.</p>
<p>Her new book, <a href='https://www.amazon.com/Left-Their-Own-Devices-Reshaping/dp/1633884449'><em>Left to Their Own Devices: How Digital Natives Are Reshaping the American Dream</em></a> (Random House/Prometheus Press), investigates the impacts of mobile, social, and digital technologies on society.</p>
]]></content:encoded>
                                    
        <enclosure length="108148343" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/mj65bg/ja_mixdown.mp3"/>
        <itunes:summary>In this episode of Technically Human, I sit down with Dr. Julie Albright to talk about her new book: Left to Their Own Devices: How Digital Natives Are Reshaping the American Dream. We talk about the way that digital culture is changing the American Dream for the next generation, we discuss how the internet is changing political culture, and Julie explains how our connections to our devices are changing the way we seek partnerships, form relationships, and how romance has been gamified in our world of online dating.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4505</itunes:duration>
        <itunes:season>11</itunes:season>
        <itunes:episode>117</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of Technically Human, I sit down with Dr. Julie Albright to talk about her new book: Left to Their Own Devices: How Digital Natives Are Reshaping the American Dream. We talk about the way that digital culture is changing the American Dream for the next generation, we discuss how the internet is changing political culture, and Julie explains how our connections to our devices are changing the way we seek partnerships, form relationships, and how romance has been gamified in our world of online dating. Dr. Julie Albright is a sociologist specializing in digital culture and communications. She has a master’s degree in Social and Systemic Studies and a dual doctorate in Sociology and Marriage and Family Therapy from the University of Southern California. Dr. Albright is currently a lecturer in the departments of Applied Psychology and Engineering at USC, where she teaches master’s-level courses on the Psychology of Interactive Technologies and Sustainable Infrastructure. Dr. Albright’s research has focused on the growing intersection of technology and social/behavioral systems. She is also a sought-after keynote speaker and has given talks for major data center and energy conferences including SAP for Utilities, IBM Global, Data Center Dynamics, and the Dept. of Defense. She has appeared as an expert on national media including the Today Show, CNN, NBC Nightly News, CBS, the Wall Street Journal, the New York Times, NPR, and many others. Her new book, Left to Their Own Devices: How Digital Natives Are Reshaping the American Dream (Random House/Prometheus Press), investigates the impacts of mobile, social, and digital technologies on society.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Bad Input: Raising public awareness about AI bias</title>
        <itunes:title>Bad Input: Raising public awareness about AI bias</itunes:title>
        <link>https://dmdonig.podbean.com/e/bad-input/</link>
                    <comments>https://dmdonig.podbean.com/e/bad-input/#comments</comments>        <pubDate>Fri, 22 Sep 2023 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/6d0ba829-a253-3636-b325-553b612df4ae</guid>
                                    <description><![CDATA[<p>Earlier this year, Consumer Reports, in collaboration with the Kapor Center, debuted "<a href='https://www.consumerreports.org/badinput/'>Bad Input</a>," three short films that set out to explore and to create public awareness about how biases in algorithms/data sets result in unfair practices for communities of color, often without their knowledge.</p>
<p>In this episode of the show, I talk to Lily Gangas, Chief Technology Community Officer at the Kapor Center, and Amira Dhalla, Director of Impact Partnerships and Programs at Consumer Reports, about the films and about the state of AI at the intersection of race and equity, and the importance of educating the public if we want to see change in the future of AI and human values.</p>
<p><a href='https://www.consumerreports.org/'>Consumer Reports</a> is an independent, nonprofit organization that works side by side with consumers to create a fairer, safer, and healthier world. They do it by fighting to put consumers’ needs first in the marketplace and by empowering them with the trusted knowledge they depend on to make better, more informed choices.</p>
<p>The <a href='https://www.kaporcenter.org/'>Kapor Center</a>’s work focuses on the intersection of racial justice and technology to create a more inclusive technology sector for all. Founded by Freada Kapor Klein and Mitch Kapor, the center seeks to develop a vision and practice to make the tech industry more diverse and inclusive. The Kapor Foundation, alongside <a href='https://www.kaporcenter.org/kapor-capital/'>Kapor Capital</a> and the STEM education initiative <a href='https://www.kaporcenter.org/smash/'>SMASH</a>, takes a comprehensive approach to expanding access to computer science education, conducting research on disparities in the technology pipeline, supporting nonprofit organizations and initiatives, and investing in gap-closing startups and entrepreneurs that close gaps of access for all. The Kapor Center seeks to intentionally dismantle barriers to tech and the deployment of technologies across the <a href='http://leakytechpipeline.com/'>Leaky Tech Pipeline</a> through research-driven practices, gap-closing investments, increased access to computer science education, supporting and partnering with mission-aligned organizations, advocating for needed policy change, and more.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Earlier this year, Consumer Reports, in collaboration with the Kapor Center, debuted "<a href='https://www.consumerreports.org/badinput/'>Bad Input</a>," three short films that set out to explore and to create public awareness about how biases in algorithms/data sets result in unfair practices for communities of color, often without their knowledge.</p>
<p>In this episode of the show, I talk to Lily Gangas, Chief Technology Community Officer at the Kapor Center, and Amira Dhalla, Director of Impact Partnerships and Programs at Consumer Reports, about the films and about the state of AI at the intersection of race and equity, and the importance of educating the public if we want to see change in the future of AI and human values.</p>
<p><a href='https://www.consumerreports.org/'>Consumer Reports</a> is an independent, nonprofit organization that works side by side with consumers to create a fairer, safer, and healthier world. They do it by fighting to put consumers’ needs first in the marketplace and by empowering them with the trusted knowledge they depend on to make better, more informed choices.</p>
<p>The <a href='https://www.kaporcenter.org/'>Kapor Center</a>’s work focuses on the intersection of racial justice and technology to create a more inclusive technology sector for all. Founded by Freada Kapor Klein and Mitch Kapor, the center seeks to develop a vision and practice to make the tech industry more diverse and inclusive. The Kapor Foundation, alongside <a href='https://www.kaporcenter.org/kapor-capital/'>Kapor Capital</a> and the STEM education initiative <a href='https://www.kaporcenter.org/smash/'>SMASH</a>, takes a comprehensive approach to expanding access to computer science education, conducting research on disparities in the technology pipeline, supporting nonprofit organizations and initiatives, and investing in gap-closing startups and entrepreneurs that close gaps of access for all. The Kapor Center seeks to intentionally dismantle barriers to tech and the deployment of technologies across the <a href='http://leakytechpipeline.com/'>Leaky Tech Pipeline</a> through research-driven practices, gap-closing investments, increased access to computer science education, supporting and partnering with mission-aligned organizations, advocating for needed policy change, and more.</p>
]]></content:encoded>
                                    
        <enclosure length="104427800" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/xhqvhy/Bad_Input_FINALadrpn.mp3"/>
        <itunes:summary>Earlier this year, Consumer Reports, in collaboration with the Kapor Center, debuted ”Bad Input,” three short films that set out to explore and to create public awareness about how biases in algorithms/data sets result in unfair practices for communities of color, often without their knowledge.

In this episode of the show, I talk to Lily Gangas, Chief Technology Community Officer at the Kapor Center, and Amira Dhalla, Director of Impact Partnerships and Programs at Consumer Reports, about the films and about the state of AI at the intersection of race and equity, and the importance of educating the public if we want to see change in the future of AI and human values.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4350</itunes:duration>
        <itunes:season>11</itunes:season>
        <itunes:episode>116</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Earlier this year, Consumer Reports, in collaboration with the Kapor Center, debuted "Bad Input," three short films that set out to explore and to create public awareness about how biases in algorithms and data sets result in unfair practices for communities of color, often without their knowledge. In this episode of the show, I talk to Lily Gangas, Chief Technology Community Officer at the Kapor Center, and Amira Dhalla, Director of Impact Partnerships and Programs at Consumer Reports, about the films, about the state of AI at the intersection of race and equity, and about the importance of educating the public if we want to see change in the future of AI and human values. Consumer Reports is an independent, nonprofit organization that works side by side with consumers to create a fairer, safer, and healthier world. They do it by fighting to put consumers’ needs first in the marketplace and by empowering them with the trusted knowledge they depend on to make better, more informed choices. The Kapor Center’s work focuses on the intersection of racial justice and technology to create a more inclusive technology sector for all. Founded by Freada Kapor Klein and Mitch Kapor, the center seeks to develop a vision and practice to make the tech industry more diverse and inclusive. The Kapor Foundation, alongside Kapor Capital and the STEM education initiative SMASH, takes a comprehensive approach: expanding access to computer science education, conducting research on disparities in the technology pipeline, supporting nonprofit organizations and initiatives, and investing in startups and entrepreneurs that close gaps of access for all. 
The Kapor Center seeks to intentionally dismantle barriers across the Leaky Tech Pipeline through research-driven practices, gap-closing investments, increased access to computer science education, partnerships with mission-aligned organizations, advocacy for needed policy change, and more.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Instituting Greenlining: how policy can promote digital inclusion</title>
        <itunes:title>Instituting Greenlining: how policy can promote digital inclusion</itunes:title>
        <link>https://dmdonig.podbean.com/e/instituting-greenlining-how-policy-can-promote-digital-inclusion/</link>
                    <comments>https://dmdonig.podbean.com/e/instituting-greenlining-how-policy-can-promote-digital-inclusion/#comments</comments>        <pubDate>Fri, 15 Sep 2023 08:43:30 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/ab0171b6-81e9-30c5-abe6-db93d38f25a2</guid>
                                    <description><![CDATA[<p>In today’s episode, I sit down with Vinhcent Le, Senior Legal Counsel of Tech Equity at the <a href='https://greenlining.org/about/'>Greenlining Institute</a>, an organization that works towards a future where communities of color can build wealth, live in healthy places filled with economic opportunity, and are ready to meet the challenges posed by climate change. We talk about the possibilities and limitations of regulation to address inequities in tech, the challenges of negotiating race in tech production, and how greenlining seeks to address a history of redlining.</p>
<p><a href='https://greenlining.org/team/?staff=vinhcent-le%20%20'>Vinhcent Le</a> (he/him/his) leads Greenlining’s work to close the digital divide, to protect consumer privacy, and to ensure algorithms are fair and that technology builds economic opportunity for communities of color. In this role, Vinhcent helps develop and implement policies to increase broadband affordability and digital inclusion as well as bring transparency and accountability to automated decision systems. Vinhcent also serves on several regulatory boards including the <a href='https://cppa.ca.gov/'>California Privacy Protection Agency</a>.</p>
<p>Vinhcent received his J.D. from the University of California, Irvine School of Law and a B.A. in Political Science from the University of California, San Diego. Prior to Greenlining, Vinhcent advocated for clients as a law clerk at the Public Defender’s Office, the Office of Medicare Hearing and Appeals, and the Small Business Administration.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In today’s episode, I sit down with Vinhcent Le, Senior Legal Counsel of Tech Equity at the <a href='https://greenlining.org/about/'>Greenlining Institute</a>, an organization that works towards a future where communities of color can build wealth, live in healthy places filled with economic opportunity, and are ready to meet the challenges posed by climate change. We talk about the possibilities and limitations of regulation to address inequities in tech, the challenges of negotiating race in tech production, and how greenlining seeks to address a history of redlining.</p>
<p><a href='https://greenlining.org/team/?staff=vinhcent-le%20%20'>Vinhcent Le</a> (he/him/his) leads Greenlining’s work to close the digital divide, to protect consumer privacy, and to ensure algorithms are fair and that technology builds economic opportunity for communities of color. In this role, Vinhcent helps develop and implement policies to increase broadband affordability and digital inclusion as well as bring transparency and accountability to automated decision systems. Vinhcent also serves on several regulatory boards including the <a href='https://cppa.ca.gov/'>California Privacy Protection Agency</a>.</p>
<p>Vinhcent received his J.D. from the University of California, Irvine School of Law and a B.A. in Political Science from the University of California, San Diego. Prior to Greenlining, Vinhcent advocated for clients as a law clerk at the Public Defender’s Office, the Office of Medicare Hearing and Appeals, and the Small Business Administration.</p>
]]></content:encoded>
                                    
        <enclosure length="90976715" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/iww6rx/Vinhcent_mixdown.mp3"/>
        <itunes:summary>In today’s episode, I sit down with Vinhcent Le, Senior Legal Counsel of Tech Equity at the Greenlining Institute, an organization that works towards a future where communities of color can build wealth, live in healthy places filled with economic opportunity, and are ready to meet the challenges posed by climate change.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3790</itunes:duration>
        <itunes:season>11</itunes:season>
        <itunes:episode>115</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In today’s episode, I sit down with Vinhcent Le, Senior Legal Counsel of Tech Equity at the Greenlining Institute, an organization that works towards a future where communities of color can build wealth, live in healthy places filled with economic opportunity, and are ready to meet the challenges posed by climate change. We talk about the possibilities and limitations of regulation to address inequities in tech, the challenges of negotiating race in tech production, and how greenlining seeks to address a history of redlining. Vinhcent Le (he/him/his) leads Greenlining’s work to close the digital divide, to protect consumer privacy, and to ensure algorithms are fair and that technology builds economic opportunity for communities of color. In this role, Vinhcent helps develop and implement policies to increase broadband affordability and digital inclusion as well as bring transparency and accountability to automated decision systems. Vinhcent also serves on several regulatory boards including the California Privacy Protection Agency. Vinhcent received his J.D. from the University of California, Irvine School of Law and a B.A. in Political Science from the University of California, San Diego. Prior to Greenlining, Vinhcent advocated for clients as a law clerk at the Public Defender’s Office, the Office of Medicare Hearing and Appeals, and the Small Business Administration.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Designing Data Governance</title>
        <itunes:title>Designing Data Governance</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-future-of-data-governance/</link>
                    <comments>https://dmdonig.podbean.com/e/the-future-of-data-governance/#comments</comments>        <pubDate>Fri, 08 Sep 2023 11:36:33 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/1bfe5cea-589d-39a5-91a8-370ad5612e23</guid>
                                    <description><![CDATA[<p>In this episode of the show, I continue my deep dive into data, human values, and governance with an interview featuring Lauren Maffeo. We talk about the future of data governance, its possibilities, and the catastrophe that Lauren thinks our society may need to experience in order to turn the corner on data governance and ethics.</p>
<a href='https://www.linkedin.com/in/laurenmaffeo/'>Lauren Maffeo</a> is an award-winning designer and analyst who currently works as a service designer at Steampunk, a human-centered design firm serving the federal government. She is also a founding editor of Springer’s AI and Ethics journal and an adjunct lecturer in Interaction Design at The George Washington University. Her first book, <a href='https://pragprog.com/titles/lmmlops/designing-data-governance-from-the-ground-up/'><em>Designing Data Governance from the Ground Up</em></a>, is available from The Pragmatic Programmers.
 
Lauren has written for Harvard Data Science Review, Financial Times, and The Guardian, among other publications. She is a fellow of the Royal Society of Arts, a former member of the Association for Computing Machinery’s Distinguished Speakers Program, and a member of the International Academy of Digital Arts and Sciences, where she helps judge the Webby Awards.]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of the show, I continue my deep dive into data, human values, and governance with an interview featuring Lauren Maffeo. We talk about the future of data governance, its possibilities, and the catastrophe that Lauren thinks our society may need to experience in order to turn the corner on data governance and ethics.</p>
<a href='https://www.linkedin.com/in/laurenmaffeo/'>Lauren Maffeo</a> is an award-winning designer and analyst who currently works as a service designer at Steampunk, a human-centered design firm serving the federal government. She is also a founding editor of Springer’s AI and Ethics journal and an adjunct lecturer in Interaction Design at The George Washington University. Her first book, <a href='https://pragprog.com/titles/lmmlops/designing-data-governance-from-the-ground-up/'><em>Designing Data Governance from the Ground Up</em></a>, is available from The Pragmatic Programmers.
 
Lauren has written for Harvard Data Science Review, Financial Times, and The Guardian, among other publications. She is a fellow of the Royal Society of Arts, a former member of the Association for Computing Machinery’s Distinguished Speakers Program, and a member of the International Academy of Digital Arts and Sciences, where she helps judge the Webby Awards.]]></content:encoded>
                                    
        <enclosure length="72730694" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/rhas9f/Maffeo_Episode_mixdown898yc.mp3"/>
        <itunes:summary>In this episode of the show, I continue my deep dive into data, human values, and governance with an interview featuring Lauren Maffeo. We talk about the future of data governance, its possibilities, and the catastrophe that Lauren thinks our society may need to experience in order to turn the corner on data governance and ethics.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3030</itunes:duration>
        <itunes:season>11</itunes:season>
        <itunes:episode>114</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of the show, I continue my deep dive into data, human values, and governance with an interview featuring Lauren Maffeo. We talk about the future of data governance, its possibilities, and the catastrophe that Lauren thinks our society may need to experience in order to turn the corner on data governance and ethics. Lauren Maffeo is an award-winning designer and analyst who currently works as a service designer at Steampunk, a human-centered design firm serving the federal government. She is also a founding editor of Springer’s AI and Ethics journal and an adjunct lecturer in Interaction Design at The George Washington University. Her first book, Designing Data Governance from the Ground Up, is available from The Pragmatic Programmers. Lauren has written for Harvard Data Science Review, Financial Times, and The Guardian, among other publications. She is a fellow of the Royal Society of Arts, a former member of the Association for Computing Machinery’s Distinguished Speakers Program, and a member of the International Academy of Digital Arts and Sciences, where she helps judge the Webby Awards.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Behind the Data: data, human values, and society</title>
        <itunes:title>Behind the Data: data, human values, and society</itunes:title>
        <link>https://dmdonig.podbean.com/e/data-and-society/</link>
                    <comments>https://dmdonig.podbean.com/e/data-and-society/#comments</comments>        <pubDate>Fri, 01 Sep 2023 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/5a03c842-939a-3062-8212-990c0a73a6e7</guid>
                                    <description><![CDATA[<p>We’re back, after a long and restful break, with a brand new season of Technically Human! In our first episode of the season, I am joined by a guest cohost, Dr. Morgan Ames, for a conversation with Janet Haven, Executive Director of Data and Society. We talk about the movement to root data and AI practices in human values, the future of automation, and the pressing needs—and challenges—of data governance.</p>
<p>Janet Haven is the executive director of Data &amp; Society. She has worked at the intersection of technology policy, governance, and accountability for more than twenty years, both domestically and internationally. Janet is a member of the <a href='https://www.ai.gov/naiac/'>National Artificial Intelligence Advisory Committee (NAIAC)</a>, which advises President Biden and the National AI Initiative Office on a range of issues related to artificial intelligence. She also acts as an advisor to the <a href='https://trustandsafetyfoundation.org/'>Trust and Safety Foundation</a>, and has brought her expertise in non-profit governance to bear through varied board memberships. She writes and speaks regularly on matters related to technology and society, federal AI research and development, and AI governance and policy.</p>
<p>Before joining D&amp;S, Janet spent more than a decade at the Open Society Foundations. There, she oversaw funding strategies and worldwide grant-making related to technology, human rights, and governance, and played a substantial role in shaping the emerging international field focused on technology and accountability.</p>
<p><a href='https://datasociety.net/about/'>Data &amp; Society</a> is an independent nonprofit research organization rooted in the belief that empirical evidence should directly inform the development and governance of new technologies — and that these technologies can and must be grounded in equity and human dignity. Recognizing that the concentrated, profit-driven power of corporations and tech platforms will not steer us toward a just future, their work foregrounds the power of the people and communities most impacted by technological change. They study the social implications of data, automation, and AI, producing original research to ground informed public debate about emerging technology.</p>
<p><a href='https://www.ischool.berkeley.edu/about/profiles/morgan-ames'>Dr. Morgan Ames</a> is an adjunct professor in the School of Information and interim associate director of research for the Center for Science, Technology, Medicine and Society (<a href='http://cstms.berkeley.edu/'>CSTMS</a>) at the University of California, Berkeley, where she teaches in Data Science and administers the <a href='http://cstms.berkeley.edu/teaching/de-in-sts/'>Designated Emphasis in Science and Technology Studies</a>. She is also affiliated with the Algorithmic Fairness and Opacity Working Group (<a href='http://afog.berkeley.edu/'>AFOG</a>), the Center for Science, Technology, Society and Policy (<a href='https://ctsp.berkeley.edu/'>CTSP</a>), and the Berkeley Institute of Data Science (<a href='https://bids.berkeley.edu/'>BIDS</a>).</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>We’re back, after a long and restful break, with a brand new season of Technically Human! In our first episode of the season, I am joined by a guest cohost, Dr. Morgan Ames, for a conversation with Janet Haven, Executive Director of Data and Society. We talk about the movement to root data and AI practices in human values, the future of automation, and the pressing needs—and challenges—of data governance.</p>
<p>Janet Haven is the executive director of Data &amp; Society. She has worked at the intersection of technology policy, governance, and accountability for more than twenty years, both domestically and internationally. Janet is a member of the <a href='https://www.ai.gov/naiac/'>National Artificial Intelligence Advisory Committee (NAIAC)</a>, which advises President Biden and the National AI Initiative Office on a range of issues related to artificial intelligence. She also acts as an advisor to the <a href='https://trustandsafetyfoundation.org/'>Trust and Safety Foundation</a>, and has brought her expertise in non-profit governance to bear through varied board memberships. She writes and speaks regularly on matters related to technology and society, federal AI research and development, and AI governance and policy.</p>
<p>Before joining D&amp;S, Janet spent more than a decade at the Open Society Foundations. There, she oversaw funding strategies and worldwide grant-making related to technology, human rights, and governance, and played a substantial role in shaping the emerging international field focused on technology and accountability.</p>
<p><a href='https://datasociety.net/about/'>Data &amp; Society</a> is an independent nonprofit research organization rooted in the belief that empirical evidence should directly inform the development and governance of new technologies — and that these technologies can and must be grounded in equity and human dignity. Recognizing that the concentrated, profit-driven power of corporations and tech platforms will not steer us toward a just future, their work foregrounds the power of the people and communities most impacted by technological change. They study the social implications of data, automation, and AI, producing original research to ground informed public debate about emerging technology.</p>
<p><a href='https://www.ischool.berkeley.edu/about/profiles/morgan-ames'>Dr. Morgan Ames</a> is an adjunct professor in the School of Information and interim associate director of research for the Center for Science, Technology, Medicine and Society (<a href='http://cstms.berkeley.edu/'>CSTMS</a>) at the University of California, Berkeley, where she teaches in Data Science and administers the <a href='http://cstms.berkeley.edu/teaching/de-in-sts/'>Designated Emphasis in Science and Technology Studies</a>. She is also affiliated with the Algorithmic Fairness and Opacity Working Group (<a href='http://afog.berkeley.edu/'>AFOG</a>), the Center for Science, Technology, Society and Policy (<a href='https://ctsp.berkeley.edu/'>CTSP</a>), and the Berkeley Institute of Data Science (<a href='https://bids.berkeley.edu/'>BIDS</a>).</p>
]]></content:encoded>
                                    
        <enclosure length="106225560" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/tpnds3/Data_and_Societymp397gwe.mp3"/>
        <itunes:summary>We’re back, after a long and restful break, with a brand new season of Technically Human! In our first episode of the season, I am joined by a guest cohost, Dr. Morgan Ames, for a conversation with Janet Haven, Executive Director of Data and Society. We talk about the movement to root data and AI practices in human values, the future of automation, and the pressing needs—and challenges—of data governance.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4425</itunes:duration>
        <itunes:season>11</itunes:season>
        <itunes:episode>113</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>We’re back, after a long and restful break, with a brand new season of Technically Human! In our first episode of the season, I am joined by a guest cohost, Dr. Morgan Ames, for a conversation with Janet Haven, Executive Director of Data and Society. We talk about the movement to root data and AI practices in human values, the future of automation, and the pressing needs—and challenges—of data governance. Janet Haven is the executive director of Data &amp;amp; Society. She has worked at the intersection of technology policy, governance, and accountability for more than twenty years, both domestically and internationally. Janet is a member of the National Artificial Intelligence Advisory Committee (NAIAC), which advises President Biden and the National AI Initiative Office on a range of issues related to artificial intelligence. She also acts as an advisor to the Trust and Safety Foundation, and has brought her expertise in non-profit governance to bear through varied board memberships. She writes and speaks regularly on matters related to technology and society, federal AI research and development, and AI governance and policy. Before joining D&amp;amp;S, Janet spent more than a decade at the Open Society Foundations. There, she oversaw funding strategies and worldwide grant-making related to technology, human rights, and governance, and played a substantial role in shaping the emerging international field focused on technology and accountability. Data &amp;amp; Society is an independent nonprofit research organization rooted in the belief that empirical evidence should directly inform the development and governance of new technologies — and that these technologies can and must be grounded in equity and human dignity. 
Recognizing that the concentrated, profit-driven power of corporations and tech platforms will not steer us toward a just future, their work foregrounds the power of the people and communities most impacted by technological change. They study the social implications of data, automation, and AI, producing original research to ground informed public debate about emerging technology. Dr. Morgan Ames is an adjunct professor in the School of Information and interim associate director of research for the Center for Science, Technology, Medicine and Society (CSTMS) at the University of California, Berkeley, where she teaches in Data Science and administers the Designated Emphasis in Science and Technology Studies. She is also affiliated with the Algorithmic Fairness and Opacity Working Group (AFOG), the Center for Science, Technology, Society and Policy (CTSP), and the Berkeley Institute of Data Science (BIDS).</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>East Meets West: The place of Asia in the technological imagination</title>
        <itunes:title>East Meets West: The place of Asia in the technological imagination</itunes:title>
        <link>https://dmdonig.podbean.com/e/east-meets-west-the-place-of-asia-in-the-technological-imagination/</link>
                    <comments>https://dmdonig.podbean.com/e/east-meets-west-the-place-of-asia-in-the-technological-imagination/#comments</comments>        <pubDate>Fri, 16 Jun 2023 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/07f1c750-3d51-3208-870a-fa8a7161d555</guid>
                                    <description><![CDATA[<p>Welcome to the final episode of the "Technically Human" season!</p>
<p>We’re ending the season with an episode from the “22 Lessons in Ethical Technology” series: a conversation featuring Dr. John Williams about the global imagination of tech.</p>
<p>Dr. John Williams is a professor of English Literature at Yale University. His work is focused on international histories of technological/media innovation and the perceived difference of racial and cultural otherness. His book, <a href='http://www.barnesandnoble.com/w/the-buddha-in-the-machine-prof-r-john-williams/1117252984?ean=9780300194470'>The Buddha in the Machine: Art, Technology, and The Meeting of East and West</a> (Yale University Press, 2014), examines the role of technological discourse in representations of Asian/American aesthetics in late-nineteenth and twentieth-century film and literature.  The book won the 2015 <a href='http://www.acla.org/harry-levin-prize-citation-2015'>Harry Levin Prize</a> from the American Comparative Literature Association. In the conversation, we explore the diverse international histories of technological innovation and how otherness and differences have been constructed across contexts and time.</p>
<p>The “22 Lessons in Ethical Technology” series is co-sponsored by the National Science Foundation and the Cal Poly Strategic Research Initiative Grant Award. The show is written, hosted, and produced by me, Deb Donig, with production support from Matthew Harsh and Elise St. John. Thanks to Jake Garner and Emma Zumbro for production coordination. Our head of research for this series is Sakina Nuruddin. Our editor is Carrie Caulfield Arick. Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Welcome to the final episode of the "Technically Human" season!</p>
<p>We’re ending the season with an episode from the “22 Lessons in Ethical Technology” series: a conversation featuring Dr. John Williams about the global imagination of tech.</p>
<p>Dr. John Williams is a professor of English Literature at Yale University. His work is focused on international histories of technological/media innovation and the perceived difference of racial and cultural otherness. His book, <em><a href='http://www.barnesandnoble.com/w/the-buddha-in-the-machine-prof-r-john-williams/1117252984?ean=9780300194470'>The Buddha in the Machine: Art, Technology, and The Meeting of East and West</a></em> (Yale University Press, 2014), examines the role of technological discourse in representations of Asian/American aesthetics in late-nineteenth and twentieth-century film and literature.  The book won the 2015 <a href='http://www.acla.org/harry-levin-prize-citation-2015'>Harry Levin Prize</a> from the American Comparative Literature Association. In the conversation, we explore the diverse international histories of technological innovation and how otherness and differences have been constructed across contexts and time.</p>
<p>The “22 Lessons in Ethical Technology” series is co-sponsored by the National Science Foundation and the Cal Poly Strategic Research Initiative Grant Award. The show is written, hosted, and produced by me, Deb Donig, with production support from Matthew Harsh and Elise St. John. Thanks to Jake Garner and Emma Zumbro for production coordination. Our head of research for this series is Sakina Nuruddin. Our editor is Carrie Caulfield Arick. Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="41563070" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/gsbef4/v2_ET22LEIT-drjohnwilliams_Mixdown_2_luffed76sk6.mp3"/>
        <itunes:summary>We’re ending the season with an episode of the 22 lessons on ethics and technology series, with a conversation featuring Dr. John Williams about the global imagination of tech. In the conversation, we explore the diverse international histories of technological innovation and how otherness and differences have been constructed across contexts and time.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3461</itunes:duration>
        <itunes:season>11</itunes:season>
        <itunes:episode>112</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Welcome to the final episode of the "Technically Human" season! We’re ending the season with an episode from the “22 Lessons in Ethical Technology” series: a conversation featuring Dr. John Williams about the global imagination of tech. Dr. John Williams is a professor of English Literature at Yale University. His work is focused on international histories of technological/media innovation and the perceived difference of racial and cultural otherness. His book, The Buddha in the Machine: Art, Technology, and The Meeting of East and West (Yale University Press, 2014), examines the role of technological discourse in representations of Asian/American aesthetics in late-nineteenth and twentieth-century film and literature. The book won the 2015 Harry Levin Prize from the American Comparative Literature Association. In the conversation, we explore the diverse international histories of technological innovation and how otherness and differences have been constructed across contexts and time. The “22 Lessons in Ethical Technology” series is co-sponsored by the National Science Foundation and the Cal Poly Strategic Research Initiative Grant Award. The show is written, hosted, and produced by me, Deb Donig, with production support from Matthew Harsh and Elise St. John. Thanks to Jake Garner and Emma Zumbro for production coordination. Our head of research for this series is Sakina Nuruddin. Our editor is Carrie Caulfield Arick. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>*From the Archives*: Tech, democracy, human rights, and the urgent crisis in Sudan</title>
        <itunes:title>*From the Archives*: Tech, democracy, human rights, and the urgent crisis in Sudan</itunes:title>
        <link>https://dmdonig.podbean.com/e/from-the-archives-tech-democracy-human-rights-and-the-urgent-crisis-in-sudan/</link>
                    <comments>https://dmdonig.podbean.com/e/from-the-archives-tech-democracy-human-rights-and-the-urgent-crisis-in-sudan/#comments</comments>        <pubDate>Fri, 02 Jun 2023 11:28:11 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/25466d9f-2832-3aa1-a116-bc53054dff71</guid>
                                    <description><![CDATA[<p>These past few weeks, as violence and instability have escalated in Sudan, I’ve had one particular conversation on my mind, an episode of the show that I recorded a few years back with Mohamed Abubakr.</p>
<p>In April of this year, clashes broke out in cities, with the fighting concentrated around the capital city of Khartoum and the Darfur region. As of 27 May, at least 1,800 people had been killed and more than 5,100 others had been injured.</p>
<p>The conflict began with attacks by the paramilitary group Rapid Support Forces (RSF) on government sites across Sudan. It has triggered a humanitarian catastrophe, prompting international sanctions and a global response from governments, including the United States, and from international organizations.</p>
<p>In light of the conflict, I wanted to revisit the conversation I had with Mohamed, in which we talked about the role that tech plays in democracy and revolution in the Middle East: to call attention to Sudan and to those who are working passionately to help protect and restore democracy there, to recall the possibilities and optimism for a better Sudanese future, and to help remind us of our interconnectedness with others around the world.</p>
<p>Mohamed Abubakr is a Sudanese human rights activist and peacemaker with a decade and a half of civil society experience. Since high school, he has founded and led organizations and initiatives focused on humanitarian, human rights, youth empowerment and peace programs across the Middle East and Africa (MEA), including in Darfur, South Sudan, Sudan, Egypt, Israel, the Palestinian Territories and beyond. Mohamed has also documented, reported and mobilized against human rights abuses across MEA, and since arriving in the United States has become a sought-after voice at the State Department and in Congress on policy and human rights issues in the region.</p>
<p>Mohamed Abubakr is the president of the <a href='https://www.amelproject.org/'>African and Middle Eastern Leadership Project</a> (AMEL). AMEL empowers young activists from the Middle Eastern and African region and connects them with one another and with peers, leaders and audiences in the global north, in order to advance human rights for all human beings. Using online platforms, social media networks, and technological innovation, AMEL provides training, mentoring, and advocacy to African and Middle Eastern activists, empowering them to step up their civil society activism while building the skills and experience to ascend to top leadership positions.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>These past few weeks, as violence and instability have escalated in Sudan, I’ve had one particular conversation on my mind, an episode of the show that I recorded a few years back with Mohamed Abubakr.</p>
<p>In April of this year, clashes broke out in cities, with the fighting concentrated around the capital city of Khartoum and the Darfur region. As of 27 May, at least 1,800 people had been killed and more than 5,100 others had been injured.</p>
<p>The conflict began with attacks by the paramilitary group Rapid Support Forces (RSF) on government sites across Sudan. It has triggered a humanitarian catastrophe, prompting international sanctions and a global response from governments, including the United States, and from international organizations.</p>
<p>In light of the conflict, I wanted to revisit the conversation I had with Mohamed, in which we talked about the role that tech plays in democracy and revolution in the Middle East: to call attention to Sudan and to those who are working passionately to help protect and restore democracy there, to recall the possibilities and optimism for a better Sudanese future, and to help remind us of our interconnectedness with others around the world.</p>
<p>Mohamed Abubakr is a Sudanese human rights activist and peacemaker with a decade and a half of civil society experience. Since high school, he has founded and led organizations and initiatives focused on humanitarian, human rights, youth empowerment and peace programs across the Middle East and Africa (MEA), including in Darfur, South Sudan, Sudan, Egypt, Israel, the Palestinian Territories and beyond. Mohamed has also documented, reported and mobilized against human rights abuses across MEA, and since arriving in the United States has become a sought-after voice at the State Department and in Congress on policy and human rights issues in the region.</p>
<p>Mohamed Abubakr is the president of the <a href='https://www.amelproject.org/'>African and Middle Eastern Leadership Project</a> (AMEL). AMEL empowers young activists from the Middle Eastern and African region and connects them with one another and with peers, leaders and audiences in the global north, in order to advance human rights for all human beings. Using online platforms, social media networks, and technological innovation, AMEL provides training, mentoring, and advocacy to African and Middle Eastern activists, empowering them to step up their civil society activism while building the skills and experience to ascend to top leadership positions.</p>
]]></content:encoded>
                                    
        <enclosure length="86080146" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/99jak5/Mohamed_2_mixdown6mxce.mp3"/>
        <itunes:summary><![CDATA[These past few weeks, as violence and instability have escalated in Sudan, I’ve had one particular conversation on my mind, an episode of the show that I recorded a few years back with Mohamed Abubakr.
In April of this year, clashes broke out in cities, with the fighting concentrated around the capital city of Khartoum and the Darfur region. As of 27 May, at least 1,800 people had been killed and more than 5,100 others had been injured.
The conflict began with attacks by the paramilitary group Rapid Support Forces (RSF) on government sites across Sudan. It has triggered a humanitarian catastrophe, prompting international sanctions and a global response from governments, including the United States, and from international organizations.
In light of the conflict, I wanted to revisit the conversation I had with Mohamed, in which we talked about the role that tech plays in democracy and revolution in the Middle East: to call attention to Sudan and to those who are working passionately to help protect and restore democracy there, to recall the possibilities and optimism for a better Sudanese future, and to help remind us of our interconnectedness with others around the world.
Mohamed Abubakr is a Sudanese human rights activist and peacemaker with a decade and a half of civil society experience. Since high school, he has founded and led organizations and initiatives focused on humanitarian, human rights, youth empowerment and peace programs across the Middle East and Africa (MEA), including in Darfur, South Sudan, Sudan, Egypt, Israel, the Palestinian Territories and beyond. Mohamed has also documented, reported and mobilized against human rights abuses across MEA, and since arriving in the United States has become a sought-after voice at the State Department and in Congress on policy and human rights issues in the region.
Mohamed Abubakr is the president of the African and Middle Eastern Leadership Project (AMEL). AMEL empowers young activists from the Middle Eastern and African region and connects them with one another and with peers, leaders and audiences in the global north, in order to advance human rights for all human beings. Using online platforms, social media networks, and technological innovation, AMEL provides training, mentoring, and advocacy to African and Middle Eastern activists, empowering them to step up their civil society activism while building the skills and experience to ascend to top leadership positions.]]></itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3586</itunes:duration>
                <itunes:episode>111</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>These past few weeks, as violence and instability have escalated in Sudan, I’ve had one particular conversation on my mind, an episode of the show that I recorded a few years back with Mohamed Abubakr. In April of this year, clashes broke out in cities, with the fighting concentrated around the capital city of Khartoum and the Darfur region. As of 27 May, at least 1,800 people had been killed and more than 5,100 others had been injured. The conflict began with attacks by the paramilitary group Rapid Support Forces (RSF) on government sites across Sudan. It has triggered a humanitarian catastrophe, prompting international sanctions and a global response from governments, including the United States, and from international organizations. In light of the conflict, I wanted to revisit the conversation I had with Mohamed, in which we talked about the role that tech plays in democracy and revolution in the Middle East: to call attention to Sudan and to those who are working passionately to help protect and restore democracy there, to recall the possibilities and optimism for a better Sudanese future, and to help remind us of our interconnectedness with others around the world. Mohamed Abubakr is a Sudanese human rights activist and peacemaker with a decade and a half of civil society experience. Since high school, he has founded and led organizations and initiatives focused on humanitarian, human rights, youth empowerment and peace programs across the Middle East and Africa (MEA), including in Darfur, South Sudan, Sudan, Egypt, Israel, the Palestinian Territories and beyond. Mohamed has also documented, reported and mobilized against human rights abuses across MEA, and since arriving in the United States has become a sought-after voice at the State Department and in Congress on policy and human rights issues in the region. Mohamed Abubakr is the president of the African and Middle Eastern Leadership Project (AMEL). AMEL empowers young activists from the Middle Eastern and African region and connects them with one another and with peers, leaders and audiences in the global north, in order to advance human rights for all human beings. Using online platforms, social media networks, and technological innovation, AMEL provides training, mentoring, and advocacy to African and Middle Eastern activists, empowering them to step up their civil society activism while building the skills and experience to ascend to top leadership positions.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Compliance and Governance in the Age of Tech</title>
        <itunes:title>Compliance and Governance in the Age of Tech</itunes:title>
        <link>https://dmdonig.podbean.com/e/compliance-and-governance-in-the-age-of-tech/</link>
                    <comments>https://dmdonig.podbean.com/e/compliance-and-governance-in-the-age-of-tech/#comments</comments>        <pubDate>Fri, 26 May 2023 16:31:45 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/86ff6069-a662-39b7-95b9-26adc72018fc</guid>
                                    <description><![CDATA[<p>Today’s episode focuses on the growing field of compliance and regulation.</p>
<p>Compliance is a field of growing importance at both the national and international levels. In the EU, where emerging ethical principles governing tech have led governments to pass new laws and harms caused by the tech industry have provoked increasingly sharp public reactions, companies have realized that they must now abide by new reporting obligations that seek to monitor and prevent environmental mismanagement, sexual harassment, questionable lobbying and tax offenses. Companies are increasingly seeking to protect themselves by introducing effective compliance systems to meet these new requirements.</p>
<p>In the episode, I speak with <a href='https://www.shieldfc.com/authors/ofir-shabtai/'>Ofir Shabtai</a>, the Co-Founder & CTO at Shield, a company building systems that serve as internal watchdogs to monitor and ensure compliance. We talk about the emerging models of governance and the compliance movements mobilizing around the world, what compliance work looks like, and how technological systems intersect with compliance and governance.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Today’s episode focuses on the growing field of compliance and regulation.</p>
<p>Compliance is a field of growing importance at both the national and international levels. In the EU, where emerging ethical principles governing tech have led governments to pass new laws and harms caused by the tech industry have provoked increasingly sharp public reactions, companies have realized that they must now abide by new reporting obligations that seek to monitor and prevent environmental mismanagement, sexual harassment, questionable lobbying and tax offenses. Companies are increasingly seeking to protect themselves by introducing effective compliance systems to meet these new requirements.</p>
<p>In the episode, I speak with <a href='https://www.shieldfc.com/authors/ofir-shabtai/'>Ofir Shabtai</a>, the Co-Founder & CTO at Shield, a company building systems that serve as internal watchdogs to monitor and ensure compliance. We talk about the emerging models of governance and the compliance movements mobilizing around the world, what compliance work looks like, and how technological systems intersect with compliance and governance.</p>
]]></content:encoded>
                                    
        <enclosure length="70833464" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/573wxn/shabtai_mixdown.mp3"/>
        <itunes:summary>In the episode, I speak with Ofir Shabtai, the Co-Founder &amp; CTO at Shield, a company building compliance systems that can serve as internal watchdogs to monitor and ensure compliance. We talk about the emerging models of governance and the compliance movements mobilizing around the world, what compliance work looks like, and how technological systems intersect with compliance and governance.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2950</itunes:duration>
        <itunes:season>9</itunes:season>
        <itunes:episode>110</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Today’s episode focuses on the growing field of compliance and regulation. Compliance is a field of growing importance at both the national and international levels. In the EU, where emerging ethical principles governing tech have led governments to pass new laws and harms caused by the tech industry have provoked increasingly sharp public reactions, companies have realized that they must now abide by new reporting obligations that seek to monitor and prevent environmental mismanagement, sexual harassment, questionable lobbying and tax offenses. Companies are increasingly seeking to protect themselves by introducing effective compliance systems to meet these new requirements. In the episode, I speak with Ofir Shabtai, the Co-Founder &amp; CTO at Shield, a company building systems that serve as internal watchdogs to monitor and ensure compliance. We talk about the emerging models of governance and the compliance movements mobilizing around the world, what compliance work looks like, and how technological systems intersect with compliance and governance.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Returning the Power of AI to the People</title>
        <itunes:title>Returning the Power of AI to the People</itunes:title>
        <link>https://dmdonig.podbean.com/e/returning-the-power-of-ai-to-the-people/</link>
                    <comments>https://dmdonig.podbean.com/e/returning-the-power-of-ai-to-the-people/#comments</comments>        <pubDate>Fri, 19 May 2023 17:54:57 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/dabc2fbf-cb92-3703-94f4-9b8bc2a297d9</guid>
                                    <description><![CDATA[<p>Any long-time listeners of the show know that I’m passionate about accessibility and disability technology. The idea that technology can support an equitable world, and that creating a more accessible world makes things better not just for the group a technology is specifically designed for but for all of us, is key to me. That’s why I wanted to sit down with Suman Kanuganti, Co-founder and former CEO of <a href='https://aira.io/'>Aira</a> Tech, a high-tech startup whose work helped pioneer a way to bridge the information gap for people who are blind or low-vision. At Aira, Kanuganti transformed cities, airports, and universities across the country by helping to make those spaces accessible for people who are blind or low-vision.</p>
<p>After founding Aira, Suman went on to start another company, PersonalAI, which extends the principles of accessibility and mobility to the context of memory. In founding PersonalAI, Suman sought to create an AI to support memory and to return data ownership to the individual at this critical moment, when the assumptions that used to rule the web, where our personal data was the property of the companies whose products we use to move throughout digital space in our daily lives—Facebook, Google, WhatsApp—are in flux. In this conversation, we talk about the concept of memory and its transformation in the context of digital technologies; we talk about the challenges of, and possibilities for, creating accessibility technologies; and Suman shares his vision of returning data ownership to the people.</p>
<p>Suman Kanuganti is the CEO of <a href='https://www.personal.ai/'>PersonalAI</a>. He holds an MBA in Entrepreneurship / Entrepreneurial Studies from the UC San Diego Rady School of Management, a Master’s in Computer Engineering from the University of Missouri-Columbia, and a Bachelor’s in Electrical and Electronics Engineering from Kakatiya University, India.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Any long-time listeners of the show know that I’m passionate about accessibility and disability technology. The idea that technology can support an equitable world, and that creating a more accessible world makes things better not just for the group a technology is specifically designed for but for all of us, is key to me. That’s why I wanted to sit down with Suman Kanuganti, Co-founder and former CEO of <a href='https://aira.io/'>Aira</a> Tech, a high-tech startup whose work helped pioneer a way to bridge the information gap for people who are blind or low-vision. At Aira, Kanuganti transformed cities, airports, and universities across the country by helping to make those spaces accessible for people who are blind or low-vision.</p>
<p>After founding Aira, Suman went on to start another company, PersonalAI, which extends the principles of accessibility and mobility to the context of memory. In founding PersonalAI, Suman sought to create an AI to support memory and to return data ownership to the individual at this critical moment, when the assumptions that used to rule the web, where our personal data was the property of the companies whose products we use to move throughout digital space in our daily lives—Facebook, Google, WhatsApp—are in flux. In this conversation, we talk about the concept of memory and its transformation in the context of digital technologies; we talk about the challenges of, and possibilities for, creating accessibility technologies; and Suman shares his vision of returning data ownership to the people.</p>
<p>Suman Kanuganti is the CEO of <a href='https://www.personal.ai/'>PersonalAI</a>. He holds an MBA in Entrepreneurship / Entrepreneurial Studies from the UC San Diego Rady School of Management, a Master’s in Computer Engineering from the University of Missouri-Columbia, and a Bachelor’s in Electrical and Electronics Engineering from Kakatiya University, India.</p>
]]></content:encoded>
                                    
        <enclosure length="77121608" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/7fq8c2/Suman_2_mixdown8lkwf.mp3"/>
        <itunes:summary>Any long-time listeners of the show know that I’m passionate about accessibility and disability technology. The idea that technology can support an equitable world, and that creating a more accessible world makes things better not just for the group a technology is specifically designed for but for all of us, is key to me. That’s why I wanted to sit down with Suman Kanuganti, Co-founder and former CEO of Aira Tech, a high-tech startup whose work helped pioneer a way to bridge the information gap for people who are blind or low-vision. At Aira, Kanuganti transformed cities, airports, and universities across the country by helping to make those spaces accessible for people who are blind or low-vision.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3213</itunes:duration>
        <itunes:season>11</itunes:season>
        <itunes:episode>109</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Any long-time listeners of the show know that I’m passionate about accessibility and disability technology. The idea that technology can support an equitable world, and that creating a more accessible world makes things better not just for the group a technology is specifically designed for but for all of us, is key to me. That’s why I wanted to sit down with Suman Kanuganti, Co-founder and former CEO of Aira Tech, a high-tech startup whose work helped pioneer a way to bridge the information gap for people who are blind or low-vision. At Aira, Kanuganti transformed cities, airports, and universities across the country by helping to make those spaces accessible for people who are blind or low-vision. After founding Aira, Suman went on to start another company, PersonalAI, which extends the principles of accessibility and mobility to the context of memory. In founding PersonalAI, Suman sought to create an AI to support memory and to return data ownership to the individual at this critical moment, when the assumptions that used to rule the web, where our personal data was the property of the companies whose products we use to move throughout digital space in our daily lives—Facebook, Google, WhatsApp—are in flux. In this conversation, we talk about the concept of memory and its transformation in the context of digital technologies; we talk about the challenges of, and possibilities for, creating accessibility technologies; and Suman shares his vision of returning data ownership to the people. Suman Kanuganti is the CEO of PersonalAI. He holds an MBA in Entrepreneurship / Entrepreneurial Studies from the UC San Diego Rady School of Management, a Master’s in Computer Engineering from the University of Missouri-Columbia, and a Bachelor’s in Electrical and Electronics Engineering from Kakatiya University, India.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Indigeneity in the Digital Age</title>
        <itunes:title>Indigeneity in the Digital Age</itunes:title>
        <link>https://dmdonig.podbean.com/e/indigeneity-in-the-digital-age/</link>
                    <comments>https://dmdonig.podbean.com/e/indigeneity-in-the-digital-age/#comments</comments>        <pubDate>Fri, 12 May 2023 12:40:35 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/4a26e1ce-4a65-3713-9c56-afd543e872fc</guid>
                                    <description><![CDATA[<p>Welcome to another episode of the "22 Lessons on Ethics and Technology" series!</p>
<p>In this episode, I sit down with Jason Edward Lewis to talk about how Indigenous peoples are imagining their futures while drawing upon their heritage. How can we broaden discussions of technology and society to include Indigenous perspectives? How can we design and create AI that centers Indigenous concerns and accommodates a multiplicity of thought? And what role can art-led technology research and the use of computational art play in imagining the future?</p>
<p><a href='https://jasonlewis.org/'>Jason Edward Lewis</a> is a digital media theorist, poet, and software designer. He founded <a href='http://www.obxlabs.net/'>Obx Laboratory for Experimental Media</a>, where he conducts research/creation projects exploring computation as a creative and cultural material. Lewis is deeply committed to developing intriguing new forms of expression by working on conceptual, critical, creative and technical levels simultaneously. He is the University Research Chair in Computational Media and the Indigenous Future Imaginary as well as Professor of <a href='https://www.concordia.ca/finearts/design/programs/undergraduate/computation-arts-bfa.html'>Computation Arts</a> at <a href='http://www.concordia.ca/'>Concordia University</a>. Lewis was born and raised in northern California, and currently lives in Montreal.</p>
<p>Lewis directs the <a href='http://www.indigenousfutures.net/'>Initiative for Indigenous Futures</a>, and co-directs the <a href='https://milieux.concordia.ca/indigenous-futures-research-centre/'>Indigenous Futures Research Centre</a>, the <a href='http://www.indigenous-ai.net/'>Indigenous Protocol and AI Workshops</a>, the <a href='http://www.abtec.org/'>Aboriginal Territories in Cyberspace</a> research network, and the <a href='http://indigenousfutures.net/workshops/'>Skins Workshops on Aboriginal Storytelling and Video Game Design</a>.</p>
<p>Lewis’ creative and production work has been featured at Ars Electronica, Mobilefest, Elektra, Urban Screens, ISEA, SIGGRAPH, FILE and the Hawaiian International Film Festival, among other venues, and has been recognized with the inaugural Robert Coover Award for Best Work of Electronic Literature, two Prix Ars Electronica Honorable Mentions, several imagineNATIVE Best New Media awards and multiple solo exhibitions. His research interests include emergent media theory and history, and methodologies for conducting art-led technology research. In addition to being lead author on the award-winning “<a href='https://jods.mitpress.mit.edu/pub/lewis-arista-pechawis-kite'>Making Kin with the Machines</a>” essay and editor of the groundbreaking <a href='http://www.indigenous-ai.net/position-paper'>Indigenous Protocol and Artificial Intelligence Position Paper</a>, he has contributed to chapters in collected editions covering Indigenous futures, mobile media, video game design, machinima and experimental pedagogy with Indigenous communities.</p>
<p>Lewis has worked in a range of industrial research settings, including Interval Research, US West's Advanced Technology Group, and the Institute for Research on Learning, and, at the turn of the century, he founded and ran a research studio for the venture capital firm Arts Alliance.</p>
<p>Lewis is a Fellow of the Royal Society of Canada as well as a former Trudeau, Carnegie, and ISO-MIT Co-Creation Lab Fellow. He received a B.S. in Symbolic Systems (Cognitive Science) and B.A. in German Studies (Philosophy) from Stanford University, and an M.Phil. in Design from the Royal College of Art.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Welcome to another episode of the "22 Lessons on Ethics and Technology" series!</p>
<p>In this episode, I sit down with Jason Edward Lewis to talk about how Indigenous peoples are imagining their futures while drawing upon their heritage. How can we broaden discussions of technology and society to include Indigenous perspectives? How can we design and create AI that centers Indigenous concerns and accommodates a multiplicity of thought? And what role can art-led technology research and the use of computational art play in imagining the future?</p>
<p><a href='https://jasonlewis.org/'>Jason Edward Lewis</a> is a digital media theorist, poet, and software designer. He founded <a href='http://www.obxlabs.net/'>Obx Laboratory for Experimental Media</a>, where he conducts research/creation projects exploring computation as a creative and cultural material. Lewis is deeply committed to developing intriguing new forms of expression by working on conceptual, critical, creative and technical levels simultaneously. He is the University Research Chair in Computational Media and the Indigenous Future Imaginary as well as Professor of <a href='https://www.concordia.ca/finearts/design/programs/undergraduate/computation-arts-bfa.html'>Computation Arts</a> at <a href='http://www.concordia.ca/'>Concordia University</a>. Lewis was born and raised in northern California, and currently lives in Montreal.</p>
<p>Lewis directs the <a href='http://www.indigenousfutures.net/'>Initiative for Indigenous Futures</a>, and co-directs the <a href='https://milieux.concordia.ca/indigenous-futures-research-centre/'>Indigenous Futures Research Centre</a>, the <a href='http://www.indigenous-ai.net/'>Indigenous Protocol and AI Workshops</a>, the <a href='http://www.abtec.org/'>Aboriginal Territories in Cyberspace</a> research network, and the <a href='http://indigenousfutures.net/workshops/'>Skins Workshops on Aboriginal Storytelling and Video Game Design</a>.</p>
<p>Lewis’ creative and production work has been featured at Ars Electronica, Mobilefest, Elektra, Urban Screens, ISEA, SIGGRAPH, FILE and the Hawaiian International Film Festival, among other venues, and has been recognized with the inaugural Robert Coover Award for Best Work of Electronic Literature, two Prix Ars Electronica Honorable Mentions, several imagineNATIVE Best New Media awards and multiple solo exhibitions. His research interests include emergent media theory and history, and methodologies for conducting art-led technology research. In addition to being lead author on the award-winning “<a href='https://jods.mitpress.mit.edu/pub/lewis-arista-pechawis-kite'>Making Kin with the Machines</a>” essay and editor of the groundbreaking <a href='http://www.indigenous-ai.net/position-paper'><em>Indigenous Protocol and Artificial Intelligence Position Paper</em></a>, he has contributed to chapters in collected editions covering Indigenous futures, mobile media, video game design, machinima and experimental pedagogy with Indigenous communities.</p>
<p>Lewis has worked in a range of industrial research settings, including Interval Research, US West's Advanced Technology Group, and the Institute for Research on Learning, and, at the turn of the century, he founded and ran a research studio for the venture capital firm Arts Alliance.</p>
<p>Lewis is a Fellow of the Royal Society of Canada as well as a former Trudeau, Carnegie, and ISO-MIT Co-Creation Lab Fellow. He received a B.S. in Symbolic Systems (Cognitive Science) and B.A. in German Studies (Philosophy) from Stanford University, and an M.Phil. in Design from the Royal College of Art.</p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure length="58580998" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/8u8gqf/ET_-Ep_John_Lewis_v3_Mixdown_1ajtp7.mp3"/>
        <itunes:summary>In this episode, I sit down with Jason Edward Lewis to talk about how Indigenous peoples are imagining their futures while drawing upon their heritage. How can we broaden the discussions regarding technology and society to include Indigenous perspectives? How can we design and create AI that centers Indigenous concerns and accommodates a multiplicity of thought? And how can art-led technology research and computational art help us imagine the future?</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4880</itunes:duration>
        <itunes:season>11</itunes:season>
        <itunes:episode>108</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Welcome to another episode of the "22 Lessons on Ethics and Technology" series! In this episode, I sit down with Jason Edward Lewis to talk about how Indigenous peoples are imagining their futures while drawing upon their heritage. How can we broaden the discussions regarding technology and society to include Indigenous perspectives? How can we design and create AI that centers Indigenous concerns and accommodates a multiplicity of thought? And how can art-led technology research and computational art help us imagine the future? Jason Edward Lewis is a digital media theorist, poet, and software designer. He founded Obx Laboratory for Experimental Media, where he conducts research/creation projects exploring computation as a creative and cultural material. Lewis is deeply committed to developing intriguing new forms of expression by working on conceptual, critical, creative and technical levels simultaneously. He is the University Research Chair in Computational Media and the Indigenous Future Imaginary as well as Professor of Computation Arts at Concordia University. Lewis was born and raised in northern California, and currently lives in Montreal. Lewis directs the Initiative for Indigenous Futures, and co-directs the Indigenous Futures Research Centre, the Indigenous Protocol and AI Workshops, the Aboriginal Territories in Cyberspace research network, and the Skins Workshops on Aboriginal Storytelling and Video Game Design. Lewis’ creative and production work has been featured at Ars Electronica, Mobilefest, Elektra, Urban Screens, ISEA, SIGGRAPH, FILE and the Hawaiian International Film Festival, among other venues, and has been recognized with the inaugural Robert Coover Award for Best Work of Electronic Literature, two Prix Ars Electronica Honorable Mentions, several imagineNATIVE Best New Media awards and multiple solo exhibitions. 
His research interests include emergent media theory and history, and methodologies for conducting art-led technology research. In addition to being lead author on the award-winning “Making Kin with the Machines” essay and editor of the groundbreaking Indigenous Protocol and Artificial Intelligence Position Paper, he has contributed chapters to collected editions covering Indigenous futures, mobile media, video game design, machinima and experimental pedagogy with Indigenous communities. Lewis has worked in a range of industrial research settings, including Interval Research, US West's Advanced Technology Group, and the Institute for Research on Learning, and, at the turn of the century, he founded and ran a research studio for the venture capital firm Arts Alliance. Lewis is a Fellow of the Royal Society of Canada as well as a former Trudeau, Carnegie, and ISO-MIT Co-Creation Lab Fellow. He received a B.S. in Symbolic Systems (Cognitive Science) and B.A. in German Studies (Philosophy) from Stanford University, and an M.Phil. in Design from the Royal College of Art.  </itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Technology and Genocide: What the Holocaust can tell us about the perils of technological utopianism</title>
        <itunes:title>Technology and Genocide: What the Holocaust can tell us about the perils of technological utopianism</itunes:title>
        <link>https://dmdonig.podbean.com/e/technology-and-genocide-what-the-holocaust-can-tell-us-about-perils-of-technological-utopianism/</link>
                    <comments>https://dmdonig.podbean.com/e/technology-and-genocide-what-the-holocaust-can-tell-us-about-perils-of-technological-utopianism/#comments</comments>        <pubDate>Fri, 05 May 2023 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/ff3cca60-1f59-3fdd-8420-cec0f0e148eb</guid>
                                    <description><![CDATA[<p class="p1">Welcome back for another episode in the "22 Lessons on Ethics and Technology" series!</p>
<p class="p1">In this episode of the series, I speak to Dr. Eric Katz, and we take on the common utopian mythology of technology as inherently progressive, focusing specifically on the frequent slide from utopianism into terror. We talk about the uses of technology during the Holocaust and the specific ways in which scientists, architects, medical professionals, businessmen, and engineers participated in the planning and operation of the concentration and extermination camps that were the foundation of the 'final solution'. How can we think about the claims of technological progress in light of the Nazis' use of science and technology in their killing operations? And what can we learn from the Nazi past about how our commitment to a vision of technological progress can go horrifically wrong?</p>
<p class="p1"> </p>
<p class="font_8 wixui-rich-text__text"><a href='https://www.ippp.gmu.edu/eric-katz'>Dr. Eric Katz</a> is Professor Emeritus of Philosophy in the Department of Humanities at the New Jersey Institute of Technology. He received a B.A. in Philosophy from Yale in 1974 and a Ph.D. in Philosophy from Boston University in 1983. His research focuses on environmental ethics, philosophy of technology, engineering ethics, Holocaust studies, and the synergistic connections among these fields. He is especially known for his criticism of the policy of ecological restoration.</p>
<p class="font_8 wixui-rich-text__text">Dr. Katz has published over 80 articles and essays in these fields, as well as two books: Anne Frank’s Tree: Nature’s Confrontation with Technology, Domination, and the Holocaust (White Horse Press, 2015) and Nature as Subject: Human Obligation and Natural Community (Rowman and Littlefield, 1997), winner of the CHOICE book award for “Outstanding Academic Books for 1997.” He is the editor of Death by Design: Science, Technology, and Engineering in Nazi Germany (Pearson/Longman, 2006).  He has co-edited (with Andrew Light) the collection Environmental Pragmatism  (London: Routledge, 1996) and (with Andrew Light and David Rothenberg) the collection Beneath the Surface: Critical Essays in the Philosophy of Deep Ecology (Cambridge: MIT Press, 2000). He was the Book Review Editor of the journal Environmental Ethics from 1996-2014, and he was the founding Vice-President of the International Society for Environmental Ethics in 1990.  From 1991-2007 he was the Director of the Science, Technology, and Society (STS) program at NJIT. His current research projects involve science, technology, and environmental policy in Nazi Germany.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p class="p1">Welcome back for another episode in the "22 Lessons on Ethics and Technology" series!</p>
<p class="p1">In this episode of the series, I speak to Dr. Eric Katz, and we take on the common utopian mythology of technology as inherently progressive, focusing specifically on the frequent slide from utopianism into terror. We talk about the uses of technology during the Holocaust and the specific ways in which scientists, architects, medical professionals, businessmen, and engineers participated in the planning and operation of the concentration and extermination camps that were the foundation of the 'final solution'. How can we think about the claims of technological progress in light of the Nazis' use of science and technology in their killing operations? And what can we learn from the Nazi past about how our commitment to a vision of technological progress can go horrifically wrong?</p>
<p class="p1"> </p>
<p class="font_8 wixui-rich-text__text"><a href='https://www.ippp.gmu.edu/eric-katz'>Dr. Eric Katz</a> is Professor Emeritus of Philosophy in the Department of Humanities at the New Jersey Institute of Technology. He received a B.A. in Philosophy from Yale in 1974 and a Ph.D. in Philosophy from Boston University in 1983. His research focuses on environmental ethics, philosophy of technology, engineering ethics, Holocaust studies, and the synergistic connections among these fields. He is especially known for his criticism of the policy of ecological restoration.</p>
<p class="font_8 wixui-rich-text__text">Dr. Katz has published over 80 articles and essays in these fields, as well as two books: Anne Frank’s Tree: Nature’s Confrontation with Technology, Domination, and the Holocaust (White Horse Press, 2015) and Nature as Subject: Human Obligation and Natural Community (Rowman and Littlefield, 1997), winner of the CHOICE book award for “Outstanding Academic Books for 1997.” He is the editor of Death by Design: Science, Technology, and Engineering in Nazi Germany (Pearson/Longman, 2006).  He has co-edited (with Andrew Light) the collection Environmental Pragmatism  (London: Routledge, 1996) and (with Andrew Light and David Rothenberg) the collection Beneath the Surface: Critical Essays in the Philosophy of Deep Ecology (Cambridge: MIT Press, 2000). He was the Book Review Editor of the journal Environmental Ethics from 1996-2014, and he was the founding Vice-President of the International Society for Environmental Ethics in 1990.  From 1991-2007 he was the Director of the Science, Technology, and Society (STS) program at NJIT. His current research projects involve science, technology, and environmental policy in Nazi Germany.</p>
]]></content:encoded>
                                    
        <enclosure length="46635024" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/9idgh7/ET22LEIT-500-EricKatz-final_Mixdown_16ohl1.mp3"/>
        <itunes:summary>Welcome back for another episode in the “22 Lessons on Ethics and Technology” series! In this episode of the series, I speak to Dr. Eric Katz, and we take on the common utopian mythology of technology as inherently progressive, focusing specifically on the frequent slide from utopianism into terror. We talk about the uses of technology during the Holocaust and the specific ways in which scientists, architects, medical professionals, businessmen, and engineers participated in the planning and operation of the concentration and extermination camps that were the foundation of the ’final solution’. How can we think about the claims of technological progress in light of the Nazis’ use of science and technology in their killing operations? And what can we learn from the Nazi past about how our commitment to a vision of technological progress can go horrifically wrong?</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3884</itunes:duration>
        <itunes:season>11</itunes:season>
        <itunes:episode>107</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Welcome back for another episode in the "22 Lessons on Ethics and Technology" series! In this episode of the series, I speak to Dr. Eric Katz, and we take on the common utopian mythology of technology as inherently progressive, focusing specifically on the frequent slide from utopianism into terror. We talk about the uses of technology during the Holocaust and the specific ways in which scientists, architects, medical professionals, businessmen, and engineers participated in the planning and operation of the concentration and extermination camps that were the foundation of the 'final solution'. How can we think about the claims of technological progress in light of the Nazis' use of science and technology in their killing operations? And what can we learn from the Nazi past about how our commitment to a vision of technological progress can go horrifically wrong? Dr. Eric Katz is Professor Emeritus of Philosophy in the Department of Humanities at the New Jersey Institute of Technology. He received a B.A. in Philosophy from Yale in 1974 and a Ph.D. in Philosophy from Boston University in 1983. His research focuses on environmental ethics, philosophy of technology, engineering ethics, Holocaust studies, and the synergistic connections among these fields. He is especially known for his criticism of the policy of ecological restoration. Dr. Katz has published over 80 articles and essays in these fields, as well as two books: Anne Frank’s Tree: Nature’s Confrontation with Technology, Domination, and the Holocaust (White Horse Press, 2015) and Nature as Subject: Human Obligation and Natural Community (Rowman and Littlefield, 1997), winner of the CHOICE book award for “Outstanding Academic Books for 1997.” He is the editor of Death by Design: Science, Technology, and Engineering in Nazi Germany (Pearson/Longman, 2006).  
He has co-edited (with Andrew Light) the collection Environmental Pragmatism  (London: Routledge, 1996) and (with Andrew Light and David Rothenberg) the collection Beneath the Surface: Critical Essays in the Philosophy of Deep Ecology (Cambridge: MIT Press, 2000). He was the Book Review Editor of the journal Environmental Ethics from 1996-2014, and he was the founding Vice-President of the International Society for Environmental Ethics in 1990.  From 1991-2007 he was the Director of the Science, Technology, and Society (STS) program at NJIT. His current research projects involve science, technology, and environmental policy in Nazi Germany.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Instituting Integrity: The rise of the integrity worker collective</title>
        <itunes:title>Instituting Integrity: The rise of the integrity worker collective</itunes:title>
        <link>https://dmdonig.podbean.com/e/instituting-integrity-the-rise-of-the-integrity-worker-collective/</link>
                    <comments>https://dmdonig.podbean.com/e/instituting-integrity-the-rise-of-the-integrity-worker-collective/#comments</comments>        <pubDate>Fri, 28 Apr 2023 09:53:09 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/b4d67f54-4bc0-389c-8d04-f5de8779e901</guid>
                                    <description><![CDATA[<p>Today I’m sitting down with Talha Baig to talk about an organization that is new to me, the Integrity Institute. On the show, I’ve spent a lot of time talking about what I see as a new workforce emerging in the tech sector: people working to understand, assess, and mitigate some of the harms caused by technologies. That’s why I was excited to learn about the <a href='https://integrityinstitute.org/'>Integrity Institute</a>, a cohort of engineers, product managers, researchers, analysts, data scientists, operations specialists, policy experts and more, who are coming together to leverage their combined experience and their understanding of the systemic causes of problems on the social internet to help mitigate these problems. They want to bring this experience and expertise directly to the people theorizing, building, and governing the social internet. So I wanted to talk to Talha, who hosts the Trust in Tech podcast out of the institute, about the concept, the function, and the future of integrity work.</p>
<p><a href='https://www.linkedin.com/in/talha-baig'>Talha Baig</a> is an expert on using machine learning to address platform integrity issues. He has spent three years as a Machine Learning Engineer reducing human, drug, and weapons trafficking on Facebook Marketplace. He has insider knowledge on how platforms use AI for both good and bad, and shares his thoughts on his new podcast <a href='https://integrityinstitute.org/podcast'>Trust in Tech</a>, where he has in-depth conversations about the social internet with other platform integrity workers. They discuss the intersections between internet, society, culture, and philosophy with the goal of helping individuals, societies, and democracies to thrive.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Today I’m sitting down with Talha Baig to talk about an organization that is new to me, the Integrity Institute. On the show, I’ve spent a lot of time talking about what I see as a new workforce emerging in the tech sector: people working to understand, assess, and mitigate some of the harms caused by technologies. That’s why I was excited to learn about the <a href='https://integrityinstitute.org/'>Integrity Institute</a>, a cohort of engineers, product managers, researchers, analysts, data scientists, operations specialists, policy experts and more, who are coming together to leverage their combined experience and their understanding of the systemic causes of problems on the social internet to help mitigate these problems. They want to bring this experience and expertise directly to the people theorizing, building, and governing the social internet. So I wanted to talk to Talha, who hosts the Trust in Tech podcast out of the institute, about the concept, the function, and the future of integrity work.</p>
<p><a href='https://www.linkedin.com/in/talha-baig'>Talha Baig</a> is an expert on using machine learning to address platform integrity issues. He has spent three years as a Machine Learning Engineer reducing human, drug, and weapons trafficking on Facebook Marketplace. He has insider knowledge on how platforms use AI for both good and bad, and shares his thoughts on his new podcast <a href='https://integrityinstitute.org/podcast'>Trust in Tech</a>, where he has in-depth conversations about the social internet with other platform integrity workers. They discuss the intersections between internet, society, culture, and philosophy with the goal of helping individuals, societies, and democracies to thrive.</p>
]]></content:encoded>
                                    
        <enclosure length="96494192" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/sxbky2/Talha_Baig_Episode_mixdown8llqk.mp3"/>
        <itunes:summary>Today I’m sitting down with Talha Baig to talk about an organization that is new to me, the Integrity Institute. On the show, I’ve spent a lot of time talking about what I see as a new workforce emerging in the tech sector: people working to understand, assess, and mitigate some of the harms caused by technologies. That’s why I was excited to learn about the Integrity Institute, a cohort of engineers, product managers, researchers, analysts, data scientists, operations specialists, policy experts and more, who are coming together to leverage their combined experience and their understanding of the systemic causes of problems on the social internet to help mitigate these problems. They want to bring this experience and expertise directly to the people theorizing, building, and governing the social internet. So I wanted to talk to Talha, who hosts the Trust in Tech podcast out of the institute, about the concept, the function, and the future of integrity work.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4019</itunes:duration>
        <itunes:season>11</itunes:season>
        <itunes:episode>106</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Today I’m sitting down with Talha Baig to talk about an organization that is new to me, the Integrity Institute. On the show, I’ve spent a lot of time talking about what I see as a new workforce emerging in the tech sector: people working to understand, assess, and mitigate some of the harms caused by technologies. That’s why I was excited to learn about the Integrity Institute, a cohort of engineers, product managers, researchers, analysts, data scientists, operations specialists, policy experts and more, who are coming together to leverage their combined experience and their understanding of the systemic causes of problems on the social internet to help mitigate these problems. They want to bring this experience and expertise directly to the people theorizing, building, and governing the social internet. So I wanted to talk to Talha, who hosts the Trust in Tech podcast out of the institute, about the concept, the function, and the future of integrity work. Talha Baig is an expert on using machine learning to address platform integrity issues. He has spent three years as a Machine Learning Engineer reducing human, drug, and weapons trafficking on Facebook Marketplace. He has insider knowledge on how platforms use AI for both good and bad, and shares his thoughts on his new podcast Trust in Tech, where he has in-depth conversations about the social internet with other platform integrity workers. They discuss the intersections between internet, society, culture, and philosophy with the goal of helping individuals, societies, and democracies to thrive.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>How We Breathe: how technology is changing approaches to ventilation</title>
        <itunes:title>How We Breathe: how technology is changing approaches to ventilation</itunes:title>
        <link>https://dmdonig.podbean.com/e/how-we-breathe-how-technology-is-changing-approaches-to-ventilation/</link>
                    <comments>https://dmdonig.podbean.com/e/how-we-breathe-how-technology-is-changing-approaches-to-ventilation/#comments</comments>        <pubDate>Fri, 21 Apr 2023 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/f52aaa84-2409-3131-8e77-f8d19e8e6e9c</guid>
                                    <description><![CDATA[<p>Between 2020 and 2022, I spent a lot of time reading about ventilators. So did a lot of the country. News coverage of the pandemic talked about everything from the serious shortage in ventilators around the country to new technologies available that might help save lives by helping victims of the virus breathe.</p>
<p>From the pandemic that started in March of 2020, to the wildfires in California in August of that same year that made it difficult to breathe the outside air, I have spent a lot of time over the last few years thinking about breathing, that simple and essential activity that we’ll do, mostly unconsciously, throughout our lives, and how that activity of breathing is, at this moment in history, connected to technology.</p>
<p>That’s why I wanted to talk to Aurika Savickaite, an Acute Care Nurse Practitioner and medical professional at the University of Chicago who has spent her entire career providing top-quality patient care and advocating for the use of helmet-based ventilation to improve healthcare outcomes.</p>
<p>Aurika is a recognized expert in noninvasive ventilation via the helmet interface and has garnered widespread respect within the medical community for her passionate work in this area. In 2014, she was involved in a successful three-year trial study at the University of Chicago Medical Center that tested the effectiveness of helmet-based ventilation in the ICU. Drawing on this experience, she authored a capstone paper on Noninvasive Positive Pressure Ventilation for the Treatment of Acute Respiratory Failure in Immunocompromised Patients, which has been instrumental in raising awareness about the benefits of this technology.</p>
<p>In March 2020, Aurika founded <a href='http://HelmetBasedVentilation.com'>HelmetBasedVentilation.com</a>, a website that has become a valuable resource for medical professionals seeking to learn more about the benefits of helmets and their use in treating patients with respiratory distress. Aurika continues to actively manage the website and update it with the latest research and information about helmet-based ventilation.</p>
<p>Today, Aurika is dedicated to educating clinicians about the use of helmet-based ventilation and she believes that the evidence-based information she provides can help save lives, shorten ICU stays, lower the workload for medical staff, and improve overall healthcare outcomes. Her goal is to promote the use of this technology in both ICU and non-ICU settings and help to make it more widely available to those who need it.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Between 2020 and 2022, I spent a lot of time reading about ventilators. So did a lot of the country. News coverage of the pandemic talked about everything from the serious shortage in ventilators around the country to new technologies available that might help save lives by helping victims of the virus breathe.</p>
<p>From the pandemic that started in March of 2020, to the wildfires in California in August of that same year that made it difficult to breathe the outside air, I have spent a lot of time over the last few years thinking about breathing, that simple and essential activity that we’ll do, mostly unconsciously, throughout our lives, and how that activity of breathing is, at this moment in history, connected to technology.</p>
<p>That’s why I wanted to talk to Aurika Savickaite, an Acute Care Nurse Practitioner and medical professional at the University of Chicago who has spent her entire career providing top-quality patient care and advocating for the use of helmet-based ventilation to improve healthcare outcomes.</p>
<p>Aurika is a recognized expert in noninvasive ventilation via the helmet interface and has garnered widespread respect within the medical community for her passionate work in this area. In 2014, she was involved in a successful three-year trial study at the University of Chicago Medical Center that tested the effectiveness of helmet-based ventilation in the ICU. Drawing on this experience, she authored a capstone paper on Noninvasive Positive Pressure Ventilation for the Treatment of Acute Respiratory Failure in Immunocompromised Patients, which has been instrumental in raising awareness about the benefits of this technology.</p>
<p>In March 2020, Aurika founded <a href='http://HelmetBasedVentilation.com'>HelmetBasedVentilation.com</a>, a website that has become a valuable resource for medical professionals seeking to learn more about the benefits of helmets and their use in treating patients with respiratory distress. Aurika continues to actively manage the website and update it with the latest research and information about helmet-based ventilation.</p>
<p>Today, Aurika is dedicated to educating clinicians about the use of helmet-based ventilation and she believes that the evidence-based information she provides can help save lives, shorten ICU stays, lower the workload for medical staff, and improve overall healthcare outcomes. Her goal is to promote the use of this technology in both ICU and non-ICU settings and help to make it more widely available to those who need it.</p>
]]></content:encoded>
                                    
        <enclosure length="82116060" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/6niv94/Sauvikaite_mixdown.mp3"/>
        <itunes:summary>Between 2020 and 2022, I spent a lot of time reading about ventilators. So did a lot of the country. News coverage of the pandemic talked about everything from the serious shortage in ventilators around the country to new technologies available that might help save lives by helping victims of the virus breathe.

From the pandemic that started in March of 2020, to the wildfires in California in August of that same year that made it difficult to breathe the outside air, I have spent a lot of time over the last few years thinking about breathing, that simple and essential activity that we’ll do, mostly unconsciously, throughout our lives, and how that activity of breathing is, at this moment in history, connected to technology.

That’s why I wanted to talk to Aurika Savickaite, an Acute Care Nurse Practitioner and medical professional at the University of Chicago who has spent her entire career providing top-quality patient care and advocating for the use of helmet-based ventilation to improve healthcare outcomes.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3421</itunes:duration>
        <itunes:season>11</itunes:season>
        <itunes:episode>105</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Between 2020 and 2022, I spent a lot of time reading about ventilators. So did a lot of the country. News coverage of the pandemic talked about everything from the serious shortage in ventilators around the country to new technologies available that might help save lives by helping victims of the virus breathe. From the pandemic that started in March of 2020, to the wildfires in California in August of that same year that made it difficult to breathe the outside air, I have spent a lot of time over the last few years thinking about breathing, that simple and essential activity that we’ll do, mostly unconsciously, throughout our lives, and how that activity of breathing is, at this moment in history, connected to technology. That’s why I wanted to talk to Aurika Savickaite, an Acute Care Nurse Practitioner and medical professional at the University of Chicago who has spent her entire career providing top-quality patient care and advocating for the use of helmet-based ventilation to improve healthcare outcomes. Aurika is a recognized expert in noninvasive ventilation via the helmet interface and has garnered widespread respect within the medical community for her passionate work in this area. In 2014, she was involved in a successful three-year trial study at the University of Chicago Medical Center that tested the effectiveness of helmet-based ventilation in the ICU. Drawing on this experience, she authored a capstone paper on Noninvasive Positive Pressure Ventilation for the Treatment of Acute Respiratory Failure in Immunocompromised Patients, which has been instrumental in raising awareness about the benefits of this technology. In March 2020, Aurika founded HelmetBasedVentilation.com, a website that has become a valuable resource for medical professionals seeking to learn more about the benefits of helmets and their use in treating patients with respiratory distress. 
Aurika continues to actively manage the website and update it with the latest research and information about helmet-based ventilation. Today, Aurika is dedicated to educating clinicians about the use of helmet-based ventilation and she believes that the evidence-based information she provides can help save lives, shorten ICU stays, lower the workload for medical staff, and improve overall healthcare outcomes. Her goal is to promote the use of this technology in both ICU and non-ICU settings and help to make it more widely available to those who need it.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Technically Human Rights: How technologies are changing the state of human rights</title>
        <itunes:title>Technically Human Rights: How technologies are changing the state of human rights</itunes:title>
        <link>https://dmdonig.podbean.com/e/technically-human-rights-how-technologies-are-changing-the-state-of-human-rights/</link>
                    <comments>https://dmdonig.podbean.com/e/technically-human-rights-how-technologies-are-changing-the-state-of-human-rights/#comments</comments>        <pubDate>Fri, 14 Apr 2023 13:34:41 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/e804c10a-40de-3456-a6a1-cc1ef9b92f4d</guid>
                                    <description><![CDATA[<p>Welcome back to another episode in the “22 Lessons on Ethics and Technology for the 21st Century” series. In this episode of the series, we take a deep dive into the history of how technology intersects with human rights. My thinking on ethics and technology has human rights at its foundations, so I was particularly excited to sit down with Dr. Jay Aronson, one of the leading thinkers on science, technology, and human rights. We explore how technologies have coincided with the development of human rights in ethical and political terms, and we look at the role that technologies play in our contemporary moment in enforcing human rights--and violating them.</p>
<p>Dr. Jay Aronson is the founder and director of the <a href='http://www.cmu.edu/chrs/index.html'>Center for Human Rights Science</a> at Carnegie Mellon University. He is also Professor of Science, Technology, and Society in the History Department. Aronson’s research and teaching examine the interactions of science, technology, law, media, and human rights in a variety of contexts. His current project focuses on the documentation and analysis of police-involved fatalities and deaths in custody in the United States. This work is being done through collaborations with the Pennsylvania Prison Society and Dr. Roger A. Mitchell, the Chief Medical Examiner of Washington, DC. In addition, he maintains an active interest in the use of digital evidence (especially video) in human rights investigations. In this context, he primarily facilitates partnerships between computer scientists and human rights practitioners to develop better tools and methods for acquiring, authenticating, analyzing, and archiving human rights media. Previously, Aronson spent nearly a decade examining the ethical, political, and social dimensions of post-conflict and post-disaster identification of the missing and disappeared in collaboration with a team of anthropologists, bioethicists, and forensic scientists he assembled. This work built on his doctoral dissertation, a study of the development of forensic DNA profiling within the American criminal justice system. His recent book, <a href='https://www.hup.harvard.edu/catalog.php?isbn=9780674971493'>Who Owns the Dead? The Science and Politics of Death at Ground Zero</a> (Harvard University Press, 2016), which analyzes the recovery, identification, and memorialization of the victims of the 9/11 World Trade Center attacks, is a culmination of this effort. 
Aronson has also been involved in a variety of projects with colleagues from statistics, political science, and conflict monitoring to improve the quality of civilian casualty recording and estimation in times of conflict. Aronson received his Ph.D. in the History of Science and Technology from the University of Minnesota and was both a pre- and postdoctoral fellow at Harvard University’s John F. Kennedy School of Government. His work is funded by generous grants from the MacArthur Foundation, the Oak Foundation, and the Open Society Foundations.</p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Welcome back to another episode in the “22 Lessons on Ethics and Technology for the 21st Century” series. In this episode of the series, we take a deep dive into the history of how technology intersects with human rights. My thinking on ethics and technology has human rights at its foundations, so I was particularly excited to sit down with Dr. Jay Aronson, one of the leading thinkers on science, technology, and human rights. We explore how technologies have coincided with the development of human rights in ethical and political terms, and we look at the role that technologies play in our contemporary moment in enforcing human rights--and violating them.</p>
<p>Dr. Jay Aronson is the founder and director of the <a href='http://www.cmu.edu/chrs/index.html'>Center for Human Rights Science</a> at Carnegie Mellon University. He is also Professor of Science, Technology, and Society in the History Department. Aronson’s research and teaching examine the interactions of science, technology, law, media, and human rights in a variety of contexts. His current project focuses on the documentation and analysis of police-involved fatalities and deaths in custody in the United States. This work is being done through collaborations with the Pennsylvania Prison Society and Dr. Roger A. Mitchell, the Chief Medical Examiner of Washington, DC. In addition, he maintains an active interest in the use of digital evidence (especially video) in human rights investigations. In this context, he primarily facilitates partnerships between computer scientists and human rights practitioners to develop better tools and methods for acquiring, authenticating, analyzing, and archiving human rights media. Previously, Aronson spent nearly a decade examining the ethical, political, and social dimensions of post-conflict and post-disaster identification of the missing and disappeared in collaboration with a team of anthropologists, bioethicists, and forensic scientists he assembled. This work built on his doctoral dissertation, a study of the development of forensic DNA profiling within the American criminal justice system. His recent book, <em><a href='https://www.hup.harvard.edu/catalog.php?isbn=9780674971493'>Who Owns the Dead? The Science and Politics of Death at Ground Zero</a></em> (Harvard University Press, 2016), which analyzes the recovery, identification, and memorialization of the victims of the 9/11 World Trade Center attacks, is a culmination of this effort. 
Aronson has also been involved in a variety of projects with colleagues from statistics, political science, and conflict monitoring to improve the quality of civilian casualty recording and estimation in times of conflict. Aronson received his Ph.D. in the History of Science and Technology from the University of Minnesota and was both a pre- and postdoctoral fellow at Harvard University’s John F. Kennedy School of Government. His work is funded by generous grants from the MacArthur Foundation, the Oak Foundation, and the Open Society Foundations.</p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure length="50275574" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/h6buim/ET22LEIT-500-jayarsonson_Mixdown_1_luffed646hf.mp3"/>
        <itunes:summary>Welcome back to another episode in the “22 Lessons on Ethics and Technology for the 21st Century” series. In this episode of the series, we take a deep dive into the history of how technology intersects with human rights. My thinking on ethics and technology has human rights at its foundations, so I was particularly excited to sit down with Dr. Jay Aronson, one of the leading thinkers on science, technology, and human rights. We explore how technologies have coincided with the development of human rights in ethical and political terms, and we look at the role that technologies play in our contemporary moment in enforcing human rights--and violating them.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4187</itunes:duration>
        <itunes:season>11</itunes:season>
        <itunes:episode>104</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Welcome back to another episode in the “22 Lessons on Ethics and Technology for the 21st Century” series. In this episode of the series, we take a deep dive into the history of how technology intersects with human rights. My thinking on ethics and technology has human rights at its foundations, so I was particularly excited to sit down with Dr. Jay Aronson, one of the leading thinkers on science, technology, and human rights. We explore how technologies have coincided with the development of human rights in ethical and political terms, and we look at the role that technologies play in our contemporary moment in enforcing human rights--and violating them. Dr. Jay Aronson is the founder and director of the Center for Human Rights Science at Carnegie Mellon University. He is also Professor of Science, Technology, and Society in the History Department. Aronson’s research and teaching examine the interactions of science, technology, law, media, and human rights in a variety of contexts. His current project focuses on the documentation and analysis of police-involved fatalities and deaths in custody in the United States. This work is being done through collaborations with the Pennsylvania Prison Society and Dr. Roger A. Mitchell, the Chief Medical Examiner of Washington, DC. In addition, he maintains an active interest in the use of digital evidence (especially video) in human rights investigations. In this context, he primarily facilitates partnerships between computer scientists and human rights practitioners to develop better tools and methods for acquiring, authenticating, analyzing, and archiving human rights media. 
Previously, Aronson spent nearly a decade examining the ethical, political, and social dimensions of post-conflict and post-disaster identification of the missing and disappeared in collaboration with a team of anthropologists, bioethicists, and forensic scientists he assembled. This work built on his doctoral dissertation, a study of the development of forensic DNA profiling within the American criminal justice system. His recent book, Who Owns the Dead? The Science and Politics of Death at Ground Zero (Harvard University Press, 2016), which analyzes the recovery, identification, and memorialization of the victims of the 9/11 World Trade Center attacks, is a culmination of this effort. Aronson has also been involved in a variety of projects with colleagues from statistics, political science, and conflict monitoring to improve the quality of civilian casualty recording and estimation in times of conflict. Aronson received his Ph.D. in the History of Science and Technology from the University of Minnesota and was both a pre- and postdoctoral fellow at Harvard University’s John F. Kennedy School of Government. His work is funded by generous grants from the MacArthur Foundation, the Oak Foundation, and the Open Society Foundations.  </itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Global Technological Imaginary: Sci-Fi, Tech, and the Ethics of Representation</title>
        <itunes:title>The Global Technological Imaginary: Sci-Fi, Tech, and the Ethics of Representation</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-global-technological-imaginary-sci-fi-tech-and-the-ethics-of-representation/</link>
                    <comments>https://dmdonig.podbean.com/e/the-global-technological-imaginary-sci-fi-tech-and-the-ethics-of-representation/#comments</comments>        <pubDate>Fri, 07 Apr 2023 09:34:10 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/16af546d-2690-3012-bb9b-9ac096786176</guid>
                                    <description><![CDATA[<p>Welcome back to a brand new season of “Technically Human!” Today’s episode features another conversation in the "22 Lessons on Ethics and Technology" series.</p>
<p>I teach science fiction as a way of thinking about ethics and technology, because I fundamentally believe that before we can build anything, we first have to imagine it. Science fiction is at the core of so many of our technological innovations, offering us utopian visions of how the world could be, or how our values might be captured and catapulted by new technologies—or dystopias about how technology’s promise can go terribly, horribly wrong. So I was thrilled to talk with Professor <a href='https://iac.gatech.edu/people/person/lisa-yaszek'>Lisa Yaszek</a>, one of the world’s leading experts on science fiction, for this episode about the role of science fiction in creating a global imaginary about technology that crosses centuries, continents, and cultures.</p>
<p>Dr. Lisa Yaszek is Regents Professor of Science Fiction Studies in the <a href='https://www.lmc.gatech.edu/'>School of Literature, Media, and Communication</a> at <a href='http://www.gatech.edu/'>Georgia Tech</a>. She is particularly interested in issues of gender, race, and science and technology in science fiction across media as well as the recovery of lost voices in science fiction history and the discovery of new voices from around the globe.  </p>
<p>Dr. Yaszek’s books include <a href='https://www.amazon.com/Self-Wired-Technology-Subjectivity-Contemporary/dp/0415866960/ref=mt_paperback?_encoding=UTF8&me='>The Self-Wired: Technology and Subjectivity in Contemporary American Narrative</a> (Routledge 2002/2014); <a href='https://www.amazon.com/Galactic-Suburbia-Recovering-Science-Fiction/dp/0814251641/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=&sr='>Galactic Suburbia: Recovering Women’s Science Fiction</a> (Ohio State, 2008); <a href='https://www.amazon.com/Sisters-Tomorrow-Science-Fiction-Classics/dp/0819576247/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=1520365015&sr=1-1'>Sisters of Tomorrow: The First Women of Science Fiction</a> (Wesleyan 2016); and <a href='https://www.amazon.com/Literary-Afrofuturism-Twenty-First-Century-Suns/dp/0814255965'>Literary Afrofuturism in the Twenty-First Century</a> (OSUP Fall 2020). Her ideas about science fiction as the premier story form of modernity have been featured in <a href='https://www.washingtonpost.com/entertainment/aliens-as-immigrants-how-arrival-became-the-latest-political-sci-fi-film/2017/02/23/b9975f08-f83e-11e6-bf01-d47f8cf9b643_story.html?utm_term=.8511092322a0'>The Washington Post</a>, <a href='http://www.foodandwine.com/news/spice-science-fiction'>Food and Wine Magazine</a>, and <a href='https://www.usatoday.com/story/life/movies/2018/02/20/why-smart-sci-fi-struggles-find-its-audience-star-wars-world/351380002/'>USA Today</a> and on the AMC miniseries, <a href='https://www.amc.com/shows/james-camerons-story-of-science-fiction'>James Cameron's Story of Science Fiction</a>. A past president of the <a href='http://www.sfra.org/'>Science Fiction Research Association</a>, Yaszek currently serves as an editor for the <a href='https://www.loa.org/'>Library of America</a> and as a juror for the <a href='http://www.sfcenter.ku.edu/campbell.htm'>John W. Campbell</a> and <a href='http://www.eugiefoster.com/eugieaward'>Eugie Foster</a> Science Fiction Awards.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Welcome back to a brand new season of “Technically Human!” Today’s episode features another conversation in the "22 Lessons on Ethics and Technology" series.</p>
<p>I teach science fiction as a way of thinking about ethics and technology, because I fundamentally believe that before we can build anything, we first have to imagine it. Science fiction is at the core of so many of our technological innovations, offering us utopian visions of how the world could be, or how our values might be captured and catapulted by new technologies—or dystopias about how technology’s promise can go terribly, horribly wrong. So I was thrilled to talk with Professor <a href='https://iac.gatech.edu/people/person/lisa-yaszek'>Lisa Yaszek</a>, one of the world’s leading experts on science fiction, for this episode about the role of science fiction in creating a global imaginary about technology that crosses centuries, continents, and cultures.</p>
<p>Dr. Lisa Yaszek is Regents Professor of Science Fiction Studies in the <a href='https://www.lmc.gatech.edu/'>School of Literature, Media, and Communication</a> at <a href='http://www.gatech.edu/'>Georgia Tech</a>. She is particularly interested in issues of gender, race, and science and technology in science fiction across media as well as the recovery of lost voices in science fiction history and the discovery of new voices from around the globe.  </p>
<p>Dr. Yaszek’s books include <em><a href='https://www.amazon.com/Self-Wired-Technology-Subjectivity-Contemporary/dp/0415866960/ref=mt_paperback?_encoding=UTF8&me='>The Self-Wired: Technology and Subjectivity in Contemporary American Narrative</a></em> (Routledge 2002/2014); <em><a href='https://www.amazon.com/Galactic-Suburbia-Recovering-Science-Fiction/dp/0814251641/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=&sr='>Galactic Suburbia: Recovering Women’s Science Fiction</a></em> (Ohio State, 2008); <em><a href='https://www.amazon.com/Sisters-Tomorrow-Science-Fiction-Classics/dp/0819576247/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=1520365015&sr=1-1'>Sisters of Tomorrow: The First Women of Science Fiction</a></em> (Wesleyan 2016); and <a href='https://www.amazon.com/Literary-Afrofuturism-Twenty-First-Century-Suns/dp/0814255965'>Literary Afrofuturism in the Twenty-First Century</a> (OSUP Fall 2020). Her ideas about science fiction as the premier story form of modernity have been featured in <em><a href='https://www.washingtonpost.com/entertainment/aliens-as-immigrants-how-arrival-became-the-latest-political-sci-fi-film/2017/02/23/b9975f08-f83e-11e6-bf01-d47f8cf9b643_story.html?utm_term=.8511092322a0'>The Washington Post</a></em>, <em><a href='http://www.foodandwine.com/news/spice-science-fiction'>Food and Wine Magazine</a></em>, and <em><a href='https://www.usatoday.com/story/life/movies/2018/02/20/why-smart-sci-fi-struggles-find-its-audience-star-wars-world/351380002/'>USA Today</a></em> and on the AMC miniseries, <em><a href='https://www.amc.com/shows/james-camerons-story-of-science-fiction'>James Cameron's Story of Science Fiction</a>.</em> A past president of the <a href='http://www.sfra.org/'>Science Fiction Research Association</a>, Yaszek currently serves as an editor for the <a href='https://www.loa.org/'>Library of America</a> and as a juror for the <a href='http://www.sfcenter.ku.edu/campbell.htm'>John W. Campbell</a> and <a href='http://www.eugiefoster.com/eugieaward'>Eugie Foster</a> Science Fiction Awards.</p>
]]></content:encoded>
                                    
        <enclosure length="48275215" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/wq7yzz/ET22LEIT-500-lISAyAZEK_Mixdown_1_luffed8u52n.mp3"/>
        <itunes:summary>Welcome back to a brand new season of “Technically Human!” Today’s episode features another conversation in the “22 Lessons on Ethics and Technology” series. In this episode, I sit down with Professor Lisa Yaszek, one of the world’s leading experts on science fiction, to talk about the role of science fiction in creating a global imaginary about technology that crosses centuries, continents, and cultures.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4021</itunes:duration>
        <itunes:season>11</itunes:season>
        <itunes:episode>103</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Welcome back to a brand new season of “Technically Human!” Today’s episode features another conversation in the "22 Lessons on Ethics and Technology" series. I teach science fiction as a way of thinking about ethics and technology, because I fundamentally believe that before we can build anything, we first have to imagine it. Science fiction is at the core of so many of our technological innovations, offering us utopian visions of how the world could be, or how our values might be captured and catapulted by new technologies—or dystopias about how technology’s promise can go terribly, horribly wrong. So I was thrilled to talk with Professor Lisa Yaszek, one of the world’s leading experts on science fiction, for this episode about the role of science fiction in creating a global imaginary about technology that crosses centuries, continents, and cultures. Dr. Lisa Yaszek is Regents Professor of Science Fiction Studies in the School of Literature, Media, and Communication at Georgia Tech. She is particularly interested in issues of gender, race, and science and technology in science fiction across media as well as the recovery of lost voices in science fiction history and the discovery of new voices from around the globe. Dr. Yaszek’s books include The Self-Wired: Technology and Subjectivity in Contemporary American Narrative (Routledge 2002/2014); Galactic Suburbia: Recovering Women’s Science Fiction (Ohio State, 2008); Sisters of Tomorrow: The First Women of Science Fiction (Wesleyan 2016); and Literary Afrofuturism in the Twenty-First Century (OSUP Fall 2020). Her ideas about science fiction as the premier story form of modernity have been featured in The Washington Post, Food and Wine Magazine, and USA Today and on the AMC miniseries, James Cameron's Story of Science Fiction. 
A past president of the Science Fiction Research Association, Yaszek currently serves as an editor for the Library of America and as a juror for the John W. Campbell and Eugie Foster Science Fiction Awards.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Zoom Fatigue: Distance Learning and Social Engagement in the Age of Social Distancing</title>
        <itunes:title>Zoom Fatigue: Distance Learning and Social Engagement in the Age of Social Distancing</itunes:title>
        <link>https://dmdonig.podbean.com/e/zoom-fatigue-distance-learning-and-social-engagement-in-the-age-of-social-distancing/</link>
                    <comments>https://dmdonig.podbean.com/e/zoom-fatigue-distance-learning-and-social-engagement-in-the-age-of-social-distancing/#comments</comments>        <pubDate>Fri, 10 Mar 2023 08:30:25 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/ba885ce1-29a5-3613-98b0-9294120c01d7</guid>
                                    <description><![CDATA[<p>Welcome back to another episode of the “22 Lessons on Ethics and Technology” series, in a conversation with Dr. Judith Kalb about the growth of online education and technologies of virtual meeting.</p>
<p>How have our human interactions changed with the introduction, and normalization, of online meetings? How have virtual technologies transformed our relationships to one another, and to the information we exchange when we meet?  What are the ethics of learning and the transformation of what it means to learn, to teach, and to interact with our colleagues, students, and bosses online?</p>
<p>Dr. Judith E. Kalb is a professor in the Department of Languages, Literatures, and Cultures at the University of South Carolina. She earned a BA in Slavic Languages and Literatures at Princeton University and a joint PhD in Slavic Languages and Literatures and Humanities at Stanford University. Dr. Kalb’s research focuses on the interactions between Russian culture and the Greco-Roman classical tradition. Her book Russia’s Rome: Imperial Visions, Messianic Dreams, 1890-1930, examines the image of ancient Rome in the writings of Russian modernists. Her new project focuses on Russia’s reception of Homer. An award-winning teacher and a pioneer in online teaching and pedagogy, Dr. Kalb enjoys introducing students to the incredible world of Russian culture and the larger European literary tradition of which it forms a part.</p>
<p>That's all for this season of “Technically Human.” We will return with new episodes in April. In the meantime, check out our archive of over 100 episodes of the show, featuring conversations with thinkers, critics, and leaders across fields and industries, from around the world, about how we navigate our humanity in the age of technology. We’ll see you in April!</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Welcome back to another episode of the “22 Lessons on Ethics and Technology” series, in a conversation with Dr. Judith Kalb about the growth of online education and technologies of virtual meeting.</p>
<p>How have our human interactions changed with the introduction, and normalization, of online meetings? How have virtual technologies transformed our relationships to one another, and to the information we exchange when we meet?  What are the ethics of learning and the transformation of what it means to learn, to teach, and to interact with our colleagues, students, and bosses online?</p>
<p>Dr. Judith E. Kalb is a professor in the Department of Languages, Literatures, and Cultures at the University of South Carolina. She earned a BA in Slavic Languages and Literatures at Princeton University and a joint PhD in Slavic Languages and Literatures and Humanities at Stanford University. Dr. Kalb’s research focuses on the interactions between Russian culture and the Greco-Roman classical tradition. Her book <em>Russia’s Rome: Imperial Visions, Messianic Dreams, 1890-1930</em>, examines the image of ancient Rome in the writings of Russian modernists. Her new project focuses on Russia’s reception of Homer. An award-winning teacher and a pioneer in online teaching and pedagogy, Dr. Kalb enjoys introducing students to the incredible world of Russian culture and the larger European literary tradition of which it forms a part.</p>
<p>That's all for this season of “Technically Human.” We will return with new episodes in April. In the meantime, check out our archive of over 100 episodes of the show, featuring conversations with thinkers, critics, and leaders across fields and industries, from around the world, about how we navigate our humanity in the age of technology. We’ll see you in April!</p>
]]></content:encoded>
                                    
        <enclosure length="44754478" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/6nxkjd/ET22LEIT-400-JubdithKalb-final_Mixdown_167yv6.mp3"/>
        <itunes:summary>Welcome back to another episode of the 22 lessons on ethics and technology series, in a conversation with Dr. Judith Kalb about the growth of online education and technologies of virtual meeting.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3727</itunes:duration>
        <itunes:season>10</itunes:season>
        <itunes:episode>102</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Welcome back to another episode of the “22 Lessons on Ethics and Technology” series, in a conversation with Dr. Judith Kalb about the growth of online education and technologies of virtual meeting. How have our human interactions changed with the introduction, and normalization, of online meetings? How have virtual technologies transformed our relationships to one another, and to the information we exchange when we meet? What are the ethics of learning and the transformation of what it means to learn, to teach, and to interact with our colleagues, students, and bosses online? Dr. Judith E. Kalb is a professor in the Department of Languages, Literatures, and Cultures at the University of South Carolina. She earned a BA in Slavic Languages and Literatures at Princeton University and a joint PhD in Slavic Languages and Literatures and Humanities at Stanford University. Dr. Kalb’s research focuses on the interactions between Russian culture and the Greco-Roman classical tradition. Her book Russia’s Rome: Imperial Visions, Messianic Dreams, 1890-1930, examines the image of ancient Rome in the writings of Russian modernists. Her new project focuses on Russia’s reception of Homer. An award-winning teacher and a pioneer in online teaching and pedagogy, Dr. Kalb enjoys introducing students to the incredible world of Russian culture and the larger European literary tradition of which it forms a part. That's all for this season of “Technically Human.” We will return with new episodes in April. In the meantime, check out our archive of over 100 episodes of the show, featuring conversations with thinkers, critics, and leaders across fields and industries, from around the world, about how we navigate our humanity in the age of technology. We’ll see you in April!</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Data Feminism</title>
        <itunes:title>Data Feminism</itunes:title>
        <link>https://dmdonig.podbean.com/e/data-feminism/</link>
                    <comments>https://dmdonig.podbean.com/e/data-feminism/#comments</comments>        <pubDate>Fri, 03 Mar 2023 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/6425bce3-2345-3458-943d-427e7c0a1edb</guid>
                                    <description><![CDATA[<p>Welcome back, for another episode of the “22 Lessons on Ethics and Technology” series. In this episode, I speak with Dr. Lauren Klein about the complicated relationship between data, race, and gender, and what she calls “data feminism.” What is the relationship between data visualizations, representation, and construction of categories—and difference? How have visualizations constructed race and gender? And how can a feminist data science approach help in constructing a more just and equal world?</p>
<p>Dr. Lauren Klein is an associate professor in the Departments of English and Quantitative Theory & Methods at Emory University. She received her A.B. from Harvard University and her Ph.D. from the Graduate Center of the City University of New York (CUNY). Her research interests include digital humanities, data science, data studies, and early American literature. Before arriving at Emory, Klein taught in the <a href='https://www.lmc.gatech.edu/'>School of Literature, Media, and Communication</a> at Georgia Tech where she directed the <a href='https://dhlab.lmc.gatech.edu/'>Digital Humanities Lab</a>.</p>
<p>She is currently at work on two major projects: the first, <a href='https://dhlab.lmc.gatech.edu/data-by-design/'>Data by Design</a>, is an interactive book on the history of data visualization. Awarded an <a href='https://www.neh.gov/sites/default/files/inline-files/NEH%20Mellon%20award%20list%202017-2018.pdf'>NEH-Mellon Fellowship for Digital Publication</a>, Data by Design emphasizes how the modern visualizing impulse emerged from a set of complex intellectually and politically-charged contexts in the United States and across the Atlantic.</p>
<p>Her second project, tentatively titled Vectors of Freedom, employs a range of quantitative methods in order to surface the otherwise invisible forms of labor, agency, and action involved in the abolitionist movement of the nineteenth-century United States.</p>
<p>Dr. Klein is the author of An Archive of Taste: Race and Eating in the Early United States (University of Minnesota Press, 2020). This book shows how thinking about eating can help to tell new stories about the range of people, from the nation’s first presidents to their enslaved chefs, who worked to establish a cultural foundation for the United States. Klein is also the co-author (with <a href='http://www.kanarinka.com/'>Catherine D’Ignazio</a>) of <a href='https://bookbook.pubpub.org/data-feminism'>Data Feminism</a> (MIT Press, 2020), a trade book that explores the intersection of feminist thinking and data science. With Matthew K. Gold, she edits <a href='https://dhdebates.gc.cuny.edu/'>Debates in the Digital Humanities</a> (University of Minnesota Press), a hybrid print/digital publication stream that explores debates in the field as they emerge. The most recent book in this series is <a href='https://www.upress.umn.edu/book-division/books/debates-in-the-digital-humanities-2019'>Debates in the Digital Humanities 2019</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Welcome back, for another episode of the “22 Lessons on Ethics and Technology” series. In this episode, I speak with Dr. Lauren Klein about the complicated relationship between data, race, and gender, and what she calls “data feminism.” What is the relationship between data visualizations, representation, and construction of categories—and difference? How have visualizations constructed race and gender? And how can a feminist data science approach help in constructing a more just and equal world?</p>
<p>Dr. Lauren Klein is an associate professor in the Departments of English and Quantitative Theory & Methods at Emory University. She received her A.B. from Harvard University and her Ph.D. from the Graduate Center of the City University of New York (CUNY). Her research interests include digital humanities, data science, data studies, and early American literature. Before arriving at Emory, Klein taught in the <a href='https://www.lmc.gatech.edu/'>School of Literature, Media, and Communication</a> at Georgia Tech where she directed the <a href='https://dhlab.lmc.gatech.edu/'>Digital Humanities Lab</a>.</p>
<p>She is currently at work on two major projects: the first, <a href='https://dhlab.lmc.gatech.edu/data-by-design/'><em>Data by Design</em></a>, is an interactive book on the history of data visualization. Awarded an <a href='https://www.neh.gov/sites/default/files/inline-files/NEH%20Mellon%20award%20list%202017-2018.pdf'>NEH-Mellon Fellowship for Digital Publication</a>, <em>Data by Design</em> emphasizes how the modern visualizing impulse emerged from a set of complex intellectually and politically-charged contexts in the United States and across the Atlantic.</p>
<p>Her second project, tentatively titled <em>Vectors of Freedom</em>, employs a range of quantitative methods in order to surface the otherwise invisible forms of labor, agency, and action involved in the abolitionist movement of the nineteenth-century United States.</p>
<p>Dr. Klein is the author of <em>An Archive of Taste: Race and Eating in the Early United States</em> (University of Minnesota Press, 2020). This book shows how thinking about eating can help to tell new stories about the range of people, from the nation’s first presidents to their enslaved chefs, who worked to establish a cultural foundation for the United States. Klein is also the co-author (with <a href='http://www.kanarinka.com/'>Catherine D’Ignazio</a>) of <a href='https://bookbook.pubpub.org/data-feminism'><em>Data Feminism</em></a> (MIT Press, 2020), a trade book that explores the intersection of feminist thinking and data science. With Matthew K. Gold, she edits <a href='https://dhdebates.gc.cuny.edu/'><em>Debates in the Digital Humanities</em></a> (University of Minnesota Press), a hybrid print/digital publication stream that explores debates in the field as they emerge. The most recent book in this series is <a href='https://www.upress.umn.edu/book-division/books/debates-in-the-digital-humanities-2019'><em>Debates in the Digital Humanities 2019</em></a>.</p>
]]></content:encoded>
                                    
        <enclosure length="46130150" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/64i4zv/ET22LEIT-400-KLEIN_Mixdown_1_luffed780tr.mp3"/>
        <itunes:summary>Welcome back, for another episode of the “22 Lessons on Ethics and Technology” series. In this episode, I speak with Dr. Lauren Klein about the complicated relationship between data, race, and gender, and what she calls “data feminism.” What is the relationship between data visualizations, representation, and construction of categories—and difference? How have visualizations constructed race and gender? And how can a feminist data science approach help in constructing a more just and equal world?</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3842</itunes:duration>
        <itunes:season>10</itunes:season>
        <itunes:episode>101</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Welcome back, for another episode of the “22 Lessons on Ethics and Technology” series. In this episode, I speak with Dr. Lauren Klein about the complicated relationship between data, race, and gender, and what she calls “data feminism.” What is the relationship between data visualizations, representation, and construction of categories—and difference? How have visualizations constructed race and gender? And how can a feminist data science approach help in constructing a more just and equal world? Dr. Lauren Klein is an associate professor in the Departments of English and Quantitative Theory &amp; Methods at Emory University. She received her A.B. from Harvard University and her Ph.D. from the Graduate Center of the City University of New York (CUNY). Her research interests include digital humanities, data science, data studies, and early American literature. Before arriving at Emory, Klein taught in the School of Literature, Media, and Communication at Georgia Tech where she directed the Digital Humanities Lab. She is currently at work on two major projects: the first, Data by Design, is an interactive book on the history of data visualization. Awarded an NEH-Mellon Fellowship for Digital Publication, Data by Design emphasizes how the modern visualizing impulse emerged from a set of complex intellectually and politically-charged contexts in the United States and across the Atlantic. Her second project, tentatively titled Vectors of Freedom, employs a range of quantitative methods in order to surface the otherwise invisible forms of labor, agency, and action involved in the abolitionist movement of the nineteenth-century United States. Dr. Klein is the author of An Archive of Taste: Race and Eating in the Early United States (University of Minnesota Press, 2020). 
This book shows how thinking about eating can help to tell new stories about the range of people, from the nation’s first presidents to their enslaved chefs, who worked to establish a cultural foundation for the United States. Klein is also the co-author (with Catherine D’Ignazio) of Data Feminism (MIT Press, 2020), a trade book that explores the intersection of feminist thinking and data science. With Matthew K. Gold, she edits Debates in the Digital Humanities (University of Minnesota Press), a hybrid print/digital publication stream that explores debates in the field as they emerge. The most recent book in this series is Debates in the Digital Humanities 2019.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Threshold: Leading in the Age of AI</title>
        <itunes:title>The Threshold: Leading in the Age of AI</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-threshold-leading-in-the-age-of-ai/</link>
                    <comments>https://dmdonig.podbean.com/e/the-threshold-leading-in-the-age-of-ai/#comments</comments>        <pubDate>Fri, 24 Feb 2023 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/14effbd8-68ed-3f04-87ff-4765b0731dc5</guid>
                                    <description><![CDATA[<p>In this episode, I speak with Dr. Nick Chatrath about the crucial role that leadership plays in the future of AI development. We talk about organizational culture, the very human leaders driving technological production, and why human independent thinking matters more than ever, in the age of artificial intelligence.</p>
<p>Dr. <a href='https://www.google.com/search?q=nick+chatrath&oq=Nick%C2%A0Chatrath&aqs=chrome.0.35i39j0i390l3j69i60j69i61l2.416j0j4&sourceid=chrome&ie=UTF-8'>Nick Chatrath</a> is an expert in leadership and organizational transformation with the aim of helping humans flourish. He holds a doctorate from Oxford University and serves as managing director for a global leadership training firm. His book, <a href='https://www.amazon.com/Threshold-Leading-Age-AI/dp/1635767989'>The Threshold: Leading in the Age of AI</a>, which comes out this week and is published by Diversion Books, offers a revolutionary framework for how leaders in all kinds of organizations can adapt to the new age of technology by leaning into the qualities and skills that make us uniquely human.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I speak with Dr. Nick Chatrath about the crucial role that leadership plays in the future of AI development. We talk about organizational culture, the very human leaders driving technological production, and why human independent thinking matters more than ever, in the age of artificial intelligence.</p>
<p>Dr. <a href='https://www.google.com/search?q=nick+chatrath&oq=Nick%C2%A0Chatrath&aqs=chrome.0.35i39j0i390l3j69i60j69i61l2.416j0j4&sourceid=chrome&ie=UTF-8'>Nick Chatrath</a> is an expert in leadership and organizational transformation with the aim of helping humans flourish. He holds a doctorate from Oxford University and serves as managing director for a global leadership training firm. His book, <a href='https://www.amazon.com/Threshold-Leading-Age-AI/dp/1635767989'><em>The Threshold: Leading in the Age of AI</em></a>, which comes out this week and is published by Diversion Books, offers a revolutionary framework for how leaders in all kinds of organizations can adapt to the new age of technology by leaning into the qualities and skills that make us uniquely human.</p>
]]></content:encoded>
                                    
        <enclosure length="95056562" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/xntqcg/Chatrath_mixdown.mp3"/>
        <itunes:summary>In this episode, I speak with Dr. Nick Chatrath about the crucial role that leadership plays in the future of AI development. We talk about organizational culture, the very human leaders driving technological production, and why human independent thinking matters more than ever, in the age of artificial intelligence.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3960</itunes:duration>
        <itunes:season>10</itunes:season>
        <itunes:episode>100</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I speak with Dr. Nick Chatrath about the crucial role that leadership plays in the future of AI development. We talk about organizational culture, the very human leaders driving technological production, and why human independent thinking matters more than ever, in the age of artificial intelligence. Dr. Nick Chatrath is an expert in leadership and organizational transformation with the aim of helping humans flourish. He holds a doctorate from Oxford University and serves as managing director for a global leadership training firm. His book, The Threshold: Leading in the Age of AI, which comes out this week and is published by Diversion Books, offers a revolutionary framework for how leaders in all kinds of organizations can adapt to the new age of technology by leaning into the qualities and skills that make us uniquely human.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Ethics of the Blockchain</title>
        <itunes:title>The Ethics of the Blockchain</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-ethics-of-the-blockchain/</link>
                    <comments>https://dmdonig.podbean.com/e/the-ethics-of-the-blockchain/#comments</comments>        <pubDate>Fri, 17 Feb 2023 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/c26e2c78-2ef7-38ca-818d-1125aa9549ee</guid>
                                    <description><![CDATA[<p>Today’s episode features a conversation with Medha Parlikar about the ethics of the blockchain and cryptocurrency. We talk about the vision of what cryptocurrency could be, what dangers it might pose to our values, and what the future of cryptocurrency might look like in a Web3 world.</p>
<p>Medha Parlikar is co-founder and chief technology officer of CasperLabs. She has more than 30 years of tech experience and is one of the top women leaders in blockchain. She is a prolific speaker, having spoken at several global conferences including Davos, LA Blockchain Summit, and NFT.LA, among others. Medha is a mentor and has worked with organizations including Strongurl to elevate and encourage women in blockchain/tech.</p>
<p>
A quick note: Medha and I recorded this episode right before some really big things happened in the crypto world, like the <a href='https://www.google.com/search?q=crypto+crash+December+of+2022&oq=crypto+crash+December+of+2022&aqs=chrome..69i57.4388j0j15&sourceid=chrome&ie=UTF-8'>crypto crash in December of 2022</a>, when the notorious crypto entrepreneur, investor, and billionaire “SBF,” or <a href='https://www.google.com/search?q=Sam+Bankman+Fried&oq=Sam+Bankman+Fried&aqs=chrome..69i57j0i271l3j69i60.211j0j4&sourceid=chrome&ie=UTF-8'>Sam Bankman-Fried</a>, founder and CEO of the cryptocurrency exchange FTX and the associated trading firm Alameda Research, was discovered to have likely committed massive fraud. The discovery led to a <a href='https://en.wikipedia.org/wiki/Bankruptcy_of_FTX'>high-profile collapse</a> resulting in <a href='https://en.wikipedia.org/wiki/Chapter_11,_Title_11,_United_States_Code'>Chapter 11 bankruptcy</a> in late 2022 and a massive shock to the crypto industry. Things may have changed a bit in the crypto world since then, but even so, neither blockchain technology nor the place of cryptocurrencies in the financial industry seems to be going anywhere, and I think the conversation stands the test of time. You be the judge!</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Today’s episode features a conversation with Medha Parlikar about the ethics of the blockchain and cryptocurrency. We talk about the vision of what cryptocurrency could be, what dangers it might pose to our values, and what the future of cryptocurrency might look like in a Web3 world.</p>
<p>Medha Parlikar is co-founder and chief technology officer of CasperLabs. She has more than 30 years of tech experience and is one of the top women leaders in blockchain. She is a prolific speaker, having spoken at several global conferences including Davos, LA Blockchain Summit, and NFT.LA, among others. Medha is a mentor and has worked with organizations including Strongurl to elevate and encourage women in blockchain/tech.</p>
<p><br>
A quick note: Medha and I recorded this episode right before some really big things happened in the crypto world, like the <a href='https://www.google.com/search?q=crypto+crash+December+of+2022&oq=crypto+crash+December+of+2022&aqs=chrome..69i57.4388j0j15&sourceid=chrome&ie=UTF-8'>crypto crash in December of 2022</a>, when the notorious crypto entrepreneur, investor, and billionaire “SBF,” or <a href='https://www.google.com/search?q=Sam+Bankman+Fried&oq=Sam+Bankman+Fried&aqs=chrome..69i57j0i271l3j69i60.211j0j4&sourceid=chrome&ie=UTF-8'>Sam Bankman-Fried</a>, founder and CEO of the cryptocurrency exchange FTX and the associated trading firm Alameda Research, was discovered to have likely committed massive fraud. The discovery led to a <a href='https://en.wikipedia.org/wiki/Bankruptcy_of_FTX'>high-profile collapse</a> resulting in <a href='https://en.wikipedia.org/wiki/Chapter_11,_Title_11,_United_States_Code'>Chapter 11 bankruptcy</a> in late 2022 and a massive shock to the crypto industry. Things may have changed a bit in the crypto world since then, but even so, neither blockchain technology nor the place of cryptocurrencies in the financial industry seems to be going anywhere, and I think the conversation stands the test of time. You be the judge!</p>
]]></content:encoded>
                                    
        <enclosure length="97930686" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/bwvv6m/Parlikar_2_mixdownaldp5.mp3"/>
        <itunes:summary><![CDATA[Today’s episode features a conversation with Medha Parlikar about the ethics of the blockchain and cryptocurrency. We talk about the vision of what cryptocurrency could be, what dangers it might pose to our values, and what the future of cryptocurrency might look like in a Web3 world.
Medha Parlikar is co-founder and chief technology officer of CasperLabs. She has more than 30 years of tech experience and is one of the top women leaders in blockchain. She is a prolific speaker, having spoken at several global conferences including Davos, LA Blockchain Summit, and NFT.LA, among others. Medha is a mentor and has worked with organizations including Strongurl to elevate and encourage women in blockchain/tech.
A quick note: Medha and I recorded this episode right before some really big things happened in the crypto world, like the crypto crash in December of 2022, when the notorious crypto entrepreneur, investor, and billionaire “SBF,” or Sam Bankman-Fried, founder and CEO of the cryptocurrency exchange FTX and the associated trading firm Alameda Research, was discovered to have likely committed massive fraud. The discovery led to a high-profile collapse resulting in Chapter 11 bankruptcy in late 2022 and a massive shock to the crypto industry. Things may have changed a bit in the crypto world since then, but even so, neither blockchain technology nor the place of cryptocurrencies in the financial industry seems to be going anywhere, and I think the conversation stands the test of time. You be the judge!]]></itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4080</itunes:duration>
                <itunes:episode>99</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Today’s episode features a conversation with Medha Parlikar about the ethics of the blockchain and cryptocurrency. We talk about the vision of what cryptocurrency could be, what dangers it might pose to our values, and what the future of cryptocurrency might look like in a Web3 world. Medha Parlikar is co-founder and chief technology officer of CasperLabs. She has more than 30 years of tech experience and is one of the top women leaders in blockchain. She is a prolific speaker, having spoken at several global conferences including Davos, LA Blockchain Summit, and NFT.LA, among others. Medha is a mentor and has worked with organizations including Strongurl to elevate and encourage women in blockchain/tech. A quick note: Medha and I recorded this episode right before some really big things happened in the crypto world, like the crypto crash in December of 2022, when the notorious crypto entrepreneur, investor, and billionaire “SBF,” or Sam Bankman-Fried, founder and CEO of the cryptocurrency exchange FTX and the associated trading firm Alameda Research, was discovered to have likely committed massive fraud. The discovery led to a high-profile collapse resulting in Chapter 11 bankruptcy in late 2022 and a massive shock to the crypto industry. Things may have changed a bit in the crypto world since then, but even so, neither blockchain technology nor the place of cryptocurrencies in the financial industry seems to be going anywhere, and I think the conversation stands the test of time. You be the judge!</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Digital Democracy: How Tech Shapes Democratic Participation and Social Justice</title>
        <itunes:title>Digital Democracy: How Tech Shapes Democratic Participation and Social Justice</itunes:title>
        <link>https://dmdonig.podbean.com/e/digital-democracy-how-tech-shapes-democratic-participation-and-social-justice/</link>
                    <comments>https://dmdonig.podbean.com/e/digital-democracy-how-tech-shapes-democratic-participation-and-social-justice/#comments</comments>        <pubDate>Fri, 10 Feb 2023 05:03:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/a6e98747-66ba-31e4-91ab-87d97fea3b17</guid>
                                    <description><![CDATA[<p style="background:#FFFFFF;margin:0in 0in 7.5pt 0in;">In this week’s edition of the “22 lessons on ethics and technology series,” I speak with Dr. Nassim Parvin. We talk about the ethical and political dimensions of design and technology, especially as related to values of democratic participation and social justice. How have digital technologies impacted, and how do they continue to impact, the future of social and collective interactions, particularly in the arenas of political participation and social justice? How do the designs of technologies create platforms for participation--or inhibit it? And how have the values of democracy, equity, and justice influenced the way we imagine and design the technologies that we claim will serve these values?</p>
<p style="background:#FFFFFF;margin:0in 0in 7.5pt 0in;">Dr. Nassim Parvin is an Associate Professor at the <a href='http://dm.lmc.gatech.edu/'>Digital Media</a> program at Georgia Tech, where she also directs the <a href='http://designstudio.gatech.edu/'>Design and Social Justice Studio</a>. Her research explores the ethical and political dimensions of design and technology, especially as related to questions of democracy and justice. Rooted in pragmatist ethics and feminist theory, she critically engages emerging digital technologies—such as smart cities or artificial intelligence—in their wide-ranging and transformative effect on the future of collective and social interactions.</p>
<p style="background:#FFFFFF;margin:0in 0in 7.5pt 0in;">Her interdisciplinary research integrates theoretically-driven humanistic scholarship and design-based inquiry, including publishing both traditional scholarly papers and creating digital artifacts that illustrate how humanistic values may be cultivated to produce radically different artifacts and infrastructures. Her scholarship appears across disciplinary venues in design (such as Design Issues), Human-Computer Interaction (such as ACM CSCW), Science and Technology Studies (such as Science, Technology, and Human Values), as well as philosophy (such as Hypatia: Journal of Feminist Philosophy). Her designs have been deployed at non-profit organizations such as the Mayo Clinic and exhibited in venues such as the Smithsonian Museum, receiving multiple awards and recognitions.</p>
<p style="background:#FFFFFF;margin:0in 0in 7.5pt 0in;">She is an editor of <a href='http://catalystjournal.org/ojs/index.php/catalyst/index'>Catalyst: Feminism, Theory, Technoscience</a>, an award-winning journal in the expanding interdisciplinary field of STS, and serves on the editorial board of <a href='https://www.mitpressjournals.org/loi/desi'>Design Issues</a>. Her teaching has also received multiple recognitions, including the campus-wide 2017 GATECH CETL/BP Junior Faculty Teaching Excellence Award.</p>
<p style="background:#FFFFFF;margin:0in 0in 7.5pt 0in;">Dr. Parvin received her PhD in Design from Carnegie Mellon University. She holds an MS in Information Design and Technology from Georgia Tech and a BS in Electrical Engineering from the University of Tehran, Iran.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p style="background:#FFFFFF;margin:0in 0in 7.5pt 0in;">In this week’s edition of the “22 lessons on ethics and technology series,” I speak with Dr. Nassim Parvin. We talk about the ethical and political dimensions of design and technology, especially as related to values of democratic participation and social justice. How have digital technologies impacted, and how do they continue to impact, the future of social and collective interactions, particularly in the arenas of political participation and social justice? How do the designs of technologies create platforms for participation--or inhibit it? And how have the values of democracy, equity, and justice influenced the way we imagine and design the technologies that we claim will serve these values?</p>
<p style="background:#FFFFFF;margin:0in 0in 7.5pt 0in;">Dr. Nassim Parvin is an Associate Professor at the <a href='http://dm.lmc.gatech.edu/'>Digital Media</a> program at Georgia Tech, where she also directs the <a href='http://designstudio.gatech.edu/'>Design and Social Justice Studio</a>. Her research explores the ethical and political dimensions of design and technology, especially as related to questions of democracy and justice. Rooted in pragmatist ethics and feminist theory, she critically engages emerging digital technologies—such as smart cities or artificial intelligence—in their wide-ranging and transformative effect on the future of collective and social interactions.</p>
<p style="background:#FFFFFF;margin:0in 0in 7.5pt 0in;">Her interdisciplinary research integrates theoretically-driven humanistic scholarship and design-based inquiry, including publishing both traditional scholarly papers and creating digital artifacts that illustrate how humanistic values may be cultivated to produce radically different artifacts and infrastructures. Her scholarship appears across disciplinary venues in design (such as <em>Design Issues</em>), Human-Computer Interaction (such as <em>ACM CSCW</em>), Science and Technology Studies (such as <em>Science, Technology, and Human Values</em>), as well as philosophy (such as <em>Hypatia: Journal of Feminist Philosophy</em>). Her designs have been deployed at non-profit organizations such as the Mayo Clinic and exhibited in venues such as the Smithsonian Museum, receiving multiple awards and recognitions.</p>
<p style="background:#FFFFFF;margin:0in 0in 7.5pt 0in;">She is an editor of <a href='http://catalystjournal.org/ojs/index.php/catalyst/index'>Catalyst: Feminism, Theory, Technoscience</a>, an award-winning journal in the expanding interdisciplinary field of STS, and serves on the editorial board of <em><a href='https://www.mitpressjournals.org/loi/desi'>Design Issues</a></em>. Her teaching has also received multiple recognitions, including the campus-wide 2017 GATECH CETL/BP Junior Faculty Teaching Excellence Award.</p>
<p style="background:#FFFFFF;margin:0in 0in 7.5pt 0in;">Dr. Parvin received her PhD in Design from Carnegie Mellon University. She holds an MS in Information Design and Technology from Georgia Tech and a BS in Electrical Engineering from the University of Tehran, Iran.</p>
]]></content:encoded>
                                    
        <enclosure length="46852271" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/qchut5/ET22LEIT-400-NasimParvin-final_Mixdown_2_luffed7abr9.mp3"/>
        <itunes:summary><![CDATA[In this week’s edition of the “22 lessons on ethics and technology series,” I speak with Dr. Nassim Parvin. We talk about the ethical and political dimensions of design and technology, especially as related to values of democratic participation and social justice. How have digital technologies impacted, and how do they continue to impact, the future of social and collective interactions, particularly in the arenas of political participation and social justice? How do the designs of technologies create platforms for participation--or inhibit it? And how have the values of democracy, equity, and justice influenced the way we imagine and design the technologies that we claim will serve these values?
Dr. Nassim Parvin is an Associate Professor at the Digital Media program at Georgia Tech, where she also directs the Design and Social Justice Studio. Her research explores the ethical and political dimensions of design and technology, especially as related to questions of democracy and justice. Rooted in pragmatist ethics and feminist theory, she critically engages emerging digital technologies—such as smart cities or artificial intelligence—in their wide-ranging and transformative effect on the future of collective and social interactions.
Her interdisciplinary research integrates theoretically-driven humanistic scholarship and design-based inquiry, including publishing both traditional scholarly papers and creating digital artifacts that illustrate how humanistic values may be cultivated to produce radically different artifacts and infrastructures. Her scholarship appears across disciplinary venues in design (such as Design Issues), Human-Computer Interaction (such as ACM CSCW), Science and Technology Studies (such as Science, Technology, and Human Values), as well as philosophy (such as Hypatia: Journal of Feminist Philosophy). Her designs have been deployed at non-profit organizations such as the Mayo Clinic and exhibited in venues such as the Smithsonian Museum, receiving multiple awards and recognitions.
She is an editor of Catalyst: Feminism, Theory, Technoscience, an award-winning journal in the expanding interdisciplinary field of STS, and serves on the editorial board of Design Issues. Her teaching has also received multiple recognitions, including the campus-wide 2017 GATECH CETL/BP Junior Faculty Teaching Excellence Award.
Dr. Parvin received her PhD in Design from Carnegie Mellon University. She holds an MS in Information Design and Technology from Georgia Tech and a BS in Electrical Engineering from the University of Tehran, Iran.]]></itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3902</itunes:duration>
                <itunes:episode>98</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this week’s edition of the “22 lessons on ethics and technology series,” I speak with Dr. Nassim Parvin. We talk about the ethical and political dimensions of design and technology, especially as related to values of democratic participation and social justice. How have digital technologies impacted, and how do they continue to impact, the future of social and collective interactions, particularly in the arenas of political participation and social justice? How do the designs of technologies create platforms for participation--or inhibit it? And how have the values of democracy, equity, and justice influenced the way we imagine and design the technologies that we claim will serve these values? Dr. Nassim Parvin is an Associate Professor at the Digital Media program at Georgia Tech, where she also directs the Design and Social Justice Studio. Her research explores the ethical and political dimensions of design and technology, especially as related to questions of democracy and justice. Rooted in pragmatist ethics and feminist theory, she critically engages emerging digital technologies—such as smart cities or artificial intelligence—in their wide-ranging and transformative effect on the future of collective and social interactions. Her interdisciplinary research integrates theoretically-driven humanistic scholarship and design-based inquiry, including publishing both traditional scholarly papers and creating digital artifacts that illustrate how humanistic values may be cultivated to produce radically different artifacts and infrastructures. Her scholarship appears across disciplinary venues in design (such as Design Issues), Human-Computer Interaction (such as ACM CSCW), Science and Technology Studies (such as Science, Technology, and Human Values), as well as philosophy (such as Hypatia: Journal of Feminist Philosophy). 
Her designs have been deployed at non-profit organizations such as the Mayo Clinic and exhibited in venues such as the Smithsonian Museum, receiving multiple awards and recognitions. She is an editor of Catalyst: Feminism, Theory, Technoscience, an award-winning journal in the expanding interdisciplinary field of STS, and serves on the editorial board of Design Issues. Her teaching has also received multiple recognitions, including the campus-wide 2017 Georgia Tech CETL/BP Junior Faculty Teaching Excellence Award. Dr. Parvin received her PhD in Design from Carnegie Mellon University. She holds an MS in Information Design and Technology from Georgia Tech and a BS in Electrical Engineering from the University of Tehran, Iran.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Computing Women: Gender Disparity in STEM Education</title>
        <itunes:title>Computing Women: Gender Disparity in STEM Education</itunes:title>
        <link>https://dmdonig.podbean.com/e/computing-women-gender-disparity-in-stem-education/</link>
                    <comments>https://dmdonig.podbean.com/e/computing-women-gender-disparity-in-stem-education/#comments</comments>        <pubDate>Fri, 03 Feb 2023 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/9ece1cec-84d5-3ace-86aa-94d3d80834c6</guid>
                                    <description><![CDATA[<p>We’re back for another installment of the “22 Lessons on Ethics and Technology” special series. In this week’s episode of the series, I am joined by Dr. Mar Hicks. This episode tells the story of labor and gender discrimination in the tech industry. Dr. Hicks explains the historical background of gendered technological production that has influenced the development of computing. In her historical outline, she explains that while women were a hidden engine of growth in high technology from World War II to the 1960s, American and British computing experienced a gender flip, becoming male-identified in the 1960s and 1970s. What can this history teach us about the need for gender equity in technological production now? And what are the consequences of continued gender inequity for our future?</p>
<p>Professor Mar Hicks is a historian of technology, gender, and labor, specializing in the history of computing. Dr. Hicks’s book, <a href='https://mitpress.mit.edu/books/programmed-inequality'>Programmed Inequality (MIT Press, 2017)</a> investigates how Britain lost its early lead in computing by discarding the majority of their computer workers and experts--simply because they were women. Dr. Hicks’s current project looks at transgender citizens’ interactions with the computerized systems of the British welfare state in the 20th century, and how these computerized systems determined whose bodies and identities were allowed to exist. Hicks's work studies how collective understandings of progress are defined by competing discourses of social value and economic productivity, and how technologies often hide regressive ideals while espousing "revolutionary" or "disruptive" goals. Dr. Hicks is also co-editing a volume on computing history called Your Computer Is On Fire (MIT Press, 2020). Dr. Hicks runs the <a href='http://bit.ly/digitalhistorylab'>Digital History Lab</a> at Illinois Tech.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>We’re back for another installment of the “22 Lessons on Ethics and Technology” special series. In this week’s episode of the series, I am joined by Dr. Mar Hicks. This episode tells the story of labor and gender discrimination in the tech industry. Dr. Hicks explains the historical background of gendered technological production that has influenced the development of computing. In her historical outline, she explains that while women were a hidden engine of growth in high technology from World War II to the 1960s, American and British computing experienced a gender flip, becoming male-identified in the 1960s and 1970s. What can this history teach us about the need for gender equity in technological production now? And what are the consequences of continued gender inequity for our future?</p>
<p>Professor Mar Hicks is a historian of technology, gender, and labor, specializing in the history of computing. Dr. Hicks’s book, <a href='https://mitpress.mit.edu/books/programmed-inequality'><em>Programmed Inequality</em> (MIT Press, 2017)</a> investigates how Britain lost its early lead in computing by discarding the majority of their computer workers and experts--simply because they were women. Dr. Hicks’s current project looks at transgender citizens’ interactions with the computerized systems of the British welfare state in the 20th century, and how these computerized systems determined whose bodies and identities were allowed to exist. Hicks's work studies how collective understandings of progress are defined by competing discourses of social value and economic productivity, and how technologies often hide regressive ideals while espousing "revolutionary" or "disruptive" goals. Dr. Hicks is also co-editing a volume on computing history called <em>Your Computer Is On Fire</em> (MIT Press, 2020). Dr. Hicks runs the <a href='http://bit.ly/digitalhistorylab'>Digital History Lab</a> at Illinois Tech.</p>
]]></content:encoded>
                                    
        <enclosure length="58536806" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/9km8fw/v2_ET22LEIT-MarHicks-final_Mixdown_1_luffed9n4bf.mp3"/>
        <itunes:summary>We’re back for another installment of the “22 Lessons on Ethics and Technology” special series. In this week’s episode of the series, I am joined by Dr. Mar Hicks. This episode tells the story of labor and gender discrimination in the tech industry. Dr. Hicks explains the historical background of gendered technological production that has influenced the development of computing. In her historical outline, she explains that while women were a hidden engine of growth in high technology from World War II to the 1960s, American and British computing experienced a gender flip, becoming male-identified in the 1960s and 1970s. What can this history teach us about the need for gender equity in technological production now? And what are the consequences of continued gender inequity for our future?</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3658</itunes:duration>
        <itunes:season>10</itunes:season>
        <itunes:episode>97</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>We’re back for another installment of the “22 Lessons on Ethics and Technology” special series. In this week’s episode of the series, I am joined by Dr. Mar Hicks. This episode tells the story of labor and gender discrimination in the tech industry. Dr. Hicks explains the historical background of gendered technological production that has influenced the development of computing. In her historical outline, she explains that while women were a hidden engine of growth in high technology from World War II to the 1960s, American and British computing experienced a gender flip, becoming male-identified in the 1960s and 1970s. What can this history teach us about the need for gender equity in technological production now? And what are the consequences of continued gender inequity for our future? Professor Mar Hicks is a historian of technology, gender, and labor, specializing in the history of computing. Dr. Hicks’s book, Programmed Inequality (MIT Press, 2017) investigates how Britain lost its early lead in computing by discarding the majority of their computer workers and experts--simply because they were women. Dr. Hicks’s current project looks at transgender citizens’ interactions with the computerized systems of the British welfare state in the 20th century, and how these computerized systems determined whose bodies and identities were allowed to exist. Hicks's work studies how collective understandings of progress are defined by competing discourses of social value and economic productivity, and how technologies often hide regressive ideals while espousing "revolutionary" or "disruptive" goals. Dr. Hicks is also co-editing a volume on computing history called Your Computer Is On Fire (MIT Press, 2020). Dr. 
Hicks runs the Digital History Lab at Illinois Tech.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Human First AI</title>
        <itunes:title>Human First AI</itunes:title>
        <link>https://dmdonig.podbean.com/e/human-first-ai/</link>
                    <comments>https://dmdonig.podbean.com/e/human-first-ai/#comments</comments>        <pubDate>Fri, 27 Jan 2023 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/8e0ea347-efce-3b8a-9876-eaa72d68e353</guid>
                                    <description><![CDATA[<p>In this week's episode, I am joined by Dr. Christopher Nguyen. We talk about the emerging concept of "human first AI," and the changing terrain of both AI ethics and AI development. We imagine what a human-first approach to AI might look like, and what gets in the way of developing an ethical approach to AI in the tech industry.</p>
<p>Christopher Nguyen’s career spans four decades, and he has become an industry leader in the field of Engineering broadly, and AI specifically. Since fleeing Vietnam in 1978, he has founded multiple tech companies and has played key roles in everything from building the first flash memory transistors at Intel to spearheading the development of Google Apps as its first Engineering Director. As a professor, Christopher co-founded the Computer Engineering program at the Hong Kong University of Science and Technology (HKUST). He earned his Bachelor of Science degree, summa cum laude, from the University of California, Berkeley, and a PhD from Stanford University. Today, he’s become an outspoken proponent of the emerging field of “AI Engineering” and a thought leader in the space of ethical, human-centric AI. With his latest company, Aitomatic, he’s hoping to redefine how companies approach AI in the context of life-critical, industrial applications.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this week's episode, I am joined by Dr. Christopher Nguyen. We talk about the emerging concept of "human first AI," and the changing terrain of both AI ethics and AI development. We imagine what a human-first approach to AI might look like, and what gets in the way of developing an ethical approach to AI in the tech industry.</p>
<p>Christopher Nguyen’s career spans four decades, and he has become an industry leader in the field of Engineering broadly, and AI specifically. Since fleeing Vietnam in 1978, he has founded multiple tech companies and has played key roles in everything from building the first flash memory transistors at Intel to spearheading the development of Google Apps as its first Engineering Director. As a professor, Christopher co-founded the Computer Engineering program at the Hong Kong University of Science and Technology (HKUST). He earned his Bachelor of Science degree, summa cum laude, from the University of California, Berkeley, and a PhD from Stanford University. Today, he’s become an outspoken proponent of the emerging field of “AI Engineering” and a thought leader in the space of ethical, human-centric AI. With his latest company, Aitomatic, he’s hoping to redefine how companies approach AI in the context of life-critical, industrial applications.</p>
]]></content:encoded>
                                    
        <enclosure length="87365888" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/d2sxgj/Nguyn_mixdown.mp3"/>
        <itunes:summary>In this week’s episode, I am joined by Dr. Christopher Nguyen. We talk about the emerging concept of “human first AI,” and the changing terrain of both AI ethics and AI development. We imagine what a human-first approach to AI might look like, and what gets in the way of developing an ethical approach to AI in the tech industry.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3638</itunes:duration>
        <itunes:season>10</itunes:season>
        <itunes:episode>96</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this week's episode, I am joined by Dr. Christopher Nguyen. We talk about the emerging concept of "human first AI," and the changing terrain of both AI ethics and AI development. We imagine what a human-first approach to AI might look like, and what gets in the way of developing an ethical approach to AI in the tech industry. Christopher Nguyen’s career spans four decades, and he has become an industry leader in the field of Engineering broadly, and AI specifically. Since fleeing Vietnam in 1978, he has founded multiple tech companies and has played key roles in everything from building the first flash memory transistors at Intel to spearheading the development of Google Apps as its first Engineering Director. As a professor, Christopher co-founded the Computer Engineering program at the Hong Kong University of Science and Technology (HKUST). He earned his Bachelor of Science degree, summa cum laude, from the University of California, Berkeley, and a PhD from Stanford University. Today, he’s become an outspoken proponent of the emerging field of “AI Engineering” and a thought leader in the space of ethical, human-centric AI. With his latest company, Aitomatic, he’s hoping to redefine how companies approach AI in the context of life-critical, industrial applications.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Science for the 21st Century: Understanding Systems Biology</title>
        <itunes:title>Science for the 21st Century: Understanding Systems Biology</itunes:title>
        <link>https://dmdonig.podbean.com/e/science-for-the-21st-century-understanding-systems-biology/</link>
                    <comments>https://dmdonig.podbean.com/e/science-for-the-21st-century-understanding-systems-biology/#comments</comments>        <pubDate>Fri, 20 Jan 2023 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/59a5cd56-1386-31fe-94ad-f440439238e4</guid>
                                    <description><![CDATA[<p>This week, I turn my mic over to a guest host for an interview with Dr. Jared Roach about the growing field of systems biology, an interdisciplinary field of study taking over the biological sciences, focused on complex interactions within biological systems. How can we update the study of biology for the 21st century? How can computational and mathematical analysis help us understand biological systems? And what can we newly see or understand about ourselves if we examine the way that complex networks interact within our bodies? </p>
<p>Today's host, Zoë Gray, is an honors student majoring in math at Cal Poly. She has a background in electrical engineering, and she is particularly interested in considering the pace of technological development, and the ethics of a system of technological production that moves so quickly.</p>
<p>Dr. Jared Roach, MD, PhD is a Senior Research Scientist at <a href='https://isbscience.org/'>The Institute for Systems Biology</a>. Starting as a graduate student in the 1990s, Roach worked on the Human Genome Project from its early days through the end of the project. Dr. Roach contributed strategic and algorithmic designs to the Human Genome Project, including the pairwise end-sequencing strategy. He was a Senior Fellow at the Department of Molecular Biotechnology at the University of Washington from 1999-2000. In 2001, he became a Research Scientist at the Institute for Systems Biology. His group currently applies systems biology and genomics to complex diseases, focusing on the systems biology architecture of Alzheimer’s disease.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This week, I turn my mic over to a guest host for an interview with Dr. Jared Roach about the growing field of systems biology, an interdisciplinary field of study taking over the biological sciences, focused on complex interactions within biological systems. How can we update the study of biology for the 21st century? How can computational and mathematical analysis help us understand biological systems? And what can we newly see or understand about ourselves if we examine the way that complex networks interact within our bodies? </p>
<p>Today's host, Zoë Gray, is an honors student majoring in math at Cal Poly. She has a background in electrical engineering, and she is particularly interested in considering the pace of technological development, and the ethics of a system of technological production that moves so quickly.</p>
<p>Dr. Jared Roach, MD, PhD is a Senior Research Scientist at <a href='https://isbscience.org/'>The Institute for Systems Biology</a>. Starting as a graduate student in the 1990s, Roach worked on the Human Genome Project from its early days through the end of the project. Dr. Roach contributed strategic and algorithmic designs to the Human Genome Project, including the pairwise end-sequencing strategy. He was a Senior Fellow at the Department of Molecular Biotechnology at the University of Washington from 1999-2000. In 2001, he became a Research Scientist at the Institute for Systems Biology. His group currently applies systems biology and genomics to complex diseases, focusing on the systems biology architecture of Alzheimer’s disease.</p>
]]></content:encoded>
                                    
        <enclosure length="61449892" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/bxtp5g/Raoch_3_mixdownba1kd.mp3"/>
        <itunes:summary><![CDATA[This week, I turn my mic over to a guest host for an interview with Dr. Jared Roach about the growing field of systems biology, an interdisciplinary field of study taking over the biological sciences, focused on complex interactions within biological systems. How can we update the study of biology for the 21st century? How can computational and mathematical analysis help us understand biological systems? And what can we newly see or understand about ourselves if we examine the way that complex networks interact within our bodies? 
Today's host, Zoë Gray, is an honors student majoring in math at Cal Poly. She has a background in electrical engineering, and she is particularly interested in considering the pace of technological development, and the ethics of a system of technological production that moves so quickly. 
Dr. Jared Roach, MD, PhD is a Senior Research Scientist at The Institute for Systems Biology. Starting as a graduate student in the 1990s, Roach worked on the Human Genome Project from its early days through the end of the project. Dr. Roach contributed strategic and algorithmic designs to the Human Genome Project, including the pairwise end-sequencing strategy. He was a Senior Fellow at the Department of Molecular Biotechnology at the University of Washington from 1999-2000. In 2001, he became a Research Scientist at the Institute for Systems Biology. His group currently applies systems biology and genomics to complex diseases, focusing on the systems biology architecture of Alzheimer’s disease.]]></itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2560</itunes:duration>
        <itunes:season>6</itunes:season>
        <itunes:episode>95</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>This week, I turn my mic over to a guest host for an interview with Dr. Jared Roach about the growing field of systems biology, an interdisciplinary field of study taking over the biological sciences, focused on complex interactions within biological systems. How can we update the study of biology for the 21st century? How can computational and mathematical analysis help us understand biological systems? And what can we newly see or understand about ourselves if we examine the way that complex networks interact within our bodies? Today's host, Zoë Gray, is an honors student majoring in math at Cal Poly. She has a background in electrical engineering, and she is particularly interested in considering the pace of technological development, and the ethics of a system of technological production that moves so quickly. Dr. Jared Roach, MD, PhD is a Senior Research Scientist at The Institute for Systems Biology. Starting as a graduate student in the 1990s, Roach worked on the Human Genome Project from its early days through the end of the project. Dr. Roach contributed strategic and algorithmic designs to the Human Genome Project, including the pairwise end-sequencing strategy. He was a Senior Fellow at the Department of Molecular Biotechnology at the University of Washington from 1999-2000. In 2001, he became a Research Scientist at the Institute for Systems Biology. His group currently applies systems biology and genomics to complex diseases, focusing on the systems biology architecture of Alzheimer’s disease.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Diversity Challenge:  Race, gender, and how the histories of medicine and technology got made</title>
        <itunes:title>The Diversity Challenge:  Race, gender, and how the histories of medicine and technology got made</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-history-of-race-in-science-medicine-and-technological-development/</link>
                    <comments>https://dmdonig.podbean.com/e/the-history-of-race-in-science-medicine-and-technological-development/#comments</comments>        <pubDate>Fri, 13 Jan 2023 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/e7fad838-6a6c-3ed3-a9ab-be616a11c977</guid>
                                    <description><![CDATA[<p>In this week's “22 Lessons on Ethics and Technology” special series, I sit down with Dr. Evelynn Hammonds to talk about how race and gender have shaped the histories of science, medicine, and technological development. We explore the divisions between investigations of gender within scientific and technological inquiry, and race within these same fields. How can an intersectional approach challenge our science and technologies to better serve, and include, a broader diversity of people? How have our concepts of science and technology, and our assumptions about what they can and should do, been shaped by exclusions? How can those trained and working in the Humanities learn from those trained in and working in the Sciences and Technology fields, and vice versa? How does an understanding of the history of ideas, and the people and forces that have shaped them, inform our ability to build, innovate, and create work cultures that are more ethical and equitable?</p>
<p><a href='https://histsci.fas.harvard.edu/people/evelynn-hammonds'>Professor Hammonds</a> is the Barbara Gutmann Rosenkrantz Professor of the History of Science and Professor of African and African American Studies in the Faculty of Arts and Sciences, and Professor of Social and Behavioral Sciences at the Harvard T.H. Chan School of Public Health at Harvard University. She was the first Senior Vice Provost for Faculty Development and Diversity at Harvard University (2005-2008). She served as Dean of Harvard College from 2008 to 2013 and as Chair of the Department of History of Science from 2017 to 2022. Professor Hammonds’ areas of research include the histories of science, medicine and public health in the United States; race, gender and sexuality in science studies; feminist theory and African American history. She has published articles on the history of disease, race and science, African American feminism, African-American women and the epidemic of HIV/AIDS; analyses of gender and race in science, medicine and public health; and the history of health disparities in the U.S. Professor Hammonds’ current work focuses on the history of the intersection of scientific, medical and socio-political concepts of race in the United States. She is currently director of the Project on Race & Gender in Science & Medicine at the Hutchins Center for African and African American Research at Harvard.</p>
<p>Prof. Hammonds holds a B.S. in physics from Spelman College, a B.E.E. in electrical engineering from Georgia Tech, and an S.M. in physics from MIT. She earned her PhD in the history of science from Harvard University. She served as a Sigma Xi Distinguished Lecturer (2003-2005), a visiting scholar at the Max Planck Institute for the History of Science in Berlin, a Post-doctoral Fellow in the School of Social Science at the Institute for Advanced Study in Princeton, and a Visiting Professor at UCLA and at Hampshire College. Professor Hammonds was named a Fellow of the Association for Women in Science (AWIS) in 2008. She has served on the Boards of Trustees of Spelman and Bennett Colleges, and currently serves on the Board of the Arcus Foundation and the Board of Trustees of Bates College.</p>
<p>In 2010, she was appointed to President Barack Obama’s Board of Advisers on Historically Black Colleges and Universities and in 2014 to the President’s Advisory Committee on Excellence in Higher Education for African Americans. She served two terms as a member of the Committee on Equal Opportunity in Science and Engineering (CEOSE), the congressionally mandated oversight committee of the National Science Foundation (NSF), the Advisory Committee of the EHR directorate of the NSF, and the Advisory Committee on the Merit Review Process of the NSF. Professor Hammonds is the current vice president/president-elect of the History of Science Society.</p>
<p>At Harvard, she served on the President’s Initiative on Harvard and the Legacy of Slavery and on the Faculty Executive Committee of the Peabody Museum, and she chaired the University-wide Steering Committee on Human Remains in the Harvard Museum Collections. She also works on projects to increase the participation of men and women of color in STEM fields. Prof. Hammonds is the co-author of the National Academy of Sciences (NAS) report, released December 9, 2021, <a href='https://nap.nationalacademies.org/catalog/26345/transforming-trajectories-for-women-of-color-in-tech'>Transforming Trajectories for Women of Color in Tech</a>. She is a member of the Committee on Women in Science, Engineering, and Medicine (CWSEM) of the NAS and the NAS Roundtable on Black Men and Black Women in Science, Engineering and Medicine. She is an elected member of the National Academy of Medicine (NAM) and the American Academy of Arts and Sciences. She holds honorary degrees from Spelman College and Bates College. For the academic year 2022-2023, Prof. Hammonds is the inaugural Audre Lorde Visiting Professor of Queer Studies at Spelman College.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this week's “22 Lessons on Ethics and Technology” special series, I sit down with Dr. Evelynn Hammonds to talk about how race and gender have shaped the histories of science, medicine, and technological development. We explore the divisions between investigations of gender within scientific and technological inquiry, and race within these same fields. How can an intersectional approach challenge our science and technologies to better serve, and include, a broader diversity of people? How have our concepts of science and technology, and our assumptions about what they can and should do, been shaped by exclusions? How can those trained and working in the Humanities learn from those trained in and working in the Sciences and Technology fields, and vice versa? How does an understanding of the history of ideas, and the people and forces that have shaped them, inform our ability to build, innovate, and create work cultures that are more ethical and equitable?</p>
<p><a href='https://histsci.fas.harvard.edu/people/evelynn-hammonds'>Professor Hammonds</a> is the Barbara Gutmann Rosenkrantz Professor of the History of Science and Professor of African and African American Studies in the Faculty of Arts and Sciences, and Professor of Social and Behavioral Sciences at the Harvard T.H. Chan School of Public Health at Harvard University. She was the first Senior Vice Provost for Faculty Development and Diversity at Harvard University (2005-2008). She served as Dean of Harvard College from 2008 to 2013 and as Chair of the Department of History of Science from 2017 to 2022. Professor Hammonds’ areas of research include the histories of science, medicine and public health in the United States; race, gender and sexuality in science studies; feminist theory and African American history. She has published articles on the history of disease, race and science, African American feminism, African-American women and the epidemic of HIV/AIDS; analyses of gender and race in science, medicine and public health; and the history of health disparities in the U.S. Professor Hammonds’ current work focuses on the history of the intersection of scientific, medical and socio-political concepts of race in the United States. She is currently director of the Project on Race & Gender in Science & Medicine at the Hutchins Center for African and African American Research at Harvard.</p>
<p>Prof. Hammonds holds a B.S. in physics from Spelman College, a B.E.E. in electrical engineering from Georgia Tech, and an S.M. in physics from MIT. She earned her PhD in the history of science from Harvard University. She served as a Sigma Xi Distinguished Lecturer (2003-2005), a visiting scholar at the Max Planck Institute for the History of Science in Berlin, a Post-doctoral Fellow in the School of Social Science at the Institute for Advanced Study in Princeton, and a Visiting Professor at UCLA and at Hampshire College. Professor Hammonds was named a Fellow of the Association for Women in Science (AWIS) in 2008. She has served on the Boards of Trustees of Spelman and Bennett Colleges, and currently serves on the Board of the Arcus Foundation and the Board of Trustees of Bates College.</p>
<p>In 2010, she was appointed to President Barack Obama’s Board of Advisers on Historically Black Colleges and Universities and in 2014 to the President’s Advisory Committee on Excellence in Higher Education for African Americans. She served two terms as a member of the Committee on Equal Opportunity in Science and Engineering (CEOSE), the congressionally mandated oversight committee of the National Science Foundation (NSF), the Advisory Committee of the EHR directorate of the NSF, and the Advisory Committee on the Merit Review Process of the NSF. Professor Hammonds is the current vice president/president-elect of the History of Science Society.</p>
<p>At Harvard, she served on the President’s Initiative on Harvard and the Legacy of Slavery and on the Faculty Executive Committee of the Peabody Museum, and she chaired the University-wide Steering Committee on Human Remains in the Harvard Museum Collections. She also works on projects to increase the participation of men and women of color in STEM fields. Prof. Hammonds is the co-author of the National Academy of Sciences (NAS) report, released December 9, 2021, <a href='https://nap.nationalacademies.org/catalog/26345/transforming-trajectories-for-women-of-color-in-tech'><em>Transforming Trajectories for Women of Color in Tech</em></a>. She is a member of the Committee on Women in Science, Engineering, and Medicine (CWSEM) of the NAS and the NAS Roundtable on Black Men and Black Women in Science, Engineering and Medicine. She is an elected member of the National Academy of Medicine (NAM) and the American Academy of Arts and Sciences. She holds honorary degrees from Spelman College and Bates College. For the academic year 2022-2023, Prof. Hammonds is the inaugural Audre Lorde Visiting Professor of Queer Studies at Spelman College.</p>
]]></content:encoded>
                                    
        <enclosure length="46317459" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/347pyk/ET22LEIT-EHammonds-final_Mixdown_1_luffed69snz.mp3"/>
        <itunes:summary>In this week’s “22 Lessons on Ethics and Technology” special series, I sit down with Dr. Evelynn Hammonds to talk about how race and gender have shaped the histories of science, medicine, and technological development. We explore the divisions between investigations of gender within scientific and technological inquiry, and race within these same fields. How can an intersectional approach challenge our science and technologies to better serve, and include, a broader diversity of people? How have our concepts of science and technology, and our assumptions about what they can and should do, been shaped by exclusions? How can those trained and working in the Humanities learn from those trained in and working in the Sciences and Technology fields, and vice versa? How does an understanding of the history of ideas, and the people and forces that have shaped them, inform our ability to build, innovate, and create work cultures that are more ethical and equitable?</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3859</itunes:duration>
        <itunes:season>10</itunes:season>
        <itunes:episode>94</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this week's “22 Lessons on Ethics and Technology” special series, I sit down with Dr. Evelynn Hammonds to talk about how race and gender have shaped the histories of science, medicine, and technological development. We explore the divisions between investigations of gender within scientific and technological inquiry, and race within these same fields. How can an intersectional approach challenge our science and technologies to better serve, and include, a broader diversity of people? How have our concepts of science and technology, and our assumptions about what they can and should do, been shaped by exclusions? How can those trained and working in the Humanities learn from those trained in and working in the Sciences and Technology fields, and vice versa? How does an understanding of the history of ideas, and the people and forces that have shaped them, inform our ability to build, innovate, and create work cultures that are more ethical and equitable? Professor Hammonds is the Barbara Gutmann Rosenkrantz Professor of the History of Science and Professor of African and African American Studies in the Faculty of Arts and Sciences, and Professor of Social and Behavioral Sciences at the Harvard T.H. Chan School of Public Health at Harvard University. She was the first Senior Vice Provost for Faculty Development and Diversity at Harvard University (2005-2008). She served as Dean of Harvard College from 2008 to 2013 and as Chair of the Department of the History of Science from 2017 to 2022. Professor Hammonds’ areas of research include the histories of science, medicine, and public health in the United States; race, gender, and sexuality in science studies; feminist theory; and African American history.
She has published articles on the history of disease, race and science, African American feminism, African-American women and the epidemic of HIV/AIDS; analyses of gender and race in science, medicine, and public health; and the history of health disparities in the U.S. Professor Hammonds’ current work focuses on the history of the intersection of scientific, medical, and socio-political concepts of race in the United States. She is currently director of the Project on Race &amp; Gender in Science &amp; Medicine at the Hutchins Center for African and African American Research at Harvard. Prof. Hammonds holds a B.S. in physics from Spelman College, a B.E.E. in electrical engineering from Georgia Tech, and an S.M. in physics from MIT. She earned her Ph.D. in the history of science from Harvard University. She served as a Sigma Xi Distinguished Lecturer (2003-2005), a visiting scholar at the Max Planck Institute for the History of Science in Berlin, a Post-doctoral Fellow in the School of Social Science at the Institute for Advanced Study in Princeton, and a Visiting Professor at UCLA and at Hampshire College. Professor Hammonds was named a Fellow of the Association for Women in Science (AWIS) in 2008. She has served on the Boards of Trustees of Spelman and Bennett Colleges and currently serves on the Board of the Arcus Foundation and the Board of Trustees of Bates College. In 2010, she was appointed to President Barack Obama’s Board of Advisers on Historically Black Colleges and Universities and in 2014 to the President’s Advisory Committee on Excellence in Higher Education for African Americans. She served two terms as a member of the Committee on Equal Opportunity in Science and Engineering (CEOSE), the congressionally mandated oversight committee of the National Science Foundation (NSF), the Advisory Committee of the EHR directorate of the NSF, and the Advisory Committee on the Merit Review Process of the NSF.
Professor Hammonds is the current vice president/president-elect of the History of Science Society. At Harvard, she served on the President’s Initiative on Harvard and the Legacy of Slavery and the Faculty Executive Committee of the Peabody Museum, and she chaired the University-wide Steering Committee on Human Remains in the Harvard Museum Collections. She also works on projects to increase the participation of men and women of color in STEM fields. Prof. Hammonds is the co-author of the National Academy of Sciences (NAS) report, released December 9, 2021, Transforming Trajectories for Women of Color in Tech. She is a member of the Committee on Women in Science, Engineering, and Medicine (CWSEM) of the NAS and the NAS Roundtable on Black Men and Black Women in Science, Engineering and Medicine. She is an elected member of the National Academy of Medicine (NAM) and the American Academy of Arts and Sciences. She holds honorary degrees from Spelman College and Bates College. For the academic year 2022-2023, Prof. Hammonds is the inaugural Audre Lorde Visiting Professor of Queer Studies at Spelman College.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Ethic of Life</title>
        <itunes:title>The Ethic of Life</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-ethic-of-life/</link>
                    <comments>https://dmdonig.podbean.com/e/the-ethic-of-life/#comments</comments>        <pubDate>Fri, 02 Dec 2022 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/b3822900-8127-3c5d-b66f-42bdef132bf6</guid>
                                    <description><![CDATA[<p>This week, we continue our “22 Lessons on Ethics and Technology” series with a conversation with Dr. John Basl about how our relationship with tech is changing what he calls an “ethic of life,” an ethical perspective on which all living things deserve some level of moral concern.</p>
<p><a href='http://johnbasl.net/'>Professor Basl</a> is an associate professor of philosophy in the Department of Philosophy & Religion at Northeastern University and a faculty associate at the Edmond J. Safra Center for Ethics and the Berkman Klein Center for Internet and Society at Harvard University. He works primarily in moral philosophy and applied ethics, especially on issues related to emerging technologies. He is an editorial board member for the new journal <a href='https://www.springer.com/journal/43681'>AI and Ethics</a>. His most recent book, <em>The Death of the Ethic of Life</em>, is available from <a href='https://global.oup.com/academic/product/the-death-of-the-ethic-of-life-9780190923877?cc=us&lang=en&'>Oxford University Press</a>.</p>
<p>And that’s all for this season! We are staying off our technologies for the winter break—we’ll be back with more episodes of the Technically Human podcast in 2023.</p>
<p>The “22 Lessons in Ethical Technology” series is co-sponsored by the National Science Foundation and the Cal Poly Strategic Research Initiative Grant Award. The show is written, hosted, and produced by me, Deb Donig, with production support from Matthew Harsh and Elise St. John. Thanks to Jake Garner and Emma Zumbro for production coordination. Our head of research for this series is Sakina Nuruddin. Our editor is Carrie Caulfield Arick. Art by Desi Aleman.</p>
<p>Don’t forget to subscribe to the show to make sure you don’t miss an episode! You can find us on your favorite podcast app--<a href='https://podcasts.apple.com/us/podcast/the-technically-human-podcast/id1508661950?i=1000565138067'>Apple Podcasts</a>, <a href='https://www.audible.com/pd/The-Technically-Human-Podcast-Podcast/B08JJNTLF7?action_code=ASSGB149080119000H&shareTest=TestShare&share_location=pdp'>Google Play</a>, <a href='https://open.spotify.com/show/7rqXHfZhm68ws1UaiG4xN5?si=0e906856fb4a4806'>Spotify</a>—or wherever you get your podcasts. Enjoy the break, and we’ll see you in January.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This week, we continue our “22 Lessons on Ethics and Technology” series with a conversation with Dr. John Basl about how our relationship with tech is changing what he calls an “ethic of life,” an ethical perspective on which all living things deserve some level of moral concern.</p>
<p><a href='http://johnbasl.net/'>Professor Basl</a> is an associate professor of philosophy in the Department of Philosophy & Religion at Northeastern University and a faculty associate at the Edmond J. Safra Center for Ethics and the Berkman Klein Center for Internet and Society at Harvard University. He works primarily in moral philosophy and applied ethics, especially on issues related to emerging technologies. He is an editorial board member for the new journal <a href='https://www.springer.com/journal/43681'>AI and Ethics</a>. His most recent book, <em>The Death of the Ethic of Life</em>, is available from <a href='https://global.oup.com/academic/product/the-death-of-the-ethic-of-life-9780190923877?cc=us&lang=en&'>Oxford University Press</a>.</p>
<p>And that’s all for this season! We are staying off our technologies for the winter break—we’ll be back with more episodes of the Technically Human podcast in 2023.</p>
<p>The “22 Lessons in Ethical Technology” series is co-sponsored by the National Science Foundation and the Cal Poly Strategic Research Initiative Grant Award. The show is written, hosted, and produced by me, Deb Donig, with production support from Matthew Harsh and Elise St. John. Thanks to Jake Garner and Emma Zumbro for production coordination. Our head of research for this series is Sakina Nuruddin. Our editor is Carrie Caulfield Arick. Art by Desi Aleman.</p>
<p>Don’t forget to subscribe to the show to make sure you don’t miss an episode! You can find us on your favorite podcast app--<a href='https://podcasts.apple.com/us/podcast/the-technically-human-podcast/id1508661950?i=1000565138067'>Apple Podcasts</a>, <a href='https://www.audible.com/pd/The-Technically-Human-Podcast-Podcast/B08JJNTLF7?action_code=ASSGB149080119000H&shareTest=TestShare&share_location=pdp'>Google Play</a>, <a href='https://open.spotify.com/show/7rqXHfZhm68ws1UaiG4xN5?si=0e906856fb4a4806'>Spotify</a>—or wherever you get your podcasts. Enjoy the break, and we’ll see you in January.</p>
]]></content:encoded>
                                    
        <enclosure length="40904124" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/rvez83/ET22LEIT-404-JohnBasl-final_Mixdown_17xp1d.mp3"/>
        <itunes:summary>This week, we continue our “22 Lessons on Ethics and Technology” series with a conversation with Dr. John Basl about how our relationship with tech is changing what he calls an “ethic of life,” an ethical perspective on which all living things deserve some level of moral concern.

Professor Basl is an associate professor of philosophy in the department of philosophy &amp; religion at Northeastern University and a faculty associate at the Edmond J. Safra Center for Ethics and the Berkman Klein Center for Internet and Society at Harvard University. He works primarily in moral philosophy and applied ethics, especially on issues related to emerging technologies. He is an editorial board member for the new journal AI and Ethics. His most recent book, The Death of the Ethic of Life, is available from Oxford University Press.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3408</itunes:duration>
        <itunes:season>9</itunes:season>
        <itunes:episode>93</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>This week, we continue our “22 Lessons on Ethics and Technology” series with a conversation with Dr. John Basl about how our relationship with tech is changing what he calls an “ethic of life,” an ethical perspective on which all living things deserve some level of moral concern. Professor Basl is an associate professor of philosophy in the Department of Philosophy &amp; Religion at Northeastern University and a faculty associate at the Edmond J. Safra Center for Ethics and the Berkman Klein Center for Internet and Society at Harvard University. He works primarily in moral philosophy and applied ethics, especially on issues related to emerging technologies. He is an editorial board member for the new journal AI and Ethics. His most recent book, The Death of the Ethic of Life, is available from Oxford University Press. And that’s all for this season! We are staying off our technologies for the winter break—we’ll be back with more episodes of the Technically Human podcast in 2023. The “22 Lessons in Ethical Technology” series is co-sponsored by the National Science Foundation and the Cal Poly Strategic Research Initiative Grant Award. The show is written, hosted, and produced by me, Deb Donig, with production support from Matthew Harsh and Elise St. John. Thanks to Jake Garner and Emma Zumbro for production coordination. Our head of research for this series is Sakina Nuruddin. Our editor is Carrie Caulfield Arick. Art by Desi Aleman. Don’t forget to subscribe to the show to make sure you don’t miss an episode! You can find us on your favorite podcast app--Apple Podcasts, Google Play, Spotify—or wherever you get your podcasts. Enjoy the break, and we’ll see you in January.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Socio Paths: Navigating the terrain of sociotechnical systems</title>
        <itunes:title>Socio Paths: Navigating the terrain of sociotechnical systems</itunes:title>
        <link>https://dmdonig.podbean.com/e/socio-paths-navigating-the-terrain-of-sociotechnical-systems/</link>
                    <comments>https://dmdonig.podbean.com/e/socio-paths-navigating-the-terrain-of-sociotechnical-systems/#comments</comments>        <pubDate>Fri, 18 Nov 2022 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/695f11f7-b340-35da-8f9f-ba14bcf05cac</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I host Chris Leong and Maria Santacaterina for a conversation about the growing pervasiveness of sociotechnical systems. You may not know the term "sociotechnical system," but if you've booked a flight online, tried to reach an agent on the DMV's hotline, or tried to contact your congressperson, you almost certainly have interacted with one of them. How have sociotechnical systems changed the way we access services, the way we spend our time, and the way we interact with one another? What are the benefits--and the consequences--of living in a world increasingly organized and processed through these systems?</p>
<p>Maria Santacaterina is a Global Strategic Leader & Board Executive Advisor who has worked in 100+ markets and has over 30 years of international experience. She focuses on leading growth, strategic change, and digital business transformation, particularly at the level of corporate culture and strategy. She advocates for a new approach to futurist imagining, which she calls “adaptive resilience,” in order to build enduring value and values while responding to an accelerating rate of change, complexity, and exponential technological disruption.</p>
<p>Chris Leong is a Transformation and Change Leader with a career spanning over 30 years in the financial services, enterprise software, and consulting industries globally. He thinks about, writes on, and advises on the impacts of automated decision-making and profiling outcomes from digital services on customers and consumers, and on the trustworthiness of sociotechnical systems and the organisations that deploy them.</p>
<p>Together, Maria and Chris have co-authored several landmark articles on sociotechnical systems, including their pieces "<a>Responsible Innovation: Living with socio-technical systems</a>" and "<a href='https://www.linkedin.com/pulse/responsible-innovation-have-you-outsourced-system-chris-leong-fhca/?trackingId=jPeR%2FyW1RmqnodxjT9ueww%3D%3D'>Have you outsourced to a sociotechnical system</a>."</p>
<p>Enjoy the episode, and thanks for tuning in! We’re off next week for the Thanksgiving break—join us the first week of December for a new episode of the “22 Lessons in Ethics and Technology” series. To learn more about the 22 Lessons on Ethical Technology series, visit <a href='http://www.etcalpoly.org'>www.etcalpoly.org</a>. And don’t forget to subscribe to the show so that you don’t miss an episode. You can find us on Apple Podcasts, Google Podcasts, Spotify, or wherever you get your podcasts! We’ll see you in December.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I host Chris Leong and Maria Santacaterina for a conversation about the growing pervasiveness of sociotechnical systems. You may not know the term "sociotechnical system," but if you've booked a flight online, tried to reach an agent on the DMV's hotline, or tried to contact your congressperson, you almost certainly have interacted with one of them. How have sociotechnical systems changed the way we access services, the way we spend our time, and the way we interact with one another? What are the benefits--and the consequences--of living in a world increasingly organized and processed through these systems?</p>
<p>Maria Santacaterina is a Global Strategic Leader & Board Executive Advisor who has worked in 100+ markets and has over 30 years of international experience. She focuses on leading growth, strategic change, and digital business transformation, particularly at the level of corporate culture and strategy. She advocates for a new approach to futurist imagining, which she calls “adaptive resilience,” in order to build enduring value and values while responding to an accelerating rate of change, complexity, and exponential technological disruption.</p>
<p>Chris Leong is a Transformation and Change Leader with a career spanning over 30 years in the financial services, enterprise software, and consulting industries globally. He thinks about, writes on, and advises on the impacts of automated decision-making and profiling outcomes from digital services on customers and consumers, and on the trustworthiness of sociotechnical systems and the organisations that deploy them.</p>
<p>Together, Maria and Chris have co-authored several landmark articles on sociotechnical systems, including their pieces "<a>Responsible Innovation: Living with socio-technical systems</a>" and "<a href='https://www.linkedin.com/pulse/responsible-innovation-have-you-outsourced-system-chris-leong-fhca/?trackingId=jPeR%2FyW1RmqnodxjT9ueww%3D%3D'>Have you outsourced to a sociotechnical system</a>."</p>
<p>Enjoy the episode, and thanks for tuning in! We’re off next week for the Thanksgiving break—join us the first week of December for a new episode of the “22 Lessons in Ethics and Technology” series. To learn more about the 22 Lessons on Ethical Technology series, visit <a href='http://www.etcalpoly.org'>www.etcalpoly.org</a>. And don’t forget to subscribe to the show so that you don’t miss an episode. You can find us on Apple Podcasts, Google Podcasts, Spotify, or wherever you get your podcasts! We’ll see you in December.</p>
]]></content:encoded>
                                    
        <enclosure length="87851160" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/zm6t68/LS_mixdown.mp3"/>
        <itunes:summary>In this episode of “Technically Human,” I host Chris Leong and Maria Santacaterina for a conversation about the growing pervasiveness of sociotechnical systems. You may not know the term “sociotechnical system,” but if you’ve booked a flight online, tried to reach an agent on the DMV’s hotline, or tried to contact your congressperson, you almost certainly have interacted with one of them. How have sociotechnical systems changed the way we access services, the way we spend our time, and the way we interact with one another? What are the benefits--and the consequences--of living in a world increasingly organized and processed through these systems?</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3660</itunes:duration>
        <itunes:season>9</itunes:season>
        <itunes:episode>92</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I host Chris Leong and Maria Santacaterina for a conversation about the growing pervasiveness of sociotechnical systems. You may not know the term "sociotechnical system," but if you've booked a flight online, tried to reach an agent on the DMV's hotline, or tried to contact your congressperson, you almost certainly have interacted with one of them. How have sociotechnical systems changed the way we access services, the way we spend our time, and the way we interact with one another? What are the benefits--and the consequences--of living in a world increasingly organized and processed through these systems? Maria Santacaterina is a Global Strategic Leader &amp; Board Executive Advisor who has worked in 100+ markets and has over 30 years of international experience. She focuses on leading growth, strategic change, and digital business transformation, particularly at the level of corporate culture and strategy. She advocates for a new approach to futurist imagining, which she calls “adaptive resilience,” in order to build enduring value and values while responding to an accelerating rate of change, complexity, and exponential technological disruption. Chris Leong is a Transformation and Change Leader with a career spanning over 30 years in the financial services, enterprise software, and consulting industries globally. He thinks about, writes on, and advises on the impacts of automated decision-making and profiling outcomes from digital services on customers and consumers, and on the trustworthiness of sociotechnical systems and the organisations that deploy them. Together, Maria and Chris have co-authored several landmark articles on sociotechnical systems, including their pieces "Responsible Innovation: Living with socio-technical systems" and "Have you outsourced to a sociotechnical system." Enjoy the episode, and thanks for tuning in! 
We’re off next week for the Thanksgiving break—join us the first week of December for a new episode of the “22 Lessons in Ethics and Technology” series. To learn more about the 22 Lessons on Ethical Technology series, visit www.etcalpoly.org. And don’t forget to subscribe to the show so that you don’t miss an episode. You can find us on Apple Podcasts, Google Podcasts, Spotify, or wherever you get your podcasts! We’ll see you in December.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Outside Voices: Transdisciplinary Approaches to Ethics and Technology</title>
        <itunes:title>Outside Voices: Transdisciplinary Approaches to Ethics and Technology</itunes:title>
        <link>https://dmdonig.podbean.com/e/outside-voices-transcisciplinary-approaches-to-ethics-and-technology/</link>
                    <comments>https://dmdonig.podbean.com/e/outside-voices-transcisciplinary-approaches-to-ethics-and-technology/#comments</comments>        <pubDate>Fri, 11 Nov 2022 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/5a5d9cc6-00e6-3481-af34-1c0bc8679ee4</guid>
                                    <description><![CDATA[<p>Welcome to another interview in the "22 Lessons in Ethics and Technology" series! In this episode, I speak with Dr. Pavel Cenkl about the need for intellectual diversity and multidimensional approaches to technological solutions to the major problems of our time. Professor Cenkl discusses how the major problems we face require that we bring together people trained in a wide variety of approaches. Focusing on environmental issues--climate change, ecological destruction, and the possible proliferation of future pandemics--we consider how ethical approaches to technology depend on thinking across boundaries of ideas and including voices across a variety of institutions, cultures, and experiences.</p>
<p><a href='https://campus.dartington.org/dr-pavel-cenkl/'>Dr. Pavel Cenkl</a> is the Head of Schumacher College and Director of Learning and Land at Dartington Trust. He has worked for more than two decades in higher education in America and has always been drawn to colleges and universities whose curriculum fully integrates learning with practice and thinking with embodiment.</p>
<p>Having taught and served as Dean for nearly 15 years at Vermont’s Sterling College, Pavel brings a depth of experience to Schumacher College’s unique approach to experiential learning. While pursuing research in ecologically-minded curriculum design and teaching courses in environmental philosophy, Dr. Cenkl is also a passionate endurance and adventure runner. Over the past five years through a project called Climate Run, he has covered hundreds of miles in the Arctic and subarctic on foot in order to bring attention to the connections between our bodies and the more-than-human world in the face of a rapidly changing climate.</p>
<p>Dr. Cenkl holds a Ph.D. in English and is the author of many articles, chapters, and two books: <a href='https://www.amazon.com/Nature-Culture-Northern-Forest-Environment/dp/1587298562'><em>Nature and Culture in the Northern Forest: Region, Heritage, and Environment in the Rural Northeast</em></a> (Iowa City: University of Iowa Press, 2010); and <a href='https://uipress.uiowa.edu/books/vast-book-nature'><em>This Vast Book of Nature: Writing the Landscape of New Hampshire’s White Mountains, 1784-1911</em></a> (Iowa City: University of Iowa Press, 2006). He is currently working on a book titled <em>Resilience in the North: Adventure, Endurance, and the Limits of the Human</em>, which threads together personal narrative and observation with environmental philosophy and reflections on what it means to be human.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Welcome to another interview in the "22 Lessons in Ethics and Technology" series! In this episode, I speak with Dr. Pavel Cenkl about the need for intellectual diversity and multidimensional approaches to technological solutions to the major problems of our time. Professor Cenkl discusses how the major problems we face require that we bring together people trained in a wide variety of approaches. Focusing on environmental issues--climate change, ecological destruction, and the possible proliferation of future pandemics--we consider how ethical approaches to technology depend on thinking across boundaries of ideas and including voices across a variety of institutions, cultures, and experiences.</p>
<p><a href='https://campus.dartington.org/dr-pavel-cenkl/'>Dr. Pavel Cenkl</a> is the Head of Schumacher College and Director of Learning and Land at Dartington Trust. He has worked for more than two decades in higher education in America and has always been drawn to colleges and universities whose curriculum fully integrates learning with practice and thinking with embodiment.</p>
<p>Having taught and served as Dean for nearly 15 years at Vermont’s Sterling College, Pavel brings a depth of experience to Schumacher College’s unique approach to experiential learning. While pursuing research in ecologically-minded curriculum design and teaching courses in environmental philosophy, Dr. Cenkl is also a passionate endurance and adventure runner. Over the past five years through a project called Climate Run, he has covered hundreds of miles in the Arctic and subarctic on foot in order to bring attention to the connections between our bodies and the more-than-human world in the face of a rapidly changing climate.</p>
<p>Dr. Cenkl holds a Ph.D. in English and is the author of many articles, chapters, and two books: <a href='https://www.amazon.com/Nature-Culture-Northern-Forest-Environment/dp/1587298562'><em>Nature and Culture in the Northern Forest: Region, Heritage, and Environment in the Rural Northeast</em></a> (Iowa City: University of Iowa Press, 2010); and <a href='https://uipress.uiowa.edu/books/vast-book-nature'><em>This Vast Book of Nature: Writing the Landscape of New Hampshire’s White Mountains, 1784-1911</em></a> (Iowa City: University of Iowa Press, 2006). He is currently working on a book titled <em>Resilience in the North: Adventure, Endurance, and the Limits of the Human</em>, which threads together personal narrative and observation with environmental philosophy and reflections on what it means to be human.</p>
]]></content:encoded>
                                    
        <enclosure length="32733259" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/kbdcrt/ET22LEIT-404-Pavel_Mixdown_17lper.mp3"/>
        <itunes:summary>Welcome to another interview in the “22 Lessons in Ethics and Technology” series! In this episode, I speak with Dr. Pavel Cenkl about the need for intellectual diversity and multidimensional approaches to technological solutions to the major problems of our time. Professor Cenkl discusses how the major problems we face require that we bring together people trained in a wide variety of approaches. Focusing on environmental issues--climate change, ecological destruction, and the possible proliferation of future pandemics--we consider how ethical approaches to technology depend on thinking across boundaries of ideas and including voices across a variety of institutions, cultures, and experiences.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2727</itunes:duration>
        <itunes:season>9</itunes:season>
        <itunes:episode>91</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Welcome to another interview in the "22 Lessons in Ethics and Technology" series! In this episode, I speak with Dr. Pavel Cenkl, about the need for intellectual diversity and multidimensional approaches to technological solutions to the major problems of our time. Professor Cenkl discusses how the major problems we face require that we bring together people trained in a wide variety of approaches. Focusing on environmental issues--climate change, ecological destruction, and the possible proliferation of future pandemics--we consider how ethical approaches to technology depend on thinking across boundaries of ideas and including voices across a variety of institutions, cultures, and experiences. Dr. Pavel Cenkl is the Head of Schumacher College and Director of Learning and Land at Dartington Trust. He has worked for more than two decades in higher education in America and has always been drawn to colleges and universities whose curriculum fully integrates learning with practice and thinking with embodiment. Having taught and served as Dean for nearly 15 years at Vermont’s Sterling College, Pavel brings a depth of experience to Schumacher College’s unique approach to experiential learning. While pursuing research in ecologically-minded curriculum design and teaching courses in environmental philosophy, Dr. Cenkl is also a passionate endurance and adventure runner. Over the past five years through a project called Climate Run, he has covered hundreds of miles in the Arctic and subarctic on foot in order to bring attention to the connections between our bodies and the more-than-human world in the face of a rapidly changing climate. Dr. Cenkl holds a Ph.D. 
in English and is the author of many articles, chapters, and two books: Nature and Culture in the Northern Forest: Region, Heritage, and Environment in the Rural Northeast (Iowa City: University of Iowa Press, 2010); and This Vast Book of Nature: Writing the Landscape of New Hampshire’s White Mountains, 1784-1911 (Iowa City: University of Iowa Press, 2006). He is currently working on a book titled Resilience in the North: Adventure, Endurance, and the Limits of the Human, which threads together personal narrative and observation with environmental philosophy and reflections on what it means to be human.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Age of Posthumanism</title>
        <itunes:title>The Age of Posthumanism</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-age-of-posthumanism/</link>
                    <comments>https://dmdonig.podbean.com/e/the-age-of-posthumanism/#comments</comments>        <pubDate>Fri, 04 Nov 2022 06:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/36183db4-5897-3301-80d1-ba15cce3967e</guid>
                                    <description><![CDATA[<p>Welcome to our 3rd episode of the "22 Lessons on Ethical Technology" series! We will be releasing new episodes in the series every first and second Friday of the month through the duration of the series.</p>
<p class="p1">In this episode, I sit down with Dr. N. Katherine Hayles, one of the founding theorists of posthumanism, a key term for understanding the changing and dynamic relationship between humans and machines in the digital age. What is the role of the Humanities in understanding our relationship to technology? How have our technological innovations changed the nature of “the human?” And what is the future of the human relationship to our machines--and to our understanding of ourselves?</p>
<p><a href='https://english.ucla.edu/people-faculty/hayles-katherine-n/'>Dr. N. Katherine Hayles</a> is a Distinguished Research Professor of English at the University of California, Los Angeles and the James B. Duke Professor of Literature Emerita at Duke University. She teaches and writes on the relations of literature, science and technology in the 20th and 21st centuries. Her most recent book, <a href='http://cup.columbia.edu/book/postprint/9780231198257'>Postprint: Books and Becoming Computational</a>, was published by Columbia University Press (Spring 2021). Among her many books is her landmark work <a href='https://press.uchicago.edu/ucp/books/book/chicago/H/bo3769963.html'>How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics</a>, which won the Rene Wellek Prize for the Best Book in Literary Theory for 1998-99, and <a href='https://mitpress.mit.edu/9780262582155/writing-machines/'>Writing Machines</a>, which won the Susanne Langer Award for Outstanding Scholarship. She has been recognized with many fellowships and awards, including two NEH Fellowships, a Guggenheim, a Rockefeller Residential Fellowship at Bellagio, and two University of California Presidential Research Fellowships. </p>
<p>Dr. Hayles is a member of the American Academy of Arts and Sciences. She holds a B.S. from the Rochester Institute of Technology, an M.S. from the California Institute of Technology, an M.A. from Michigan State University, and a Ph.D. from the University of Rochester. Within the field of Posthuman Studies, Dr. Hayles' book How We Became Posthuman is considered "the key text which brought posthumanism to broad international attention." Her work has laid the foundations for thinking across a wide variety of urgent issues at the intersection of technology and culture, including cybernetic history, feminism, postmodernism, and cultural and literary criticism, and it is vital to our ongoing conversations about the changing relationship between humans and the technologies we create.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Welcome to our 3rd episode of the "22 Lessons on Ethical Technology" series! We will be releasing new episodes in the series every first and second Friday of the month through the duration of the series.</p>
<p class="p1">In this episode, I sit down with Dr. N. Katherine Hayles, one of the founding theorists of posthumanism, a key term for understanding the changing and dynamic relationship between humans and machines in the digital age. What is the role of the Humanities in understanding our relationship to technology? How have our technological innovations changed the nature of “the human?” And what is the future of the human relationship to our machines--and to our understanding of ourselves?</p>
<p><a href='https://english.ucla.edu/people-faculty/hayles-katherine-n/'>Dr. N. Katherine Hayles</a> is a Distinguished Research Professor of English at the University of California, Los Angeles and the James B. Duke Professor of Literature Emerita at Duke University. She teaches and writes on the relations of literature, science and technology in the 20th and 21st centuries. Her most recent book, <a href='http://cup.columbia.edu/book/postprint/9780231198257'><em>Postprint: Books and Becoming Computational</em></a>, was published by Columbia University Press (Spring 2021). Among her many books is her landmark work <a href='https://press.uchicago.edu/ucp/books/book/chicago/H/bo3769963.html'><em>How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics</em></a>, which won the Rene Wellek Prize for the Best Book in Literary Theory for 1998-99, and <a href='https://mitpress.mit.edu/9780262582155/writing-machines/'><em>Writing Machines</em></a>, which won the Susanne Langer Award for Outstanding Scholarship. She has been recognized with many fellowships and awards, including two NEH Fellowships, a Guggenheim, a Rockefeller Residential Fellowship at Bellagio, and two University of California Presidential Research Fellowships. </p>
<p>Dr. Hayles is a member of the American Academy of Arts and Sciences. She holds a B.S. from the Rochester Institute of Technology, an M.S. from the California Institute of Technology, an M.A. from Michigan State University, and a Ph.D. from the University of Rochester. Within the field of Posthuman Studies, Dr. Hayles' book How We Became Posthuman is considered "the key text which brought posthumanism to broad international attention." Her work has laid the foundations for thinking across a wide variety of urgent issues at the intersection of technology and culture, including cybernetic history, feminism, postmodernism, and cultural and literary criticism, and it is vital to our ongoing conversations about the changing relationship between humans and the technologies we create.</p>
]]></content:encoded>
                                    
        <enclosure length="67923911" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/kbn467/ET22LEIT-403-final_Mixdown_1_luffed70h7t.mp3"/>
        <itunes:summary>Welcome to our 3rd episode of the ”22 Lessons on Ethical Technology” series! In this episode, I sit down with Dr. N. Katherine Hayles, one of the founding theorists of posthumanism, a key term for understanding the changing and dynamic relationship between humans and machines in the digital age. What is the role of the Humanities in understanding our relationship to technology? How have our technological innovations changed the nature of “the human?” And what is the future of the human relationship to our machines--and to our understanding of ourselves?</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4245</itunes:duration>
        <itunes:season>9</itunes:season>
        <itunes:episode>90</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Welcome to our 3rd episode of the "22 Lessons on Ethical Technology" series! We will be releasing new episodes in the series every first and second Friday of the month through the duration of the series. In this episode, I sit down with Dr. N. Katherine Hayles, one of the founding theorists of posthumanism, a key term for understanding the changing and dynamic relationship between humans and machines in the digital age. What is the role of the Humanities in understanding our relationship to technology? How have our technological innovations changed the nature of “the human?” And what is the future of the human relationship to our machines--and to our understanding of ourselves? Dr. N. Katherine Hayles is a Distinguished Research Professor of English at the University of California, Los Angeles and the James B. Duke Professor of Literature Emerita at Duke University. She teaches and writes on the relations of literature, science and technology in the 20th and 21st centuries. Her most recent book, Postprint: Books and Becoming Computational, was published by Columbia University Press (Spring 2021). Among her many books is her landmark work How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, which won the Rene Wellek Prize for the Best Book in Literary Theory for 1998-99, and Writing Machines, which won the Susanne Langer Award for Outstanding Scholarship. She has been recognized with many fellowships and awards, including two NEH Fellowships, a Guggenheim, a Rockefeller Residential Fellowship at Bellagio, and two University of California Presidential Research Fellowships. Dr. Hayles is a member of the American Academy of Arts and Sciences. She holds a B.S. from the Rochester Institute of Technology, an M.S. from the California Institute of Technology, an M.A. from Michigan State University, and a Ph.D. from the University of Rochester. 
Within the field of Posthuman Studies, Dr. Hayles' book How We Became Posthuman is considered "the key text which brought posthumanism to broad international attention." Her work has laid the foundations for thinking across a wide variety of urgent issues at the intersection of technology and culture, including cybernetic history, feminism, postmodernism, and cultural and literary criticism, and it is vital to our ongoing conversations about the changing relationship between humans and the technologies we create.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>What it Means to Care: Ethical medicine in the age of tech</title>
        <itunes:title>What it Means to Care: Ethical medicine in the age of tech</itunes:title>
        <link>https://dmdonig.podbean.com/e/american-health-care/</link>
                    <comments>https://dmdonig.podbean.com/e/american-health-care/#comments</comments>        <pubDate>Fri, 28 Oct 2022 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/d566254e-cffb-3e27-8600-af9ab528a046</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I give my mic over to two guest hosts, David Geitner and Roman Rosser, to interview Dr. Robert Pearl about the intersection between tech, medicine, and our health. Dr. Pearl answers questions about the way that technologies are radically reshaping health care; the hosts ask questions about bias in medicine; and the group discusses the ways in which our current system fails to treat us, well, well.</p>
<p class="p5">Dr. Robert Pearl is the former CEO of The Permanente Medical Group (1999-2017), the nation’s largest medical group, and former president of The Mid-Atlantic Permanente Medical Group (2009-2017). He serves as a clinical professor of plastic surgery at Stanford University School of Medicine and is on the faculty of the Stanford Graduate School of Business, where he teaches courses on strategy and leadership, and lectures on information technology and health care policy. He is the author of <a href='https://www.amazon.com/Mistreated-Getting-Health-Care-Usually/dp/1610397657'>Mistreated: Why We Think We’re Getting Good Healthcare—And Why We’re Usually Wrong</a>, a Washington Post bestseller that offers a roadmap for transforming American healthcare. His new book, <a href='https://www.publicaffairsbooks.com/titles/robert-pearl/uncaring/9781541758254'>Uncaring: How the Culture of Medicine Kills Doctors & Patients</a>, is available now. All proceeds from these books go to Doctors Without Borders. </p>
<p class="p5">Dr. Pearl hosts the popular podcasts "<a href='https://www.fixinghealthcarepodcast.com/'>Fixing Healthcare</a>" and <a href='https://www.fixinghealthcarepodcast.com/category/coronavirus/'>Coronavirus: The Truth</a>. He publishes a newsletter with over 12,000 subscribers called <a href='https://robertpearlmd.com/newsletter/'>Monthly Musings on American Healthcare</a> and is a <a href='https://www.forbes.com/sites/robertpearl'>regular contributor to Forbes</a>. He has been featured on CBS This Morning, CNBC, NPR, and in TIME, USA Today and Bloomberg News. </p>
<p class="p1">David Geitner is a third-year Biological Sciences major and Frost Scholar at California Polytechnic State University. He grew up in Yuba City, California, where he learned to love science, sports, community service, and the outdoors. He works in an on-campus research lab studying protein phosphomimetics for protein-protein interactions. David aspires to be a dentist, as quality dental care is a necessity for society. David hopes to go into the military as a dentist and provide a service to his country. Roman Rosser is a student studying Aerospace Engineering at Cal Poly, San Luis Obispo. He recently joined the PROVE team, which is building a long-distance electric car. Roman hopes to work on designing or building new vehicles and has a particular passion for orbital rockets. His hobbies include lifting, backpacking, surfing, and reading.</p>
<p class="p5">A special thank you to David Geitner and Roman Rosser for hosting this week’s episode, and to Dr. Pearl for joining us for the show. We’ll be back next week with another episode of the “22 Lessons in Ethical Technology special series,” so stay tuned! You can find more information about the 22 Lessons series and the Technically Human Podcast on our website, <a href='http://www.etcalpoly.org'>www.etcalpoly.org</a>. </p>
<p class="p5">And don’t forget to subscribe to the show! You can find us on Apple Podcasts, Google Podcasts, Spotify, or wherever you get your podcasts.</p>
<p class="p7"> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I give my mic over to two guest hosts, David Geitner and Roman Rosser, to interview Dr. Robert Pearl about the intersection between tech, medicine, and our health. Dr. Pearl answers questions about the way that technologies are radically reshaping health care; the hosts ask questions about bias in medicine; and the group discusses the ways in which our current system fails to treat us, well, <em>well.</em></p>
<p class="p5">Dr. Robert Pearl is the former CEO of The Permanente Medical Group (1999-2017), the nation’s largest medical group, and former president of The Mid-Atlantic Permanente Medical Group (2009-2017). He serves as a clinical professor of plastic surgery at Stanford University School of Medicine and is on the faculty of the Stanford Graduate School of Business, where he teaches courses on strategy and leadership, and lectures on information technology and health care policy. He is the author of <a href='https://www.amazon.com/Mistreated-Getting-Health-Care-Usually/dp/1610397657'><em>Mistreated: Why We Think We’re Getting Good Healthcare—And Why We’re Usually Wrong</em></a>, a <em>Washington Post</em> bestseller that offers a roadmap for transforming American healthcare. His new book, <a href='https://www.publicaffairsbooks.com/titles/robert-pearl/uncaring/9781541758254'><em>Uncaring: How the Culture of Medicine Kills Doctors & Patients</em></a>, is available now. All proceeds from these books go to Doctors Without Borders. </p>
<p class="p5">Dr. Pearl hosts the popular podcasts "<a href='https://www.fixinghealthcarepodcast.com/'>Fixing Healthcare</a>" and <a href='https://www.fixinghealthcarepodcast.com/category/coronavirus/'>Coronavirus: The Truth</a>. He publishes a newsletter with over 12,000 subscribers called <a href='https://robertpearlmd.com/newsletter/'>Monthly Musings on American Healthcare</a> and is a <a href='https://www.forbes.com/sites/robertpearl'>regular contributor to Forbes</a>. He has been featured on CBS This Morning, CNBC, NPR, and in <em>TIME</em>, <em>USA Today</em> and <em>Bloomberg News</em>. </p>
<p class="p1">David Geitner is a third-year Biological Sciences major and Frost Scholar at California Polytechnic State University. He grew up in Yuba City, California, where he learned to love science, sports, community service, and the outdoors. He works in an on-campus research lab studying protein phosphomimetics for protein-protein interactions. David aspires to be a dentist, as quality dental care is a necessity for society. David hopes to go into the military as a dentist and provide a service to his country. Roman Rosser is a student studying Aerospace Engineering at Cal Poly, San Luis Obispo. He recently joined the PROVE team, which is building a long-distance electric car. Roman hopes to work on designing or building new vehicles and has a particular passion for orbital rockets. His hobbies include lifting, backpacking, surfing, and reading.</p>
<p class="p5">A special thank you to David Geitner and Roman Rosser for hosting this week’s episode, and to Dr. Pearl for joining us for the show. We’ll be back next week with another episode of the “22 Lessons in Ethical Technology special series,” so stay tuned! You can find more information about the 22 Lessons series and the Technically Human Podcast on our website, <a href='http://www.etcalpoly.org'>www.etcalpoly.org</a>. </p>
<p class="p5">And don’t forget to subscribe to the show! You can find us on Apple Podcasts, Google Podcasts, Spotify, or wherever you get your podcasts.</p>
<p class="p7"> </p>
]]></content:encoded>
                                    
        <enclosure length="84987916" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/2sv8vg/Pearl_mixdown_2aecag.mp3"/>
        <itunes:summary>In this episode of ”Technically Human,” I give my mic over to two guest hosts, David Geitner and Roman Rosser, to interview Dr. Robert Pearl about the intersection between tech, medicine, and our health. Dr. Pearl answers questions about the way that technologies are radically reshaping health care; the hosts ask questions about bias in medicine; and the group discusses the ways in which our current system fails to treat us, well, well.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3540</itunes:duration>
        <itunes:season>9</itunes:season>
        <itunes:episode>89</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I give my mic over to two guest hosts, David Geitner and Roman Rosser, to interview Dr. Robert Pearl about the intersection between tech, medicine, and our health. Dr. Pearl answers questions about the way that technologies are radically reshaping health care; the hosts ask questions about bias in medicine; and the group discusses the ways in which our current system fails to treat us, well, well. Dr. Robert Pearl is the former CEO of The Permanente Medical Group (1999-2017), the nation’s largest medical group, and former president of The Mid-Atlantic Permanente Medical Group (2009-2017). He serves as a clinical professor of plastic surgery at Stanford University School of Medicine and is on the faculty of the Stanford Graduate School of Business, where he teaches courses on strategy and leadership, and lectures on information technology and health care policy. He is the author of Mistreated: Why We Think We’re Getting Good Healthcare—And Why We’re Usually Wrong, a Washington Post bestseller that offers a roadmap for transforming American healthcare. His new book, Uncaring: How the Culture of Medicine Kills Doctors &amp; Patients, is available now. All proceeds from these books go to Doctors Without Borders. Dr. Pearl hosts the popular podcasts "Fixing Healthcare" and Coronavirus: The Truth. He publishes a newsletter with over 12,000 subscribers called Monthly Musings on American Healthcare (https://robertpearlmd.com/newsletter/) and is a regular contributor to Forbes. He has been featured on CBS This Morning, CNBC, NPR, and in TIME, USA Today and Bloomberg News. David Geitner is a third-year Biological Sciences major and Frost Scholar at California Polytechnic State University. He grew up in Yuba City, California, where he learned to love science, sports, community service, and the outdoors. 
He works in an on-campus research lab studying protein phosphomimetics for protein-protein interactions. David aspires to be a dentist, as quality dental care is a necessity for society. David hopes to go into the military as a dentist and provide a service to his country. Roman Rosser is a student studying Aerospace Engineering at Cal Poly, San Luis Obispo. He recently joined the PROVE team, which is building a long-distance electric car. Roman hopes to work on designing or building new vehicles and has a particular passion for orbital rockets. His hobbies include lifting, backpacking, surfing, and reading. A special thank you to David Geitner and Roman Rosser for hosting this week’s episode, and to Dr. Pearl for joining us for the show. We’ll be back next week with another episode of the “22 Lessons in Ethical Technology special series,” so stay tuned! You can find more information about the 22 Lessons series and the Technically Human Podcast on our website, www.etcalpoly.org. And don’t forget to subscribe to the show! You can find us on Apple Podcasts, Google Podcasts, Spotify, or wherever you get your podcasts.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The age of privacism</title>
        <itunes:title>The age of privacism</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-age-of-privacism/</link>
                    <comments>https://dmdonig.podbean.com/e/the-age-of-privacism/#comments</comments>        <pubDate>Fri, 21 Oct 2022 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/ecfc1f75-1a27-3c77-b478-9d72ec182291</guid>
                                    <description><![CDATA[<p>In this episode, I sit down with <a href='https://www.imperial.ac.uk/people/m.huth'>Dr. Michael Huth</a> to talk about the ethics of data collection, privacy, and the new age of “privacism.” We talk about his new platform, <a href='https://www.xayn.com/blog/privacism'>Xayn</a>, we discuss what it looks like to build a company based on ethical principles like privacy and user autonomy, and Michael explains why we should care about our privacy online.</p>
<p>Professor Michael Huth is Co-Founder and Chief Research Officer of Xayn. He teaches at Imperial College London, where he is on the faculty of Engineering and serves as Head of the Department of Computing. His research focuses on Cybersecurity, Cryptography, and Mathematical Modeling, as well as security and privacy in Machine Learning, with expertise in trust and policy. He served as the technical lead of the Harnessing Economic Value theme at PETRAS IoT Cybersecurity Research Hub in the UK. He holds associations with the <a href='http://www.imperial.ac.uk/cryptocurrency'>Centre for Cryptocurrency Research and Engineering</a>; the <a href='http://www.imperial.ac.uk/smart-connected-futures'>Centre for Smart Connected Futures</a>; the <a href='http://www.imperial.ac.uk/secure-software-systems'>Engineering Secure Software Systems</a>; the <a href='http://www.imperial.ac.uk/immunopathology-network'>Immuno-Pathology Network</a>; and the <a href='http://wp.doc.ic.ac.uk/quads'>Quantitative Analysis and Decision Science</a> Section. In 2017, he founded the privacy tech company together with Leif-Nissen Lundbæk and Felix Hahmann. Xayn offers a privacy-protecting search engine that enables users to gain back control over algorithms and data harvesting.</p>
<p>Production and research support from Jared Maslin.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I sit down with <a href='https://www.imperial.ac.uk/people/m.huth'>Dr. Michael Huth</a> to talk about the ethics of data collection, privacy, and the new age of “privacism.” We talk about his new platform, <a href='https://www.xayn.com/blog/privacism'>Xayn</a>, we discuss what it looks like to build a company based on ethical principles like privacy and user autonomy, and Michael explains why we should care about our privacy online.</p>
<p>Professor Michael Huth is Co-Founder and Chief Research Officer of Xayn. He teaches at Imperial College London, where he is on the faculty of Engineering and serves as Head of the Department of Computing. His research focuses on Cybersecurity, Cryptography, and Mathematical Modeling, as well as security and privacy in Machine Learning, with expertise in trust and policy. He served as the technical lead of the Harnessing Economic Value theme at PETRAS IoT Cybersecurity Research Hub in the UK. He holds associations with the <a href='http://www.imperial.ac.uk/cryptocurrency'>Centre for Cryptocurrency Research and Engineering</a>; the <a href='http://www.imperial.ac.uk/smart-connected-futures'>Centre for Smart Connected Futures</a>; the <a href='http://www.imperial.ac.uk/secure-software-systems'>Engineering Secure Software Systems</a>; the <a href='http://www.imperial.ac.uk/immunopathology-network'>Immuno-Pathology Network</a>; and the <a href='http://wp.doc.ic.ac.uk/quads'>Quantitative Analysis and Decision Science</a> Section. In 2017, he founded the privacy tech company together with Leif-Nissen Lundbæk and Felix Hahmann. Xayn offers a privacy-protecting search engine that enables users to gain back control over algorithms and data harvesting.</p>
<p>Production and research support from Jared Maslin.</p>
]]></content:encoded>
                                    
        <enclosure length="101773804" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/73aqi7/Michael_Huth_mixdown8b97t.mp3"/>
        <itunes:summary>In this episode, I sit down with Dr. Michael Huth to talk about the ethics of data collection, privacy, and the new age of “privacism.” We talk about his new platform, Xayn,  we discuss what it looks like to build a company based on ethical principles like privacy and user autonomy, and Michael explains why we should care about our privacy online.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4240</itunes:duration>
        <itunes:season>9</itunes:season>
        <itunes:episode>88</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I sit down with Dr. Michael Huth to talk about the ethics of data collection, privacy, and the new age of “privacism.” We talk about his new platform, Xayn, we discuss what it looks like to build a company based on ethical principles like privacy and user autonomy, and Michael explains why we should care about our privacy online. Professor Michael Huth is Co-Founder and Chief Research Officer of Xayn. He teaches at Imperial College London, where he is on the faculty of Engineering and serves as Head of the Department of Computing. His research focuses on Cybersecurity, Cryptography, and Mathematical Modeling, as well as security and privacy in Machine Learning, with expertise in trust and policy. He served as the technical lead of the Harnessing Economic Value theme at PETRAS IoT Cybersecurity Research Hub in the UK. He holds associations with the Centre for Cryptocurrency Research and Engineering; the Centre for Smart Connected Futures; the Engineering Secure Software Systems; the Immuno-Pathology Network; and the Quantitative Analysis and Decision Science Section. In 2017, he founded the privacy tech company together with Leif-Nissen Lundbæk and Felix Hahmann. Xayn offers a privacy-protecting search engine that enables users to gain back control over algorithms and data harvesting. Production and research support from Jared Maslin.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>From Tech to Action: Are our technologies changing our ethics?</title>
        <itunes:title>From Tech to Action: Are our technologies changing our ethics?</itunes:title>
        <link>https://dmdonig.podbean.com/e/technoethics-how-technology-change-our-ethics/</link>
                    <comments>https://dmdonig.podbean.com/e/technoethics-how-technology-change-our-ethics/#comments</comments>        <pubDate>Fri, 14 Oct 2022 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/8b3b6752-fc45-3ca1-9df6-0984f784fa23</guid>
                                    <description><![CDATA[<p>Welcome to our 2nd episode of the "22 Lessons on Ethical Technology" series!</p>
<p>In this episode, I sit down with Dr. Mark Coeckelbergh, one of the world's leading experts on ethics and technology, in particular robotics and artificial intelligence. We talk about the way that technologies are changing our understanding of ethics and philosophical thinking, how technologies have added to and altered philosophical thinking throughout history, how new technologies--particularly robots, AI, cybernetics, and memory devices--are changing the way we think, and how we understand our ethical obligations to the world, and to each other. </p>
<p>Prof. Dr. Mark Coeckelbergh is a Professor of Philosophy of Media and Technology in the Department of Philosophy at the University of Vienna, and until recently Vice Dean of the Faculty of Philosophy and Education. He is also the former President of the Society for Philosophy and Technology (SPT). His expertise focuses on ethics and technology, in particular robotics and artificial intelligence. He is a member of various entities that support policy building in the area of robotics and artificial intelligence, such as the European Commission’s High-Level Expert Group on Artificial Intelligence, the Austrian Council on Robotics and Artificial Intelligence, and the Austrian Advisory Council on Automated Mobility. He is the author of 16 philosophy books and numerous articles, and is involved in several European research projects on robotics.</p>
<p>From 2012-2014, Prof. Coeckelbergh served as the Managing Director of the 3TU Centre for Ethics and Technology, and from 2013-2015, he served as the co‐chair of the Technical Committee ‘Robot Ethics’ of the <a href='http://www.ieee-ras.org/technical-committees'>IEEE Robotics & Automation Society</a>.</p>
<p>He serves on numerous journal advisory boards at the intersection of ethics, society, and technology; he is a fellow of the World Technology Network (WTN) and a finalist of the <a href='http://bit.ly/2B76tzO'>2017 World Technology Awards</a> in the category “Ethics”. His new book, <a href='https://coeckelbergh.wordpress.com/robot-ethics/'>Robot Ethics </a>(MIT Press, 2022) is a landmark guide to the ethical questions that arise from our use of industrial robots, robot companions, self-driving cars, and other robotic devices.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Welcome to our 2nd episode of the "22 Lessons on Ethical Technology" series!</p>
<p>In this episode, I sit down with Dr. Mark Coeckelbergh, one of the world's leading experts on ethics and technology, in particular robotics and artificial intelligence. We talk about the way that technologies are changing our understanding of ethics and philosophical thinking, how technologies have added to and altered philosophical thinking throughout history, how new technologies--particularly robots, AI, cybernetics, and memory devices--are changing the way we think, and how we understand our ethical obligations to the world, and to each other. </p>
<p>Prof. Dr. Mark Coeckelbergh is a Professor of Philosophy of Media and Technology in the Department of Philosophy at the University of Vienna, and until recently Vice Dean of the Faculty of Philosophy and Education. He is also the former President of the Society for Philosophy and Technology (SPT). His expertise focuses on ethics and technology, in particular robotics and artificial intelligence. He is a member of various entities that support policy building in the area of robotics and artificial intelligence, such as the European Commission’s High-Level Expert Group on Artificial Intelligence, the Austrian Council on Robotics and Artificial Intelligence, and the Austrian Advisory Council on Automated Mobility. He is the author of 16 philosophy books and numerous articles, and is involved in several European research projects on robotics.</p>
<p>From 2012-2014, Prof. Coeckelbergh served as the Managing Director of the 3TU Centre for Ethics and Technology, and from 2013-2015, he served as the co‐chair of the Technical Committee ‘Robot Ethics’ of the <a href='http://www.ieee-ras.org/technical-committees'>IEEE Robotics & Automation Society</a>.</p>
<p>He serves on numerous journal advisory boards at the intersection of ethics, society, and technology; he is a fellow of the World Technology Network (WTN) and a finalist of the <a href='http://bit.ly/2B76tzO'>2017 World Technology Awards</a> in the category “Ethics”. His new book, <a href='https://coeckelbergh.wordpress.com/robot-ethics/'>Robot Ethics </a>(MIT Press, 2022) is a landmark guide to the ethical questions that arise from our use of industrial robots, robot companions, self-driving cars, and other robotic devices.</p>
]]></content:encoded>
                                    
        <enclosure length="42229679" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/smph96/22LIET-402-Cokelburg_Mixdown_1_luffed8rmpz.mp3"/>
        <itunes:summary>Welcome to our 2nd episode of the “22 Lessons on Ethical Technology” series!
In this episode, I sit down with Dr. Mark Coeckelbergh, one of the world’s leading experts on ethics and technology, in particular robotics and artificial intelligence. We talk about the way that technologies are changing our understanding of ethics and philosophical thinking, how technologies have added to and altered philosophical thinking throughout history, how new technologies--particularly robots, AI, cybernetics, and memory devices--are changing the way we think, and how we understand our ethical obligations to the world, and to each other.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3519</itunes:duration>
        <itunes:season>9</itunes:season>
        <itunes:episode>87</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Welcome to our 2nd episode of the "22 Lessons on Ethical Technology" series! In this episode, I sit down with Dr. Mark Coeckelbergh, one of the world's leading experts on ethics and technology, in particular robotics and artificial intelligence. We talk about the way that technologies are changing our understanding of ethics and philosophical thinking, how technologies have added to and altered philosophical thinking throughout history, how new technologies--particularly robots, AI, cybernetics, and memory devices--are changing the way we think, and how we understand our ethical obligations to the world, and to each other. Prof. Dr. Mark Coeckelbergh is a Professor of Philosophy of Media and Technology in the Department of Philosophy at the University of Vienna, and until recently Vice Dean of the Faculty of Philosophy and Education. He is also the former President of the Society for Philosophy and Technology (SPT). His expertise focuses on ethics and technology, in particular robotics and artificial intelligence. He is a member of various entities that support policy building in the area of robotics and artificial intelligence, such as the European Commission’s High-Level Expert Group on Artificial Intelligence, the Austrian Council on Robotics and Artificial Intelligence, and the Austrian Advisory Council on Automated Mobility. He is the author of 16 philosophy books and numerous articles, and is involved in several European research projects on robotics. From 2012-2014, Prof. Coeckelbergh served as the Managing Director of the 3TU Centre for Ethics and Technology, and from 2013-2015, he served as the co‐chair of the Technical Committee ‘Robot Ethics’ of the IEEE Robotics &amp; Automation Society. 
He serves on numerous journal advisory boards at the intersection of ethics, society, and technology; he is a fellow of the World Technology Network (WTN) and a finalist of the 2017 World Technology Awards in the category “Ethics”. His new book, Robot Ethics (MIT Press, 2022) is a landmark guide to the ethical questions that arise from our use of industrial robots, robot companions, self-driving cars, and other robotic devices.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Defining ethical technology: Urgent debates, global dilemmas, and key definitions</title>
        <itunes:title>Defining ethical technology: Urgent debates, global dilemmas, and key definitions</itunes:title>
        <link>https://dmdonig.podbean.com/e/defining-ethical-technology-urgent-debates-global-dilemmas-and-key-definitions/</link>
                    <comments>https://dmdonig.podbean.com/e/defining-ethical-technology-urgent-debates-global-dilemmas-and-key-definitions/#comments</comments>        <pubDate>Fri, 07 Oct 2022 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/8e82ead5-d1cb-3e93-87c5-360cbcf8a032</guid>
                                    <description><![CDATA[<p>Welcome to our very first episode of the "22 Lessons on Ethical Technology" series!</p>
<p> </p>
<p>In this episode, I sit down with Dr. Herman Tavani to introduce some of the foundational principles of ethical technology, particularly in computing and digital contexts. We focus on how the current need for an ethics of technology developed, and the key moments that gave rise to current debates about ethics and technology. Professor Tavani introduces listeners to issues and controversies that comprise the relatively new field of digital ethics, or “cyberethics.” We discuss a wide range of ethical issues in digital technologies--from specific issues of moral responsibility that directly affect computer and information technology (IT) professionals to broader social and ethical concerns that affect each of us in our day-to-day lives. We discuss how modern-day controversies created by emerging technologies can be analyzed from the perspective of standard ethical concepts and theories.</p>
<p> </p>
<p>Herman T. Tavani, Ph.D., is Professor Emeritus of Philosophy at Rivier University and currently a visiting scholar (applied ethics) at the Harvard T.H. Chan School of Public Health. He is the author of Ethics and Technology (Wiley), a widely-used textbook that is currently in its fifth edition. His academic publications include six other books and more than 100 articles, reviews, and edited works. He has presented more than 100 invited talks and conference papers at colleges and universities throughout the U.S. and in twelve countries in Europe, Asia, and South America. Prof. Tavani has been active in several professional academic organizations; he served as an executive director and later as President of the International Society for Ethics and Information Technology, and served two terms as President of the Northern New England Philosophical Association. He has been the Book Review Editor of the journal Ethics and Information Technology since 1998.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Welcome to our very first episode of the "22 Lessons on Ethical Technology" series!</p>
<p> </p>
<p>In this episode, I sit down with Dr. Herman Tavani to introduce some of the foundational principles of ethical technology, particularly in computing and digital contexts. We focus on how the current need for an ethics of technology developed, and the key moments that gave rise to current debates about ethics and technology. Professor Tavani introduces listeners to issues and controversies that comprise the relatively new field of digital ethics, or “cyberethics.” We discuss a wide range of ethical issues in digital technologies--from specific issues of moral responsibility that directly affect computer and information technology (IT) professionals to broader social and ethical concerns that affect each of us in our day-to-day lives. We discuss how modern-day controversies created by emerging technologies can be analyzed from the perspective of standard ethical concepts and theories.</p>
<p> </p>
<p>Herman T. Tavani, Ph.D., is Professor Emeritus of Philosophy at Rivier University and currently a visiting scholar (applied ethics) at the Harvard T.H. Chan School of Public Health. He is the author of <em>Ethics and Technology</em> (Wiley), a widely-used textbook that is currently in its fifth edition. His academic publications include six other books and more than 100 articles, reviews, and edited works. He has presented more than 100 invited talks and conference papers at colleges and universities throughout the U.S. and in twelve countries in Europe, Asia, and South America. Prof. Tavani has been active in several professional academic organizations; he served as an executive director and later as President of the International Society for Ethics and Information Technology, and served two terms as President of the Northern New England Philosophical Association. He has been the Book Review Editor of the journal <em>Ethics and Information Technology</em> since 1998.</p>
]]></content:encoded>
                                    
        <enclosure length="44596272" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/hdcd36/22LIET-401-Tavani_Mixdown_2_luffeda6fsz.mp3"/>
        <itunes:summary>Welcome to our very first episode of the “22 Lessons on Ethical Technology” series! In this episode, I sit down with Dr. Herman Tavani to introduce some of the foundational principles of ethical technology, particularly in computing and digital contexts. We focus on how the current need for an ethics of technology developed, and the key moments that gave rise to current debates about ethics and technology. We discuss how modern-day controversies created by emerging technologies can be analyzed from the perspective of standard ethical concepts and theories.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3716</itunes:duration>
        <itunes:season>9</itunes:season>
        <itunes:episode>86</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Welcome to our very first episode of the "22 Lessons on Ethical Technology" series! In this episode, I sit down with Dr. Herman Tavani to introduce some of the foundational principles of ethical technology, particularly in computing and digital contexts. We focus on how the current need for an ethics of technology developed, and the key moments that gave rise to current debates about ethics and technology. Professor Tavani introduces listeners to issues and controversies that comprise the relatively new field of digital ethics, or “cyberethics.” We discuss a wide range of ethical issues in digital technologies--from specific issues of moral responsibility that directly affect computer and information technology (IT) professionals to broader social and ethical concerns that affect each of us in our day-to-day lives. We discuss how modern-day controversies created by emerging technologies can be analyzed from the perspective of standard ethical concepts and theories. Herman T. Tavani, Ph.D., is Professor Emeritus of Philosophy at Rivier University and currently a visiting scholar (applied ethics) at the Harvard T.H. Chan School of Public Health. He is the author of Ethics and Technology (Wiley), a widely-used textbook that is currently in its fifth edition. His academic publications include six other books and more than 100 articles, reviews, and edited works. He has presented more than 100 invited talks and conference papers at colleges and universities throughout the U.S. and in twelve countries in Europe, Asia, and South America. Prof. Tavani has been active in several professional academic organizations; he served as an executive director and later as President of the International Society for Ethics and Information Technology, and served two terms as President of the Northern New England Philosophical Association. He has been the Book Review Editor of the journal Ethics and Information Technology since 1998. </itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Gary Bengier’s Unfettered Journey</title>
        <itunes:title>Gary Bengier’s Unfettered Journey</itunes:title>
        <link>https://dmdonig.podbean.com/e/gary-bengier/</link>
                    <comments>https://dmdonig.podbean.com/e/gary-bengier/#comments</comments>        <pubDate>Fri, 30 Sep 2022 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/a55ef658-0798-3a20-bc01-9eb6f826d86b</guid>
                                    <description><![CDATA[<p>In this episode of “Technically Human,” I interview Gary Bengier, the author of the award-winning science fiction novel, Unfettered Journey. We talk about the relationship between his prior work as a technologist, and his current career as a writer of science fiction; we talk about the relationship between technology, philosophy, and science fiction; and we talk about the possibility of making moral choices in a world governed by deterministic technologies.</p>
<p><a href='https://garyfbengier.com/'>Gary F. Bengier </a>is a writer, philosopher, and technologist. After a career in Silicon Valley, Gary pursued multiple projects animated by his intellectual passions, studying astrophysics and philosophy. He is the author of the award-winning science fiction novel, Unfettered Journey. Before turning to writing speculative fiction, Gary worked in a variety of Silicon Valley tech companies. He was eBay's Chief Financial Officer, and led the company's initial and secondary public offerings. Gary has an MBA from Harvard Business School, and an MA in philosophy from San Francisco State University.</p>
<p>Set in a richly imagined near future, Gary’s novel, <a href='https://garyfbengier.com/unfettered-journey/'>Unfettered Journey </a>is a cross-genre novel combining thrilling action, adventure, and a love story. It traces an epic journey – from inside the human mind to the vastness of space, from AIs battling in the desert to the peace of a mountain refuge. It asks social, spiritual, and philosophical questions that reach into some of the major topics of this show. How do human values interact with technological products? How does ethics—that is to say, what we should do or ought to do—change and respond to our new technological world? And how can science fiction itself transform our vision of who we want to become?</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of “Technically Human,” I interview Gary Bengier, the author of the award-winning science fiction novel, <em>Unfettered Journey</em>. We talk about the relationship between his prior work as a technologist, and his current career as a writer of science fiction; we talk about the relationship between technology, philosophy, and science fiction; and we talk about the possibility of making moral choices in a world governed by deterministic technologies.</p>
<p><a href='https://garyfbengier.com/'>Gary F. Bengier </a>is a writer, philosopher, and technologist. After a career in Silicon Valley, Gary pursued multiple projects animated by his intellectual passions, studying astrophysics and philosophy. He is the author of the award-winning science fiction novel, <em>Unfettered Journey.</em> Before turning to writing speculative fiction, Gary worked in a variety of Silicon Valley tech companies. He was eBay's Chief Financial Officer, and led the company's initial and secondary public offerings. Gary has an MBA from Harvard Business School, and an MA in philosophy from San Francisco State University.</p>
<p>Set in a richly imagined near future, Gary’s novel, <a href='https://garyfbengier.com/unfettered-journey/'><em>Unfettered Journey </em></a>is a cross-genre novel combining thrilling action, adventure, and a love story. It traces an epic journey – from inside the human mind to the vastness of space, from AIs battling in the desert to the peace of a mountain refuge. It asks social, spiritual, and philosophical questions that reach into some of the major topics of this show. How do human values interact with technological products? How does ethics—that is to say, what we <em>should do </em>or <em>ought to do</em>—change and respond to our new technological world? And how can science fiction itself transform our vision of who we want to become?</p>
]]></content:encoded>
                                    
        <enclosure length="87145715" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/kpywy6/Gary_Bengier_Podcast_mixdown8dm2j.mp3"/>
        <itunes:summary>In this episode of “Technically Human,” I interview Gary Bengier, the author of the award-winning science fiction novel, Unfettered Journey. We talk about the relationship between his prior work as a technologist, and his current career as a writer of science fiction; we talk about the relationship between technology, philosophy, and science fiction; and we talk about the possibility of making moral choices in a world governed by deterministic technologies.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3630</itunes:duration>
        <itunes:season>6</itunes:season>
        <itunes:episode>85</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of “Technically Human,” I interview Gary Bengier, the author of the award-winning science fiction novel, Unfettered Journey. We talk about the relationship between his prior work as a technologist, and his current career as a writer of science fiction; we talk about the relationship between technology, philosophy, and science fiction; and we talk about the possibility of making moral choices in a world governed by deterministic technologies. Gary F. Bengier is a writer, philosopher, and technologist. After a career in Silicon Valley, Gary pursued multiple projects animated by his intellectual passions, studying astrophysics and philosophy. He is the author of the award-winning science fiction novel, Unfettered Journey. Before turning to writing speculative fiction, Gary worked in a variety of Silicon Valley tech companies. He was eBay's Chief Financial Officer, and led the company's initial and secondary public offerings. Gary has an MBA from Harvard Business School, and an MA in philosophy from San Francisco State University. Set in a richly imagined near future, Gary’s novel, Unfettered Journey is a cross-genre novel combining thrilling action, adventure, and a love story. It traces an epic journey – from inside the human mind to the vastness of space, from AIs battling in the desert to the peace of a mountain refuge. It asks social, spiritual, and philosophical questions that reach into some of the major topics of this show. How do human values interact with technological products? How does ethics—that is to say, what we should do or ought to do—change and respond to our new technological world? And how can science fiction itself transform our vision of who we want to become?</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Command Code: Ethics, technology, and the debate about free will</title>
        <itunes:title>Command Code: Ethics, technology, and the debate about free will</itunes:title>
        <link>https://dmdonig.podbean.com/e/command-code-ethics-technology-and-the-debate-about-free-will/</link>
                    <comments>https://dmdonig.podbean.com/e/command-code-ethics-technology-and-the-debate-about-free-will/#comments</comments>        <pubDate>Fri, 23 Sep 2022 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/fe3db738-adff-35e7-b60c-6cea5ce88fc0</guid>
                                    <description><![CDATA[<p>Welcome back to a new season of “Technically Human!” To kick off the year, I wanted to start out with a topic that has been coming up for me increasingly as I talk to people in Silicon Valley: free will.</p>
<p> </p>
<p>OK, so I know it might seem a bit odd for a show about ethics and technology to feature what might seem like a purely philosophical concept. But spending time talking to folks in the tech scene, I discovered that the topic of free will comes up quite a lot. I wanted to understand why. The conversations made me wonder what it is about our technological culture—and maybe even our technologies themselves--that has reinvigorated this ancient debate, which extends back into the earliest philosophical traditions, and which is crucial to any concept of ethics.</p>
<p> </p>
<p>In an age of algorithmic predictions, with tech companies and digital technologies that can anticipate and pinpoint our every move, can we still have free will as we know it? What happens to free will when our genetic technologies can plan what we’ll look like, how physically able we will be, and even who we’re likely to become? How free really are our actions when where we decide to eat is influenced by review sites that promote paid sponsors; where how we spend our money is dictated by data giants who tell us what we should like; and where even who we love is determined by algorithms on dating apps? How do we understand freedom of thought, and action, in an age where our biotechnologies not only record, but also predict and prescribe, how thoughts move around in our mind, and how they become actions?</p>
<p> </p>
<p>To understand these questions, I turned to David Lawrence, the author of “Are We Biochemical Robots,” a book he wrote in opposition to Sam Harris’s popular argument against free will, a viewpoint endorsed by many in Silicon Valley. Lawrence, who holds a degree in philosophy from UCLA and a degree in law from USC, is a philosopher, social critic, and philosophical proponent of free will, opposing the determinist views held by many new media personalities. Here’s our conversation.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Welcome back to a new season of “Technically Human!” To kick off the year, I wanted to start out with a topic that has been coming up for me increasingly as I talk to people in Silicon Valley: free will.</p>
<p> </p>
<p>OK, so I know it might seem a bit odd for a show about ethics and technology to feature what might seem like a purely philosophical concept. But spending time talking to folks in the tech scene, I discovered that the topic of free will comes up quite a lot. I wanted to understand why. The conversations made me wonder what it is about our technological culture—and maybe even our technologies themselves--that has reinvigorated this ancient debate, which extends back into the earliest philosophical traditions, and which is crucial to any concept of ethics.</p>
<p> </p>
<p>In an age of algorithmic predictions, with tech companies and digital technologies that can anticipate and pinpoint our every move, can we still have free will as we know it? What happens to free will when our genetic technologies can plan what we’ll look like, how physically able we will be, and even who we’re likely to become? How free really are our actions when where we decide to eat is influenced by review sites that promote paid sponsors; where how we spend our money is dictated by data giants who tell us what we <em>should</em> like; and where even who we love is determined by algorithms on dating apps? How do we understand freedom of thought, and action, in an age where our biotechnologies not only record, but also predict and prescribe, how thoughts move around in our mind, and how they become actions?</p>
<p> </p>
<p>To understand these questions, I turned to David Lawrence, the author of “Are We Biochemical Robots,” a book he wrote in opposition to Sam Harris’s popular argument against free will, a viewpoint endorsed by many in Silicon Valley. Lawrence, who holds a degree in philosophy from UCLA and a degree in law from USC, is a philosopher, social critic, and philosophical proponent of free will, opposing the determinist views held by many new media personalities. Here’s our conversation.</p>
]]></content:encoded>
                                    
        <enclosure length="76815494" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/ijuddp/David_Lawrence_Podcast_mixdown7vulr.mp3"/>
        <itunes:summary>OK, so I know it might seem a bit odd for a show about ethics and technology to feature what might seem like a purely philosophical concept. But spending time talking to folks in the tech scene, I discovered that the topic of free will comes up quite a lot. I wanted to understand why. The conversations made me wonder what it is about our technological culture—and maybe even our technologies themselves--that has reinvigorated this ancient debate, which extends back into the earliest philosophical traditions, and which is crucial to any concept of ethics.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3200</itunes:duration>
        <itunes:season>9</itunes:season>
        <itunes:episode>84</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Welcome back to a new season of “Technically Human!” To kick off the year, I wanted to start out with a topic that has been coming up for me increasingly as I talk to people in Silicon Valley: free will.   OK, so I know it might seem a bit odd for a show about ethics and technology to feature what might seem like a purely philosophical concept. But spending time talking to folks in the tech scene, I discovered that the topic of free will comes up quite a lot. I wanted to understand why. The conversations made me wonder what it is about our technological culture—and maybe even our technologies themselves--that has reinvigorated this ancient debate, which extends back into the earliest philosophical traditions, and which is crucial to any concept of ethics.   In an age of algorithmic predictions, with tech companies and digital technologies that can anticipate and pinpoint our every move, can we still have free will as we know it? What happens to free will when our genetic technologies can plan what we’ll look like, how physically able we will be, and even who we’re likely to become? How free really are our actions when where we decide to eat is influenced by review sites that promote paid sponsors; where how we spend our money is dictated by data giants who tell us what we should like; and where even who we love is determined by algorithms on dating apps? How do we understand freedom of thought, and action, in an age where our biotechnologies not only record, but also predict and prescribe, how thoughts move around in our mind, and how they become actions?   To understand these questions, I turned to David Lawrence, the author of “Are We Biochemical Robots,” a book he wrote in opposition to Sam Harris’s popular argument against free will, a viewpoint endorsed by many in Silicon Valley. 
Lawrence, who holds a degree in philosophy from UCLA and a degree in law from USC, is a philosopher, social critic, and philosophical proponent of free will, opposing the determinist views held by many new media personalities. Here’s our conversation.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>NEW! “22 Lessons on Ethical Technology for the 21st Century” Special Series Trailer</title>
        <itunes:title>NEW! “22 Lessons on Ethical Technology for the 21st Century” Special Series Trailer</itunes:title>
        <link>https://dmdonig.podbean.com/e/new-22-lessons-on-ethical-technology-special-series-trailer/</link>
                    <comments>https://dmdonig.podbean.com/e/new-22-lessons-on-ethical-technology-special-series-trailer/#comments</comments>        <pubDate>Fri, 09 Sep 2022 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/717e3b1a-0fef-30d0-b771-c3526414f772</guid>
                                    <description><![CDATA[<p>Hey Technically Human listeners! We’re very excited to introduce a special series of episodes that we’ll run throughout the year—“22 Lessons on Ethical Technology for the 21st Century.” The series features 22 of the most important thinkers at the intersection of tech, ethics, and human values, from around the world. In the series, I speak with Kate Hayles about how we became posthuman. I sit down with Evelynn Hammonds to talk about race, medicine, science, and technology. Jason Lewis and I talk about indigenous knowledge and technology. And more! Over the series of these 22 interviews, we hope to bring you a panoramic picture of how technology is changing what it means to be human—and how essential features of human society--like art, culture, philosophy, politics, and justice are entangled with tech culture and production. We hope you’ll stay tuned.</p>
<p>In addition to the “22 Lessons” special series, this season, we’re coming back with a ton of exciting new episodes from landmark thinkers and leaders in the industry, with guests like Gary Bengier, who debuted his first science fiction novel after a career in tech, notably as eBay’s Chief Financial Officer; Dr. Robert Pearl, the former CEO of The Permanente Medical Group (Kaiser Permanente), in an episode about tech, medicine, and our health; and Medha Parlikar, the CTO of Casper Labs, for a discussion about the ethics of the blockchain. It’ll be an exciting year. If you want to learn more, or have suggestions, complaints, or ideas, you can <a href='mailto:ddonig@calpoly.edu'>contact us</a>. Be in touch! </p>
<p>And don’t forget to subscribe to the show to make sure you don’t miss an episode! You can find us on your favorite podcast app--<a href='https://podcasts.apple.com/us/podcast/the-technically-human-podcast/id1508661950?i=1000565138067'>Apple Podcasts</a>, <a href='https://www.audible.com/pd/The-Technically-Human-Podcast-Podcast/B08JJNTLF7?action_code=ASSGB149080119000H&shareTest=TestShare&share_location=pdp'>Audible</a>, <a href='https://open.spotify.com/show/7rqXHfZhm68ws1UaiG4xN5?si=0e906856fb4a4806'>Spotify</a>—or wherever you get your podcasts.</p>
<p>The “22 Lessons in Ethical Technology” series is co-sponsored by the National Science Foundation and the Cal Poly Strategic Research Initiative Grant Award. The show is written, hosted, and produced by Deb Donig, with production support from Matthew Harsh and Elise St. John. Our head of research for this series is Sakina Nuruddin. Our editor is Carrie Caulfield Arick.</p>
<p>Rate or review us on <a href='https://podcasts.apple.com/us/podcast/the-technically-human-podcast/id1508661950'>Apple Podcasts</a>, and feel free to contact us with any suggestions, complaints, or ideas. To learn more about the 22 Lessons on Ethical Technology series, visit <a href='http://www.etcalpoly.org'>www.etcalpoly.org</a>. And don’t forget to subscribe to the show so that you don’t miss an episode!</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Hey Technically Human listeners! We’re very excited to introduce a special series of episodes that we’ll run throughout the year—“22 Lessons on Ethical Technology for the 21st Century.” The series features 22 of the most important thinkers at the intersection of tech, ethics, and human values, from around the world. In the series, I speak with Kate Hayles about how we became posthuman. I sit down with Evelynn Hammonds to talk about race, medicine, science, and technology. Jason Lewis and I talk about indigenous knowledge and technology. And more! Over the series of these 22 interviews, we hope to bring you a panoramic picture of how technology is changing what it means to be human—and how essential features of human society, like art, culture, philosophy, politics, and justice, are entangled with tech culture and production. We hope you’ll stay tuned.</p>
<p>In addition to the “22 Lessons” special series, this season, we’re coming back with a ton of exciting new episodes from landmark thinkers and leaders in the industry, with guests like Gary Bengier, who debuted his first science fiction novel after a career in tech, notably as eBay’s Chief Financial Officer; Dr. Robert Pearl, the former CEO of The Permanente Medical Group (Kaiser Permanente), in an episode about tech, medicine, and our health; and Medha Parlikar, the CTO of Casper Labs, for a discussion about the ethics of the blockchain. It’ll be an exciting year. If you want to learn more, or have suggestions, complaints, or ideas, you can <a href='mailto:ddonig@calpoly.edu'>contact us</a>. Be in touch! </p>
<p>And don’t forget to subscribe to the show to make sure you don’t miss an episode! You can find us on your favorite podcast app--<a href='https://podcasts.apple.com/us/podcast/the-technically-human-podcast/id1508661950?i=1000565138067'>Apple Podcasts</a>, <a href='https://www.audible.com/pd/The-Technically-Human-Podcast-Podcast/B08JJNTLF7?action_code=ASSGB149080119000H&shareTest=TestShare&share_location=pdp'>Audible</a>, <a href='https://open.spotify.com/show/7rqXHfZhm68ws1UaiG4xN5?si=0e906856fb4a4806'>Spotify</a>—or wherever you get your podcasts.</p>
<p>The “22 Lessons in Ethical Technology” series is co-sponsored by the National Science Foundation and the Cal Poly Strategic Research Initiative Grant Award. The show is written, hosted, and produced by Deb Donig, with production support from Matthew Harsh and Elise St. John. Our head of research for this series is Sakina Nuruddin. Our editor is Carrie Caulfield Arick.</p>
<p>Rate or review us on <a href='https://podcasts.apple.com/us/podcast/the-technically-human-podcast/id1508661950'>Apple Podcasts</a>, and feel free to contact us with any suggestions, complaints, or ideas. To learn more about the 22 Lessons on Ethical Technology series, visit <a href='http://www.etcalpoly.org'>www.etcalpoly.org</a>. And don’t forget to subscribe to the show so that you don’t miss an episode!</p>
]]></content:encoded>
                                    
        <enclosure length="3259304" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/z962kk/TH-22_Lessons-TRAILER-S4_Mixdown_1_ET71bku.mp3"/>
        <itunes:summary>Hey Technically Human listeners! We’re very excited to introduce a special series of episodes that we’ll run throughout the year—“22 Lessons on Ethical Technology for the 21st Century.” The series features 22 of the most important thinkers at the intersection of tech, ethics, and human values, from around the world. Here’s what’s in store!</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>203</itunes:duration>
        <itunes:season>9</itunes:season>
        <itunes:episode>83</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Hey Technically Human listeners! We’re very excited to introduce a special series of episodes that we’ll run throughout the year—“22 Lessons on Ethical Technology for the 21st Century.” The series features 22 of the most important thinkers at the intersection of tech, ethics, and human values, from around the world. In the series, I speak with Kate Hayles about how we became posthuman. I sit down with Evelynn Hammonds to talk about race, medicine, science, and technology. Jason Lewis and I talk about indigenous knowledge and technology. And more! Over the series of these 22 interviews, we hope to bring you a panoramic picture of how technology is changing what it means to be human—and how essential features of human society, like art, culture, philosophy, politics, and justice, are entangled with tech culture and production. We hope you’ll stay tuned. In addition to the “22 Lessons” special series, this season, we’re coming back with a ton of exciting new episodes from landmark thinkers and leaders in the industry, with guests like Gary Bengier, who debuted his first science fiction novel after a career in tech, notably as eBay’s Chief Financial Officer; Dr. Robert Pearl, the former CEO of The Permanente Medical Group (Kaiser Permanente), in an episode about tech, medicine, and our health; and Medha Parlikar, the CTO of Casper Labs, for a discussion about the ethics of the blockchain. It’ll be an exciting year. If you want to learn more, or have suggestions, complaints, or ideas, you can contact us. Be in touch!  And don’t forget to subscribe to the show to make sure you don’t miss an episode! You can find us on your favorite podcast app--Apple Podcasts, Audible, Spotify—or wherever you get your podcasts. The “22 Lessons in Ethical Technology” series is co-sponsored by the National Science Foundation and the Cal Poly Strategic Research Initiative Grant Award. 
The show is written, hosted, and produced by Deb Donig, with production support from Matthew Harsh and Elise St. John. Our head of research for this series is Sakina Nuruddin. Our editor is Carrie Caulfield Arick. Rate or review us on Apple Podcasts, and feel free to contact us with any suggestions, complaints, or ideas. To learn more about the 22 Lessons on Ethical Technology series, visit www.etcalpoly.org. And don’t forget to subscribe to the show so that you don’t miss an episode!</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Millennial Action Technology: US Senate Candidate Steven Olikara talks tech and political activism for a new generation of leaders **RE-RELEASE**</title>
        <itunes:title>Millennial Action Technology: US Senate Candidate Steven Olikara talks tech and political activism for a new generation of leaders **RE-RELEASE**</itunes:title>
        <link>https://dmdonig.podbean.com/e/millennial-action-technology-us-senate-candidate-steven-olikara-talks-tech-and-political-activism-for-a-new-generation-of-leaders-re-release/</link>
                    <comments>https://dmdonig.podbean.com/e/millennial-action-technology-us-senate-candidate-steven-olikara-talks-tech-and-political-activism-for-a-new-generation-of-leaders-re-release/#comments</comments>        <pubDate>Thu, 30 Jun 2022 06:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/a863a57b-0bc6-360b-a26b-1dc2e0660b12</guid>
                                    <description><![CDATA[<p>**RE-RELEASE**</p>
<p>Ok, ok ok. So I know I said that we weren’t releasing episodes until September. But this week, we learned that one of our previous guests, Steven Olikara, former Millennial Action Project CEO and current candidate for US Senate, just got one step closer to winning his bid to become the Democratic Party’s nominee for the 2022 election. Steven is campaigning in Wisconsin for a critical seat in an election year that will determine which party controls the Senate as a majority for the next two years. The Democratic nominee will face off against current sitting senator, Republican Ron Johnson.</p>
<p>This is not a political podcast. And I’m not hosting this show as a partisan. But “Technically Human” is a podcast about human values, about the pursuit of ethics and equity in our society, and about how we can build a society that better stands to live up to our human values. So many issues at stake for the future of an ethical and equitable world, technological and otherwise, hang in the balance of this election. Steven is the only candidate in this race that I believe will unequivocally protect, support, and enhance these values.</p>
<p>There is only one debate in the Wisconsin Democratic Primary for US Senate and Steven needs 5,000 people to donate by this Thursday in order to qualify for the debate stage. If 5,000 people donate just $1 each by this Thursday, June 30th at 11:59 pm EST, Steven will qualify for the debate and the world will get to hear his message. </p>
<p>To support, please visit <a href='https://secure.actblue.com/donate/olikara_debate'>https://secure.actblue.com/donate/olikara_debate</a>.</p>
<p> </p>
<p>ORIGINAL SHOW NOTES: </p>
<p>In this week's episode, I speak to Steven Olikara, founder of the <a href='https://www.millennialaction.org/'>Millennial Action Project (MAP)</a>, the largest nonpartisan organization of young lawmakers in the U.S. Steven and I discuss the role of tech in political activism and the challenges of bipartisanship in a technological age. </p>
<p>Steven Olikara has been named a Global Shaper by the World Economic Forum, a Forbes 30 Under 30 in Law & Policy, and a Forward Under 40 by the Wisconsin Alumni Association. </p>
<p>JUST IN: This week, Steven announced his decision to form an exploratory committee for the U.S. Senate in Wisconsin, with the goal of running as a candidate in the 2022 election.</p>
<p>To learn more about Steven's campaign and his vision for the Senate, grounded in the ideal of dignity for all, visit <a href='http://www.stevenolikara.com/'>www.stevenolikara.com</a>.</p>
<p>Podcast produced by Matt Perry and Ana Marsh.</p>
<p>Podcast art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>**RE-RELEASE**</p>
<p>Ok, ok ok. So I know I said that we weren’t releasing episodes until September. But this week, we learned that one of our previous guests, Steven Olikara, former Millennial Action Project CEO and current candidate for US Senate, just got one step closer to winning his bid to become the Democratic Party’s nominee for the 2022 election. Steven is campaigning in Wisconsin for a critical seat in an election year that will determine which party controls the Senate as a majority for the next two years. The Democratic nominee will face off against current sitting senator, Republican Ron Johnson.</p>
<p>This is not a political podcast. And I’m not hosting this show as a partisan. But “Technically Human” is a podcast about human values, about the pursuit of ethics and equity in our society, and about how we can build a society that better stands to live up to our human values. So many issues at stake for the future of an ethical and equitable world, technological and otherwise, hang in the balance of this election. Steven is the only candidate in this race that I believe will unequivocally protect, support, and enhance these values.</p>
<p>There is only one debate in the Wisconsin Democratic Primary for US Senate and Steven needs 5,000 people to donate by this Thursday in order to qualify for the debate stage. If 5,000 people donate just $1 each by this Thursday, June 30th at 11:59 pm EST, Steven will qualify for the debate and the world will get to hear his message. </p>
<p>To support, please visit <a href='https://secure.actblue.com/donate/olikara_debate'>https://secure.actblue.com/donate/olikara_debate</a>.</p>
<p> </p>
<p>ORIGINAL SHOW NOTES: </p>
<p>In this week's episode, I speak to Steven Olikara, founder of the <a href='https://www.millennialaction.org/'>Millennial Action Project (MAP)</a>, the largest nonpartisan organization of young lawmakers in the U.S. Steven and I discuss the role of tech in political activism and the challenges of bipartisanship in a technological age. </p>
<p>Steven Olikara has been named a Global Shaper by the World Economic Forum, a Forbes 30 Under 30 in Law & Policy, and a Forward Under 40 by the Wisconsin Alumni Association. </p>
<p>JUST IN: This week, Steven announced his decision to form an exploratory committee for the U.S. Senate in Wisconsin, with the goal of running as a candidate in the 2022 election.</p>
<p>To learn more about Steven's campaign and his vision for the Senate, grounded in the ideal of dignity for all, visit <a href='http://www.stevenolikara.com/'>www.stevenolikara.com</a>.</p>
<p>Podcast produced by Matt Perry and Ana Marsh.</p>
<p>Podcast art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="108483607" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/jxxwsa/Steven_263gkt.mp3"/>
        <itunes:summary>In this week’s episode, I speak to US Senate Candidate Steven Olikara, founder of the Millennial Action Project (MAP), who is currently running for office in Wisconsin to be the next US Senator to occupy the seat held by Republican Ron Johnson. MAP is the largest nonpartisan organization of young lawmakers in the U.S. Steven and I discuss the role of tech in political activism and the challenges of bipartisanship in a technological age. Steven Olikara has been named a Global Shaper by the World Economic Forum, a Forbes 30 Under 30 in Law &amp; Policy, and a Forward Under 40 by the Wisconsin Alumni Association. Learn more about his candidacy and support his senate bid here: https://secure.actblue.com/donate/olikara_debate</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4519</itunes:duration>
        <itunes:season>4</itunes:season>
        <itunes:episode>82</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>**RE-RELEASE** Ok, ok ok. So I know I said that we weren’t releasing episodes until September. But this week, we learned that one of our previous guests, Steven Olikara, former Millennial Action Project CEO and current candidate for US Senate, just got one step closer to winning his bid to become the Democratic Party’s nominee for the 2022 election. Steven is campaigning in Wisconsin for a critical seat in an election year that will determine which party controls the Senate as a majority for the next two years. The Democratic nominee will face off against current sitting senator, Republican Ron Johnson. This is not a political podcast. And I’m not hosting this show as a partisan. But “Technically Human” is a podcast about human values, about the pursuit of ethics and equity in our society, and about how we can build a society that better stands to live up to our human values. So many issues at stake for the future of an ethical and equitable world, technological and otherwise, hang in the balance of this election. Steven is the only candidate in this race that I believe will unequivocally protect, support, and enhance these values. There is only one debate in the Wisconsin Democratic Primary for US Senate and Steven needs 5,000 people to donate by this Thursday in order to qualify for the debate stage. If 5,000 people donate just $1 each by this Thursday, June 30th at 11:59 pm EST, Steven will qualify for the debate and the world will get to hear his message.  To support, please visit https://secure.actblue.com/donate/olikara_debate.   ORIGINAL SHOW NOTES:  In this week's episode, I speak to Steven Olikara, founder of the Millennial Action Project (MAP), the largest nonpartisan organization of young lawmakers in the U.S. Steven and I discuss the role of tech in political activism and the challenges of bipartisanship in a technological age.  
Steven Olikara has been named a Global Shaper by the World Economic Forum, a Forbes 30 Under 30 in Law &amp; Policy, and a Forward Under 40 by the Wisconsin Alumni Association.  JUST IN: This week, Steven announced his decision to form an exploratory committee for the U.S. Senate in Wisconsin, with the goal of running as a candidate in the 2022 election. To learn more about Steven's campaign and his vision for the Senate, grounded in the ideal of dignity for all, visit www.stevenolikara.com.  Podcast produced by Matt Perry and Ana Marsh. Podcast art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Future of the Ethical Technology Workforce</title>
        <itunes:title>The Future of the Ethical Technology Workforce</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-future-of-the-ethical-technology-workforce/</link>
                    <comments>https://dmdonig.podbean.com/e/the-future-of-the-ethical-technology-workforce/#comments</comments>        <pubDate>Fri, 03 Jun 2022 09:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/036e54fd-158d-399e-b3e6-3cad9b8de9ef</guid>
                                    <description><![CDATA[<p>For our last episode of the season, I sit down with <a href='https://rebekahtweed.com/'>Rebekah Tweed</a> to talk about the topic that has animated my research for the past year: The future of what I have been calling the new profession of ethical technology.</p>
<p>As listeners may know, for the past year I have led a team of researchers for the National Science Foundation to explore this new profession, to assess what it means, and to proactively define it in order to ensure that workers in this profession can succeed in these roles, and that they can make the ethical difference they were hired to make.</p>
<p>So I was especially excited to talk to Rebekah Tweed, the creator of the <a href='https://alltechishuman.org/responsible-tech-job-board'>Responsible Tech Job Board</a>, which features roles that are focused on reducing the harms of technology, diversifying the tech pipeline, and ensuring that tech is aligned with the public interest. It’s the first job board of its kind; it attracts both hirers and job seekers who are interested in creating ethical change in tech, and it is already changing the industry and defining the field.</p>
<p>Rebekah Tweed is a leader in Responsible Technology careers, talent, and hiring trends. Alongside her role as curator and creator of the Responsible Tech Job Board, she is the Program Director at <a href='https://alltechishuman.org/'>All Tech is Human</a>, where she heads up various programs including their mentorship program and university ambassadors program. She is also the Co-Chair of the <a href='https://standards.ieee.org/industry-connections/ec/autonomous-systems/'>IEEE Global AI Ethics Initiative Editing Committee</a> and a member of the Arts Committee.</p>
<p>And that’s all for this season! We are off for the summer, but we’ll be back in September with brand new episodes of Technically Human. Until then, check out our archives. Enjoy the summer, and see you in September!</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>For our last episode of the season, I sit down with <a href='https://rebekahtweed.com/'>Rebekah Tweed</a> to talk about the topic that has animated my research for the past year: The future of what I have been calling the new profession of ethical technology.</p>
<p>As listeners may know, for the past year I have led a team of researchers for the National Science Foundation to explore this new profession, to assess what it means, and to proactively define it in order to ensure that workers in this profession can succeed in these roles, and that they can make the ethical difference they were hired to make.</p>
<p>So I was especially excited to talk to Rebekah Tweed, the creator of the <a href='https://alltechishuman.org/responsible-tech-job-board'>Responsible Tech Job Board</a>, which features roles that are focused on reducing the harms of technology, diversifying the tech pipeline, and ensuring that tech is aligned with the public interest. It’s the first job board of its kind; it attracts both hirers and job seekers who are interested in creating ethical change in tech, and it is already changing the industry and defining the field.</p>
<p>Rebekah Tweed is a leader in Responsible Technology careers, talent, and hiring trends. Alongside her role as curator and creator of the Responsible Tech Job Board, she is the Program Director at <a href='https://alltechishuman.org/'>All Tech is Human</a>, where she heads up various programs including their mentorship program and university ambassadors program. She is also the Co-Chair of the <a href='https://standards.ieee.org/industry-connections/ec/autonomous-systems/'>IEEE Global AI Ethics Initiative Editing Committee</a> and a member of the Arts Committee.</p>
<p>And that’s all for this season! We are off for the summer, but we’ll be back in September with brand new episodes of Technically Human. Until then, check out our archives. Enjoy the summer, and see you in September!</p>
]]></content:encoded>
                                    
        <enclosure length="86411170" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/9wqbe1/tweed_2.mp3"/>
        <itunes:summary>For our last episode of the season, I sit down with Rebekah Tweed to talk about the topic that has animated my research for the past year: The future of what I have been calling the new profession of ethical technology. Rebekah Tweed is the creator of the Responsible Tech Job Board, which features roles that are focused on reducing the harms of technology, diversifying the tech pipeline, and ensuring that tech is aligned with the public interest. We talk about the reasons and history behind this growing new profession, what it means for the future of tech, and how workers can leverage their skills into jobs in this sector.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3600</itunes:duration>
        <itunes:season>8</itunes:season>
        <itunes:episode>81</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>For our last episode of the season, I sit down with Rebekah Tweed to talk about the topic that has animated my research for the past year: The future of what I have been calling the new profession of ethical technology. As listeners may know, for the past year I have led a team of researchers for the National Science Foundation to explore this new profession, to assess what it means, and to proactively define it in order to ensure that workers in this profession can succeed in these roles, and that they can make the ethical difference they were hired to make. So I was especially excited to talk to Rebekah Tweed, the creator of the Responsible Tech Job Board, which features roles that are focused on reducing the harms of technology, diversifying the tech pipeline, and ensuring that tech is aligned with the public interest. It’s the first job board of its kind; it attracts both hirers and job seekers who are interested in creating ethical change in tech, and it is already changing the industry and defining the field. Rebekah Tweed is a leader in Responsible Technology careers, talent, and hiring trends. Alongside her role as curator and creator of the Responsible Tech Job Board, she is the Program Director at All Tech is Human, where she heads up various programs including their mentorship program and university ambassadors program. She is also the Co-Chair of the IEEE Global AI Ethics Initiative Editing Committee and a member of the Arts Committee. And that’s all for this season! We are off for the summer, but we’ll be back in September with brand new episodes of Technically Human. Until then, check out our archives. Enjoy the summer, and see you in September!</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Battery Power: Dr. John Cooley on the technology replacing fossil fuels</title>
        <itunes:title>Battery Power: Dr. John Cooley on the technology replacing fossil fuels</itunes:title>
        <link>https://dmdonig.podbean.com/e/battery-power-dr-john-cooley-on-the-technology-replacing-fossil-fuels/</link>
                    <comments>https://dmdonig.podbean.com/e/battery-power-dr-john-cooley-on-the-technology-replacing-fossil-fuels/#comments</comments>        <pubDate>Fri, 27 May 2022 07:47:48 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/092894b3-b685-318d-bf66-d3f3f7a717ad</guid>
                                    <description><![CDATA[<p>In this episode, I talk to Dr. John Cooley, the Founder and Chief of Products of <a href='https://www.nanoramic.com/'>Nanoramic Laboratories</a>, a company reinventing the transportation industry with new battery technologies to replace fossil fuel consumption in our car economy. We talk about the relationship between ethical innovation and financial success, the state of the auto industry's transition to battery power, the controversial ethics of battery technology, and the growth of the climate tech industry and environmental consciousness.</p>
<p>Dr. John Cooley is the founder and Chief of Products and Innovation at <a href='https://www.nanoramic.com/'>Nanoramic Laboratories</a>, a company working to accelerate the adoption and universality of battery-powered transportation. He holds five technical degrees from MIT, including a Ph.D. from the Electrical Engineering department. </p>
<p>Dr. Cooley has been issued several patents, including four for his thesis work. He has presented and published papers in the areas of power converter control and modeling, linearized circuit analysis, capacitive sensing, building energy management, and education. His interests lie in energy-related problems of scale and the ways in which we can impact those with technology and policy.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I talk to Dr. John Cooley, the Founder and Chief of Products of <a href='https://www.nanoramic.com/'>Nanoramic Laboratories</a>, a company reinventing the transportation industry with new battery technologies to replace fossil fuel consumption in our car economy. We talk about the relationship between ethical innovation and financial success, the state of the auto industry's transition to battery power, the controversial ethics of battery technology, and the growth of the climate tech industry and environmental consciousness.</p>
<p>Dr. John Cooley is the founder and Chief of Products and Innovation at <a href='https://www.nanoramic.com/'>Nanoramic Laboratories</a>, a company working to accelerate the adoption and universality of battery-powered transportation. He holds five technical degrees from MIT, including a Ph.D. from the Electrical Engineering department. </p>
<p>Dr. Cooley has been issued several patents, including four for his thesis work. He has presented and published papers in the areas of power converter control and modeling, linearized circuit analysis, capacitive sensing, building energy management, and education. His interests lie in energy-related problems of scale and the ways in which we can impact those with technology and policy.</p>
]]></content:encoded>
                                    
        <enclosure length="86417048" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/yhoax1/Cooley_mixdown.mp3"/>
        <itunes:summary>In this episode, I talk to Dr. John Cooley, CEO of Nanoramic Laboratories, a company reinventing the transportation industry with new battery technologies to replace fossil fuel consumption in our car economy. We talk about the relationship between ethical innovation and financial success, the state of the auto industry’s transition to battery power, the controversial ethics of battery technology, and the growth of the climate tech industry and environmental consciousness.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3600</itunes:duration>
        <itunes:season>8</itunes:season>
        <itunes:episode>80</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I talk to Dr. John Cooley, the Founder and Chief of Products of Nanoramic Laboratories, a company reinventing the transportation industry with new battery technologies to replace fossil fuel consumption in our car economy. We talk about the relationship between ethical innovation and financial success, the state of the auto industry's transition to battery power, the controversial ethics of battery technology, and the growth of the climate tech industry and environmental consciousness. Dr. John Cooley is the founder and Chief of Products and Innovation at Nanoramic Laboratories, a company working to accelerate the adoption and universality of battery-powered transportation. He holds five technical degrees from MIT, including a Ph.D. from the Electrical Engineering department. Dr. Cooley has been issued several patents, including four for his thesis work. He has presented and published papers in the areas of power converter control and modeling, linearized circuit analysis, capacitive sensing, building energy management, and education. His interests lie in energy-related problems of scale and the ways in which we can impact those with technology and policy.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Disconnect: Millennials, media, and mental health</title>
        <itunes:title>Disconnect: Millennials, media, and mental health</itunes:title>
        <link>https://dmdonig.podbean.com/e/disconnect-millennials-media-and-mental-health/</link>
                    <comments>https://dmdonig.podbean.com/e/disconnect-millennials-media-and-mental-health/#comments</comments>        <pubDate>Fri, 20 May 2022 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/c848f2bc-7232-365c-8311-43d29d6bcbf1</guid>
                                    <description><![CDATA[<p>This week, I’ll turn the mic over to two guest hosts for a conversation about mental health and technology with Dr. Elizabeth Barrett, licensed family-marriage counselor, author, and Cal Poly professor. Cal Poly “Technically Human” students Katelyn Travis and Katrina Loye interview Dr. Barrett to discuss the modern implications of digital technologies for family and romantic dynamics.</p>
<p>The episode delves into the complications of recent technology, including social media apps and the shift into virtual education due to Covid-19. In a virtual world, we lose connection and intimacy in the relationships that should be most important to us, and Dr. Barrett helps us brainstorm ways that we can reconnect in our coldly digital world.</p>
<p>Dr. Elizabeth Barrett is a Psychology and Child Development professor at the California Polytechnic State University, San Luis Obispo, where she lectures on the topics of counseling, family psychology, child abuse and neglect, and marriage and family therapy. She is a licensed marriage and family therapist of 20 years and a mental health coach specializing in personal growth, family life, and relationship issues. She has worked with the county of San Luis Obispo as a crisis/in-home counselor for a child abuse prevention program, where she focused on communication’s importance in individual health and the well-being of a family. Her expertise surrounding family psychology and the psychological impact of our evolving society is enhanced through her roles as a wife, mother, grandmother, sister, and daughter. She shares her concerns regarding our collective mental health and the direction of the helping professions on her weekly radio program on Public Radio KCBX, <a href='https://www.kcbx.org/people/elizabeth-barrett'>A Conversation with the Reluctant Therapist.</a></p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This week, I’ll turn the mic over to two guest hosts for a conversation about mental health and technology with Dr. Elizabeth Barrett, licensed family-marriage counselor, author, and Cal Poly professor. Cal Poly “Technically Human” students Katelyn Travis and Katrina Loye interview Dr. Barrett to discuss the modern implications of digital technologies for family and romantic dynamics.</p>
<p>The episode delves into the complications of recent technology, including social media apps and the shift into virtual education due to Covid-19. In a virtual world, we lose connection and intimacy in the relationships that should be most important to us, and Dr. Barrett helps us brainstorm ways that we can reconnect in our coldly digital world.</p>
<p>Dr. Elizabeth Barrett is a Psychology and Child Development professor at the California Polytechnic State University, San Luis Obispo, where she lectures on the topics of counseling, family psychology, child abuse and neglect, and marriage and family therapy. She is a licensed marriage and family therapist of 20 years and a mental health coach specializing in personal growth, family life, and relationship issues. She has worked with the county of San Luis Obispo as a crisis/in-home counselor for a child abuse prevention program, where she focused on communication’s importance in individual health and the well-being of a family. Her expertise surrounding family psychology and the psychological impact of our evolving society is enhanced through her roles as a wife, mother, grandmother, sister, and daughter. She shares her concerns regarding our collective mental health and the direction of the helping professions on her weekly radio program on Public Radio KCBX, <a href='https://www.kcbx.org/people/elizabeth-barrett'><em>A Conversation with the Reluctant Therapist.</em></a></p>
]]></content:encoded>
                                    
        <enclosure length="64816580" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/9ud93v/Barrett_Podcast.mp3"/>
        <itunes:summary>This week, I’ll turn the mic over to two guest hosts for a conversation about mental health and technology with Dr. Elizabeth Barrett, licensed family-marriage counselor, author, and Cal Poly professor. Cal Poly “Technically Human” students Katelyn Travis and Katrina Loye interview Dr. Barrett to discuss the modern implications of digital technologies for family and romantic dynamics.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2700</itunes:duration>
        <itunes:season>6</itunes:season>
        <itunes:episode>79</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>This week, I’ll turn the mic over to two guest hosts for a conversation about mental health and technology with Dr. Elizabeth Barrett, licensed family-marriage counselor, author, and Cal Poly professor. Cal Poly “Technically Human” students Katelyn Travis and Katrina Loye interview Dr. Barrett to discuss the modern implications of digital technologies for family and romantic dynamics. The episode delves into the complications of recent technology, including social media apps and the shift into virtual education due to Covid-19. In a virtual world, we lose connection and intimacy in the relationships that should be most important to us, and Dr. Barrett helps us brainstorm ways that we can reconnect in our coldly digital world. Dr. Elizabeth Barrett is a Psychology and Child Development professor at the California Polytechnic State University, San Luis Obispo, where she lectures on the topics of counseling, family psychology, child abuse and neglect, and marriage and family therapy. She is a licensed marriage and family therapist of 20 years and a mental health coach specializing in personal growth, family life, and relationship issues. She has worked with the county of San Luis Obispo as a crisis/in-home counselor for a child abuse prevention program, where she focused on communication’s importance in individual health and the well-being of a family. Her expertise surrounding family psychology and the psychological impact of our evolving society is enhanced through her roles as a wife, mother, grandmother, sister, and daughter. She shares her concerns regarding our collective mental health and the direction of the helping professions on her weekly radio program on Public Radio KCBX, A Conversation with the Reluctant Therapist.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Clean Meat Revolution</title>
        <itunes:title>The Clean Meat Revolution</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-clean-meat-revolution/</link>
                    <comments>https://dmdonig.podbean.com/e/the-clean-meat-revolution/#comments</comments>        <pubDate>Fri, 13 May 2022 08:10:40 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/aa87bc7a-a259-37d2-bef8-8eed220e32a3</guid>
                                    <description><![CDATA[<p>In this episode, we take a deep dive into the technology of “Clean Meat,” with Paul Shapiro. We talk about the ethics of eating non-human animals, the technological history that led to factory farming and the technology that is allowing human animals to eat meat, in what we might call the "Clean Meat" revolution--a term that nods to the clean energy revolution that has transformed the energy sector.</p>
<p>Paul Shapiro is the author of the national bestseller Clean Meat, published in 2018. When Paul took his first bite of clean meat in 2014, more humans had gone into space than had eaten real meat grown outside an animal. In addition to being among the world’s first clean meat consumers, Paul is a <a href='https://ted.com/search?q=paul%20shapiro'>four-time TEDx speaker</a>, the host of the <a href='http://businessforgoodpodcast.com/'>Business for Good Podcast</a>, the CEO of <a href='http://www.bettermeat.co/'>The Better Meat Co.</a>, and a long-time leader in food sustainability. Paul is a researcher, innovator, industry leader, and public scholar who has <a href='https://www.paul-shapiro.com/media/'>published hundreds of articles</a> in publications ranging from daily newspapers to academic journals. Paul lives in Sacramento, California with his wife <a href='http://www.toniokamoto.com/'>Toni Okamoto</a>, author and founder of <a href='https://plantbasedonabudget.com/'>Plant-Based on a Budget</a>, and their <a href='https://www.theguardian.com/us-news/2020/mar/30/foster-pets-coronavirus-animal-shelters-adoption'>very friendly pit bull Eddie</a>. Clean Meat (2018), his first book, is a Washington Post bestseller and has been translated into seven languages.</p>
<p>You can read more about Paul’s work and contact him at <a href='http://www.paul-shapiro.com/'>www.paul-shapiro.com</a>.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, we take a deep dive into the technology of “Clean Meat,” with Paul Shapiro. We talk about the ethics of eating non-human animals, the technological history that led to factory farming and the technology that is allowing human animals to eat meat, in what we might call the "Clean Meat" revolution--a term that nods to the clean energy revolution that has transformed the energy sector.</p>
<p>Paul Shapiro is the author of the national bestseller <em>Clean Meat</em>, published in 2018. When Paul took his first bite of clean meat in 2014, more humans had gone into space than had eaten real meat grown outside an animal. In addition to being among the world’s first clean meat consumers, Paul is a <a href='https://ted.com/search?q=paul%20shapiro'>four-time TEDx speaker</a>, the host of the <a href='http://businessforgoodpodcast.com/'>Business for Good Podcast</a>, the CEO of <a href='http://www.bettermeat.co/'>The Better Meat Co.</a>, and a long-time leader in food sustainability. Paul is a researcher, innovator, industry leader, and public scholar who has <a href='https://www.paul-shapiro.com/media/'>published hundreds of articles</a> in publications ranging from daily newspapers to academic journals. Paul lives in Sacramento, California with his wife <a href='http://www.toniokamoto.com/'>Toni Okamoto</a>, author and founder of <a href='https://plantbasedonabudget.com/'>Plant-Based on a Budget</a>, and their <a href='https://www.theguardian.com/us-news/2020/mar/30/foster-pets-coronavirus-animal-shelters-adoption'>very friendly pit bull Eddie</a>. <em>Clean Meat</em> (2018), his first book, is a <em>Washington Post</em> bestseller and has been translated into seven languages.</p>
<p>You can read more about Paul’s work and contact him at <a href='http://www.paul-shapiro.com/'>www.paul-shapiro.com</a>.</p>
]]></content:encoded>
                                    
        <enclosure length="1543699496" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/injfyr/Paul_Shapiro_mixdown.mp3"/>
        <itunes:summary>In this episode, we take a deep dive into the technology of “Clean Meat,” with Paul Shapiro. We talk about the ethics of eating non-human animals, the technological history that led to factory farming and the technology that is allowing human animals to eat meat, in what we might call the “Clean Meat” revolution--a nod to the clean energy revolution that has transformed the energy sector.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4020</itunes:duration>
        <itunes:season>8</itunes:season>
        <itunes:episode>78</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, we take a deep dive into the technology of “Clean Meat,” with Paul Shapiro. We talk about the ethics of eating non-human animals, the technological history that led to factory farming and the technology that is allowing human animals to eat meat, in what we might call the "Clean Meat" revolution--a term that nods to the clean energy revolution that has transformed the energy sector. Paul Shapiro is the author of the national bestseller Clean Meat, published in 2018. When Paul took his first bite of clean meat in 2014, more humans had gone into space than had eaten real meat grown outside an animal. In addition to being among the world’s first clean meat consumers, Paul is a four-time TEDx speaker, the host of the Business for Good Podcast, the CEO of The Better Meat Co., and a long-time leader in food sustainability. Paul is a researcher, innovator, industry leader, and public scholar who has published hundreds of articles in publications ranging from daily newspapers to academic journals. Paul lives in Sacramento, California with his wife Toni Okamoto, author and founder of Plant-Based on a Budget, and their very friendly pit bull Eddie. Clean Meat (2018), his first book, is a Washington Post bestseller and has been translated into seven languages. You can read more about Paul’s work and contact him at www.paul-shapiro.com.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Intercode: Part 2</title>
        <itunes:title>Intercode: Part 2</itunes:title>
        <link>https://dmdonig.podbean.com/e/intercode-part-2/</link>
                    <comments>https://dmdonig.podbean.com/e/intercode-part-2/#comments</comments>        <pubDate>Fri, 06 May 2022 04:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/3071b361-d379-399d-96d7-04f24c018090</guid>
                                    <description><![CDATA[<p>This week's episode is the second in a two-part series of Technically Human. Across these two episodes, I speak with six women/nonbinary/trans individuals about their experiences transitioning into the tech industry after leaving established careers. They share their stories about what led them to leave those careers and retrain as technologists through the <a href='https://www.gracehopper.com/?gclid=Cj0KCQjwvLOTBhCJARIsACVldV3C3qSr2qLv5k9eKDvRhq7IlrUNFnriUS9Bx3ICa3zgv6uzvsMTe78aAgjuEALw_wcB&hsa_acc=6473224336&hsa_ad=588189507090&hsa_cam=658335516&hsa_grp=28781697530&hsa_kw=grace%20hopper%20coding%20bootcamp&hsa_mt=e&hsa_net=adwords&hsa_src=g&hsa_tgt=kwd-255536349006&hsa_ver=3&utm_campaign=gh_tech_web_google_sr_all_all_core_brand-EM&utm_medium=ppc&utm_source=adwords&utm_term=grace%20hopper%20coding%20bootcamp'>Grace Hopper Coding Academy</a>, a program specifically targeting women/nonbinary/trans individuals who want to learn how to code so that they can pursue careers in the tech industry. We discuss the challenges that women/nonbinary/trans individuals face when pursuing careers in tech. We talk about what tech represents for those who have been historically excluded from it, and their decision to launch their new collective, "Intercode," a platform that seeks to establish a community for Womyn+ in tech to share their stories and forge new connections.</p>
<p><a href='https://www.intercode.blog'>Intercode</a> is a collective of voices exploring how the intersection of identity and privilege impacts every facet of the tech industry–including access, culture and the ethics governing the space. Through candid conversation and writing, we work to tackle the ways current DEI efforts can still fall short in fostering inclusive and equitable spaces.</p>
<p>The idea for Intercode began with several candid discussions amongst a group of software developers and recent graduates from the Grace Hopper Program, an NY-based bootcamp targeted towards historically underrepresented candidates in the field of engineering. While sharing our perspectives, we quickly realized that these discussions were valuable enough to merit a larger audience and began laying the groundwork to create a formal space for us to share these perspectives with the world.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This week's episode is the second in a two-part series of Technically Human. Across these two episodes, I speak with six women/nonbinary/trans individuals about their experiences transitioning into the tech industry after leaving established careers. They share their stories about what led them to leave those careers and retrain as technologists through the <a href='https://www.gracehopper.com/?gclid=Cj0KCQjwvLOTBhCJARIsACVldV3C3qSr2qLv5k9eKDvRhq7IlrUNFnriUS9Bx3ICa3zgv6uzvsMTe78aAgjuEALw_wcB&hsa_acc=6473224336&hsa_ad=588189507090&hsa_cam=658335516&hsa_grp=28781697530&hsa_kw=grace%20hopper%20coding%20bootcamp&hsa_mt=e&hsa_net=adwords&hsa_src=g&hsa_tgt=kwd-255536349006&hsa_ver=3&utm_campaign=gh_tech_web_google_sr_all_all_core_brand-EM&utm_medium=ppc&utm_source=adwords&utm_term=grace%20hopper%20coding%20bootcamp'>Grace Hopper Coding Academy</a>, a program specifically targeting women/nonbinary/trans individuals who want to learn how to code so that they can pursue careers in the tech industry. We discuss the challenges that women/nonbinary/trans individuals face when pursuing careers in tech. We talk about what tech represents for those who have been historically excluded from it, and their decision to launch their new collective, "Intercode," a platform that seeks to establish a community for Womyn+ in tech to share their stories and forge new connections.</p>
<p><a href='https://www.intercode.blog'>Intercode</a> is a collective of voices exploring how the intersection of identity and privilege impacts every facet of the tech industry–including access, culture and the ethics governing the space. Through candid conversation and writing, we work to tackle the ways current DEI efforts can still fall short in fostering inclusive and equitable spaces.</p>
<p>The idea for Intercode began with several candid discussions amongst a group of software developers and recent graduates from the Grace Hopper Program, an NY-based bootcamp targeted towards historically underrepresented candidates in the field of engineering. While sharing our perspectives, we quickly realized that these discussions were valuable enough to merit a larger audience and began laying the groundwork to create a formal space for us to share these perspectives with the world.</p>
]]></content:encoded>
                                    
        <enclosure length="990760690" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/uq871n/Hopper_Part_2.mp3"/>
        <itunes:summary>This week’s episode is the second in a two-part series of Technically Human. Across these two episodes, I speak with the six women/nonbinary/trans individuals of the new collective “Intercode” about their experiences transitioning into the tech industry after leaving established careers. They share their stories about what led them to leave those careers and retrain as technologists through the Grace Hopper Coding Academy, a program specifically targeting women/nonbinary/trans individuals who want to learn how to code to pursue careers in the tech industry.

Intercode is a collective of voices exploring how the intersection of identity and privilege impacts every facet of the tech industry–including access, culture and the ethics governing the space. Through candid conversation and writing, we work to tackle the ways current DEI efforts can still fall short in fostering inclusive and equitable spaces.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2580</itunes:duration>
        <itunes:season>8</itunes:season>
        <itunes:episode>77</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>This week's episode is the second in a two-part series of Technically Human. Across these two episodes, I speak with six women/nonbinary/trans individuals about their experiences transitioning into the tech industry after leaving established careers. They share their stories about what led them to leave those careers and retrain as technologists through the Grace Hopper Coding Academy, a program specifically targeting women/nonbinary/trans individuals who want to learn how to code so that they can pursue careers in the tech industry. We discuss the challenges that women/nonbinary/trans individuals face when pursuing careers in tech. We talk about what tech represents for those who have been historically excluded from it, and their decision to launch their new collective, "Intercode," a platform that seeks to establish a community for Womyn+ in tech to share their stories and forge new connections. Intercode is a collective of voices exploring how the intersection of identity and privilege impacts every facet of the tech industry–including access, culture and the ethics governing the space. Through candid conversation and writing, we work to tackle the ways current DEI efforts can still fall short in fostering inclusive and equitable spaces. The idea for Intercode began with several candid discussions amongst a group of software developers and recent graduates from the Grace Hopper Program, an NY-based bootcamp targeted towards historically underrepresented candidates in the field of engineering. While sharing our perspectives, we quickly realized that these discussions were valuable enough to merit a larger audience and began laying the groundwork to create a formal space for us to share these perspectives with the world.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Intercode: A panel discussion about gender and transitioning into tech</title>
        <itunes:title>Intercode: A panel discussion about gender and transitioning into tech</itunes:title>
        <link>https://dmdonig.podbean.com/e/intercode-a-panel-discussion-about-gender-and-transitioning-into-tech/</link>
                    <comments>https://dmdonig.podbean.com/e/intercode-a-panel-discussion-about-gender-and-transitioning-into-tech/#comments</comments>        <pubDate>Sat, 30 Apr 2022 10:57:47 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/21399ed1-9b90-3980-9e07-90c4230ea14a</guid>
                                    <description><![CDATA[<p>This week's episode is the first in a two-part series of Technically Human. Over the next two episodes, I speak with six women/nonbinary/trans individuals about their experiences transitioning into the tech industry after leaving established careers. They share their stories about what led them to leave those careers and retrain as technologists through the <a href='https://www.gracehopper.com/?utm_term=grace%20hopper%20coding%20bootcamp&utm_medium=ppc&utm_campaign=gh_tech_web_google_sr_all_all_core_brand-EM&utm_source=adwords&hsa_tgt=kwd-255536349006&hsa_ad=588189507090&hsa_net=adwords&hsa_ver=3&hsa_cam=658335516&hsa_kw=grace%20hopper%20coding%20bootcamp&hsa_acc=6473224336&hsa_mt=e&hsa_src=g&hsa_grp=28781697530&gclid=Cj0KCQjwvLOTBhCJARIsACVldV3C3qSr2qLv5k9eKDvRhq7IlrUNFnriUS9Bx3ICa3zgv6uzvsMTe78aAgjuEALw_wcB'>Grace Hopper Coding Academy</a>, a program specifically targeting women/nonbinary/trans individuals who want to learn how to code so that they can pursue careers in the tech industry. We discuss the challenges that women/nonbinary/trans individuals face when pursuing careers in tech. We talk about what tech represents for those who have been historically excluded from it, and their decision to launch their new collective, "Intercode," a platform that seeks to establish a community for Womyn+ in tech to share their stories and forge new connections.</p>
<p>Serena Chang is a Fullstack software engineer and professional dancer in New York City looking to combine both these passions in her next career. Chang danced one of the lead roles in the off-Broadway performance “Then She Fell” and was preparing to go on an international tour prior to the performance shutdowns due to Covid. Looking for another creative and technical pathway, she became immersed in coding and the endless creative possibilities it offered for interfacing with humans.</p>
<p>Kelsey Roy is a Software Engineer who is seeking to implement socially conscious practices in the tech sphere. She has held previous roles as a Data Analyst and Operations Manager and as a Project Management Consultant. Kelsey is devoted to a career working for mission-driven organizations with diverse and collaborative environments that make a positive difference in the world. Related to her passion for supporting DEI efforts in tech, she is also interested in the ethics surrounding AI, machine learning, and computing in general.</p>
<p>Jazma Foskin is a Fullstack Software Engineer who recently graduated from the Grace Hopper Program at Fullstack Academy. She is an Army Veteran who is passionate about learning, traveling, and growing. Combining technology and creativity has allowed her to work on passion projects that aim to push the Black and LGBTQIA+ community forward. As a Black woman, Jazma continues to provide representation so that others may see themselves in her and understand that they, too, can accomplish their goals no matter their starting point.</p>
<p>Diana Viglucci (they/them) is a full stack developer, community-builder, and lifelong learner. They like writing code that brings people joy, helps them learn something new, or that makes resources more accessible. Diana completed their technical training at the Grace Hopper Program, where they were best known for their Stackathon-winning rat tracker app. Prior to transitioning into tech, they worked in community-based nonprofit programs, supporting individuals and their families as they navigated mental health issues, career changes, and LGBTQ+ identity. A cum laude graduate of Cornell University, Diana finds joy in making art, spending time in nature, and turning off their phone for hours-long stretches. Their work is grounded in person-centered, trauma-informed, and intersectional perspectives - and always will be - because software is for people.</p>
<p>Violet Cutler (She/They) is a trans woman living in Philadelphia. She has been an artist and performer for more than a decade. She has also spent that time organizing DIY events in the queer and trans community and really values community building. She spent the last four years working in a food co-op, where she co-organized a successful union campaign when Covid struck. Despite this success, the dangers of the pandemic drove her to look for another way to support herself. In August of 2021, she quit her job and began studying to get into the Grace Hopper Program at Fullstack Academy. She graduated in April 2022 and looks forward to a career in Tech and Game Development. Her priorities moving forward are accessibility of the web and creating spaces in tech for other marginalized identities.</p>
<p>Jessica Donig (she/her) is a Fullstack software engineer with a background in social entrepreneurship. Prior to attending Grace Hopper, Jessica co-founded a nonprofit, worked as the first employee of a YC-backed startup, and conducted clinical research at Stanford University.  From the time she entered the startup world in 2015, Jessica wanted to learn to code, but the lack of female representation in the field had made her hesitant to do so. Now that she has completed her coursework, Jessica is passionate about helping other nontraditional engineers—especially women—see themselves in tech.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This week's episode is the first in a two-part series of Technically Human. Over the next two episodes, I speak with six women/nonbinary/trans individuals about their experiences transitioning into the tech industry after leaving established careers. They share their stories about what led them to leave those careers and retrain as technologists through the <a href='https://www.gracehopper.com/?utm_term=grace%20hopper%20coding%20bootcamp&utm_medium=ppc&utm_campaign=gh_tech_web_google_sr_all_all_core_brand-EM&utm_source=adwords&hsa_tgt=kwd-255536349006&hsa_ad=588189507090&hsa_net=adwords&hsa_ver=3&hsa_cam=658335516&hsa_kw=grace%20hopper%20coding%20bootcamp&hsa_acc=6473224336&hsa_mt=e&hsa_src=g&hsa_grp=28781697530&gclid=Cj0KCQjwvLOTBhCJARIsACVldV3C3qSr2qLv5k9eKDvRhq7IlrUNFnriUS9Bx3ICa3zgv6uzvsMTe78aAgjuEALw_wcB'>Grace Hopper Coding Academy</a>, a program specifically targeting women/nonbinary/trans individuals who want to learn how to code so that they can pursue careers in the tech industry. We discuss the challenges that women/nonbinary/trans individuals face when pursuing careers in tech. We talk about what tech represents for those who have been historically excluded from it, and their decision to launch their new collective, "Intercode," a platform that seeks to establish a community for Womyn+ in tech to share their stories and forge new connections.</p>
<p>Serena Chang is a Fullstack software engineer and professional dancer in New York City looking to combine both these passions in her next career. Chang danced one of the lead roles in the off-Broadway performance “Then She Fell” and was preparing to go on an international tour prior to the performance shutdowns due to Covid. Looking for another creative and technical pathway, she became immersed in coding and the endless creative possibilities it offered for interfacing with humans.</p>
<p>Kelsey Roy is a Software Engineer who is seeking to implement socially conscious practices in the tech sphere. She has held previous roles as a Data Analyst and Operations Manager and as a Project Management Consultant. Kelsey is devoted to a career working for mission-driven organizations with diverse and collaborative environments that make a positive difference in the world. Related to her passion for supporting DEI efforts in tech, she is also interested in the ethics surrounding AI, machine learning, and computing in general.</p>
<p>Jazma Foskin is a Fullstack Software Engineer who recently graduated from the Grace Hopper Program at Fullstack Academy. She is an Army Veteran who is passionate about learning, traveling, and growing. Combining technology and creativity has allowed her to work on passion projects that aim to push the Black and LGBTQIA+ community forward. As a Black woman, Jazma continues to provide representation so that others may see themselves in her and understand that they, too, can accomplish their goals no matter their starting point.</p>
<p>Diana Viglucci (they/them) is a full stack developer, community-builder, and lifelong learner. They like writing code that brings people joy, helps them learn something new, or that makes resources more accessible. Diana completed their technical training at the Grace Hopper Program, where they were best known for their Stackathon-winning rat tracker app. Prior to transitioning into tech, they worked in community-based nonprofit programs, supporting individuals and their families as they navigated mental health issues, career changes, and LGBTQ+ identity. A cum laude graduate of Cornell University, Diana finds joy in making art, spending time in nature, and turning off their phone for hours-long stretches. Their work is grounded in person-centered, trauma-informed, and intersectional perspectives - and always will be - because software is for people.</p>
<p>Violet Cutler (She/They) is a trans woman living in Philadelphia. She has been an artist and performer for more than a decade. She has also spent that time organizing DIY events in the queer and trans community and deeply values community building. She spent the last 4 years working in a food co-op. She co-organized a successful union campaign when Covid struck. Despite this success, the dangers of the pandemic drove her to look for another way to support herself. In August of 2021, she quit her job and began studying to get into the Grace Hopper Program at Fullstack Academy. She graduated in April 2022 and looks forward to a career in Tech and Game Development. Her priorities moving forward are accessibility of the web and creating spaces in tech for other marginalized identities.</p>
<p>Jessica Donig (she/her) is a Fullstack software engineer with a background in social entrepreneurship. Prior to attending Grace Hopper, Jessica co-founded a nonprofit, worked as the first employee of a YC-backed startup, and conducted clinical research at Stanford University.  From the time she entered the startup world in 2015, Jessica wanted to learn to code, but the lack of female representation in the field had made her hesitant to do so. Now that she has completed her coursework, Jessica is passionate about helping other nontraditional engineers—especially women—see themselves in tech.</p>
]]></content:encoded>
                                    
        <enclosure length="1474593306" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/8pz8kx/Hopper_Final.mp3"/>
        <itunes:summary>This week’s episode is the first of a two-part series of Technically Human. Over the next two episodes, I speak with six women/nonbinary/trans individuals about their experiences transitioning into the tech industry after leaving established careers. They share their stories about what led them to decide to leave their established careers and retrain as technologists through the Grace Hopper Coding Academy. We talk about what tech represents for those who have been historically excluded from it, and their decision to launch their new collective, ”Intercode,” a platform that seeks to establish a community for Womyn+ in tech to share their stories and forge new connections.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>21</itunes:duration>
        <itunes:season>8</itunes:season>
        <itunes:episode>76</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>This week's episode is the first of a two-part series of Technically Human. Over the next two episodes, I speak with six women/nonbinary/trans individuals about their experiences transitioning into the tech industry after leaving established careers. They share their stories about what led them to decide to leave their established careers and retrain as technologists through the Grace Hopper Coding Academy, a program specifically targeting women/nonbinary/trans individuals who want to learn how to code so that they can pursue careers in the tech industry. We discuss the challenges that women/nonbinary/trans individuals face when pursuing careers in tech. We talk about what tech represents for those who have been historically excluded from it, and their decision to launch their new collective, "Intercode," a platform that seeks to establish a community for Womyn+ in tech to share their stories and forge new connections. Serena Chang is a Fullstack software engineer and professional dancer in New York City looking to combine both these passions in her next career. Chang danced one of the lead roles in the Off-Broadway production “Then She Fell” and was preparing to go on an international tour before the Covid performance shutdowns. Looking for another creative and technical pathway, she became immersed in coding and the endless creative possibilities it offered to interface with humans. Kelsey Roy is a Software Engineer who is seeking to implement socially conscious practices in the tech sphere. She has held previous roles as a Data Analyst and Operations Manager and as a Project Management Consultant. Kelsey is devoted to a career working for mission-driven organizations with diverse and collaborative environments that make a positive difference in the world. 
Related to her passion for supporting DEI efforts in tech, she is also interested in the ethics surrounding AI, machine learning, and computing in general. Jazma Foskin is a Fullstack Software Engineer who recently graduated from the Grace Hopper Program at Fullstack Academy. She is an Army Veteran who is passionate about learning, traveling, and growing. Combining technology and creativity has allowed her to work on passion projects that aim to push the Black and LGBTQIA+ community forward. As a Black woman, Jazma continues to be a source of representation so that others may see themselves in her and understand that they, too, can accomplish their goals no matter their starting point. Diana Viglucci (they/them) is a full stack developer, community-builder, and lifelong learner. They like writing code that brings people joy, helps them learn something new, or that makes resources more accessible. Diana completed their technical training at the Grace Hopper Program, where they were best known for their Stackathon-winning rat tracker app. Prior to transitioning into tech, they worked in community-based nonprofit programs, supporting individuals and their families as they navigated mental health issues, career changes, and LGBTQ+ identity. A cum laude graduate of Cornell University, Diana finds joy in making art, spending time in nature, and turning off their phone for hours-long stretches. Their work is grounded in person-centered, trauma-informed, and intersectional perspectives - and always will be - because software is for people. Violet Cutler (She/They) is a trans woman living in Philadelphia. She has been an artist and performer for more than a decade. She has also spent that time organizing DIY events in the queer and trans community and deeply values community building. She spent the last 4 years working in a food co-op. She co-organized a successful union campaign when Covid struck. 
Despite this success, the dangers of the pandemic drove her to look for another way to support herself. In August of 2021, she quit her job and began studying to get into the Grace Hopper Program at Fullstack Academy. She graduated in April 2022 and looks forward to a career in Tech and Game Development. Her priorities moving forward are accessibility of the web and creating spaces in tech for other marginalized identities. Jessica Donig (she/her) is a Fullstack software engineer with a background in social entrepreneurship. Prior to attending Grace Hopper, Jessica co-founded a nonprofit, worked as the first employee of a YC-backed startup, and conducted clinical research at Stanford University.  From the time she entered the startup world in 2015, Jessica wanted to learn to code, but the lack of female representation in the field had made her hesitant to do so. Now that she has completed her coursework, Jessica is passionate about helping other nontraditional engineers—especially women—see themselves in tech.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Technology For Life: Disaster relief and life-saving tech</title>
        <itunes:title>Technology For Life: Disaster relief and life-saving tech</itunes:title>
        <link>https://dmdonig.podbean.com/e/technology-for-life-disaster-relief-and-life-saving-tech/</link>
                    <comments>https://dmdonig.podbean.com/e/technology-for-life-disaster-relief-and-life-saving-tech/#comments</comments>        <pubDate>Fri, 22 Apr 2022 09:08:45 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/aea1e182-3e70-3229-abf5-9949b1a0cb49</guid>
                                    <description><![CDATA[<p>In this episode of “Technically Human” I talk to Dov Maisel, the cofounder of <a href='https://israelrescue.org'>United Hatzalah</a>, an organization that leverages technologies to provide disaster relief around the world when crisis strikes—in Haiti, Florida, Nepal, Israel, and right now, in the devastating war in Ukraine. We talk about United Hatzalah’s ethic of providing free emergency care to all people, regardless of race, religion, ethnicity, or nationality, we talk about how technologies are changing the terrain of disaster relief, and we discuss how existing technologies can be transformed into life-saving ones.</p>
<p>Dov Maisel is an innovator, volunteer, and world-renowned expert in disaster management. In 2006, Dovi helped to cofound United Hatzalah, Israel’s first nationwide, all-volunteer EMS organization. He invented the technology for United Hatzalah’s Uber-like, GPS-based dispatch system, which locates and dispatches the EMT closest to a medical emergency. He led the United Hatzalah international relief missions in Haiti, Nepal, and in both Houston and Florida, USA, after the devastating hurricanes that decimated the communities there, and he managed the United Hatzalah EMS response teams in Mumbai immediately after the shocking terror attacks there in 2008. He serves as the head of International Operations of United Hatzalah, which is currently providing critical care and emergency aid in Ukraine. He continues to save lives as a volunteer, instructor, and mass casualty incident manager. 

United Hatzalah is the largest independent, non-profit, fully volunteer Emergency Medical Service organization providing free emergency medical first response. They have provided critical life-saving care in the wake of devastating disasters in Haiti, Nepal, Houston, Florida, Mumbai, and they are currently on the ground in Ukraine. United Hatzalah’s service is available to all people regardless of race, religion, or national origin. United Hatzalah has more than 6,200 volunteers around the country, available around the clock – 24 hours a day, 7 days a week, 365 days a year.</p>
<p>United Hatzalah is <a href='https://sanfrancisco.cbslocal.com/2022/04/14/jewish-volunteer-group-organizes-ukraine-aid-efforts-seeks-bay-area-help/'>currently providing crucial on the ground aid in Ukraine</a>. To donate to United Hatzalah and to support their work in Ukraine, please visit their website: <a href='https://israelrescue.org/'>https://israelrescue.org/</a></p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of “Technically Human” I talk to Dov Maisel, the cofounder of <a href='https://israelrescue.org'>United Hatzalah</a>, an organization that leverages technologies to provide disaster relief around the world when crisis strikes—in Haiti, Florida, Nepal, Israel, and right now, in the devastating war in Ukraine. We talk about United Hatzalah’s ethic of providing free emergency care to all people, regardless of race, religion, ethnicity, or nationality, we talk about how technologies are changing the terrain of disaster relief, and we discuss how existing technologies can be transformed into life-saving ones.</p>
<p>Dov Maisel is an innovator, volunteer, and world-renowned expert in disaster management. In 2006, Dovi helped to cofound United Hatzalah, Israel’s first nationwide, all-volunteer EMS organization. He invented the technology for United Hatzalah’s Uber-like, GPS-based dispatch system, which locates and dispatches the EMT closest to a medical emergency. He led the United Hatzalah international relief missions in Haiti, Nepal, and in both Houston and Florida, USA, after the devastating hurricanes that decimated the communities there, and he managed the United Hatzalah EMS response teams in Mumbai immediately after the shocking terror attacks there in 2008. He serves as the head of International Operations of United Hatzalah, which is currently providing critical care and emergency aid in Ukraine. He continues to save lives as a volunteer, instructor, and mass casualty incident manager. <br>
<br>
United Hatzalah is the largest independent, non-profit, fully volunteer Emergency Medical Service organization providing free emergency medical first response. They have provided critical life-saving care in the wake of devastating disasters in Haiti, Nepal, Houston, Florida, Mumbai, and they are currently on the ground in Ukraine. United Hatzalah’s service is available to all people regardless of race, religion, or national origin. United Hatzalah has more than 6,200 volunteers around the country, available around the clock – 24 hours a day, 7 days a week, 365 days a year.</p>
<p>United Hatzalah is <a href='https://sanfrancisco.cbslocal.com/2022/04/14/jewish-volunteer-group-organizes-ukraine-aid-efforts-seeks-bay-area-help/'>currently providing crucial on the ground aid in Ukraine</a>. To donate to United Hatzalah and to support their work in Ukraine, please visit their website: <a href='https://israelrescue.org/'>https://israelrescue.org/</a></p>
]]></content:encoded>
                                    
        <enclosure length="72016924" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/bg85di/United_Hatzalaha2cfb.mp3"/>
        <itunes:summary>In this episode of “Technically Human” I talk to Dov Maisel, the cofounder of United Hatzalah, an organization that leverages technologies to provide disaster relief around the world when crisis strikes—in Haiti, Florida, Nepal, Israel, and right now, in the devastating war in Ukraine. We talk about United Hatzalah’s ethic of providing free emergency care to all people, regardless of race, religion, ethnicity, or nationality, we talk about how technologies are changing the terrain of disaster relief, and we discuss how existing technologies can be transformed into life-saving ones.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3000</itunes:duration>
        <itunes:season>8</itunes:season>
        <itunes:episode>75</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of “Technically Human” I talk to Dov Maisel, the cofounder of United Hatzalah, an organization that leverages technologies to provide disaster relief around the world when crisis strikes—in Haiti, Florida, Nepal, Israel, and right now, in the devastating war in Ukraine. We talk about United Hatzalah’s ethic of providing free emergency care to all people, regardless of race, religion, ethnicity, or nationality, we talk about how technologies are changing the terrain of disaster relief, and we discuss how existing technologies can be transformed into life-saving ones. Dov Maisel is an innovator, volunteer, and world-renowned expert in disaster management. In 2006, Dovi helped to cofound United Hatzalah, Israel’s first nationwide, all-volunteer EMS organization. He invented the technology for United Hatzalah’s Uber-like, GPS-based dispatch system, which locates and dispatches the EMT closest to a medical emergency. He led the United Hatzalah international relief missions in Haiti, Nepal, and in both Houston and Florida, USA, after the devastating hurricanes that decimated the communities there, and he managed the United Hatzalah EMS response teams in Mumbai immediately after the shocking terror attacks there in 2008. He serves as the head of International Operations of United Hatzalah, which is currently providing critical care and emergency aid in Ukraine. He continues to save lives as a volunteer, instructor, and mass casualty incident manager. United Hatzalah is the largest independent, non-profit, fully volunteer Emergency Medical Service organization providing free emergency medical first response. They have provided critical life-saving care in the wake of devastating disasters in Haiti, Nepal, Houston, Florida, Mumbai, and they are currently on the ground in Ukraine. 
United Hatzalah’s service is available to all people regardless of race, religion, or national origin. United Hatzalah has more than 6,200 volunteers around the country, available around the clock – 24 hours a day, 7 days a week, 365 days a year. United Hatzalah is currently providing crucial on the ground aid in Ukraine. To donate to United Hatzalah and to support their work in Ukraine, please visit their website: https://israelrescue.org/</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>AI for the Developing World</title>
        <itunes:title>AI for the Developing World</itunes:title>
        <link>https://dmdonig.podbean.com/e/ai-for-the-developing-world/</link>
                    <comments>https://dmdonig.podbean.com/e/ai-for-the-developing-world/#comments</comments>        <pubDate>Fri, 15 Apr 2022 05:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/d761ff22-e4c8-36dd-9f64-5e67eb4f1c20</guid>
                                    <description><![CDATA[<p>In this episode, I interview Prateek Joshi, Founder and CEO of Plutoshift. We talk about the importance of local and cultural knowledge in a global tech economy, the ethical obligations of technological producers in the West to technological development in developing countries, and how AI is transforming the landscape of the developing world.</p>
<p>Prateek Joshi is the Founder and CEO of <a href='https://plutoshift.com/about'>Plutoshift</a>, a company that leverages AI to create sustainable, life-saving technologies that help meet basic needs in developing countries. He is the author of <a href='http://www.prateekj.com/books.html'>13 books</a> on ML, including a <a href='https://goo.gl/ddt3HF'>#1 Best Seller</a>, and the host of the <a href='https://www.prateekj.com/podcast.html'>Infinite Machine Learning</a> podcast. He has been featured in <a href='https://www.forbes.com/pictures/5a0208cfa7ea436b47b50119/prateek-joshi-29/#31ef93a15b9f'>Forbes</a>, <a href='https://fortune.com/2020/01/28/a-i-is-unstoppable-and-a-i-is-struggling/'>Fortune</a>, <a href='https://www.prateekj.com/uploads/2/4/3/4/24348556/pic.png'>CNBC</a>, <a href='https://techcrunch.com/2017/04/04/pluto-ai-raises-2-1-million-to-bring-intelligence-to-water-treatment/'>TechCrunch</a>, and <a href='https://www.bloomberg.com/press-releases/2019-09-19/plutoshift-secures-series-a-funding-to-introduce-process-performance-monitoring-to-manufacturing'>Bloomberg</a>, and he publishes a tech <a href='http://prateekvjoshi.com/'>blog</a> with readership in 200+ countries.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I interview Prateek Joshi, Founder and CEO of Plutoshift. We talk about the importance of local and cultural knowledge in a global tech economy, the ethical obligations of technological producers in the West to technological development in developing countries, and how AI is transforming the landscape of the developing world.</p>
<p>Prateek Joshi is the Founder and CEO of <a href='https://plutoshift.com/about'>Plutoshift</a>, a company that leverages AI to create sustainable, life-saving technologies that help meet basic needs in developing countries. He is the author of <a href='http://www.prateekj.com/books.html'>13 books</a> on ML, including a <a href='https://goo.gl/ddt3HF'>#1 Best Seller</a>, and the host of the <a href='https://www.prateekj.com/podcast.html'>Infinite Machine Learning</a> podcast. He has been featured in <a href='https://www.forbes.com/pictures/5a0208cfa7ea436b47b50119/prateek-joshi-29/#31ef93a15b9f'>Forbes</a>, <a href='https://fortune.com/2020/01/28/a-i-is-unstoppable-and-a-i-is-struggling/'>Fortune</a>, <a href='https://www.prateekj.com/uploads/2/4/3/4/24348556/pic.png'>CNBC</a>, <a href='https://techcrunch.com/2017/04/04/pluto-ai-raises-2-1-million-to-bring-intelligence-to-water-treatment/'>TechCrunch</a>, and <a href='https://www.bloomberg.com/press-releases/2019-09-19/plutoshift-secures-series-a-funding-to-introduce-process-performance-monitoring-to-manufacturing'>Bloomberg</a>, and he publishes a tech <a href='http://prateekvjoshi.com/'>blog</a> with readership in 200+ countries.</p>
]]></content:encoded>
                                    
        <enclosure length="74058830" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/qzzvsb/Prateek_Joshi_Episode_mixdowna7wmx.mp3"/>
        <itunes:summary>In this episode, I interview Prateek Joshi, Founder and CEO of Plutoshift. We talk about the importance of local and cultural knowledge in a global tech economy, the ethical obligations of technological producers in the West to technological development in developing countries, and how AI is transforming the landscape of the developing world.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3085</itunes:duration>
        <itunes:season>7</itunes:season>
        <itunes:episode>74</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I interview Prateek Joshi, Founder and CEO of Plutoshift. We talk about the importance of local and cultural knowledge in a global tech economy, the ethical obligations of technological producers in the West to technological development in developing countries, and how AI is transforming the landscape of the developing world. Prateek Joshi is the Founder and CEO of Plutoshift, a company that leverages AI to create sustainable, life-saving technologies that help meet basic needs in developing countries. He is the author of 13 books on ML, including a #1 Best Seller, and the host of the Infinite Machine Learning podcast. He has been featured in Forbes, Fortune, CNBC, TechCrunch, and Bloomberg, and he publishes a tech blog with readership in 200+ countries. </itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Opportunity Trap: tech’s visa problem</title>
        <itunes:title>The Opportunity Trap: tech’s visa problem</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-opportunity-trap-tech-s-visa-problem/</link>
                    <comments>https://dmdonig.podbean.com/e/the-opportunity-trap-tech-s-visa-problem/#comments</comments>        <pubDate>Fri, 08 Apr 2022 09:33:43 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/3ec29f82-0270-3974-b91a-3826186de968</guid>
                                    <description><![CDATA[<p>In this episode, Dr. Pallavi Banerjee joins me to talk about her new book, <a href='https://href.li/?https://nyupress.org/9781479852918/the-opportunity-trap/?fbclid=IwAR33BQSQsDyMSY0WYjWOkcWp1B4DUcswF6cO8YMuAGQnpj6tVv-b_BiXlUQ'>The Opportunity Trap: High-Skilled Workers, Indian Families and the Failures of Dependent-Visa Program.</a> We talk about the role of immigrants in American tech culture, the challenges that immigrants coming to the U.S. to work face in the immigration process, and the need to think about what "tech" is, beyond just our technological products.</p>
<p>Dr. Pallavi Banerjee is a <a href='https://href.li/?https://soci.ucalgary.ca/manageprofile/profiles/pallavi-banerjee'>Professor of Sociology at the University of Calgary</a>. Her research interests lie at the intersections of sociology of families, immigration, labour, gender, transnationalism and critical feminist theories. Her new book looks at the experience of Indian immigrants coming to the U.S. to work in the tech sector through the American visa program, and the problems with an immigration system that offers opportunities for immigrants, while often simultaneously wreaking havoc on their lives and families.</p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, Dr. Pallavi Banerjee joins me to talk about her new book, <a href='https://href.li/?https://nyupress.org/9781479852918/the-opportunity-trap/?fbclid=IwAR33BQSQsDyMSY0WYjWOkcWp1B4DUcswF6cO8YMuAGQnpj6tVv-b_BiXlUQ'>The Opportunity Trap: High-Skilled Workers, Indian Families and the Failures of Dependent-Visa Program.</a> We talk about the role of immigrants in American tech culture, the challenges that immigrants coming to the U.S. to work face in the immigration process, and the need to think about what "tech" is, beyond just our technological products.</p>
<p>Dr. Pallavi Banerjee is a <a href='https://href.li/?https://soci.ucalgary.ca/manageprofile/profiles/pallavi-banerjee'>Professor of Sociology at the University of Calgary</a>. Her research interests lie at the intersections of sociology of families, immigration, labour, gender, transnationalism and critical feminist theories. Her new book looks at the experience of Indian immigrants coming to the U.S. to work in the tech sector through the American visa program, and the problems with an immigration system that offers opportunities for immigrants, while often simultaneously wreaking havoc on their lives and families.</p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure length="65598462" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/s52dyy/Pallavi_Banerjee_Episodea8u7y.mp3"/>
        <itunes:summary>In this episode, Dr. Pallavi Banerjee joins me to talk about her new book, The Opportunity Trap: High-Skilled Workers, Indian Families and the Failures of Dependent-Visa Program. We talk about the role of immigrants in American tech culture, the challenges that immigrants coming to the U.S. to work face in the immigration process, and the need to think about what ”tech” is, beyond just our technological products.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2732</itunes:duration>
        <itunes:season>7</itunes:season>
        <itunes:episode>73</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, Dr. Pallavi Banerjee joins me to talk about her new book, The Opportunity Trap: High-Skilled Workers, Indian Families and the Failures of Dependent-Visa Program. We talk about the role of immigrants in American tech culture, the challenges that immigrants coming to the U.S. to work face in the immigration process, and the need to think about what "tech" is, beyond just our technological products. Dr. Pallavi Banerjee is a Professor of Sociology at the University of Calgary. Her research interests lie at the intersections of sociology of families, immigration, labour, gender, transnationalism and critical feminist theories. Her new book looks at the experience of Indian immigrants coming to the U.S. to work in the tech sector through the American visa program, and the problems with an immigration system that offers opportunities for immigrants, while often simultaneously wreaking havoc on their lives and families.  </itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Market Values: Dr. Steven Kelts on corporate ethics in the tech industry</title>
        <itunes:title>Market Values: Dr. Steven Kelts on corporate ethics in the tech industry</itunes:title>
        <link>https://dmdonig.podbean.com/e/market-values-dr-steven-kelts-on-corporate-ethics-in-the-tech-industry/</link>
                    <comments>https://dmdonig.podbean.com/e/market-values-dr-steven-kelts-on-corporate-ethics-in-the-tech-industry/#comments</comments>        <pubDate>Fri, 01 Apr 2022 10:34:46 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/9f4704ca-2153-30d2-b467-d4541cca9770</guid>
                                    <description><![CDATA[<p>We are back, with another season of “Technically Human.”</p>
<p>For our first episode of the season, we're bringing you a conversation with <a href='https://www.google.com/search?q=steven+kelts+princeton&sxsrf=APq-WBuO3IuuMGpwxQzmJJTh3m3HHVCzjQ%3A1648618728323&ei=6OxDYt-0E9uUwbkPiZOMYA&oq=steven+kelts+s&gs_lcp=Cgdnd3Mtd2l6EAMYADIECCMQJzIECCMQJzIECCMQJzIFCAAQkQIyBggAEBYQHjIGCAAQFhAeMgUIABCGAzIFCAAQhgMyBQgAEIYDOgcIIxCwAxAnOgcIABBHELADOgcIABCwAxBDOgoIABDkAhCwAxgBOg8ILhDUAhDIAxCwAxBDGAI6DAguEMgDELADEEMYAjoKCC4QsQMQ1AIQQzoICC4QgAQQsQM6BAguEEM6CgguELEDEIMBEEM6DQguEIAEEIcCELEDEBQ6BQgAEIAEOggIABAWEAoQHkoECEEYAEoECEYYAVA5WPUGYMYQaAFwAXgAgAGmBIgB2g6SAQczLTEuMS4ymAEAoAEByAETwAEB2gEGCAEQARgJ2gEGCAIQARgI&sclient=gws-wiz'>Dr. Steven Kelts</a>. We talk about corporate ethics, we debate the role of values in tech culture, and Steven plays "optimistic cop" to my "cynical cop," to argue that he's hopeful for, and excited about, the future of ethics in tech culture.</p>
<p>Steven Kelts is a political theorist and long-time ethics educator, and a Lecturer at Princeton University, in the Politics Department and at the University Center for Human Values. </p>
<p>His current research is on the history and uses of market ideas, including theories of the firm and corporate organization. In addition to ongoing writing projects, Dr. Kelts consults in the private sector with companies looking to align their market value with their ethical values, working to develop frameworks to help employees navigate ethical pitfalls in their organizational culture.</p>
<p>This episode was produced by Deb Donig and Sakina Nuruddin.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>We are back, with another season of “Technically Human.”</p>
<p>For our first episode of the season, we're bringing you a conversation with <a href='https://www.google.com/search?q=steven+kelts+princeton&sxsrf=APq-WBuO3IuuMGpwxQzmJJTh3m3HHVCzjQ%3A1648618728323&ei=6OxDYt-0E9uUwbkPiZOMYA&oq=steven+kelts+s&gs_lcp=Cgdnd3Mtd2l6EAMYADIECCMQJzIECCMQJzIECCMQJzIFCAAQkQIyBggAEBYQHjIGCAAQFhAeMgUIABCGAzIFCAAQhgMyBQgAEIYDOgcIIxCwAxAnOgcIABBHELADOgcIABCwAxBDOgoIABDkAhCwAxgBOg8ILhDUAhDIAxCwAxBDGAI6DAguEMgDELADEEMYAjoKCC4QsQMQ1AIQQzoICC4QgAQQsQM6BAguEEM6CgguELEDEIMBEEM6DQguEIAEEIcCELEDEBQ6BQgAEIAEOggIABAWEAoQHkoECEEYAEoECEYYAVA5WPUGYMYQaAFwAXgAgAGmBIgB2g6SAQczLTEuMS4ymAEAoAEByAETwAEB2gEGCAEQARgJ2gEGCAIQARgI&sclient=gws-wiz'>Dr. Steven Kelts</a>. We talk about corporate ethics, we debate the role of values in tech culture, and Steven plays "optimistic cop" to my "cynical cop," to argue that he's hopeful for, and excited about, the future of ethics in tech culture.</p>
<p>Steven Kelts is a political theorist and long-time ethics educator, and a Lecturer at Princeton University, in the Politics Department and at the University Center for Human Values. </p>
<p>His current research is on the history and uses of market ideas, including theories of the firm and corporate organization. In addition to ongoing writing projects, Dr. Kelts consults in the private sector with companies looking to align their market value with their ethical values, working to develop frameworks to help employees navigate ethical pitfalls in their organizational culture.</p>
<p>This episode was produced by Deb Donig and Sakina Nuruddin.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="89002580" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/7j7ad6/Steven_Kelt_20220328T170036695928_mixdownbrss3.mp3"/>
        <itunes:summary>We are back, with another season of “Technically Human.”

For our first episode of the season, we’re bringing you a conversation with Dr. Steven Kelts. We talk about corporate ethics, we debate the role of values in tech culture, and Steven plays “optimistic cop” to my “cynical cop,” to argue that he’s hopeful for, and excited about, the future of ethics in tech culture.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3706</itunes:duration>
        <itunes:season>8</itunes:season>
        <itunes:episode>72</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>We are back, with another season of “Technically Human.” For our first episode of the season, we're bringing you a conversation with Dr. Steven Kelts. We talk about corporate ethics, we debate the role of values in tech culture, and Steven plays "optimistic cop" to my "cynical cop," to argue that he's hopeful for, and excited about, the future of ethics in tech culture. Steven Kelts is a political theorist and long-time ethics educator, and a Lecturer at Princeton University, in the Politics Department and at the University Center for Human Values.  His current research is on the history and uses of market ideas, including theories of the firm and corporate organization. In addition to ongoing writing projects, Dr. Kelts consults in the private sector with companies looking to align their market value with their ethical values, working to develop frameworks to help employees navigate ethical pitfalls in their organizational culture. This episode was produced by Deb Donig and Sakina Nuruddin. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Body Snatchers: Manjula Padmanabhan discusses the drama of technology and the black market of organ harvesting</title>
        <itunes:title>Body Snatchers: Manjula Padmanabhan discusses the drama of technology and the black market of organ harvesting</itunes:title>
        <link>https://dmdonig.podbean.com/e/body-snatchers-manjula-padmanabhan-discusses-the-drama-of-technology-and-the-black-market-of-organ-harvesting/</link>
                    <comments>https://dmdonig.podbean.com/e/body-snatchers-manjula-padmanabhan-discusses-the-drama-of-technology-and-the-black-market-of-organ-harvesting/#comments</comments>        <pubDate>Fri, 11 Mar 2022 14:43:48 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/1feb1ae2-6565-3fc1-ac60-403082182da5</guid>
                                    <description><![CDATA[<p>Today’s episode is the final episode of our season. The episode features a very special conversation, one that I have wanted to have since I started the show two years ago. In the episode, I sit down with Manjula Padmanabhan. We talk about her play, <a href='https://magnoliana.com/portfolio-item/harvest/'>Harvest</a>, and the connection between market demand in the West and body supply in the global South, and we discuss the relationship between organ donation, as a technology, and human rights, as a philosophy. And Manjula explains why science fiction matters for our ability to understand, and to create, what it means to be human.</p>
<p><a href='https://magnoliana.com/'>Manjula Padmanabhan</a> is an author, playwright, artist and cartoonist. She grew up in Europe and South Asia, returning to India as a teenager. Her play Harvest won the <a href='https://www.onassis.org/'>Onassis Award</a> for Theatre in Greece in 1997. Her books include <a href='https://magnoliana.com/portfolio-item/getting-there/'>Getting There</a>, <a href='https://magnoliana.com/portfolio-item/escape/'>Escape</a>, and <a href='https://magnoliana.com/portfolio-item/the-island-of-lost-girls/'>The Island of Lost Girls</a>. She has illustrated over twenty children’s books including <a href='https://magnoliana.com/portfolio-item/i-am-different/'>I Am Different</a> and <a href='https://magnoliana.com/portfolio-item/shrinking-vanita/'>Shrinking Vanita</a>. She lives in the US, with a home in New Delhi.</p>
<p>This episode concludes the 7th season of "Technically Human." We’ll be back at the beginning of April, with more episodes of the show. </p>
<p>One important note: our producer, Matt Perry, who has been with the show since its early days, is moving on to pursue some dreams. Matt's work, his brilliance, and his vision have helped to build the show into what it is today. Thank you, Matt!</p>
<p>To our listeners, thanks for listening, and we will see you in April with more episodes of Technically Human.</p>
<p>This episode was produced by Matt Perry and Sakina Nuruddin.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Today’s episode is the final episode of our season. The episode features a very special conversation, one that I have wanted to have since I started the show two years ago. In the episode, I sit down with Manjula Padmanabhan. We talk about her play, <a href='https://magnoliana.com/portfolio-item/harvest/'><em>Harvest</em></a>, and the connection between market demand in the West and body supply in the global South, and we discuss the relationship between organ donation, as a technology, and human rights, as a philosophy. And Manjula explains why science fiction matters for our ability to understand, and to create, what it means to be human.</p>
<p><a href='https://magnoliana.com/'>Manjula Padmanabhan</a> is an author, playwright, artist and cartoonist. She grew up in Europe and South Asia, returning to India as a teenager. Her play <em>Harvest</em> won the <a href='https://www.onassis.org/'>Onassis Award</a> for Theatre in Greece in 1997. Her books include <em><a href='https://magnoliana.com/portfolio-item/getting-there/'>Getting There</a></em>, <em><a href='https://magnoliana.com/portfolio-item/escape/'>Escape</a></em>, and <em><a href='https://magnoliana.com/portfolio-item/the-island-of-lost-girls/'>The Island of Lost Girls</a></em>. She has illustrated over twenty children’s books including <em><a href='https://magnoliana.com/portfolio-item/i-am-different/'>I Am Different</a></em> and <em><a href='https://magnoliana.com/portfolio-item/shrinking-vanita/'>Shrinking Vanita</a></em>. She lives in the US, with a home in New Delhi.</p>
<p>This episode concludes the 7th season of "Technically Human." We’ll be back at the beginning of April, with more episodes of the show. </p>
<p>One important note: our producer, Matt Perry, who has been with the show since its early days, is moving on to pursue some dreams. Matt's work, his brilliance, and his vision have helped to build the show into what it is today. Thank you, Matt!</p>
<p>To our listeners, thanks for listening, and we will see you in April with more episodes of Technically Human.</p>
<p>This episode was produced by Matt Perry and Sakina Nuruddin.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="100816046" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/38v88a/Manjula_Padmanabhan_Podcast_mixdownagp3u.mp3"/>
        <itunes:summary>Today’s episode is the final episode of our season. The episode features a very special conversation, one that I have wanted to have since I started the show two years ago. In the episode, I sit down with Manjula Padmanabhan. We talk about her play, Harvest, and the connection between market demand in the West and body supply in the global South, and we discuss the relationship between organ donation, as a technology, and human rights, as a philosophy. And Manjula explains why science fiction matters for our ability to understand, and to create, what it means to be human.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4200</itunes:duration>
        <itunes:season>7</itunes:season>
        <itunes:episode>71</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Today’s episode is the final episode of our season. The episode features a very special conversation, one that I have wanted to have since I started the show two years ago. In the episode, I sit down with Manjula Padmanabhan. We talk about her play, Harvest, and the connection between market demand in the West and body supply in the global South, and we discuss the relationship between organ donation, as a technology, and human rights, as a philosophy. And Manjula explains why science fiction matters for our ability to understand, and to create, what it means to be human. Manjula Padmanabhan is an author, playwright, artist and cartoonist. She grew up in Europe and South Asia, returning to India as a teenager. Her play Harvest won the Onassis Award for Theatre, in 1997, in Greece. Her books include Getting There, Escape, and The Island of Lost Girls. She has illustrated over twenty children’s books including I Am Different and Shrinking Vanita. She lives in the US, with a home in New Delhi. This episode concludes the 7th season of "Technically Human." We’ll be back at the beginning of April, with more episodes of the show.  One important note: our producer, Matt Perry, who has been with the show since its early days, is moving on to pursue some dreams. Matt's work, his brilliance, and his vision has helped to build the show to what it is today. Thank you, Matt!  To our listeners, thanks for listening, and we will see you in April with more episodes of Technically Human. This episode was produced by Matt Perry and Sakina Nuruddin. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Word Processing: how tech transforms translation</title>
        <itunes:title>Word Processing: how tech transforms translation</itunes:title>
        <link>https://dmdonig.podbean.com/e/word-processing-how-tech-transforms-translation/</link>
                    <comments>https://dmdonig.podbean.com/e/word-processing-how-tech-transforms-translation/#comments</comments>        <pubDate>Fri, 04 Mar 2022 22:51:14 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/d4c2de4b-632c-3780-9817-3d4d529a8f8e</guid>
                                    <description><![CDATA[<p>In this episode, I chat with Christopher Willis, the Chief Marketing Officer of Acrolinx. We discuss how our digital and globally connected world is posing new challenges for—and new ways of thinking about or solving—how we talk to one another across cultures, across language barriers, across national boundaries, and we talk about just how human language is, in an age where AI can do a lot of the talking.</p>
<p>Christopher Willis is the Chief Marketing Officer of <a href='https://acrolinxcc.com'>Acrolinx</a>, an industry pioneer that is changing how we think about language across borders, cultures, and national boundaries. We talk about how tech is transforming translation, and just how human language is, in an age where AI can do a lot of the talking. Chris is widely recognized for his public speaking, his innovation, and his ability to build success from the ground up. His work focuses on centering tech around human values and foregrounding inclusive language practices in technology and translation.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I chat with Christopher Willis, the Chief Marketing Officer of Acrolinx. We discuss how our digital and globally connected world is posing new challenges for—and new ways of thinking about or solving—how we talk to one another across cultures, across language barriers, across national boundaries, and we talk about just how human language is, in an age where AI can do a lot of the talking.</p>
<p>Christopher Willis is the Chief Marketing Officer of <a href='https://acrolinxcc.com'>Acrolinx</a>, an industry pioneer that is changing how we think about language across borders, cultures, and national boundaries. We talk about how tech is transforming translation, and just how human language is, in an age where AI can do a lot of the talking. Chris is widely recognized for his public speaking, his innovation, and his ability to build success from the ground up. His work focuses on centering tech around human values and foregrounding inclusive language practices in technology and translation.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="74894568" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/xvubnc/Chris_Willis_Podcast_mixdown77mgj.mp3"/>
        <itunes:summary>In this episode, I chat with Christopher Willis, the Chief Marketing Officer of Acrolinx. We discuss how our digital and globally connected world is posing new challenges for—and new ways of thinking about or solving—how we talk to one another across cultures, across language barriers, across national boundaries, and we talk about just how human language is, in an age where AI can do a lot of the talking.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3120</itunes:duration>
        <itunes:season>7</itunes:season>
        <itunes:episode>70</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I chat with Christopher Willis, the Chief Marketing Officer of Acrolinx. We discuss how our digital and globally connected world is posing new challenges for—and new ways of thinking about or solving—how we talk to one another across cultures, across language barriers, across national boundaries, and we talk about just how human language is, in an age where AI can do a lot of the talking. Christopher Willis is the Chief Marketing Officer of Acrolinx, an industry pioneer that is changing how we think about language across borders, cultures, and national boundaries. We talk about how tech is transforming translation, and just how human language is, in an age where AI can do a lot of the talking. Chris is widely recognized for his public speaking, his innovation, and his ability to build success from the ground up. His work focuses on centering tech around human values and foregrounding inclusive language practices in technology and translation. This episode was produced by Matt Perry. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Next Generation of AI</title>
        <itunes:title>The Next Generation of AI</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-next-generation-of-ai/</link>
                    <comments>https://dmdonig.podbean.com/e/the-next-generation-of-ai/#comments</comments>        <pubDate>Fri, 25 Feb 2022 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/35df32ab-de1b-3dc3-8dc4-15ac6f427dd6</guid>
<description><![CDATA[<p>In this episode of “Technically Human,” I sit down with Dr. Eric Daimler. We talk about one of the biggest technology problems facing us today—data deluge—and how new computational models and theories can help solve it, and Dr. Daimler weighs in on the gaps, differences, and possibilities for collaboration between policy, industry, and academia. And we talk about what a vision of “AI for Good” might look like in a world of increasingly infinite data.</p>
<p>Dr. Eric Daimler is a leading authority in robotics and artificial intelligence with over 20 years of experience as an entrepreneur, investor, technologist, and policymaker. He served under the Obama Administration as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President, as the sole authority driving the agenda for U.S. leadership in research, commercialization, and public adoption of AI & Robotics.</p>
<p>Dr. Daimler has incubated, built and led several technology companies recognized as pioneers in their fields ranging from software systems to statistical arbitrage. His newest venture, <a href='https://conexus.com'>Conexus</a>, is a groundbreaking solution for what is perhaps today's biggest information technology problem — data deluge.</p>
<p>As founder and CEO of Conexus, Dr. Daimler  is leading the development of CQL, a patent-pending platform founded upon category theory — a revolution in mathematics — to help companies manage the overwhelming and rapidly growing challenge of data integration and migration.</p>
<p>His academic research has been at the intersection of AI, Computational Linguistics, and Network Science (Graph Theory). His work has expanded to include economics and public policy. He served as Assistant Professor and Assistant Dean at Carnegie Mellon's School of Computer Science, where he founded the university's Entrepreneurial Management program and helped to launch Carnegie Mellon's Silicon Valley Campus. He has studied at the University of Washington-Seattle, Stanford University, and Carnegie Mellon University, where he earned his Ph.D. in Computer Science.</p>
<p>Dr. Daimler’s extensive career spanning business, academics, and policy gives him a rare perspective on the next generation of AI. Dr. Daimler sees clearly how information technology can dramatically improve our world. However, it demands our engagement. Neither a utopia nor a dystopia is inevitable. What matters is how we shape, and react to, its development.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Our head of research is Sakina Nuruddin.</p>
<p>Art by Desi Aleman.</p>
]]></description>
<content:encoded><![CDATA[<p>In this episode of “Technically Human,” I sit down with Dr. Eric Daimler. We talk about one of the biggest technology problems facing us today—data deluge—and how new computational models and theories can help solve it, and Dr. Daimler weighs in on the gaps, differences, and possibilities for collaboration between policy, industry, and academia. And we talk about what a vision of “AI for Good” might look like in a world of increasingly infinite data.</p>
<p>Dr. Eric Daimler is a leading authority in robotics and artificial intelligence with over 20 years of experience as an entrepreneur, investor, technologist, and policymaker. He served under the Obama Administration as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President, as the sole authority driving the agenda for U.S. leadership in research, commercialization, and public adoption of AI & Robotics.</p>
<p>Dr. Daimler has incubated, built and led several technology companies recognized as pioneers in their fields ranging from software systems to statistical arbitrage. His newest venture, <a href='https://conexus.com'>Conexus</a>, is a groundbreaking solution for what is perhaps today's biggest information technology problem — data deluge.</p>
<p>As founder and CEO of Conexus, Dr. Daimler  is leading the development of CQL, a patent-pending platform founded upon category theory — a revolution in mathematics — to help companies manage the overwhelming and rapidly growing challenge of data integration and migration.</p>
<p>His academic research has been at the intersection of AI, Computational Linguistics, and Network Science (Graph Theory). His work has expanded to include economics and public policy. He served as Assistant Professor and Assistant Dean at Carnegie Mellon's School of Computer Science, where he founded the university's Entrepreneurial Management program and helped to launch Carnegie Mellon's Silicon Valley Campus. He has studied at the University of Washington-Seattle, Stanford University, and Carnegie Mellon University, where he earned his Ph.D. in Computer Science.</p>
<p>Dr. Daimler’s extensive career spanning business, academics, and policy gives him a rare perspective on the next generation of AI. Dr. Daimler sees clearly how information technology can dramatically improve our world. However, it demands our engagement. Neither a utopia nor a dystopia is inevitable. What matters is how we shape, and react to, its development.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Our head of research is Sakina Nuruddin.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="89292618" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/hsa8hh/Dr_Eric_Daimler_Podcast_mixdown7vi7r.mp3"/>
        <itunes:summary>In this episode of “Technically Human,” I sit down with Dr. Eric Daimler. We talk about one of the biggest technology problems facing us today—data deluge—and how new computational models and theories can help solve it, and Dr. Daimler weighs in on the gaps, differences, and possibilities for collaboration between policy, industry, and academia. And we talk about what a vision of “AI for Good” might look like in a world of increasingly infinite data.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3720</itunes:duration>
        <itunes:season>7</itunes:season>
        <itunes:episode>69</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of “Technically Human,” I sit down with Dr. Eric Daimler. We talk about one of the biggest technology problems facing us today—data deluge—and how new computational models and theories can help solve it, and Dr. Daimler weighs in on the gaps, differences, and possibilities for collaboration between policy, industry, and academia. And we talk about what a vision of “AI for Good” might look like in a world of increasingly infinite data. Dr. Eric Daimler is a leading authority in robotics and artificial intelligence with over 20 years of experience as an entrepreneur, investor, technologist, and policymaker. He served under the Obama Administration as a Presidential Innovation Fellow for AI and Robotics in the Executive Office of the President, as the sole authority driving the agenda for U.S. leadership in research, commercialization, and public adoption of AI &amp; Robotics. Dr. Daimler has incubated, built and led several technology companies recognized as pioneers in their fields ranging from software systems to statistical arbitrage. His newest venture, Conexus, is a groundbreaking solution for what is perhaps today's biggest information technology problem — data deluge. As founder and CEO of Conexus, Dr. Daimler is leading the development of CQL, a patent-pending platform founded upon category theory — a revolution in mathematics — to help companies manage the overwhelming and rapidly growing challenge of data integration and migration. His academic research has been at the intersection of AI, Computational Linguistics, and Network Science (Graph Theory). His work has expanded to include economics and public policy. He served as Assistant Professor and Assistant Dean at Carnegie Mellon's School of Computer Science, where he founded the university's Entrepreneurial Management program and helped to launch Carnegie Mellon's Silicon Valley Campus. 
He has studied at the University of Washington-Seattle, Stanford University, and Carnegie Mellon University, where he earned his Ph.D. in Computer Science. Dr. Daimler’s extensive career spanning business, academics, and policy gives him a rare perspective on the next generation of AI. Dr. Daimler sees clearly how information technology can dramatically improve our world. However, it demands our engagement. Neither a utopia nor a dystopia is inevitable. What matters is how we shape, and react to, its development. This episode was produced by Matt Perry. Our head of research is Sakina Nuruddin. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Creative (R)evolution: PJ Manney and science fiction for good</title>
        <itunes:title>Creative (R)evolution: PJ Manney and science fiction for good</itunes:title>
        <link>https://dmdonig.podbean.com/e/creative-revolution-pj-manney-and-science-fiction-for-good/</link>
                    <comments>https://dmdonig.podbean.com/e/creative-revolution-pj-manney-and-science-fiction-for-good/#comments</comments>        <pubDate>Fri, 18 Feb 2022 03:23:17 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/b2ab9cde-c286-38ff-84c0-b44fae5b6e10</guid>
                                    <description><![CDATA[<p>In this episode, I sit down with science fiction writer, essayist, innovator, and cultural icon PJ Manney. We talk about the relationship between literature and empathy, the feedback loops between science fiction imagining and technological production, and how art is, and always has been, a technology. </p>
<p><a href='https://www.pjmanney.com/'>PJ Manney</a> is the author of the bestselling and Philip K. Dick Award-nominated science fiction technothriller <a href='https://www.amazon.com/R-evolution-Phoenix-Horizon/dp/1477828494/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=&sr='>(R)EVOLUTION</a> (2015), published by 47North in the Phoenix Horizon trilogy with <a href='https://www.amazon.com/ID-entity-Phoenix-Horizon/dp/1503948498/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=&sr='>(ID)ENTITY</a> (2017) and <a href='https://www.amazon.com/gp/product/B00WL6QGHC'>(CON)SCIENCE</a> (2021). Set in alternate future American histories, the novels chart the influence of world-changing technologies on power and nations.</p>
<p>A former chairperson of Humanity Plus, she helped rebrand the organization, launch H+ Magazine and organize the first multi-org conference on futurist topics, Convergence ’08. She authored "<a href='https://www.researchgate.net/publication/336929595_Yucky_gets_yummy_how_speculative_fiction_creates_society'>Yucky Gets Yummy: How Speculative Fiction Creates Society</a>" and "<a href='https://jetpress.org/v19/manney.htm'>Empathy in the Time of Technology: How Storytelling is the Key to Empathy,</a>" foundational works on the neuropsychology of empathy and media.</p>
<p>Manney presented her ideas to National Geographic, the Producers Guild of America, Directors Guild of America, NASA-JPL, M.I.T., Huffington Post, The H+ Summit, and the Institute for Ethics and Emerging Technologies. She is also a frequent guest on podcasts and webshows, and is widely published as a public thinker and critic. Manney consults for varied organizations about the future of humanity and technology, including artificial intelligence, robotics, cyborgs, nanotechnology, biotechnology, brain-computer interfaces, space, blockchains and cryptocurrencies.

Manney worked for over 25 years in film/TV: motion picture PR at Walt Disney/Touchstone Pictures; story development for independent film production companies; and writing as Patricia Manney for the critically acclaimed hit TV shows Hercules — The Legendary Journeys and Xena: Warrior Princess. She also co-founded Uncharted Entertainment, writing and/or creating many pilot scripts for television networks, including CBS, Fox, UPN, Discovery, ABC Family and Comedy Central.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Our Head of Research is Sakina Nuruddin.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I sit down with science fiction writer, essayist, innovator, and cultural icon PJ Manney. We talk about the relationship between literature and empathy, the feedback loops between science fiction imagining and technological production, and how art is, and always has been, a technology. </p>
<p><a href='https://www.pjmanney.com/'>PJ Manney</a> is the author of the bestselling and Philip K. Dick Award-nominated science fiction technothriller <a href='https://www.amazon.com/R-evolution-Phoenix-Horizon/dp/1477828494/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=&sr='>(R)EVOLUTION</a> (2015), published by 47North in the Phoenix Horizon trilogy with <a href='https://www.amazon.com/ID-entity-Phoenix-Horizon/dp/1503948498/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=&sr='>(ID)ENTITY</a> (2017) and <a href='https://www.amazon.com/gp/product/B00WL6QGHC'>(CON)SCIENCE</a> (2021). Set in alternate future American histories, the novels chart the influence of world-changing technologies on power and nations.</p>
<p>A former chairperson of Humanity Plus, she helped rebrand the organization, launch H+ Magazine and organize the first multi-org conference on futurist topics, Convergence ’08. She authored "<a href='https://www.researchgate.net/publication/336929595_Yucky_gets_yummy_how_speculative_fiction_creates_society'>Yucky Gets Yummy: How Speculative Fiction Creates Society</a>" and "<a href='https://jetpress.org/v19/manney.htm'>Empathy in the Time of Technology: How Storytelling is the Key to Empathy,</a>" foundational works on the neuropsychology of empathy and media.</p>
<p>Manney presented her ideas to National Geographic, the Producers Guild of America, Directors Guild of America, NASA-JPL, M.I.T., Huffington Post, The H+ Summit, and the Institute for Ethics and Emerging Technologies. She is also a frequent guest on podcasts and webshows, and is widely published as a public thinker and critic. Manney consults for varied organizations about the future of humanity and technology, including artificial intelligence, robotics, cyborgs, nanotechnology, biotechnology, brain-computer interfaces, space, blockchains and cryptocurrencies.<br>
<br>
Manney worked for over 25 years in film/TV: motion picture PR at Walt Disney/Touchstone Pictures; story development for independent film production companies; and writing as Patricia Manney for the critically acclaimed hit TV shows Hercules — The Legendary Journeys and Xena: Warrior Princess. She also co-founded Uncharted Entertainment, writing and/or creating many pilot scripts for television networks, including CBS, Fox, UPN, Discovery, ABC Family and Comedy Central.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Our Head of Research is Sakina Nuruddin.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="80651652" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/aqn9y5/PJ_Manney_Podcast_mixdown9f2n7.mp3"/>
        <itunes:summary>In this episode, I sit down with bestselling science fiction author, essayist, innovator, and cultural icon PJ Manney. We talk about the science of empathy, the feedback loop between science fiction and technological production, and how art is--and has always been--a technology.

PJ Manney is the author of the bestselling and Philip K. Dick Award-nominated science fiction technothriller (R)EVOLUTION (2015), published by 47North in the Phoenix Horizon trilogy with (ID)ENTITY (2017) and (CON)SCIENCE (2021). Set in alternate future American histories, the novels chart the influence of world-changing technologies on power and nations. 
 
A former chairperson of Humanity Plus, she helped rebrand the organization, launch H+ Magazine, and organize the first multi-org conference on futurist topics, Convergence ’08. She authored “Yucky Gets Yummy: How Speculative Fiction Creates Society” and “Empathy in the Time of Technology: How Storytelling is the Key to Empathy,” foundational works on the neuropsychology of empathy and media.
 
Manney presented her ideas to National Geographic, the Producers Guild of America, Directors Guild of America, NASA-JPL, M.I.T., Huffington Post, The H+ Summit, and the Institute for Ethics and Emerging Technologies. She is also a frequent guest on podcasts and webshows, and is widely published as a public thinker and critic. Manney consults for varied organizations about the future of humanity and technology, including artificial intelligence, robotics, cyborgs, nanotechnology, biotechnology, brain-computer interfaces, space, blockchains and cryptocurrencies.

Manney worked for over 25 years in film/TV: motion picture PR at Walt Disney/Touchstone Pictures; story development for independent film production companies; and writing as Patricia Manney for the critically acclaimed hit TV shows Hercules — The Legendary Journeys and Xena: Warrior Princess. She also co-founded Uncharted Entertainment, writing and/or creating many pilot scripts for television networks, including CBS, Fox, UPN, Discovery, ABC Family and Comedy Central.

This episode was produced by Matt Perry.

Our head of research is Sakina Nuruddin.

Art by Desi Aleman.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3360</itunes:duration>
        <itunes:season>7</itunes:season>
        <itunes:episode>68</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I sit down with science fiction writer, essayist, innovator, and cultural icon PJ Manney. We talk about the relationship between literature and empathy, the feedback loops between science fiction imagining and technological production, and how art is, and always has been, a technology.  PJ Manney is the author of the bestselling and Philip K. Dick Award-nominated science fiction technothriller (R)EVOLUTION (2015), published by 47North as part of the Phoenix Horizon trilogy along with (ID)ENTITY (2017) and (CON)SCIENCE (2021). Set in alternate future American histories, the novels chart the influence of world-changing technologies on power and nations.  A former chairperson of Humanity Plus, she helped rebrand the organization, launch H+ Magazine and organize the first multi-org conference on futurist topics, Convergence ’08. She authored “Yucky Gets Yummy: How Speculative Fiction Creates Society” and “Empathy in the Time of Technology: How Storytelling is the Key to Empathy,” foundational works on the neuropsychology of empathy and media. Manney presented her ideas to National Geographic, the Producers Guild of America, Directors Guild of America, NASA-JPL, M.I.T., Huffington Post, The H+ Summit, and the Institute for Ethics and Emerging Technologies. She is also a frequent guest on podcasts and webshows, and is widely published as a public thinker and critic. Manney consults for varied organizations about the future of humanity and technology, including artificial intelligence, robotics, cyborgs, nanotechnology, biotechnology, brain-computer interfaces, space, blockchains and cryptocurrencies. 
Manney worked for over 25 years in film/TV: motion picture PR at Walt Disney/Touchstone Pictures; story development for independent film production companies; and writing as Patricia Manney for the critically acclaimed hit TV shows Hercules — The Legendary Journeys and Xena: Warrior Princess. She also co-founded Uncharted Entertainment, writing and/or creating many pilot scripts for television networks, including CBS, Fox, UPN, Discovery, ABC Family and Comedy Central. This episode was produced by Matt Perry. Our Head of Research is Sakina Nuruddin. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Running Interference: will democracy survive foreign cyber attacks?</title>
        <itunes:title>Running Interference: will democracy survive foreign cyber attacks?</itunes:title>
        <link>https://dmdonig.podbean.com/e/running-interference-will-democracy-survive-foreign-cyber-attacks/</link>
                    <comments>https://dmdonig.podbean.com/e/running-interference-will-democracy-survive-foreign-cyber-attacks/#comments</comments>        <pubDate>Fri, 11 Feb 2022 05:04:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/a7fd6b46-0a93-3cbf-b48b-983bc415a179</guid>
                                    <description><![CDATA[<p>For the final episode of our three-part series on democracy and technology, I am bringing you a conversation with <a href='https://www.uchastings.edu/people/chimene-keitner/'>Professor Chimène Keitner</a> on cyber interference in democratic elections and international law. We talk about the challenges and shortcomings of international legal structures in recognizing and responding to cyber interference in democratic processes, we discuss the way that democracies are made vulnerable by digital products, and Chimène explains what happened in the infamous Russian interference in the 2016 election--and what might be in store for our democratic process as we approach the deeply consequential 2024 US Presidential election.</p>
<p>Professor Chimène Keitner is the Alfred and Hanna Fromm Professor of International Law at UC Hastings, where she teaches courses on International Law; on Democracy, Technology and Security; and on legal approaches to Evidence, among many other topics. She is a leading authority on international law and civil litigation, and served as the 27th Counselor on International Law in the U.S. Department of State. She holds a bachelor’s degree in history and literature with high honors from Harvard, a JD from Yale, where she was a Paul & Daisy Soros Fellow, and a doctorate in international relations from Oxford, where she was a Rhodes Scholar.</p>
<p>She has authored two <a href='https://www.keitnerlaw.com/scholarship'>books and dozens of articles, essays, and book chapters</a> on questions surrounding the relationship among law, communities, and borders, including issues of jurisdiction, extraterritoriality, foreign sovereign and foreign official immunity, and the historical understandings underpinning current practice in these areas.</p>
<p>Professor Keitner has served on the Executive Council of the American Society of International Law and as Co-Chair of the ASIL International Law in Domestic Courts Interest Group. She is a member of the American Law Institute and an Adviser on the ALI’s Fourth Restatement of the Foreign Relations Law of the United States. She is also a founding co-chair of the International Law Association’s Study Group on Individual Responsibility in International Law, and a member of the State Department’s Advisory Committee on International Law.</p>
<p> </p>
<p>This episode was produced by Matt Perry.</p>
<p>Our head of research is Sakina Nuruddin.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>For the final episode of our three-part series on democracy and technology, I am bringing you a conversation with <a href='https://www.uchastings.edu/people/chimene-keitner/'>Professor Chimène Keitner</a> on cyber interference in democratic elections and international law. We talk about the challenges and shortcomings of international legal structures in recognizing and responding to cyber interference in democratic processes, we discuss the way that democracies are made vulnerable by digital products, and Chimène explains what happened in the infamous Russian interference in the 2016 election--and what might be in store for our democratic process as we approach the deeply consequential 2024 US Presidential election.</p>
<p>Professor Chimène Keitner is the Alfred and Hanna Fromm Professor of International Law at UC Hastings, where she teaches courses on International Law; on Democracy, Technology and Security; and on legal approaches to Evidence, among many other topics. She is a leading authority on international law and civil litigation, and served as the 27th Counselor on International Law in the U.S. Department of State. She holds a bachelor’s degree in history and literature with high honors from Harvard, a JD from Yale, where she was a Paul & Daisy Soros Fellow, and a doctorate in international relations from Oxford, where she was a Rhodes Scholar.</p>
<p>She has authored two <a href='https://www.keitnerlaw.com/scholarship'>books and dozens of articles, essays, and book chapters</a> on questions surrounding the relationship among law, communities, and borders, including issues of jurisdiction, extraterritoriality, foreign sovereign and foreign official immunity, and the historical understandings underpinning current practice in these areas.</p>
<p>Professor Keitner has served on the Executive Council of the American Society of International Law and as Co-Chair of the ASIL International Law in Domestic Courts Interest Group. She is a member of the American Law Institute and an Adviser on the ALI’s Fourth Restatement of the Foreign Relations Law of the United States. She is also a founding co-chair of the International Law Association’s Study Group on Individual Responsibility in International Law, and a member of the State Department’s Advisory Committee on International Law.</p>
<p> </p>
<p>This episode was produced by Matt Perry.</p>
<p>Our head of research is Sakina Nuruddin.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="77054094" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/w9wtgu/Chimene_Keitner_Podcast_mixdown7x690.mp3"/>
        <itunes:summary>For the final episode of our three-part series on democracy and technology, I am bringing you a conversation with Professor Chimène Keitner on cyber interference in democratic elections and international law. We talk about the challenges and shortcomings of international legal structures in recognizing and responding to cyber interference in democratic processes, we discuss the way that democracies are made vulnerable by digital products, and Chimène explains what happened in the infamous Russian interference in the 2016 election--and what might be in store for our democratic process as we approach the deeply consequential 2024 US Presidential election.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3210</itunes:duration>
        <itunes:season>7</itunes:season>
        <itunes:episode>67</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>For the final episode of our three-part series on democracy and technology, I am bringing you a conversation with Professor Chimène Keitner on cyber interference in democratic elections and international law. We talk about the challenges and shortcomings of international legal structures in recognizing and responding to cyber interference in democratic processes, we discuss the way that democracies are made vulnerable by digital products, and Chimène explains what happened in the infamous Russian interference in the 2016 election--and what might be in store for our democratic process as we approach the deeply consequential 2024 US Presidential election. Professor Chimène Keitner is the Alfred and Hanna Fromm Professor of International Law at UC Hastings, where she teaches courses on International Law; on Democracy, Technology and Security; and on legal approaches to Evidence, among many other topics. She is a leading authority on international law and civil litigation, and served as the 27th Counselor on International Law in the U.S. Department of State. She holds a bachelor’s degree in history and literature with high honors from Harvard, a JD from Yale, where she was a Paul &amp; Daisy Soros Fellow, and a doctorate in international relations from Oxford, where she was a Rhodes Scholar. She has authored two books and dozens of articles, essays, and book chapters on questions surrounding the relationship among law, communities, and borders, including issues of jurisdiction, extraterritoriality, foreign sovereign and foreign official immunity, and the historical understandings underpinning current practice in these areas. Professor Keitner has served on the Executive Council of the American Society of International Law and as Co-Chair of the ASIL International Law in Domestic Courts Interest Group. 
She is a member of the American Law Institute and an Adviser on the ALI’s Fourth Restatement of the Foreign Relations Law of the United States. She is also a founding co-chair of the International Law Association’s Study Group on Individual Responsibility in International Law, and a member of the State Department’s Advisory Committee on International Law. This episode was produced by Matt Perry. Our head of research is Sakina Nuruddin. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Private Square: democracy and the attention economy</title>
        <itunes:title>The Private Square: democracy and the attention economy</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-private-squaredemocracy-and-the-attention-economy/</link>
                    <comments>https://dmdonig.podbean.com/e/the-private-squaredemocracy-and-the-attention-economy/#comments</comments>        <pubDate>Fri, 04 Feb 2022 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/a43ba7b0-bdf1-3278-bcac-16fb2e12d34d</guid>
                                    <description><![CDATA[<p>This week, we are continuing our series on the theme of democracy and technology by bringing you a conversation with Ram Fish, on the impact of social media on democratic institutions and civil discourse. We talk about the existential threat that social media poses to democratic norms, the erosion of civil discourse in the attention economy, and where else in the world we might look for hope in leading us out of democratic decline. And, finally--because we don't like leaving our audience with a doomsday prophecy--Ram proposes policies that might productively change the tide of partisan politics on social media platforms.</p>
<p>Ram Fish is the CEO of 19Labs. Throughout his career at Apple, Samsung, and Nokia, and as co-founder and CEO of three startups, he has specialized in leading interdisciplinary special consumer projects, bridging technology, consumer needs, and business & regulatory constraints.</p>
<p>Mr. Fish has an MBA from Yale University as well as Computer Engineering Bachelor’s and Master’s degrees from Case Western Reserve University, where he is also a lecturer in Technology Management.</p>
<p>He has authored many articles on the topic of democracy, ethics, and technology, including, most recently “<a href='https://www.linkedin.com/pulse/twitter-fact-checking-trump-good-intentions-wrong-approach-ram-fish/'>Twitter fact-checking Trump: Good intentions. Wrong approach. And a proposal for how to do it right</a>,” and "<a href='https://techpolicy.press/four-proposals-to-neutralize-social-medias-threat-to-democracies/'>Four proposals to neutralize social media’s threat to democracies</a>," co-authored with Professor Chimène Keitner.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This week, we are continuing our series on the theme of democracy and technology by bringing you a conversation with Ram Fish, on the impact of social media on democratic institutions and civil discourse. We talk about the existential threat that social media poses to democratic norms, the erosion of civil discourse in the attention economy, and where else in the world we might look for hope in leading us out of democratic decline. And, finally--because we don't like leaving our audience with a doomsday prophecy--Ram proposes policies that might productively change the tide of partisan politics on social media platforms.</p>
<p>Ram Fish is the CEO of 19Labs. Throughout his career at Apple, Samsung, and Nokia, and as co-founder and CEO of three startups, he has specialized in leading interdisciplinary special consumer projects, bridging technology, consumer needs, and business & regulatory constraints.</p>
<p>Mr. Fish has an MBA from Yale University as well as Computer Engineering Bachelor’s and Master’s degrees from Case Western Reserve University, where he is also a lecturer in Technology Management.</p>
<p>He has authored many articles on the topic of democracy, ethics, and technology, including, most recently “<a href='https://www.linkedin.com/pulse/twitter-fact-checking-trump-good-intentions-wrong-approach-ram-fish/'>Twitter fact-checking Trump: Good intentions. Wrong approach. And a proposal for how to do it right</a>,” and "<a href='https://techpolicy.press/four-proposals-to-neutralize-social-medias-threat-to-democracies/'>Four proposals to neutralize social media’s threat to democracies</a>," co-authored with Professor Chimène Keitner.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="79930678" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/mt92wi/Ram_Fish_Podcast_mixdown7ccu9.mp3"/>
        <itunes:summary>This week, we are continuing our series on the theme of democracy and technology by bringing you a conversation with Ram Fish, on the impact of social media on democratic institutions and civil discourse. We talk about the existential threat that social media poses to democratic norms, the erosion of civil discourse in the attention economy, and where else in the world we might look for hope in leading us out of democratic decline. And, finally--because we don’t like leaving our audience with a doomsday prophecy--Ram proposes policies that might productively change the tide of partisan politics on social media platforms.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3330</itunes:duration>
        <itunes:season>7</itunes:season>
        <itunes:episode>66</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>This week, we are continuing our series on the theme of democracy and technology by bringing you a conversation with Ram Fish, on the impact of social media on democratic institutions and civil discourse. We talk about the existential threat that social media poses to democratic norms, the erosion of civil discourse in the attention economy, and where else in the world we might look for hope in leading us out of democratic decline. And, finally--because we don't like leaving our audience with a doomsday prophecy--Ram proposes policies that might productively change the tide of partisan politics on social media platforms. Ram Fish is the CEO of 19Labs. Throughout his career at Apple, Samsung, and Nokia, and as co-founder and CEO of three startups, he has specialized in leading interdisciplinary special consumer projects, bridging technology, consumer needs, and business &amp; regulatory constraints. Mr. Fish has an MBA from Yale University as well as Computer Engineering Bachelor’s and Master’s degrees from Case Western Reserve University, where he is also a lecturer in Technology Management. He has authored many articles on the topic of democracy, ethics, and technology, including, most recently “Twitter fact-checking Trump: Good intentions. Wrong approach. And a proposal for how to do it right,” and "Four proposals to neutralize social media’s threat to democracies," co-authored with Professor Chimène Keitner. This episode was produced by Matt Perry. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Digital Democracy</title>
        <itunes:title>Digital Democracy</itunes:title>
        <link>https://dmdonig.podbean.com/e/dr-foaad-khosmood-podcast-episode/</link>
                    <comments>https://dmdonig.podbean.com/e/dr-foaad-khosmood-podcast-episode/#comments</comments>        <pubDate>Fri, 28 Jan 2022 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/6b0c1eb1-5f8d-3558-9908-da2cc21c8dec</guid>
                                    <description><![CDATA[<p>This week, we are kicking off a special series of “Technically Human” focused on the intersection of democracy and tech. In the first episode in the series, I sit down with Dr. Foaad Khosmood. We talk about the relationship between access to information and functional democracy, and how digital technologies can expand civil discourse.</p>
<p>Dr. Foaad Khosmood is the Forbes Professor of Computer Engineering and Associate Professor of Computer Science at California Polytechnic State University.  His research interests include natural language processing (NLP), artificial intelligence, interactive entertainment, game AI and game jams.</p>
<p>At Cal Poly, Professor Khosmood usually <a href='http://foaad.net/teaching'>teaches</a> AI, Interactive Entertainment, Computational Linguistics, Data Mining and Operating Systems. He serves as the faculty advisor for the Cal Poly Game Development (<a href='http://cpgd.org/'>CPGD</a>), <a href='https://www.slohacks.com/'>SLO Hacks</a> and <a href='https://www.facebook.com/groups/159831618203477/'>Color Coded</a> student clubs. He is the founder of the Digital Democracy Project, a platform that seeks to use digital technologies to make government more transparent, one video at a time, and the lead researcher on a new project at the Cal Poly Institute for Advanced Technology and Public Policy to strengthen democracy by developing an artificial intelligence system that will expand and improve state government coverage at local and regional media outlets—an area of journalism that has especially suffered amid the economic slide of the news industry.</p>
<p>Dr. Khosmood is the Senior Research Fellow at the <a href='http://iatpp.calpoly.edu/'>Institute for Advanced Technology & Public Policy</a>. He is also a board member, former CTO, and past president of <a href='http://www.globalgamejam.org/'>Global Game Jam</a>, Inc., where he helps to organize the world's largest game creation activity (<a href='https://globalgamejam.org/ggj-participation'>120+ countries</a>). He is also the general chair of the Foundations of Digital Games, a major international "big tent" academic conference dedicated to exploring the latest research in all aspects of digital games, and to increasing diversity and inclusion in the world of computing.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>This week, we are kicking off a special series of “Technically Human” focused on the intersection of democracy and tech. In the first episode in the series, I sit down with Dr. Foaad Khosmood. We talk about the relationship between access to information and functional democracy, and how digital technologies can expand civil discourse.</p>
<p>Dr. Foaad Khosmood is the Forbes Professor of Computer Engineering and Associate Professor of Computer Science at California Polytechnic State University.  His research interests include natural language processing (NLP), artificial intelligence, interactive entertainment, game AI and game jams.</p>
<p>At Cal Poly, Professor Khosmood usually <a href='http://foaad.net/teaching'>teaches</a> AI, Interactive Entertainment, Computational Linguistics, Data Mining and Operating Systems. He serves as the faculty advisor for the Cal Poly Game Development (<a href='http://cpgd.org/'>CPGD</a>), <a href='https://www.slohacks.com/'>SLO Hacks</a> and <a href='https://www.facebook.com/groups/159831618203477/'>Color Coded</a> student clubs. He is the founder of the Digital Democracy Project, a platform that seeks to use digital technologies to make government more transparent, one video at a time, and the lead researcher on a new project at the Cal Poly Institute for Advanced Technology and Public Policy to strengthen democracy by developing an artificial intelligence system that will expand and improve state government coverage at local and regional media outlets—an area of journalism that has especially suffered amid the economic slide of the news industry.</p>
<p>Dr. Khosmood is the Senior Research Fellow at the <a href='http://iatpp.calpoly.edu/'>Institute for Advanced Technology & Public Policy</a>. He is also a board member, former CTO, and past president of <a href='http://www.globalgamejam.org/'>Global Game Jam</a>, Inc., where he helps to organize the world's largest game creation activity (<a href='https://globalgamejam.org/ggj-participation'>120+ countries</a>). He is also the general chair of the Foundations of Digital Games, a major international "big tent" academic conference dedicated to exploring the latest research in all aspects of digital games, and to increasing diversity and inclusion in the world of computing.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="84251182" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/tft9bi/Dr_Foaad_Khosmood_Podcast_mixdownahewv.mp3"/>
        <itunes:summary>This week, we are kicking off a special series of “Technically Human” focused on the intersection of democracy and tech. In the first episode in the series, I sit down with Dr. Foaad Khosmood. We talk about the relationship between access to information and functional democracy, and how digital technologies can expand civil discourse.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3510</itunes:duration>
        <itunes:season>7</itunes:season>
        <itunes:episode>65</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>This week, we are kicking off a special series of “Technically Human” focused on the intersection of democracy and tech. In the first episode in the series, I sit down with Dr. Foaad Khosmood. We talk about the relationship between access to information and functional democracy, and how digital technologies can expand civil discourse. Dr. Foaad Khosmood is the Forbes Professor of Computer Engineering and Associate Professor of Computer Science at California Polytechnic State University.  His research interests include natural language processing (NLP), artificial intelligence, interactive entertainment, game AI and game jams. At Cal Poly, Professor Khosmood usually teaches AI, Interactive Entertainment, Computational Linguistics, Data Mining and Operating Systems. He serves as the faculty advisor for the Cal Poly Game Development (CPGD), SLO Hacks and Color Coded student clubs. He is the founder of the Digital Democracy Project, a platform that seeks to use digital technologies to Make Government More Transparent one Video at a Time, and the lead researcher on a new project at the Cal Poly Institute for Advanced Technology and Public Policy to strengthen democracy by developing an artificial intelligence system that will expand and improve state government coverage at local and regional media outlets—an area of journalism that has especially suffered amid the economic slide of the news industry. Dr. Khosmood is the Senior Research Fellow at the Institute for Advanced Technology &amp; Public Policy. He is also a board member, former CTO and past president of Global Game Jam, Inc. where he helps to organize the world's largest game creation activity (120+ countries). 
He is also the general chair of the Foundations of Digital Games, a major international "big tent" academic conference dedicated to exploring the latest research in all aspects of digital games, and to increasing diversity and inclusion in the world of computing. This episode was produced by Matt Perry. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The LAWS of War: Lethal autonomous weapons systems and the new ethics of warfare</title>
        <itunes:title>The LAWS of War: Lethal autonomous weapons systems and the new ethics of warfare</itunes:title>
        <link>https://dmdonig.podbean.com/e/what-technically-makes-humans-meaningful-war-and-who-governs-laws/</link>
                    <comments>https://dmdonig.podbean.com/e/what-technically-makes-humans-meaningful-war-and-who-governs-laws/#comments</comments>        <pubDate>Fri, 21 Jan 2022 00:50:14 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/c0fe10c0-494f-33fd-b66a-78f28439a6ed</guid>
                                    <description><![CDATA[<p>In this episode, I speak with Dr. John C. Williams about the ethics of automated weapons systems. We talk about the concept of meaningful human control, about the ethics of war, and what it means to engage in the politics of biopower in the age of lethal autonomous weapons.</p>
<p><a href='https://www.durham.ac.uk/staff/j-c-williams/'>Dr. John C. Williams</a> is a Professor in the <a href='https://www.durham.ac.uk/sgia/'>School of Government and International Affairs</a> at Durham University, in the UK.</p>
<p>Among the many areas of his research, Dr. Williams is an expert on the ethics of war, challenges presented by changing patterns and technologies of violence, and the issue of democratic authority over warfare. His work looks at key technologies including drones and emergent autonomous weapons systems, and considers the ethics of meaningful human control as AI increasingly becomes part of what it means to wage war. He is the author of Ethics, Diversity and World Politics: Saving Pluralism From Itself? (Oxford University Press) and <a href='https://dro.dur.ac.uk/943'>The Ethics of Territorial Borders: Drawing Lines in the Shifting Sand (Palgrave Macmillan)</a>, as well as multiple other articles on the ethics of technology and war.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I speak with Dr. John C. Williams about the ethics of automated weapons systems. We talk about the concept of meaningful human control, about the ethics of war, and what it means to engage in the politics of biopower in the age of lethal autonomous weapons.</p>
<p><a href='https://www.durham.ac.uk/staff/j-c-williams/'>Dr. John C. Williams</a> is a Professor in the <a href='https://www.durham.ac.uk/sgia/'>School of Government and International Affairs</a> at Durham University, in the UK.</p>
<p>Among the many areas of his research, Dr. Williams is an expert on the ethics of war, the challenges presented by changing patterns and technologies of violence, and the question of democratic authority over warfare. His work looks at key technologies, including drones and emergent autonomous weapons systems, and considers the ethics of meaningful human control as AI increasingly becomes part of what it means to wage war. He is the author of <em>Ethics, Diversity and World Politics: Saving Pluralism From Itself?</em> (Oxford University Press) and <a href='https://dro.dur.ac.uk/943'><em>The Ethics of Territorial Borders: Drawing Lines in the Shifting Sand</em></a> (Palgrave Macmillan), as well as multiple other articles on the ethics of technology and war.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="79931660" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/r328ph/John_Williams_Podcast_mixdown_levelan5ap.mp3"/>
        <itunes:summary>In this episode, I speak with Dr. John C. Williams about the ethics of automated weapons systems. We talk about the concept of meaningful human control, about the ethics of war, and what it means to engage in the politics of biopower in the age of lethal autonomous weapons.

Dr. John C. Williams is a Professor in the School of Government and International Affairs at Durham University, in the UK.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3330</itunes:duration>
        <itunes:season>7</itunes:season>
        <itunes:episode>64</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I speak with Dr. John C. Williams about the ethics of automated weapons systems. We talk about the concept of meaningful human control, about the ethics of war, and what it means to engage in the politics of biopower in the age of lethal autonomous weapons. Dr. John C. Williams is a Professor in the School of Government and International Affairs at Durham University, in the UK. Among the many areas of his research, Dr. Williams is an expert on the ethics of war, the challenges presented by changing patterns and technologies of violence, and the question of democratic authority over warfare. His work looks at key technologies, including drones and emergent autonomous weapons systems, and considers the ethics of meaningful human control as AI increasingly becomes part of what it means to wage war. He is the author of Ethics, Diversity and World Politics: Saving Pluralism From Itself? (Oxford University Press) and The Ethics of Territorial Borders: Drawing Lines in the Shifting Sand (Palgrave Macmillan), as well as multiple other articles on the ethics of technology and war. This episode was produced by Matt Perry. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Grimm Futures: Technology’s fairy tales</title>
        <itunes:title>Grimm Futures: Technology’s fairy tales</itunes:title>
        <link>https://dmdonig.podbean.com/e/grimm-futures-technology-s-fairy-tales/</link>
                    <comments>https://dmdonig.podbean.com/e/grimm-futures-technology-s-fairy-tales/#comments</comments>        <pubDate>Fri, 14 Jan 2022 05:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/787fa761-57ed-383e-9745-5c7ec8caedee</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I sit down with D.J. MacLennan to talk about the relationship between technological realities and fairy tale mythologies. We talk about what it means to re-write epic and age-old stories about magical worlds and beings in the age of tech, and how technological culture may itself be a form of fairy tale thinking.</p>
<p><a href='http://www.djmaclennan.com/'>D.J. MacLennan</a> is a writer of speculative fiction and non-fiction from the Isle of Skye in the Highlands of Scotland. His new book, <a href='https://www.google.com/search?q=dj+maclennan&oq=dj+macl&aqs=chrome.0.69i59j69i57j35i39j0i512l2j69i60l3.1910j0j9&sourceid=chrome&ie=UTF-8'>Future Bright, Future Grimm: Transhumanist Tales for Mother Nature's Offspring</a>, reboots the Grimm brothers' fables to explore how fairy tales help us imagine the future, and how science fiction blends with fairy tale mythologies as technology becomes increasingly embedded in our bodies and our landscapes.</p>
<p>This podcast was produced by Matt Perry.</p>
<p>Artwork by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I sit down with D.J. MacLennan to talk about the relationship between technological realities and fairy tale mythologies. We talk about what it means to re-write epic and age-old stories about magical worlds and beings in the age of tech, and how technological culture may itself be a form of fairy tale thinking.</p>
<p><a href='http://www.djmaclennan.com/'>D.J. MacLennan</a> is a writer of speculative fiction and non-fiction from the Isle of Skye in the Highlands of Scotland. His new book, <a href='https://www.google.com/search?q=dj+maclennan&oq=dj+macl&aqs=chrome.0.69i59j69i57j35i39j0i512l2j69i60l3.1910j0j9&sourceid=chrome&ie=UTF-8'><em>Future Bright, Future Grimm: Transhumanist Tales for Mother Nature's Offspring</em></a>, reboots the Grimm brothers' fables to explore how fairy tales help us imagine the future, and how science fiction blends with fairy tale mythologies as technology becomes increasingly embedded in our bodies and our landscapes.</p>
<p>This podcast was produced by Matt Perry.</p>
<p>Artwork by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="73452138" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/a2edv4/DJ_Maclennan_Podcast_mixdowna1ax2.mp3"/>
        <itunes:summary><![CDATA[In this episode of "Technically Human," I sit down with D.J. MacLennan to talk about the relationship between technological realities and fairy tale mythologies. We talk about what it means to re-write epic and age-old stories about magical worlds and beings in the age of tech, and how technological culture may itself be a form of fairy tale thinking.
D.J. MacLennan is a writer of speculative fiction and non-fiction from the Isle of Skye in the Highlands of Scotland. His new book, Future Bright, Future Grimm: Transhumanist Tales for Mother Nature's Offspring, reboots the Grimm brothers' fables to explore how fairy tales help us imagine the future, and how science fiction blends with fairy tale mythologies as technology becomes increasingly embedded in our bodies and our landscapes.
This podcast was produced by Matt Perry.
Artwork by Desi Aleman.]]></itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3060</itunes:duration>
                <itunes:episode>63</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I sit down with D.J. MacLennan to talk about the relationship between technological realities and fairy tale mythologies. We talk about what it means to re-write epic and age-old stories about magical worlds and beings in the age of tech, and how technological culture may itself be a form of fairy tale thinking. D.J. MacLennan is a writer of speculative fiction and non-fiction from the Isle of Skye in the Highlands of Scotland. His new book, Future Bright, Future Grimm: Transhumanist Tales for Mother Nature's Offspring, reboots the Grimm brothers' fables to explore how fairy tales help us imagine the future, and how science fiction blends with fairy tale mythologies as technology becomes increasingly embedded in our bodies and our landscapes. This podcast was produced by Matt Perry. Artwork by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Moving Pictures: Film director Jake Wachtel discusses his new film, Karmalink, and sci-fi in Cambodia</title>
        <itunes:title>Moving Pictures: Film director Jake Wachtel discusses his new film, Karmalink, and sci-fi in Cambodia</itunes:title>
        <link>https://dmdonig.podbean.com/e/moving-pictures-film-director-jake-wachtel-discusses-his-new-film-karmalink-and-sci-fi-in-cambodia/</link>
                    <comments>https://dmdonig.podbean.com/e/moving-pictures-film-director-jake-wachtel-discusses-his-new-film-karmalink-and-sci-fi-in-cambodia/#comments</comments>        <pubDate>Fri, 07 Jan 2022 01:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/1c1c595a-86ea-3359-ba22-97a21c25e93c</guid>
                                    <description><![CDATA[<p>"Technically Human" is back with a brand new season of the show!</p>
<p>In our first episode of the season, I sit down with film director Jake Wachtel to talk about his debut film, "<a href='https://variety.com/2021/film/reviews/karmalink-review-chiet-krawy-1235054335/'>Karmalink,</a>" the first science fiction film set in Cambodia. We discuss the connection between digital technologies, reincarnation, and Buddhism, we talk about the state of technological development in Cambodia, and Jake reflects on how Cambodians are imagining the future, in light of Cambodia's past.</p>
<p><a href='https://www.imdb.com/title/tt10396450/'>Jake Wachtel</a> grew up in California and studied Film and Neuroscience at Stanford University. In 2015, he moved to Cambodia to teach filmmaking to children through Filmmakers Without Borders. His short THE FOREIGNER HERE premiered at the Cambodian International Film Festival alongside the new wave of Cambodian directors. His documentary work has been featured on NYTimes.com, Wired, NPR, and MSNBC. <a href='https://hiff.org/films/karmalink/'>KARMALINK</a> is his debut feature.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
<p>In memory of Leng Heng Prak.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>"Technically Human" is back with a brand new season of the show!</p>
<p>In our first episode of the season, I sit down with film director Jake Wachtel to talk about his debut film, "<a href='https://variety.com/2021/film/reviews/karmalink-review-chiet-krawy-1235054335/'>Karmalink,</a>" the first science fiction film set in Cambodia. We discuss the connection between digital technologies, reincarnation, and Buddhism, we talk about the state of technological development in Cambodia, and Jake reflects on how Cambodians are imagining the future, in light of Cambodia's past.</p>
<p><a href='https://www.imdb.com/title/tt10396450/'>Jake Wachtel</a> grew up in California and studied Film and Neuroscience at Stanford University. In 2015, he moved to Cambodia to teach filmmaking to children through Filmmakers Without Borders. His short THE FOREIGNER HERE premiered at the Cambodian International Film Festival alongside the new wave of Cambodian directors. His documentary work has been featured on NYTimes.com, Wired, NPR, and MSNBC. <a href='https://hiff.org/films/karmalink/'>KARMALINK</a> is his debut feature.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
<p>In memory of Leng Heng Prak.</p>
]]></content:encoded>
                                    
        <enclosure length="80652263" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/kdvmv6/Jake_Wachtel_Podcast_mixdown8wutd.mp3"/>
        <itunes:summary>"Technically Human" is back with a brand new season of the show!

In our first episode of the season, I sit down with film director Jake Wachtel to talk about his debut film, "Karmalink," the first science fiction film set in Cambodia. We discuss the connection between digital technologies, reincarnation, and Buddhism, we talk about the state of technological development in Cambodia, and Jake reflects on how Cambodians are imagining the future, in light of Cambodia's past.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3360</itunes:duration>
        <itunes:season>3</itunes:season>
        <itunes:episode>62</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>"Technically Human" is back with a brand new season of the show! In our first episode of the season, I sit down with film director Jake Wachtel to talk about his debut film, "Karmalink," the first science fiction film set in Cambodia. We discuss the connection between digital technologies, reincarnation, and Buddhism, we talk about the state of technological development in Cambodia, and Jake reflects on how Cambodians are imagining the future, in light of Cambodia's past. Jake Wachtel grew up in California and studied Film and Neuroscience at Stanford University. In 2015, he moved to Cambodia to teach filmmaking to children through Filmmakers Without Borders. His short THE FOREIGNER HERE premiered at the Cambodian International Film Festival alongside the new wave of Cambodian directors. His documentary work has been featured on NYTimes.com, Wired, NPR, and MSNBC. KARMALINK is his debut feature. This episode was produced by Matt Perry. Art by Desi Aleman. In memory of Leng Heng Prak.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>How Women Work: Gender,  digital labor, and (not) getting paid to do what you love</title>
        <itunes:title>How Women Work: Gender,  digital labor, and (not) getting paid to do what you love</itunes:title>
        <link>https://dmdonig.podbean.com/e/how-women-work-gender-digital-labor-and-not-getting-paid-to-do-what-you-love/</link>
                    <comments>https://dmdonig.podbean.com/e/how-women-work-gender-digital-labor-and-not-getting-paid-to-do-what-you-love/#comments</comments>        <pubDate>Fri, 03 Dec 2021 02:19:31 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/50765b45-495d-343b-ac3b-f109bcbd9e3f</guid>
                                    <description><![CDATA[<p>In this episode, I speak with Dr. Brooke Duffy about the structure of digital labor. We talk about Instagram influencers and the people who love to hate them, the double bind that women online face in presenting themselves as both "authentic" and "relatable," and the problem with the advice we so often get, to "do what we love."</p>
<p>Dr. Brooke Erin Duffy is an Associate Professor at Cornell University, where she holds appointments in the <a href='https://communication.cals.cornell.edu/'>Department of Communication</a> and the Program in <a href='https://fgss.cornell.edu/'>Feminist, Gender & Sexuality Studies</a>. Her work spans the topics of social media, gender, identity and inequality, digital labor, and promotional culture.</p>
<p>She's the author of two monographs on gender and cultural production, including <a href='http://yalebooks.com/book/9780300218176/not-getting-paid-do-what-you-love'>(Not) Getting Paid to Do What You Love: Gender, Social Media, and Aspirational Work</a> (Yale University Press, 2017), which draws upon research with fashion bloggers, YouTubers, and Instagram influencers to explore the culture and politics of digital labor. <a href='https://www.wired.com/'>Wired</a> named it one of the "<a href='https://www.wired.com/story/the-top-tech-books-of-2017-part-1/'>Top Tech Books of 2017.</a>" Dr. Duffy's first monograph, <a href='http://www.press.uillinois.edu/books/catalog/99tzr8xd9780252037962.html'>Remake, Remodel: Women’s Magazines in the Digital Age</a> (University of Illinois Press, 2013), examined the rapidly changing technologies and political economies of media production through an analysis of the magazine industry. Duffy’s third book, <a href='https://politybooks.com/bookdetail/?isbn=9781509540501'>Platforms and Cultural Production</a>, with <a href='https://www.uva.nl/en/profile/p/o/t.poell/t.poell.html'>Thomas Poell</a> and <a href='http://www.gamespace.nl/'>David Nieborg</a>, is forthcoming with Polity in 2021. She is also the co-editor of <a href='http://www.routledge.com/books/details/9780415992053/'>Key Readings in Media Today: Mass Communication in Contexts</a> with <a href='https://www.asc.upenn.edu/people/faculty/joseph-turow-phd'>Joseph Turow</a> (Routledge, 2009).</p>
<p>Dr. Duffy’s research has been published in a wide variety of academic journals, and she is also a public scholar whose work has appeared in <a href='https://www.theatlantic.com/entertainment/archive/2015/09/fashion-blogging-labor-myths/405817/'>The Atlantic</a>, <a href='https://www.vox.com/the-goods/22323961/meghan-markle-fakery-piers-morgan-authenticity'>Vox</a>, <a href='https://www.timeshighereducation.com/author/brooke-erin-duffy'>Times Higher Education</a>, <a href='https://www.wired.com/story/when-instagram-influencing-isnt-so-glamorous/'>Wired</a>, and <a href='https://qz.com/author/brookeerinduffy/'>Quartz</a>. Her commentary has been featured in The New York Times, The Guardian, the BBC, Vox, The Washington Post, USA Today, and Vice, among others.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I speak with Dr. Brooke Duffy about the structure of digital labor. We talk about Instagram influencers and the people who love to hate them, the double bind that women online face in presenting themselves as both "authentic" and "relatable," and the problem with the advice we so often get, to "do what we love."</p>
<p>Dr. Brooke Erin Duffy is an Associate Professor at Cornell University, where she holds appointments in the <a href='https://communication.cals.cornell.edu/'>Department of Communication</a> and the Program in <a href='https://fgss.cornell.edu/'>Feminist, Gender & Sexuality Studies</a>. Her work spans the topics of social media, gender, identity and inequality, digital labor, and promotional culture.</p>
<p>She's the author of two monographs on gender and cultural production, including <a href='http://yalebooks.com/book/9780300218176/not-getting-paid-do-what-you-love'><em>(Not) Getting Paid to Do What You Love: Gender, Social Media, and Aspirational Work</em></a> (Yale University Press, 2017), which draws upon research with fashion bloggers, YouTubers, and Instagram influencers to explore the culture and politics of digital labor. <a href='https://www.wired.com/'><em>Wired</em></a> named it one of the "<a href='https://www.wired.com/story/the-top-tech-books-of-2017-part-1/'>Top Tech Books of 2017.</a>" Dr. Duffy's first monograph, <a href='http://www.press.uillinois.edu/books/catalog/99tzr8xd9780252037962.html'><em>Remake, Remodel: Women’s Magazines in the Digital Age</em></a> (University of Illinois Press, 2013), examined the rapidly changing technologies and political economies of media production through an analysis of the magazine industry. Duffy’s third book, <a href='https://politybooks.com/bookdetail/?isbn=9781509540501'><em>Platforms and Cultural Production</em></a>, with <a href='https://www.uva.nl/en/profile/p/o/t.poell/t.poell.html'>Thomas Poell</a> and <a href='http://www.gamespace.nl/'>David Nieborg</a>, is forthcoming with Polity in 2021. She is also the co-editor of <a href='http://www.routledge.com/books/details/9780415992053/'><em>Key Readings in Media Today: Mass Communication in Contexts</em></a> with <a href='https://www.asc.upenn.edu/people/faculty/joseph-turow-phd'>Joseph Turow</a> (Routledge, 2009).</p>
<p>Dr. Duffy’s research has been published in a wide variety of academic journals, and she is also a public scholar whose work has appeared in <a href='https://www.theatlantic.com/entertainment/archive/2015/09/fashion-blogging-labor-myths/405817/'><em>The Atlantic</em></a>, <a href='https://www.vox.com/the-goods/22323961/meghan-markle-fakery-piers-morgan-authenticity'><em>Vox</em></a>, <a href='https://www.timeshighereducation.com/author/brooke-erin-duffy'><em>Times Higher Education</em></a>, <a href='https://www.wired.com/story/when-instagram-influencing-isnt-so-glamorous/'><em>Wired</em></a>, and <a href='https://qz.com/author/brookeerinduffy/'><em>Quartz</em></a>. Her commentary has been featured in <em>The New York Times</em>, <em>The Guardian</em>, the BBC, <em>Vox</em>, <em>The Washington Post</em>, <em>USA Today</em>, and <em>Vice</em>, among others.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="99374088" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/34xqyt/Brooke_Duffy_Podcast_mixdownar75j.mp3"/>
        <itunes:summary>In this episode, I speak with Dr. Brooke Duffy about the structure of digital labor. We talk about Instagram influencers and the people who love to hate them, the double bind that women online face in presenting themselves as both "authentic" and "relatable," and the problem with the advice we so often get, to "do what we love."</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4140</itunes:duration>
        <itunes:season>7</itunes:season>
        <itunes:episode>61</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I speak with Dr. Brooke Duffy about the structure of digital labor. We talk about Instagram influencers and the people who love to hate them, the double bind that women online face in presenting themselves as both "authentic" and "relatable," and the problem with the advice we so often get, to "do what we love." Dr. Brooke Erin Duffy is an Associate Professor at Cornell University, where she holds appointments in the Department of Communication and the Program in Feminist, Gender &amp; Sexuality Studies. Her work spans the topics of social media, gender, identity and inequality, digital labor, and promotional culture. She's the author of two monographs on gender and cultural production, including (Not) Getting Paid to Do What You Love: Gender, Social Media, and Aspirational Work (Yale University Press, 2017), which draws upon research with fashion bloggers, YouTubers, and Instagram influencers to explore the culture and politics of digital labor. Wired named it one of the "Top Tech Books of 2017." Dr. Duffy's first monograph, Remake, Remodel: Women’s Magazines in the Digital Age (University of Illinois Press, 2013), examined the rapidly changing technologies and political economies of media production through an analysis of the magazine industry. Duffy’s third book, Platforms and Cultural Production with Thomas Poell and David Nieborg, is forthcoming with Polity in 2021. She is also the co-editor of Key Readings in Media Today: Mass Communication in Contexts with Joseph Turow (Routledge, 2009). Dr. Duffy’s research has been published in a wide variety of academic journals, and she is also a public scholar whose work has appeared in The Atlantic, Vox, Times Higher Education, Wired, and Quartz. Her commentary has been featured in The New York Times, The Guardian, the BBC, Vox, The Washington Post, USA Today, and Vice, among others. This episode was produced by Matt Perry. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>A Conversation with Open Dyalog: civil discourse in the digital age</title>
        <itunes:title>A Conversation with Open Dyalog: civil discourse in the digital age</itunes:title>
        <link>https://dmdonig.podbean.com/e/a-conversation-with-open-dyalog-civil-discourse-in-the-digital-age/</link>
                    <comments>https://dmdonig.podbean.com/e/a-conversation-with-open-dyalog-civil-discourse-in-the-digital-age/#comments</comments>        <pubDate>Fri, 19 Nov 2021 01:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/d2e3dc19-e8bf-3e2a-90a1-847ef0f70b2f</guid>
                                    <description><![CDATA[<p>In this week’s episode, we bring you a conversation about, well, how we have conversations. Listeners of this podcast, students who have taken my class, and anyone who has heard me talk about ethics and technology in public has heard me talk about the importance of civil discourse. In an age of Twitter feuds, Facebook shouting matches, and an online culture of escalating arguments, learning the skills of talking to one another is more important—and less understood—than ever.</p>
<p>That is why, this week, I invited the founder of the <a href='https://www.opendyalog.com/'>Open Dyalog</a> movement, Zahabiya Nuruddin, to join me to talk about how, in a time when we interact with one another increasingly online, through our tech, we can practice the ethics of civil discourse. The head of research for the “Technically Human” team, Sakina Nuruddin, co-hosts this week's show. Sakina is the founder of the Cal Poly chapter of Open Dyalog.</p>
<p>Interested in Open Dyalog? Contact Sakina at <a href='mailto:snuruddi@calpoly.edu'>snuruddi@calpoly.edu</a>. </p>
<p>Thanks for listening! We are off next week for the Thanksgiving break—we’ll return the first week of December for our final episode of the Technically Human season.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this week’s episode, we bring you a conversation about, well, how we have conversations. Listeners of this podcast, students who have taken my class, and anyone who has heard me talk about ethics and technology in public has heard me talk about the importance of civil discourse. In an age of Twitter feuds, Facebook shouting matches, and an online culture of escalating arguments, learning the skills of talking to one another is more important—and less understood—than ever.</p>
<p>That is why, this week, I invited the founder of the <a href='https://www.opendyalog.com/'>Open Dyalog</a> movement, Zahabiya Nuruddin, to join me to talk about how, in a time when we interact with one another increasingly online, through our tech, we can practice the ethics of civil discourse. The head of research for the “Technically Human” team, Sakina Nuruddin, co-hosts this week's show. Sakina is the founder of the Cal Poly chapter of Open Dyalog.</p>
<p>Interested in Open Dyalog? Contact Sakina at <a href='mailto:snuruddi@calpoly.edu'>snuruddi@calpoly.edu</a>. </p>
<p>Thanks for listening! We are off next week for the Thanksgiving break—we’ll return the first week of December for our final episode of the Technically Human season.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="79211170" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/jy3uhd/Open_Dyalog_Podcast_mixdownah84h.mp3"/>
        <itunes:summary>In this week’s episode, we bring you a conversation about, well, how we have conversations, featuring the founder of the Open Dyalog movement, Zahabiya Nuruddin. Listeners of this podcast, students who have taken my class, and anyone who has heard me talk about ethics and technology in public has heard me talk about the importance of civil discourse. In an age of Twitter feuds, Facebook shouting matches, and an online culture of escalating arguments, learning the skills of talking to one another is more important—and less understood—than ever.</itunes:summary>
        <itunes:author>Deb Donig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3300</itunes:duration>
        <itunes:season>7</itunes:season>
        <itunes:episode>60</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this week’s episode, we bring you a conversation about, well, how we have conversations. Listeners of this podcast, students who have taken my class, and anyone who has heard me talk about ethics and technology in public has heard me talk about the importance of civil discourse. In an age of Twitter feuds, Facebook shouting matches, and an online culture of escalating arguments, learning the skills of talking to one another is more important—and less understood—than ever. That is why, this week, I invited the founder of the Open Dyalog movement, Zahabiya Nuruddin, to join me to talk about how, in a time when we interact with one another increasingly online, through our tech, we can practice the ethics of civil discourse. The head of research for the “Technically Human” team, Sakina Nuruddin, co-hosts this week's show. Sakina is the founder of the Cal Poly chapter of Open Dyalog. Interested in Open Dyalog? Contact Sakina at snuruddi@calpoly.edu. Thanks for listening! We are off next week for the Thanksgiving break—we’ll return the first week of December for our final episode of the Technically Human season. This episode was produced by Matt Perry. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Cybersecurity in the age of Zero Trust</title>
        <itunes:title>Cybersecurity in the age of Zero Trust</itunes:title>
        <link>https://dmdonig.podbean.com/e/cybersecurity-in-the-age-of-zero-trust/</link>
                    <comments>https://dmdonig.podbean.com/e/cybersecurity-in-the-age-of-zero-trust/#comments</comments>        <pubDate>Fri, 12 Nov 2021 02:00:08 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/1c7a7aea-43e0-3ccc-ab03-960738402b88</guid>
                                    <description><![CDATA[<p>In this episode, I talk to Rob Dickinson, the CEO of <a href='https://resurface.io/'>Resurface</a>. We talk about the ethic behind cybersecurity technology, the ethics of data ownership, and what regulations and laws can--and can't--do. </p>
<p>Rob Dickinson is Co-Founder and CEO at Resurface, an innovative platform focused on data privacy and API activity. His work on observability, cybersecurity, and the “internet of things” has marked him out as a thought leader in this part of the tech world. As a technologist, Rob seeks to build a future of responsible and ethical APIs. He is a pioneering thinker in the movement to regain ownership of our data, and in what he calls Zero Trust cybersecurity.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I talk to Rob Dickinson, the CEO of <a href='https://resurface.io/'>Resurface</a>. We talk about the ethics behind cybersecurity technology, the ethics of data ownership, and what regulations and laws can--and can't--do. </p>
<p>Rob Dickinson is Co-Founder and CEO at Resurface, an innovative platform focused on data privacy and API activity. His work on observability, cybersecurity, and the “internet of things” has set him apart as a thought leader in this part of the tech world. As a technologist, Rob seeks to build a future of responsible and ethical APIs. He is a pioneering thinker in the movement to regain ownership of our data, and in what he calls Zero Trust cybersecurity.</p>
<p> </p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure length="73452146" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/u2yx3k/Rob_Dickinson_Podcast_mixdownat0ow.mp3"/>
        <itunes:summary>In this episode, I talk to Rob Dickinson, the CEO of Resurface. We talk about the ethics behind cybersecurity technology, the ethics of data ownership, and what regulations and laws can--and can't--do.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3060</itunes:duration>
        <itunes:season>7</itunes:season>
        <itunes:episode>59</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I talk to Rob Dickinson, the CEO of Resurface. We talk about the ethics behind cybersecurity technology, the ethics of data ownership, and what regulations and laws can--and can't--do. Rob Dickinson is Co-Founder and CEO at Resurface, an innovative platform focused on data privacy and API activity. His work on observability, cybersecurity, and the “internet of things” has set him apart as a thought leader in this part of the tech world. As a technologist, Rob seeks to build a future of responsible and ethical APIs. He is a pioneering thinker in the movement to regain ownership of our data, and in what he calls Zero Trust cybersecurity. This episode was produced by Matt Perry. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Technically Human 101: a crash course on being human in the age of tech</title>
        <itunes:title>Technically Human 101: a crash course on being human in the age of tech</itunes:title>
        <link>https://dmdonig.podbean.com/e/technically-human-101-a-crash-course-on-being-human-in-the-age-of-tech/</link>
                    <comments>https://dmdonig.podbean.com/e/technically-human-101-a-crash-course-on-being-human-in-the-age-of-tech/#comments</comments>        <pubDate>Fri, 05 Nov 2021 02:40:22 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/b6ed5ef6-a989-387e-9897-55baa8b48a14</guid>
                                    <description><![CDATA[<p>In this episode, we do a deep dive into the Technically Human show archive to bring you an episode that puts together some of the show's top moments, a guide to the key questions, concepts, and characters critical to thinking about ethics and technology. </p>
<p>We've put together a show that answers some of the top questions that we consistently get asked about ethics and technology, with some of "Technically Human's" most memorable guests.</p>
<p>This episode features commentary on the philosophy of the good with <a href='https://philosophy.calpoly.edu/faculty-staff/ryan-jenkins'>Ryan Jenkins</a>, bioethics with <a href='https://med.nyu.edu/faculty/arthur-l-caplan'>Art Caplan</a>, digital human rights and sci-fi with <a href='https://daveeggers.net/dave-eggers'>Dave Eggers</a>, BUMMER technological critiques by <a href='http://www.jaronlanier.com/'>Jaron Lanier</a>, commentary on democracy and tech with <a href='https://www.yaeleisenstat.com/'>Yaël Eisenstat</a>, Silicon Valley humor with <a href='https://danlyons.io/'>Dan Lyons</a>, technology's impact on intimacy with <a href='https://drjuliealbright.com/best-keynote-speaker/'>Julie Albright</a>, a meditation on the ethics of the algorithm with <a href='https://elts.ucla.edu/person/todd-presner/'>Todd Presner</a>, a consideration of disability and tech with <a href='https://georgeestreich.com/'>George Estreich</a>, and a conversation about technological utopianism with <a href='https://morganya.org/'>Morgan Ames</a>.</p>
<p>If you're looking for a crash course in ethical technology, this is the episode for you!</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, we do a deep dive into the Technically Human show archive to bring you an episode that puts together some of the show's top moments, a guide to the key questions, concepts, and characters critical to thinking about ethics and technology. </p>
<p>We've put together a show that answers some of the top questions that we consistently get asked about ethics and technology, with some of "Technically Human's" most memorable guests.</p>
<p>This episode features commentary on the philosophy of the good with <a href='https://philosophy.calpoly.edu/faculty-staff/ryan-jenkins'>Ryan Jenkins</a>, bioethics with <a href='https://med.nyu.edu/faculty/arthur-l-caplan'>Art Caplan</a>, digital human rights and sci-fi with <a href='https://daveeggers.net/dave-eggers'>Dave Eggers</a>, BUMMER technological critiques by <a href='http://www.jaronlanier.com/'>Jaron Lanier</a>, commentary on democracy and tech with <a href='https://www.yaeleisenstat.com/'>Yaël Eisenstat</a>, Silicon Valley humor with <a href='https://danlyons.io/'>Dan Lyons</a>, technology's impact on intimacy with <a href='https://drjuliealbright.com/best-keynote-speaker/'>Julie Albright</a>, a meditation on the ethics of the algorithm with <a href='https://elts.ucla.edu/person/todd-presner/'>Todd Presner</a>, a consideration of disability and tech with <a href='https://georgeestreich.com/'>George Estreich</a>, and a conversation about technological utopianism with <a href='https://morganya.org/'>Morgan Ames</a>.</p>
<p>If you're looking for a crash course in ethical technology, this is the episode for you!</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="102252644" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/qq2vew/Compilation_Episode_Podcast_mixdowncompleteblv76.mp3"/>
        <itunes:summary>In this episode, we do a deep dive into the Technically Human show archive to bring you an episode that puts together some of the show's top moments, a guide to the key questions, concepts, and characters critical to thinking about ethics and technology.
We've put together a show that answers some of the top questions that we consistently get asked about ethics and technology, with some of “Technically Human's” most memorable guests.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4260</itunes:duration>
        <itunes:season>6</itunes:season>
        <itunes:episode>58</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, we do a deep dive into the Technically Human show archive to bring you an episode that puts together some of the show's top moments, a guide to the key questions, concepts, and characters critical to thinking about ethics and technology. We've put together a show that answers some of the top questions that we consistently get asked about ethics and technology, with some of "Technically Human's" most memorable guests. This episode features commentary on the philosophy of the good with Ryan Jenkins, bioethics with Art Caplan, digital human rights and sci-fi with Dave Eggers, BUMMER technological critiques by Jaron Lanier, commentary on democracy and tech with Yaël Eisenstat, Silicon Valley humor with Dan Lyons, technology's impact on intimacy with Julie Albright, a meditation on the ethics of the algorithm with Todd Presner, a consideration of disability and tech with George Estreich, and a conversation about technological utopianism with Morgan Ames. If you're looking for a crash course in ethical technology, this is the episode for you! This episode was produced by Matt Perry. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Embodied Technology and the Quantified Self with Dr. Steven LeBoeuf</title>
        <itunes:title>Embodied Technology and the Quantified Self with Dr. Steven LeBoeuf</itunes:title>
        <link>https://dmdonig.podbean.com/e/embodied-technology-and-the-quantified-self-with-dr-steven-leboeuf/</link>
                    <comments>https://dmdonig.podbean.com/e/embodied-technology-and-the-quantified-self-with-dr-steven-leboeuf/#comments</comments>        <pubDate>Fri, 29 Oct 2021 01:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/e4d78619-7e37-388d-8028-757347319669</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I dive into the history, the sociology, and the ethics of wearables with Dr. Steven LeBoeuf, the President and Co-Founder of <a href='https://valencell.com/'>Valencell Technologies</a>. We talk about how wearable technologies trouble the boundaries of what we call a "self," and how what it means to be human is changing as we increasingly enlist technologies on our bodies in reporting on what is happening in our bodies.</p>
<p>We talk about the evolution of wearables, the technologies that go into wearable tech, and why you might want to think twice about wearing devices that collect information about what goes on inside your body, especially if you don't know who might be using that data--or why.</p>
<p> </p>
<p>This episode was produced by Mereck Palazzo & Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I dive into the history, the sociology, and the ethics of wearables with Dr. Steven LeBoeuf, the President and Co-Founder of <a href='https://valencell.com/'>Valencell Technologies</a>. We talk about how wearable technologies trouble the boundaries of what we call a "self," and how what it means to be human is changing as we increasingly enlist technologies <em>on</em> our bodies in reporting on what is happening <em>in </em>our bodies.</p>
<p>We talk about the evolution of wearables, the technologies that go into wearable tech, and why you might want to think twice about wearing devices that collect information about what goes on inside your body, especially if you don't know who might be using that data--or why.</p>
<p> </p>
<p>This episode was produced by Mereck Palazzo & Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="91455671" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/ymth46/Steven_Lebeouf_Podcast_mixdown_FIXat6os.mp3"/>
        <itunes:summary>In this episode of “Technically Human,” I dive into the history, the sociology, and the ethics of wearables with Dr. Steven LeBoeuf, the President and Co-Founder of Valencell Technologies. We talk about how wearable technologies trouble the boundaries of what we call a “self,” and how what it means to be human is changing as we increasingly enlist technologies on our bodies in reporting on what is happening in our bodies.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3810</itunes:duration>
        <itunes:season>6</itunes:season>
        <itunes:episode>57</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I dive into the history, the sociology, and the ethics of wearables with Dr. Steven LeBoeuf, the President and Co-Founder of Valencell Technologies. We talk about how wearable technologies trouble the boundaries of what we call a "self," and how what it means to be human is changing as we increasingly enlist technologies on our bodies in reporting on what is happening in our bodies. We talk about the evolution of wearables, the technologies that go into wearable tech, and why you might want to think twice about wearing devices that collect information about what goes on inside your body, especially if you don't know who might be using that data--or why.   This episode was produced by Mereck Palazzo &amp; Matt Perry. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Fork in the Road to Ethical Technology: Vivek Wadhwa on navigating ethical roadmaps in a perilous tech landscape</title>
        <itunes:title>The Fork in the Road to Ethical Technology: Vivek Wadhwa on navigating ethical roadmaps in a perilous tech landscape</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-fork-in-the-road-to-ethical-technology-vivek-wadhwa-on-navigating-ethical-roadmaps-in-a-perilous-tech-landscape/</link>
                    <comments>https://dmdonig.podbean.com/e/the-fork-in-the-road-to-ethical-technology-vivek-wadhwa-on-navigating-ethical-roadmaps-in-a-perilous-tech-landscape/#comments</comments>        <pubDate>Fri, 22 Oct 2021 02:24:23 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/1a170052-d894-3400-ab59-397c049539b4</guid>
                                    <description><![CDATA[<p>In this episode, I sit down with Vivek Wadhwa to talk about his pivot from tech entrepreneur and big tech enthusiast, to critic and activist. We talk about his path to tech and then to his activism in education, his research into tech innovation, and his research into the importance of global diversity when considering questions of how we imagine, innovate, and build.</p>
<p><a href='https://wadhwa.com/'>Vivek Wadhwa</a> is a Distinguished Fellow at Harvard Law School’s Labor and Worklife Program. He is the author of five best-selling books: <a href='https://www.penguinrandomhouse.com/books/646544/from-incremental-to-exponential-by-vivek-wadhwa-and-ismail-amla-with-alex-salkever/'>From Incremental to Exponential</a>; Your Happiness Was Hacked; <a href='https://g.co/kgs/UCFMYg'>The Driver in the Driverless Car</a>; <a href='https://g.co/kgs/x1Y2zW'>Innovating Women</a>; and <a href='https://wsp.wharton.upenn.edu/book/the-immigrant-exodus/'>The Immigrant Exodus</a>.</p>
<p>He has been a globally syndicated columnist for The Washington Post and held appointments at Carnegie Mellon University, Duke University, Stanford Law School, UC Berkeley, Emory University, and Singularity University. In 2012, the U.S. Government awarded Wadhwa distinguished recognition as an “Outstanding American by Choice” for his “commitment to this country and to the common civic values that unite us as Americans.”</p>
<p>He was also named one of the world’s “Top 100 Global Thinkers” by Foreign Policy magazine that year; in June 2013, he was on TIME magazine’s “Tech 40” list of the forty most influential minds in tech; and in September 2015, he was second on a list of “ten men worth emulating” in The Financial Times. In 2018, he was awarded Silicon Valley Forum’s Visionary Award, joining a list of luminaries “who have made Silicon Valley synonymous with creativity and life-changing advancements in technology.”</p>
<p>Wadhwa is an advisor to several governments; mentors entrepreneurs; and writes for top publications across the globe. He has also researched Silicon Valley’s diversity, or the lack of it. He documented that women entrepreneurs have the same backgrounds and motivations as men do, but are rare in the ranks of technology CEOs and CTOs. He is the founding president of the Carolinas chapter of The IndUS Entrepreneurs (TIE), a non-profit global network intended to foster entrepreneurship. He has been featured in thousands of articles in publications worldwide, including the Wall Street Journal, The Economist, Forbes magazine, The Washington Post, The New York Times, U.S. News and World Report, and Science Magazine, and has made many appearances on U.S. and international TV stations, including CBS 60 Minutes, PBS, CNN, ABC, NBC, CNBC, and the BBC.</p>
<p>This episode was produced by Mereck Palazzo & Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I sit down with Vivek Wadhwa to talk about his pivot from tech entrepreneur and big tech enthusiast, to critic and activist. We talk about his path to tech and then to his activism in education, his research into tech innovation, and his research into the importance of global diversity when considering questions of how we imagine, innovate, and build.</p>
<p><a href='https://wadhwa.com/'>Vivek Wadhwa</a> is a Distinguished Fellow at Harvard Law School’s Labor and Worklife Program. He is the author of five best-selling books: <a href='https://www.penguinrandomhouse.com/books/646544/from-incremental-to-exponential-by-vivek-wadhwa-and-ismail-amla-with-alex-salkever/'><em>From Incremental to Exponential</em></a>; <em>Your Happiness Was Hacked</em>; <a href='https://g.co/kgs/UCFMYg'><em>The Driver in the Driverless Car</em></a>; <a href='https://g.co/kgs/x1Y2zW'><em>Innovating Women</em></a>; and <a href='https://wsp.wharton.upenn.edu/book/the-immigrant-exodus/'><em>The Immigrant Exodus</em></a>.</p>
<p>He has been a globally syndicated columnist for <em>The Washington Post</em> and held appointments at Carnegie Mellon University, Duke University, Stanford Law School, UC Berkeley, Emory University, and Singularity University. In 2012, the U.S. Government awarded Wadhwa distinguished recognition as an “Outstanding American by Choice” for his “commitment to this country and to the common civic values that unite us as Americans.”</p>
<p>He was also named one of the world’s “Top 100 Global Thinkers” by <em>Foreign Policy</em> magazine that year; in June 2013, he was on <em>TIME</em> magazine’s “Tech 40” list of the forty most influential minds in tech; and in September 2015, he was second on a list of “ten men worth emulating” in <em>The Financial Times</em>. In 2018, he was awarded Silicon Valley Forum’s Visionary Award, joining a list of luminaries “who have made Silicon Valley synonymous with creativity and life-changing advancements in technology.”</p>
<p>Wadhwa is an advisor to several governments; mentors entrepreneurs; and writes for top publications across the globe. He has also researched Silicon Valley’s diversity, or the lack of it. He documented that women entrepreneurs have the same backgrounds and motivations as men do, but are rare in the ranks of technology CEOs and CTOs. He is the founding president of the Carolinas chapter of The IndUS Entrepreneurs (TIE), a non-profit global network intended to foster entrepreneurship. He has been featured in thousands of articles in publications worldwide, including the <em>Wall Street Journal</em>, <em>The Economist</em>, <em>Forbes</em> magazine, <em>The Washington Post</em>, <em>The New York Times</em>, <em>U.S. News and World Report</em>, and <em>Science Magazine</em>, and has made many appearances on U.S. and international TV stations, including CBS <em>60 Minutes</em>, PBS, CNN, ABC, NBC, CNBC, and the BBC.</p>
<p>This episode was produced by Mereck Palazzo & Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="62061604" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/ayw6wm/Vivek_Wadhwa_Podcast_Mixdown.mp3"/>
        <itunes:summary>In this episode, I sit down with Vivek Wadhwa to talk about his pivot from tech entrepreneur and big tech enthusiast, to critic and activist. We talk about his path to tech and then to his activism in education, his research into tech innovation, and his research into the importance of global diversity when considering questions of how we imagine, innovate, and build.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2580</itunes:duration>
        <itunes:season>6</itunes:season>
        <itunes:episode>56</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I sit down with Vivek Wadhwa to talk about his pivot from tech entrepreneur and big tech enthusiast, to critic and activist. We talk about his path to tech and then to his activism in education, his research into tech innovation, and his research into the importance of global diversity when considering questions of how we imagine, innovate, and build. Vivek Wadhwa is a Distinguished Fellow at Harvard Law School’s Labor and Worklife Program. He is the author of five best-selling books: From Incremental to Exponential; Your Happiness Was Hacked; The Driver in the Driverless Car; Innovating Women; and The Immigrant Exodus. He has been a globally syndicated columnist for The Washington Post and held appointments at Carnegie Mellon University, Duke University, Stanford Law School, UC Berkeley, Emory University, and Singularity University. In 2012, the U.S. Government awarded Wadhwa distinguished recognition as an “Outstanding American by Choice” for his “commitment to this country and to the common civic values that unite us as Americans.” He was also named one of the world’s “Top 100 Global Thinkers” by Foreign Policy magazine that year; in June 2013, he was on TIME magazine’s “Tech 40” list of the forty most influential minds in tech; and in September 2015, he was second on a list of “ten men worth emulating” in The Financial Times. In 2018, he was awarded Silicon Valley Forum’s Visionary Award, joining a list of luminaries “who have made Silicon Valley synonymous with creativity and life-changing advancements in technology.” Wadhwa is an advisor to several governments; mentors entrepreneurs; and writes for top publications across the globe. He has also researched Silicon Valley’s diversity, or the lack of it. He documented that women entrepreneurs have the same backgrounds and motivations as men do, but are rare in the ranks of technology CEOs and CTOs. He is the founding president of the Carolinas chapter of The IndUS Entrepreneurs (TIE), a non-profit global network intended to foster entrepreneurship. He has been featured in thousands of articles in publications worldwide, including the Wall Street Journal, The Economist, Forbes magazine, The Washington Post, The New York Times, U.S. News and World Report, and Science Magazine, and has made many appearances on U.S. and international TV stations, including CBS 60 Minutes, PBS, CNN, ABC, NBC, CNBC, and the BBC. This episode was produced by Mereck Palazzo &amp; Matt Perry. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Principled Dissent: Joe Toscano explains why he left the tech industry and what real change looks like</title>
        <itunes:title>Principled Dissent: Joe Toscano explains why he left the tech industry and what real change looks like</itunes:title>
        <link>https://dmdonig.podbean.com/e/principled-dissent-joe-toscano-explains-why-he-left-the-tech-industry-and-what-real-change-looks-like/</link>
                    <comments>https://dmdonig.podbean.com/e/principled-dissent-joe-toscano-explains-why-he-left-the-tech-industry-and-what-real-change-looks-like/#comments</comments>        <pubDate>Fri, 15 Oct 2021 12:36:07 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/a82163df-0a87-3334-90f4-8ba6e54c1504</guid>
                                    <description><![CDATA[<p>In this episode, I speak with Joe Toscano about why he left Google in 2017, and how he became one of tech's leading critics. We talk about what he saw in the culture of tech that led him to leave, and what led him to found the <a href='https://www.beacontrustnetwork.com/'>Better Ethics and Consumer Outcomes Network (BEACON)</a>. We discuss the relationship between ethics, law, and policy, and best practices for building a space for change in the public and in the industry.</p>
<p>Joe Toscano is an award-winning designer, published author, and international keynote speaker. Joe previously consulted for Google in Mountain View, CA. He left because he believed the industry was misusing data and felt the issues needed to be addressed through innovation rather than strict regulation.</p>
<p>Since leaving, Joe has traveled the world speaking to audiences ranging from 10 people at local events to 15,000-person corporate events; he has written a book, <a href='https://www.amazon.com/Automating-Humanity-Joe-Toscano/dp/1576879208/ref=sr_1_1?ie=UTF8&qid=1533500065&sr=8-1&keywords=automating+humanity'>Automating Humanity</a>; and he has started the <a href='https://www.beacontrustnetwork.com/'>Better Ethics and Consumer Outcomes Network (BEACON)</a>, all focused on increasing technology literacy, discovering opportunities for intentional and thoughtful innovative practices, and moving communities forward through purpose-driven innovation.</p>
<p>Outside of BEACON, Joe also writes for Forbes, is a member of the World Economic Forum's Steering Committee for Data Protection, and is featured in <a href='http://netflix.com/thesocialdilemma'>The Social Dilemma</a>. His work is in the process of being translated into law, putting him in the room with legislators across the United States, including the New York State Senate and dozens of Attorneys General, to whom he submitted evidence in the antitrust case against Google.</p>
<p>This episode was produced by Mereck Palazzo & Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I speak with Joe Toscano about why he left Google in 2017, and how he became one of tech's leading critics. We talk about what he saw in the culture of tech that led him to leave, and what led him to found the <a href='https://www.beacontrustnetwork.com/'>Better Ethics and Consumer Outcomes Network (BEACON)</a>. We discuss the relationship between ethics, law, and policy, and best practices for building a space for change in the public and in the industry.</p>
<p>Joe Toscano is an award-winning designer, published author, and international keynote speaker. Joe previously consulted for Google in Mountain View, CA. He left because he believed the industry was misusing data and felt the issues needed to be addressed through innovation rather than strict regulation.</p>
<p>Since leaving, Joe has traveled the world speaking to audiences ranging from 10 people at local events to 15,000-person corporate events; he has written a book, <a href='https://www.amazon.com/Automating-Humanity-Joe-Toscano/dp/1576879208/ref=sr_1_1?ie=UTF8&qid=1533500065&sr=8-1&keywords=automating+humanity'>Automating Humanity</a>; and he has started the <a href='https://www.beacontrustnetwork.com/'>Better Ethics and Consumer Outcomes Network (BEACON)</a>, all focused on increasing technology literacy, discovering opportunities for intentional and thoughtful innovative practices, and moving communities forward through purpose-driven innovation.</p>
<p>Outside of BEACON, Joe also writes for Forbes, is a member of the World Economic Forum's Steering Committee for Data Protection, and is featured in <a href='http://netflix.com/thesocialdilemma'>The Social Dilemma</a>. His work is in the process of being translated into law, putting him in the room with legislators across the United States, including the New York State Senate and dozens of Attorneys General, to whom he submitted evidence in the antitrust case against Google.</p>
<p>This episode was produced by Mereck Palazzo & Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="68413114" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/a36435/Joe_Toscano_Podcast_mixdown8y8az.mp3"/>
        <itunes:summary>In this episode, I speak with Joe Toscano about why he left Google in 2017, and how he became one of tech's leading critics. We talk about what he saw in the culture of tech that led him to leave, and what led him to found the Better Ethics and Consumer Outcomes Network (BEACON). We discuss the relationship between ethics, law, and policy, and best practices for building a space for change in the public and in the industry.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2850</itunes:duration>
        <itunes:season>6</itunes:season>
        <itunes:episode>55</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I speak with Joe Toscano about why he left Google in 2017, and how he became one of tech's leading critics. We talk about what he saw in the culture of tech that led him to leave, and what led him to found the Better Ethics and Consumer Outcomes Network (BEACON). We discuss the relationship between ethics, law, and policy, and best practices for building a space for change in the public and in the industry. Joe Toscano is an award-winning designer, published author, and international keynote speaker. Joe previously consulted for Google in Mountain View, CA. He left because he believed the industry was misusing data and felt the issues needed to be addressed through innovation rather than strict regulation. Since leaving, Joe has traveled the world speaking to audiences ranging from 10 people at local events to 15,000-person corporate events; he has written a book, Automating Humanity; and he has started the Better Ethics and Consumer Outcomes Network (BEACON), all focused on increasing technology literacy, discovering opportunities for intentional and thoughtful innovative practices, and moving communities forward through purpose-driven innovation. Outside of BEACON, Joe also writes for Forbes, is a member of the World Economic Forum's Steering Committee for Data Protection, and is featured in The Social Dilemma. His work is in the process of being translated into law, putting him in the room with legislators across the United States, including the New York State Senate and dozens of Attorneys General, to whom he submitted evidence in the antitrust case against Google. This episode was produced by Mereck Palazzo &amp; Matt Perry. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Memory Drive: The ethics of Holocaust memory in the age of virtual reality</title>
        <itunes:title>Memory Drive: The ethics of Holocaust memory in the age of virtual reality</itunes:title>
        <link>https://dmdonig.podbean.com/e/memory-drive-the-ethics-of-living-holocaust-memory-after-death/</link>
                    <comments>https://dmdonig.podbean.com/e/memory-drive-the-ethics-of-living-holocaust-memory-after-death/#comments</comments>        <pubDate>Fri, 08 Oct 2021 01:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/5a89e675-867b-330b-9caa-a11abaf98b31</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I sit down with Dr. Stephen Smith, the director of USC's <a href='https://sfi.usc.edu/dit'>Dimensions in Holocaust Testimony</a>.</p>
<p>We talk about the ethics of memory, testimony, and witness, and how these fundamental concepts are being radically changed by developing technologies. Stephen explains the ethics of Holocaust witness in the digital age and how a new interactive program that enlists virtual technologies may allow Holocaust testimony to remain vivified for generations to come. How should we think about the reality of virtual survivors? How is our basic concept of "witness" transformed by new technologies? And what does "memory" mean in our current digital age?</p>
<p><a href='https://sfi.usc.edu/about/staff/stephen-d-smith-phd'>Dr. Stephen D. Smith</a> is the Finci-Viterbi Executive Director of <a href='https://sfi.usc.edu/'>USC Shoah Foundation</a>, and holds the UNESCO Chair on Genocide Education.</p>
<p>Smith founded the UK Holocaust Centre in Nottinghamshire, England and cofounded the Aegis Trust for the prevention of crimes against humanity and genocide.</p>
<p>Smith has served as a producer on a number of film and new media projects, including Dimensions in Testimony, and the VR project The Last Goodbye. He also co-hosts the <a href='https://www.memorygenerationpodcast.com/'>MemoryGeneration podcast</a>, alongside documentary storyteller Rachael Cerrotti, a show that explores dimensions of testimony from survivors of genocide.</p>
<p>In recognition of his work, Smith has become a member of the Order of the British Empire and received the Interfaith Gold Medallion. He also holds two honorary doctorates, and lectures widely on issues relating to the history and collective response to the Holocaust, genocide, and crimes against humanity.</p>
<p>Dimensions in Testimony is a collection of interactive video testimonies from the USC Shoah Foundation, enabling people to engage with Holocaust survivors and other witnesses to genocide, by asking questions and conversing. It is the subject of the Academy Award-nominated documentary film, <a href='https://www.116cameras.com/'>116 Cameras</a>.</p>
<p>This episode was produced by Mereck Palazzo & Matt Perry.</p>
<p>Art by Desi Aleman.</p>
<p>This episode is dedicated to Izzy Arbeiter.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I sit down with Dr. Stephen Smith, the director of USC's <a href='https://sfi.usc.edu/dit'>Dimensions in Holocaust Testimony</a>.</p>
<p>We talk about the ethics of memory, testimony, and witness, and how these fundamental concepts are being radically changed by developing technologies. Stephen explains the ethics of Holocaust witness in the digital age and how a new interactive program that enlists virtual technologies may allow Holocaust testimony to remain vivified for generations to come. How should we think about the reality of virtual survivors? How is our basic concept of "witness" transformed by new technologies? And what does "memory" mean in our current digital age?</p>
<p><a href='https://sfi.usc.edu/about/staff/stephen-d-smith-phd'>Dr. Stephen D. Smith</a> is the Finci-Viterbi Executive Director of <a href='https://sfi.usc.edu/'>USC Shoah Foundation</a>, and holds the UNESCO Chair on Genocide Education.</p>
<p>Smith founded the UK Holocaust Centre in Nottinghamshire, England and cofounded the Aegis Trust for the prevention of crimes against humanity and genocide.</p>
<p>Smith has served as a producer on a number of film and new media projects, including Dimensions in Testimony, and the VR project <em>The Last Goodbye</em>. He also co-hosts the <a href='https://www.memorygenerationpodcast.com/'>MemoryGeneration podcast</a>, alongside documentary storyteller Rachael Cerrotti, a show that explores dimensions of testimony from survivors of genocide.</p>
<p>In recognition of his work, Smith has become a member of the Order of the British Empire and received the Interfaith Gold Medallion. He also holds two honorary doctorates, and lectures widely on issues relating to the history and collective response to the Holocaust, genocide, and crimes against humanity.</p>
<p>Dimensions in Testimony is a collection of interactive video testimonies from the USC Shoah Foundation, enabling people to engage with Holocaust survivors and other witnesses to genocide, by asking questions and conversing. It is the subject of the Academy Award-nominated documentary film, <a href='https://www.116cameras.com/'>116 Cameras</a>.</p>
<p>This episode was produced by Mereck Palazzo & Matt Perry.</p>
<p>Art by Desi Aleman.</p>
<p>This episode is dedicated to Izzy Arbeiter.</p>
]]></content:encoded>
                                    
        <enclosure length="96593213" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/u7x4x8/Stephen_Smith_Podcast_01_mixdown93a6h.mp3"/>
        <itunes:summary>In this episode of "Technically Human," I sit down with Dr. Stephen Smith, the director of USC's Dimensions in Holocaust Testimony.

We talk about the ethics of memory, testimony, and witness, and how these fundamental concepts are being radically changed by developing technologies. Stephen explains the ethics of Holocaust witness in the digital age and how a new interactive program that enlists virtual technologies may allow Holocaust testimony to remain vivified for generations to come. How should we think about the reality of virtual survivors? How is our basic concept of "witness" transformed by new technologies? And what does "memory" mean in our current digital age?</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4020</itunes:duration>
        <itunes:season>6</itunes:season>
        <itunes:episode>54</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I sit down with Dr. Stephen Smith, the director of USC's Dimensions in Holocaust Testimony. We talk about the ethics of memory, testimony, and witness, and how these fundamental concepts are being radically changed by developing technologies. Stephen explains the ethics of Holocaust witness in the digital age and how a new interactive program that enlists virtual technologies may allow Holocaust testimony to remain vivified for generations to come. How should we think about the reality of virtual survivors? How is our basic concept of "witness" transformed by new technologies? And what does "memory" mean in our current digital age? Dr. Stephen D. Smith is the Finci-Viterbi Executive Director of USC Shoah Foundation, and holds the UNESCO Chair on Genocide Education. Smith founded the UK Holocaust Centre in Nottinghamshire, England and cofounded the Aegis Trust for the prevention of crimes against humanity and genocide. Smith has served as a producer on a number of film and new media projects, including Dimensions in Testimony, and the VR project The Last Goodbye. He also co-hosts the MemoryGeneration podcast, alongside documentary storyteller Rachael Cerrotti, a show that explores dimensions of testimony from survivors of genocide. In recognition of his work, Smith has become a member of the Order of the British Empire and received the Interfaith Gold Medallion. He also holds two honorary doctorates, and lectures widely on issues relating to the history and collective response to the Holocaust, genocide, and crimes against humanity. Dimensions in Testimony is a collection of interactive video testimonies from the USC Shoah Foundation, enabling people to engage with Holocaust survivors and other witnesses to genocide, by asking questions and conversing. It is the subject of the Academy Award-nominated documentary film, 116 Cameras. This episode was produced by Mereck Palazzo &amp; Matt Perry. Art by Desi Aleman. This episode is dedicated to Izzy Arbeiter.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Public Service: Yaël Eisenstat Tackles the Intersection of Ethics, Tech, and Democracy</title>
        <itunes:title>Public Service: Yaël Eisenstat Tackles the Intersection of Ethics, Tech, and Democracy</itunes:title>
        <link>https://dmdonig.podbean.com/e/title-needs-to-be-edited-fireside-chat-with-yael-eisenstat/</link>
                    <comments>https://dmdonig.podbean.com/e/title-needs-to-be-edited-fireside-chat-with-yael-eisenstat/#comments</comments>        <pubDate>Fri, 01 Oct 2021 09:12:05 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/34fb52e5-2029-349c-affb-a4db861daa9b</guid>
                                    <description><![CDATA[<p>In this special edition of "Technically Human," we feature a live public conversation about the future of democracy, technology, and public policy. In 2017, Yaël Eisenstat came onboard Facebook to change it, joining the company as its Global Head of Elections Integrity Operations. What she discovered while working there alarmed her. She started speaking out, becoming a leading critic of tech’s threat to democracy. In this conversation, I sit down with Yaël in front of a live audience to ask:</p>
<ul><li>How can American Democracy persevere in the age of social media?</li>
<li>Why does tech need regulation?</li>
<li>Who can rein in Big Tech?</li>
<li>What can we do to help?</li>
</ul>
<p>Yaël Eisenstat works at the intersection of tech, democracy, and policy, with a focus on what the public square and open, democratic debate look like in the digital world. She works as a Future of Democracy Fellow at <a href='https://www.berggruen.org/people/yael-eisenstat/'>Berggruen Institute</a> and a policy advisor to start-ups, governments, and investors looking to align technology to better serve the public.</p>
<p>She has spent 20 years working around the globe on democracy and security issues as a CIA officer, a White House advisor, the Global Head of Elections Integrity Operations for political advertising at Facebook, a diplomat, and the head of a global risk firm. She was a Researcher-in-Residence at <a href='https://www.betaworks-studios.com/betalab'>Betalab </a>in 2020-21 and a Visiting Fellow at Cornell Tech's Digital Life Initiative in 2019-2020, where she focused on technology's effects on discourse and democracy and taught a multi-university course on <a href='https://www.techmediademocracy.nyc/'>Tech, Media and Democracy</a>.</p>
<p>Yaël Eisenstat has become a key voice and public advocate for transparency and accountability in tech, particularly where real-world consequences affect democracy and societies around the world. Her <a href='https://www.ted.com/talks/yael_eisenstat_dear_facebook_this_is_how_you_re_breaking_democracy#t-1773'>recent TED talk</a> addresses these issues and proposes ideas for how government and society should hold the companies accountable.</p>
<p>In 2017, she was named in Forbes' list of “<a href='https://fortyover40.com/'>40 Women to Watch Over 40</a>”. She is also an Adjunct Professor at NYU's Center for Global Affairs, a member of the Council on Foreign Relations, and she provides context and analysis on social media, elections integrity, political and foreign affairs in the media. She has been published in the <a href='https://www.nytimes.com/2017/01/24/opinion/the-shocking-affront-of-donald-trumps-cia-stunt.html?_r=1'>New York Times</a>, <a href='https://www.washingtonpost.com/outlook/2019/11/04/i-worked-political-ads-facebook-they-profit-by-manipulating-us/'>the Washington Post,</a> <a href='https://www.brookings.edu/techstream/how-to-combat-online-voter-suppression/'>Brookings Techstream</a>, <a href='http://time.com/4370375/american-hate-national-security/'>TIME</a>, <a href='https://www.wired.com/story/the-real-reason-tech-struggles-with-algorithmic-bias/'>WIRED</a>, <a href='https://qz.com/1211344/we-must-strive-to-love-our-country-more-than-we-hate-our-neighbor-a-former-cia-officers-plea-to-america/'>Quartz</a> and <a href='http://www.huffingtonpost.com/entry/reclaiming-patriotism-in-trumps-america_us_58ef6f6ee4b0bb9638e1b5a4'>The Huffington Post</a>, has appeared on <a href='https://www.youtube.com/watch?v=E6ML9LZlk0o&t=4s'>CNN</a>, <a href='https://youtu.be/Tfi-x6X3okk'>BBC World News</a>, <a href='https://www.cbsnews.com/news/a-protected-right-free-speech-and-social-media/'>CBS Sunday Morning</a>, <a href='https://www.bloomberg.com/news/videos/2020-08-17/why-the-2020-election-could-be-a-mail-in-nightmare-video'>Bloomberg News</a>, <a href='http://www.cbsnews.com/videos/trump-slams-the-intel-community-for-leaks-to-media/'>CBS News</a>, <a href='http://www.pbs.org/wgbh/third-rail/home/'>PBS</a><a href='https://www.youtube.com/watch?v=6ruU5X1Q6ow'> </a>and <a href='https://www.c-span.org/video/?426001-3/immigration-refugees'>C-SPAN</a>, in policy forums, and on a number of podcasts. 
She earned an M.A. in International Affairs from the Johns Hopkins School of Advanced International Studies (SAIS).</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this special edition of "Technically Human," we feature a live public conversation about the future of democracy, technology, and public policy. In 2017, Yaël Eisenstat came onboard Facebook to change it, joining the company as its Global Head of Elections Integrity Operations. What she discovered while working there alarmed her. She started speaking out, becoming a leading critic of tech’s threat to democracy. In this conversation, I sit down with Yaël in front of a live audience to ask:</p>
<ul><li>How can American Democracy persevere in the age of social media?</li>
<li>Why does tech need regulation?</li>
<li>Who can rein in Big Tech?</li>
<li>What can we do to help?</li>
</ul>
<p>Yaël Eisenstat works at the intersection of tech, democracy, and policy, with a focus on what the public square and open, democratic debate look like in the digital world. She works as a Future of Democracy Fellow at <a href='https://www.berggruen.org/people/yael-eisenstat/'>Berggruen Institute</a> and a policy advisor to start-ups, governments, and investors looking to align technology to better serve the public.</p>
<p>She has spent 20 years working around the globe on democracy and security issues as a CIA officer, a White House advisor, the Global Head of Elections Integrity Operations for political advertising at Facebook, a diplomat, and the head of a global risk firm. She was a Researcher-in-Residence at <a href='https://www.betaworks-studios.com/betalab'>Betalab </a>in 2020-21 and a Visiting Fellow at Cornell Tech's Digital Life Initiative in 2019-2020, where she focused on technology's effects on discourse and democracy and taught a multi-university course on <a href='https://www.techmediademocracy.nyc/'>Tech, Media and Democracy</a>.</p>
<p>Yaël Eisenstat has become a key voice and public advocate for transparency and accountability in tech, particularly where real-world consequences affect democracy and societies around the world. Her <a href='https://www.ted.com/talks/yael_eisenstat_dear_facebook_this_is_how_you_re_breaking_democracy#t-1773'>recent TED talk</a> addresses these issues and proposes ideas for how government and society should hold the companies accountable.</p>
<p>In 2017, she was named in Forbes' list of “<a href='https://fortyover40.com/'>40 Women to Watch Over 40</a>”. She is also an Adjunct Professor at NYU's Center for Global Affairs, a member of the Council on Foreign Relations, and she provides context and analysis on social media, elections integrity, political and foreign affairs in the media. She has been published in the <a href='https://www.nytimes.com/2017/01/24/opinion/the-shocking-affront-of-donald-trumps-cia-stunt.html?_r=1'>New York Times</a>, <a href='https://www.washingtonpost.com/outlook/2019/11/04/i-worked-political-ads-facebook-they-profit-by-manipulating-us/'>the Washington Post,</a> <a href='https://www.brookings.edu/techstream/how-to-combat-online-voter-suppression/'>Brookings Techstream</a>, <a href='http://time.com/4370375/american-hate-national-security/'>TIME</a>, <a href='https://www.wired.com/story/the-real-reason-tech-struggles-with-algorithmic-bias/'>WIRED</a>, <a href='https://qz.com/1211344/we-must-strive-to-love-our-country-more-than-we-hate-our-neighbor-a-former-cia-officers-plea-to-america/'>Quartz</a> and <a href='http://www.huffingtonpost.com/entry/reclaiming-patriotism-in-trumps-america_us_58ef6f6ee4b0bb9638e1b5a4'>The Huffington Post</a>, has appeared on <a href='https://www.youtube.com/watch?v=E6ML9LZlk0o&t=4s'>CNN</a>, <a href='https://youtu.be/Tfi-x6X3okk'>BBC World News</a>, <a href='https://www.cbsnews.com/news/a-protected-right-free-speech-and-social-media/'>CBS Sunday Morning</a>, <a href='https://www.bloomberg.com/news/videos/2020-08-17/why-the-2020-election-could-be-a-mail-in-nightmare-video'>Bloomberg News</a>, <a href='http://www.cbsnews.com/videos/trump-slams-the-intel-community-for-leaks-to-media/'>CBS News</a>, <a href='http://www.pbs.org/wgbh/third-rail/home/'>PBS</a><a href='https://www.youtube.com/watch?v=6ruU5X1Q6ow'> </a>and <a href='https://www.c-span.org/video/?426001-3/immigration-refugees'>C-SPAN</a>, in policy forums, and on a number of podcasts. 
She earned an M.A. in International Affairs from the Johns Hopkins School of Advanced International Studies (SAIS).</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="94335550" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/x3zzae/Yael_Eisenstat_Podcast_mixdown7wbtq.mp3"/>
        <itunes:summary>In this special edition of "Technically Human," we feature a live public conversation about the future of democracy, technology, and public policy. In 2017, Yaël Eisenstat came onboard Facebook to change it, joining the company as its Global Head of Elections Integrity Operations. What she discovered while working there alarmed her. She started speaking out, becoming a leading critic of tech’s threat to democracy. In this conversation, I sit down with Yaël in front of a live audience to ask:

- How can American Democracy persevere in the age of social media?
- Why does tech need regulation?
- Who can rein in Big Tech?
- What can we do to help?</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3930</itunes:duration>
        <itunes:season>6</itunes:season>
        <itunes:episode>53</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this special edition of "Technically Human," we feature a live public conversation about the future of democracy, technology, and public policy. In 2017, Yaël Eisenstat came onboard Facebook to change it, joining the company as its Global Head of Elections Integrity Operations. What she discovered while working there alarmed her. She started speaking out, becoming a leading critic of tech’s threat to democracy. In this conversation, I sit down with Yaël in front of a live audience to ask: How can American Democracy persevere in the age of social media? Why does tech need regulation? Who can rein in Big Tech? What can we do to help? Yaël Eisenstat works at the intersection of tech, democracy, and policy, with a focus on what the public square and open, democratic debate look like in the digital world. She works as a Future of Democracy Fellow at Berggruen Institute and a policy advisor to start-ups, governments, and investors looking to align technology to better serve the public. She has spent 20 years working around the globe on democracy and security issues as a CIA officer, a White House advisor, the Global Head of Elections Integrity Operations for political advertising at Facebook, a diplomat, and the head of a global risk firm. She was a Researcher-in-Residence at Betalab in 2020-21 and a Visiting Fellow at Cornell Tech's Digital Life Initiative in 2019-2020, where she focused on technology's effects on discourse and democracy and taught a multi-university course on Tech, Media and Democracy. Yaël Eisenstat has become a key voice and public advocate for transparency and accountability in tech, particularly where real-world consequences affect democracy and societies around the world. Her recent TED talk addresses these issues and proposes ideas for how government and society should hold the companies accountable. 
In 2017, she was named in Forbes' list of “40 Women to Watch Over 40”. She is also an Adjunct Professor at NYU's Center for Global Affairs, a member of the Council on Foreign Relations, and she provides context and analysis on social media, elections integrity, political and foreign affairs in the media. She has been published in the New York Times, the Washington Post, Brookings Techstream, TIME, WIRED, Quartz and The Huffington Post, has appeared on CNN, BBC World News, CBS Sunday Morning, Bloomberg News, CBS News, PBS and C-SPAN, in policy forums, and on a number of podcasts. She earned an M.A. in International Affairs from the Johns Hopkins School of Advanced International Studies (SAIS). This episode was produced by Matt Perry. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Funny Business: "Silicon Valley" writer and co-producer Dan Lyons explains what's funny about tech culture</title>
        <itunes:title>Funny Business: "Silicon Valley" writer and co-producer Dan Lyons explains what's funny about tech culture</itunes:title>
        <link>https://dmdonig.podbean.com/e/funny-business-silicon-valley-writer-and-co-producer-dan-lyons-explains-what-s-funny-about-tech-culture/</link>
                    <comments>https://dmdonig.podbean.com/e/funny-business-silicon-valley-writer-and-co-producer-dan-lyons-explains-what-s-funny-about-tech-culture/#comments</comments>        <pubDate>Fri, 24 Sep 2021 15:55:04 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/9515588a-1291-3cc7-91d4-33679cfae649</guid>
                                    <description><![CDATA[<p>In this episode, I sit down with a personal hero, the iconic literary giant <a href='https://danlyons.io/'>Dan Lyons</a>. We discuss Dan's experience writing about tech culture for the hit HBO show "<a href='https://www.hbo.com/silicon-valley'>Silicon Valley</a>," and Dan's own experience working in tech. We talk about what makes Silicon Valley funny--and how that humor gets at some of the deeply sobering realities of Silicon Valley culture. </p>
<p>Dan Lyons is one of the best-known science and technology journalists in the United States. He was the technology editor at Newsweek, a staff writer at Forbes, and a columnist for Fortune magazine, while also contributing op-ed columns to the New York Times about the economics and culture of Silicon Valley. </p>
<p>Dan is the author of two of the most important recent books about Silicon Valley: <a href='https://www.amazon.com/Disrupted-My-Misadventure-Start-Up-Bubble/dp/0316306096'>Disrupted: My Misadventure in the Startup Bubble</a>, an international best-seller, and <a href='https://www.amazon.com/Lab-Rats-Silicon-Valley-Miserable/dp/031656186X'>Lab Rats: How Silicon Valley Made Work Miserable for the Rest of Us</a>, which was chosen by The Guardian as one of the best business books of 2018. He is also the mastermind of the epic parody blog <a href='https://www.fakesteve.net/2010/04/an-open-letter-to-the-people-of-the-world.html'>“The Fake Steve Jobs Blog.”</a> </p>
<p>Dan has been a consistent and vocal critic of racial, gender, and age bias in the technology industry, penning articles about "bro culture," worker exploitation, and the "hustle" mentality that leads to employee burnout. He has become a leading advocate for greater diversity in the technology industry and an early critic of the gig economy for its abuse of workers. His work helped draw attention to the brutal working conditions in Amazon warehouses. He has earned a reputation as a fearless critic of powerful interests in Silicon Valley, with a voice that sets him apart from the often fawning journalism that comes out of the technology space. </p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I sit down with a personal hero, the iconic literary giant <a href='https://danlyons.io/'>Dan Lyons</a>. We discuss Dan's experience writing about tech culture for the hit HBO show "<a href='https://www.hbo.com/silicon-valley'>Silicon Valley</a>," and Dan's own experience working in tech. We talk about what makes Silicon Valley funny--and how that humor gets at some of the deeply sobering realities of Silicon Valley culture. </p>
<p>Dan Lyons is one of the best-known science and technology journalists in the United States. He was the technology editor at <em>Newsweek</em>, a staff writer at <em>Forbes</em>, and a columnist for <em>Fortune</em> magazine, while also contributing op-ed columns to the <em>New York Times</em> about the economics and culture of Silicon Valley. </p>
<p>Dan is the author of two of the most important recent books about Silicon Valley: <a href='https://www.amazon.com/Disrupted-My-Misadventure-Start-Up-Bubble/dp/0316306096'><em>Disrupted: My Misadventure in the Startup Bubble</em></a>, an international best-seller, and <em><a href='https://www.amazon.com/Lab-Rats-Silicon-Valley-Miserable/dp/031656186X'>Lab Rats: How Silicon Valley Made Work Miserable for the Rest of Us</a>, </em>which was chosen by <em>The Guardian</em> as one of the best business books of 2018. He is also the mastermind of the epic parody blog <a href='https://www.fakesteve.net/2010/04/an-open-letter-to-the-people-of-the-world.html'>“The Fake Steve Jobs Blog.”</a> </p>
<p>Dan has been a consistent and vocal critic of racial, gender, and age bias in the technology industry, penning articles about "bro culture," worker exploitation, and the "hustle" mentality that leads to employee burnout. He has become a leading advocate for greater diversity in the technology industry and an early critic of the gig economy for its abuse of workers. His work helped draw attention to the brutal working conditions in Amazon warehouses. He has earned a reputation as a fearless critic of powerful interests in Silicon Valley, with a voice that sets him apart from the often fawning journalism that comes out of the technology space. </p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="122420400" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/hkvcca/Dan_Lyons_Podcast_mixdown_27wvvq.mp3"/>
        <itunes:summary>In this episode, I sit down with a personal hero, the iconic literary giant Dan Lyons. We discuss Dan's experience writing about tech culture for the hit HBO show "Silicon Valley," and Dan's own experience working in tech. We talk about what makes Silicon Valley funny--and how that humor gets at some of the deeply sobering realities of Silicon Valley culture.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>5100</itunes:duration>
        <itunes:season>6</itunes:season>
        <itunes:episode>52</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I sit down with a personal hero, the iconic literary giant Dan Lyons. We discuss Dan's experience writing about tech culture for the hit HBO show "Silicon Valley," and Dan's own experience working in tech. We talk about what makes Silicon Valley funny--and how that humor gets at some of the deeply sobering realities of Silicon Valley culture.  Dan Lyons is one of the best-known science and technology journalists in the United States. He was the technology editor at Newsweek, a staff writer at Forbes, and a columnist for Fortune magazine, while also contributing op-ed columns to the New York Times about the economics and culture of Silicon Valley.  Dan is the author of two of the most important recent books about Silicon Valley: Disrupted: My Misadventure in the Startup Bubble, an international best-seller, and Lab Rats: How Silicon Valley Made Work Miserable for the Rest of Us, which was chosen by The Guardian as one of the best business books of 2018. He is also the mastermind of the epic parody blog “The Fake Steve Jobs Blog.”  Dan has been a consistent and vocal critic of racial, gender, and age bias in the technology industry, penning articles about "bro culture," worker exploitation, and the "hustle" mentality that leads to employee burnout. He has become a leading advocate for greater diversity in the technology industry and an early critic of the gig economy for its abuse of workers. His work helped draw attention to the brutal working conditions in Amazon warehouses. He has earned a reputation as a fearless critic of powerful interests in Silicon Valley, with a voice that sets him apart from the often fawning journalism that comes out of the technology space.  This episode was produced by Matt Perry. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The TransHuman Code: Carlos Moreira imagines a human-centered technological future</title>
        <itunes:title>The TransHuman Code: Carlos Moreira imagines a human-centered technological future</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-transhuman-codecarlos-moreira-imagines-a-human-centered-technological-future/</link>
                    <comments>https://dmdonig.podbean.com/e/the-transhuman-codecarlos-moreira-imagines-a-human-centered-technological-future/#comments</comments>        <pubDate>Fri, 27 Aug 2021 01:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/4aff34ed-f44b-3128-94e4-4069d41308b9</guid>
                                    <description><![CDATA[<p>In this episode of the podcast, I speak with Carlos Moreira, the CEO of WISeKey. We discuss the possibilities for building a human-centered technological future today—and the consequences if we do not. What does a human-centered model for technological production look like? How can we build human rights into our tech? And what needs to change to return human values to tech?</p>
<p>Carlos Moreira is the Founder, Chairman, and CEO of the international cybersecurity firm <a href='https://www.wisekey.com'>WISeKey</a> and the author (along with David Fergusson) of <a href='https://transhumancode.com'>The Transhuman Code</a>, a landmark book about ethics and technology. Before founding WISeKey, he served as the United Nations Expert on CyberSecurity and Trust Models. During his 17 years as UN Expert, he became recognized worldwide as an Internet Pioneer and a distinctive authority, thought leader, and entrepreneurial force in today’s digital world, where the acquisition and trusted protection of Identity, Trust and Security has become an essential step for citizens and entities across the globe.</p>
<p>Guided by a belief that the Internet needs to be safe, universal, and a tool for prosperity, he began developing technologies to protect the Internet and founded WISeKey. He is the founder of the International Organization for Secure Electronic Transactions (OISTE.org), a founding member of the “Comité de Pilotage Project E-Voting” of the Geneva Government, a member of the UN Global Compact, and a member of the World Economic Forum Global Agenda Council, among many other leadership roles. He has previously served as an Adjunct Professor at the Graduate School of Engineering, RMIT Australia, and as Head of the Trade Efficiency Lab at the Graduate School of Engineering at RMIT.</p>
<p>In The TransHuman Code, he and his co-author David Fergusson ask, “Are we building a better future for humanity with the help of magnificent technology, or are we instead building a better future of better technology at the expense of humanity?” The book imagines what it would look like to center humanity in the emerging tension between a human-controlled and a machine-controlled world.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Cover art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of the podcast, I speak with Carlos Moreira, the CEO of WISeKey. We discuss the possibilities for building a human-centered technological future today—and the consequences if we do not. What does a human-centered model for technological production look like? How can we build human rights into our tech? And what needs to change to return human values to tech?</p>
<p>Carlos Moreira is the Founder, Chairman, and CEO of the international cybersecurity firm <a href='https://www.wisekey.com'>WISeKey</a> and the author (along with David Fergusson) of <a href='https://transhumancode.com'><em>The Transhuman Code</em></a>, a landmark book about ethics and technology. Before founding WISeKey, he served as the United Nations Expert on CyberSecurity and Trust Models. During his 17 years as a UN Expert, he became recognized worldwide as an Internet pioneer and a distinctive authority, thought leader, and entrepreneurial force in today’s digital world, where the acquisition and trusted protection of identity, trust, and security have become essential for citizens and entities across the globe.</p>
<p>Guided by a belief that the Internet needs to be safe, universal, and a tool for prosperity, he began developing technologies to protect the Internet and founded WISeKey. He is the founder of the International Organization for Secure Electronic Transactions (OISTE.org), a founding member of the “Comité de Pilotage Project E-Voting” of the Geneva Government, a member of the UN Global Compact, and a member of the World Economic Forum Global Agenda Council, among many other leadership roles. He has previously served as an Adjunct Professor at the Graduate School of Engineering, RMIT Australia, and as Head of the Trade Efficiency Lab at the Graduate School of Engineering at RMIT.</p>
<p>In <em>The TransHuman Code</em>, he and his co-author David Fergusson ask, “Are we building a better future for humanity with the help of magnificent technology, or are we instead building a better future of better technology at the expense of humanity?” The book imagines what it would look like to center humanity in the emerging tension between a human-controlled and a machine-controlled world.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Cover art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="81375424" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/qhgj3y/Carlos_Moreira_Podcast_mixdown6f98t.mp3"/>
        <itunes:summary>In this episode of the podcast, I speak with Carlos Moreira, the CEO of WISeKey. We discuss the possibilities for building a human-centered technological future today—and the consequences if we do not. What does a human-centered model for technological production look like? How can we build human rights into our tech? And what needs to change to return human values to tech?</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3390</itunes:duration>
        <itunes:season>6</itunes:season>
        <itunes:episode>51</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of the podcast, I speak with Carlos Moreira, the CEO of WISeKey. We discuss the possibilities for building a human-centered technological future today—and the consequences if we do not. What does a human-centered model for technological production look like? How can we build human rights into our tech? And what needs to change to return human values to tech? Carlos Moreira is the Founder, Chairman, and CEO of the international cybersecurity firm WISeKey and the author (along with David Fergusson) of The Transhuman Code, a landmark book about ethics and technology. Before founding WISeKey, he served as the United Nations Expert on CyberSecurity and Trust Models. During his 17 years as a UN Expert, he became recognized worldwide as an Internet pioneer and a distinctive authority, thought leader, and entrepreneurial force in today’s digital world, where the acquisition and trusted protection of identity, trust, and security have become essential for citizens and entities across the globe. Guided by a belief that the Internet needs to be safe, universal, and a tool for prosperity, he began developing technologies to protect the Internet and founded WISeKey. He is the founder of the International Organization for Secure Electronic Transactions (OISTE.org), a founding member of the “Comité de Pilotage Project E-Voting” of the Geneva Government, a member of the UN Global Compact, and a member of the World Economic Forum Global Agenda Council, among many other leadership roles. He has previously served as an Adjunct Professor at the Graduate School of Engineering, RMIT Australia, and as Head of the Trade Efficiency Lab at the Graduate School of Engineering at RMIT. 
In The TransHuman Code, he and his co-author David Fergusson ask, “Are we building a better future for humanity with the help of magnificent technology, or are we instead building a better future of better technology at the expense of humanity?” The book imagines what it would look like to center humanity in the emerging tension between a human-controlled and a machine-controlled world. This episode was produced by Matt Perry. Cover art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Explaining AI: Kordel France's quest to create Ethical AI</title>
        <itunes:title>Explaining AI: Kordel France's quest to create Ethical AI</itunes:title>
        <link>https://dmdonig.podbean.com/e/explaining-ai-kordel-frances-quest-to-create-ethical-ai/</link>
                    <comments>https://dmdonig.podbean.com/e/explaining-ai-kordel-frances-quest-to-create-ethical-ai/#comments</comments>        <pubDate>Fri, 20 Aug 2021 01:27:59 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/96ff3b78-ef35-3bb2-b556-738a1f0d18eb</guid>
                                    <description><![CDATA[<p>In this episode, I speak to Kordel France, the CEO of <a href='https://seekartech.com/'>Seekar Technology</a>. We discuss the challenges of building an ethical tech company, the importance of creating unbiased data sets, and the importance of making AI explainable.</p>
<p>Seekar Technologies builds artificial intelligence across industries, with a specific focus on creating ethical AI for use in ethical contexts. His technology has been involved in creating a response to the COVID-19 pandemic that allows doctors to more efficiently screen for the virus, as well as in environmental protection and conservation efforts. Kordel founded Seekar Technologies to set new standards for how technologists create and deploy AI, and seeks to make AI mobile, ethical, explainable, and dynamic. His vision for Seekar Technologies is a tech culture in which the use of AI remains ethical and equitable.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I speak to Kordel France, the CEO of <a href='https://seekartech.com/'>Seekar Technology</a>. We discuss the challenges of building an ethical tech company, the importance of creating unbiased data sets, and the importance of making AI explainable.</p>
<p>Seekar Technologies builds artificial intelligence across industries, with a specific focus on creating ethical AI for use in ethical contexts. His technology has been involved in creating a response to the COVID-19 pandemic that allows doctors to more efficiently screen for the virus, as well as in environmental protection and conservation efforts. Kordel founded Seekar Technologies to set new standards for how technologists create and deploy AI, and seeks to make AI mobile, ethical, explainable, and dynamic. His vision for Seekar Technologies is a tech culture in which the use of AI remains ethical and equitable.</p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="67698950" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/iensg5/Kordel_France_Podcast_mixdowna3zz0.mp3"/>
        <itunes:summary>In this episode, I speak to Kordel France, the CEO of Seekar Technology. We discuss the challenges of building an ethical tech company, the importance of creating unbiased data sets, and the importance of making AI explainable.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2820</itunes:duration>
        <itunes:season>6</itunes:season>
        <itunes:episode>50</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I speak to Kordel France, the CEO of Seekar Technology. We discuss the challenges of building an ethical tech company, the importance of creating unbiased data sets, and the importance of making AI explainable. Seekar Technologies builds artificial intelligence across industries, with a specific focus on creating ethical AI for use in ethical contexts. His technology has been involved in creating a response to the COVID-19 pandemic that allows doctors to more efficiently screen for the virus, as well as in environmental protection and conservation efforts. Kordel founded Seekar Technologies to set new standards for how technologists create and deploy AI, and seeks to make AI mobile, ethical, explainable, and dynamic. His vision for Seekar Technologies is a tech culture in which the use of AI remains ethical and equitable. This episode was produced by Matt Perry. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Chris Wexler's Quest to Detoxify the Internet: AI and Krunam's Fight to Stop Human Trafficking</title>
        <itunes:title>Chris Wexler's Quest to Detoxify the Internet: AI and Krunam's Fight to Stop Human Trafficking</itunes:title>
        <link>https://dmdonig.podbean.com/e/chris-wexlers-quest-to-detoxify-the-internet-ai-and-krunams-fight-to-stop-human-trafficking/</link>
                    <comments>https://dmdonig.podbean.com/e/chris-wexlers-quest-to-detoxify-the-internet-ai-and-krunams-fight-to-stop-human-trafficking/#comments</comments>        <pubDate>Fri, 13 Aug 2021 02:39:57 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/e2456800-94f4-3e10-a44f-7efd49047e98</guid>
                                    <description><![CDATA[<p>In this episode, I sit down with Chris Wexler, the CEO of <a href='https://krunam.co/'>Krunam</a>, one of the world’s leading image and video classifiers of Child Sexual Abuse Materials (CSAM). We discuss the dark side of the world wide web, and Chris explains how exploitative economies of human trafficking proliferate online. We discuss how Krunam puts AI to use combatting this exploitation. We also look at the rise of social justice-oriented technologies and the rise of <a href='https://krunam.co/partners/'>social impact investing</a>, and Chris shares why he is hopeful that the future of tech investing will be social-impact based.</p>
<p>Krunam has created one of the most potent AI tools for successfully identifying and removing digital toxic waste from the internet. By using AI to identify CSAM and other incendiary and exploitative content to improve and speed content moderation, Krunam’s technology has helped private platforms and law enforcement halt some of the most exploitative child sex trafficking outfits operating today. </p>
<p>Before founding Krunam, Chris established several leading digital and analytics practices at four different major ad agencies while working with major brands. He now devotes his time to creating safer digital environments and developing social impact technology to better serve human values and social justice.</p>
<p>*A brief content warning note about today’s episode: my interview with Chris discusses using AI to classify images containing child sexual content. The work is important, but the discussion includes frank conversations about sexual material and is not suitable for all ears. Please consider your surroundings before listening, and you may want to avoid listening in a space that you share with children.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I sit down with Chris Wexler, the CEO of <a href='https://krunam.co/'>Krunam</a>, one of the world’s leading image and video classifiers of Child Sexual Abuse Materials (CSAM). We discuss the dark side of the world wide web, and Chris explains how exploitative economies of human trafficking proliferate online. We discuss how Krunam puts AI to use combatting this exploitation. We also look at the rise of social justice-oriented technologies and the rise of <a href='https://krunam.co/partners/'>social impact investing</a>, and Chris shares why he is hopeful that the future of tech investing will be social-impact based.</p>
<p>Krunam has created one of the most potent AI tools for successfully identifying and removing digital toxic waste from the internet. By using AI to identify CSAM and other incendiary and exploitative content to improve and speed content moderation, Krunam’s technology has helped private platforms and law enforcement halt some of the most exploitative child sex trafficking outfits operating today. </p>
<p>Before founding Krunam, Chris established several leading digital and analytics practices at four different major ad agencies while working with major brands. He now devotes his time to creating safer digital environments and developing social impact technology to better serve human values and social justice.</p>
<p>*A brief content warning note about today’s episode: my interview with Chris discusses using AI to classify images containing child sexual content. The work is important, but the discussion includes frank conversations about sexual material and is not suitable for all ears. Please consider your surroundings before listening, and you may want to avoid listening in a space that you share with children.</p>
]]></content:encoded>
                                    
        <enclosure length="84971658" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/34d9wc/Chris_Wexler_Podcast_mixdown7kpx2.mp3"/>
        <itunes:summary>In this episode, I sit down with Chris Wexler, the CEO of Krunam, one of the world’s leading image and video classifiers of Child Sexual Abuse Materials (CSAM). We discuss the dark side of the world wide web, and Chris explains how exploitative economies of human trafficking proliferate online. We discuss how Krunam puts AI to use combatting this exploitation. We also look at the rise of social justice-oriented technologies and the rise of social impact investing, and Chris shares why he is hopeful that the future of tech investing will be social-impact based.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>true</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3540</itunes:duration>
        <itunes:season>6</itunes:season>
        <itunes:episode>49</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I sit down with Chris Wexler, the CEO of Krunam, one of the world’s leading image and video classifiers of Child Sexual Abuse Materials (CSAM). We discuss the dark side of the world wide web, and Chris explains how exploitative economies of human trafficking proliferate online. We discuss how Krunam puts AI to use combatting this exploitation. We also look at the rise of social justice-oriented technologies and the rise of social impact investing, and Chris shares why he is hopeful that the future of tech investing will be social-impact based. Krunam has created one of the most potent AI tools for successfully identifying and removing digital toxic waste from the internet. By using AI to identify CSAM and other incendiary and exploitative content to improve and speed content moderation, Krunam’s technology has helped private platforms and law enforcement halt some of the most exploitative child sex trafficking outfits operating today.  Before founding Krunam, Chris established several leading digital and analytics practices at four different major ad agencies while working with major brands. He now devotes his time to creating safer digital environments and developing social impact technology to better serve human values and social justice. *A brief content warning note about today’s episode: my interview with Chris discusses using AI to classify images containing child sexual content. The work is important, but the discussion includes frank conversations about sexual material and is not suitable for all ears. Please consider your surroundings before listening, and you may want to avoid listening in a space that you share with children.  </itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Rise of the Ethical Hacker: The Wild, Wild West of Cybersecurity with Ted Harrington</title>
        <itunes:title>The Rise of the Ethical Hacker: The Wild, Wild West of Cybersecurity with Ted Harrington</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-rise-of-the-ethical-hacker-the-wild-wild-west-of-cybersecurity-with-ted-harrington/</link>
                    <comments>https://dmdonig.podbean.com/e/the-rise-of-the-ethical-hacker-the-wild-wild-west-of-cybersecurity-with-ted-harrington/#comments</comments>        <pubDate>Fri, 06 Aug 2021 03:10:30 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/58226913-3f5f-3948-8c99-500eb143837f</guid>
                                    <description><![CDATA[<p>In this episode, I sit down with Ted Harrington, the author of <a href='https://www.amazon.com/Hackable-How-Application-Security-Right-ebook/dp/B08MFTQ7Q4'>Hackable: How to Do Application Security Right</a>, and the <a href='https://www.ise.io/about/leadership/index.html'>Executive Partner at Independent Security Evaluators (ISE)</a>, one of the most prominent global companies working in the growing industry of ethical hacking. We talk about cybersecurity, the growth of the "ethical hacker" profession, and how the next generation of humanists and technologists can keep the internet safe.</p>
<p>For his stewardship of security research that Wired Magazine says “wins the prize, hands down,” Ted has been named both Executive of the Year [by American Business Awards] and 40 Under 40 [by SD Metro]. He leads a team that started and organizes IoT Village, an event whose hacking contest is a three-time DEFCON Black Badge winner, and which represents the discovery of more than 300 zero-day vulnerabilities (and counting). Ted’s work has been featured in more than 100 media outlets, including The New York Times, Financial Times, Wall Street Journal, Washington Post, and USA Today.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I sit down with Ted Harrington, the author of <a href='https://www.amazon.com/Hackable-How-Application-Security-Right-ebook/dp/B08MFTQ7Q4'><em>Hackable: How to Do Application Security Right</em></a>, and the <a href='https://www.ise.io/about/leadership/index.html'>Executive Partner at Independent Security Evaluators (ISE)</a>, one of the most prominent global companies working in the growing industry of ethical hacking. We talk about cybersecurity, the growth of the "ethical hacker" profession, and how the next generation of humanists and technologists can keep the internet safe.</p>
<p>For his stewardship of security research that Wired Magazine says “wins the prize, hands down,” Ted has been named both Executive of the Year [by American Business Awards] and 40 Under 40 [by SD Metro]. He leads a team that started and organizes IoT Village, an event whose hacking contest is a three-time DEFCON Black Badge winner, and which represents the discovery of more than 300 zero-day vulnerabilities (and counting). Ted’s work has been featured in more than 100 media outlets, including The New York Times, Financial Times, Wall Street Journal, Washington Post, and USA Today.</p>
]]></content:encoded>
                                    
        <enclosure length="90730203" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/uwkpec/Ted_Harrington_Podcast_mixdown8gdlf.mp3"/>
        <itunes:summary>In this episode, I sit down with Ted Harrington, the author of Hackable: How to Do Application Security Right, and the Executive Partner at Independent Security Evaluators (ISE), one of the most prominent global companies working in the growing industry of ethical hacking. We talk about cybersecurity, the growth of the "ethical hacker" profession, and how the next generation of humanists and technologists can keep the internet safe.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3780</itunes:duration>
        <itunes:season>6</itunes:season>
        <itunes:episode>48</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I sit down with Ted Harrington, the author of Hackable: How to Do Application Security Right, and the Executive Partner at Independent Security Evaluators (ISE), one of the most prominent global companies working in the growing industry of ethical hacking. We talk about cybersecurity, the growth of the "ethical hacker" profession, and how the next generation of humanists and technologists can keep the internet safe. For his stewardship of security research that Wired Magazine says “wins the prize, hands down,” Ted has been named both Executive of the Year [by American Business Awards] and 40 Under 40 [by SD Metro]. He leads a team that started and organizes IoT Village, an event whose hacking contest is a three-time DEFCON Black Badge winner, and which represents the discovery of more than 300 zero-day vulnerabilities (and counting). Ted’s work has been featured in more than 100 media outlets, including The New York Times, Financial Times, Wall Street Journal, Washington Post, and USA Today.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Captivating Technology: How surveillance technology is taking over our prisons and our bodies</title>
        <itunes:title>Captivating Technology: How surveillance technology is taking over our prisons and our bodies</itunes:title>
        <link>https://dmdonig.podbean.com/e/anthony-hatch/</link>
                    <comments>https://dmdonig.podbean.com/e/anthony-hatch/#comments</comments>        <pubDate>Fri, 30 Jul 2021 01:48:56 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/c68e98e8-9dc1-33b5-ba02-7d7f3a6e7e54</guid>
                                    <description><![CDATA[<p>We are back with a brand-new season of the Technically Human podcast! This week's episode features Dr. Anthony Hatch. We discuss the use of psychotropic drugs in prisons as a form of carceral technology, race, and the bioethics of food systems. Learn more about how technologies are mediating health data, how our understanding of our bodies changes in response to new monitoring and mediating technologies, and how Dr. Hatch is creating a space for the next generation of technologists, humanists, and social scientists to develop more equitable, ethical relationships to building tech products.</p>
<p>Anthony Ryan Hatch, Ph.D., is a sociologist and Associate Professor and Chair of the <a href='https://www.wesleyan.edu/academics/faculty/ahatch/profile.html'>Science in Society Program</a> at Wesleyan University where he is also affiliated faculty in the Department of African American Studies, the College of the Environment, and the Department of Sociology. He is the author of <a href='https://www.upress.umn.edu/book-division/books/silent-cells'>Silent Cells: The Secret Drugging of Captive America</a> (Minnesota, 2019) and <a href='https://www.upress.umn.edu/book-division/books/blood-sugar'>Blood Sugar: Racial Pharmacology and Food Justice in Black America</a> (Minnesota, 2016). He recently appeared in the PBS documentary <a href='https://www.pbs.org/show/blood-sugar-rising/'>Blood Sugar Rising</a> and lectures widely on health systems, medical technology, and social inequalities. </p>
<p>In Spring 2021, he started <a href='http://blackboxlabs.wescreates.wesleyan.edu/'>Black Box Labs</a>, an undergraduate research and training laboratory that offers students training in qualitative research methods aligned with science and technology studies and the opportunity to collaborate with faculty on social research. </p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>We are back with a brand-new season of the Technically Human podcast! This week's episode features Dr. Anthony Hatch. We discuss the use of psychotropic drugs in prisons as a form of carceral technology, race, and the bioethics of food systems. Learn more about how technologies are mediating health data, how our understanding of our bodies changes in response to new monitoring and mediating technologies, and how Dr. Hatch is creating a space for the next generation of technologists, humanists, and social scientists to develop more equitable, ethical relationships to building tech products.</p>
<p>Anthony Ryan Hatch, Ph.D., is a sociologist and Associate Professor and Chair of the <a href='https://www.wesleyan.edu/academics/faculty/ahatch/profile.html'>Science in Society Program</a> at Wesleyan University where he is also affiliated faculty in the Department of African American Studies, the College of the Environment, and the Department of Sociology. He is the author of <a href='https://www.upress.umn.edu/book-division/books/silent-cells'>Silent Cells: The Secret Drugging of Captive America</a> (Minnesota, 2019) and <a href='https://www.upress.umn.edu/book-division/books/blood-sugar'>Blood Sugar: Racial Pharmacology and Food Justice in Black America</a> (Minnesota, 2016). He recently appeared in the PBS documentary <a href='https://www.pbs.org/show/blood-sugar-rising/'>Blood Sugar Rising</a> and lectures widely on health systems, medical technology, and social inequalities. </p>
<p>In Spring 2021, he started <a href='http://blackboxlabs.wescreates.wesleyan.edu/'>Black Box Labs</a>, an undergraduate research and training laboratory that offers students training in qualitative research methods aligned with science and technology studies and the opportunity to collaborate with faculty on social research. </p>
<p>This episode was produced by Matt Perry.</p>
<p>Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="102253604" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/xsck5e/Anthony_Hatch_Podcast_mixdown9ifwa.mp3"/>
        <itunes:summary>We are back with a brand-new season of the Technically Human podcast! This week's episode features Dr. Anthony Hatch. We discuss the use of psychotropic drugs in prisons as a form of carceral technology, race, and the bioethics of food systems. Learn more about how technologies are mediating health data, how our understanding of our bodies changes in response to new monitoring and mediating technologies, and how Dr. Hatch is creating a space for the next generation of technologists, humanists, and social scientists to develop more equitable, ethical relationships to building tech products.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4260</itunes:duration>
        <itunes:season>6</itunes:season>
        <itunes:episode>47</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>We are back with a brand-new season of the Technically Human podcast! This week's episode features Dr. Anthony Hatch. We discuss the use of psychotropic drugs in prisons as a form of carceral technology, race, and the bioethics of food systems. Learn more about how technologies are mediating health data, how our understanding of our bodies changes in response to new monitoring and mediating technologies, and how Dr. Hatch is creating a space for the next generation of technologists, humanists, and social scientists to develop more equitable, ethical relationships to building tech products. Anthony Ryan Hatch, Ph.D., is a sociologist and Associate Professor and Chair of the Science in Society Program at Wesleyan University where he is also affiliated faculty in the Department of African American Studies, the College of the Environment, and the Department of Sociology. He is the author of Silent Cells: The Secret Drugging of Captive America (Minnesota, 2019) and Blood Sugar: Racial Pharmacology and Food Justice in Black America (Minnesota, 2016). He recently appeared in the PBS documentary Blood Sugar Rising and lectures widely on health systems, medical technology, and social inequalities.  In Spring 2021, he started Black Box Labs, an undergraduate research and training laboratory that offers students training in qualitative research methods aligned with science and technology studies and the opportunity to collaborate with faculty on social research.  This episode was produced by Matt Perry Art by Desi Aleman</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Network Technology: Dr. Ethel Mickey explains how networks structure the tech workforce</title>
        <itunes:title>Network Technology: Dr. Ethel Mickey explains how networks structure the tech workforce</itunes:title>
        <link>https://dmdonig.podbean.com/e/network-technology-dr-ethel-mickey-explains-how-networks-structure-the-tech-workforce/</link>
                    <comments>https://dmdonig.podbean.com/e/network-technology-dr-ethel-mickey-explains-how-networks-structure-the-tech-workforce/#comments</comments>        <pubDate>Fri, 04 Jun 2021 01:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/9d184ab3-bff5-36f6-bb4f-35c9f43a2684</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I talk to Dr. Ethel Mickey about tech's "pipeline problem." We discuss STEM culture in universities, how inequality gets generated through a culture of networking, and Dr. Mickey walks me through the pipeline from campus culture to the tech workforce. </p>
<p>Dr. Ethel Mickey is a sociologist of gender, work, and organizations, with a focus on science & technology settings. She received her PhD from Northeastern University and is currently a postdoc at the University of Massachusetts, Amherst with the NSF-funded UMass ADVANCE program. Her research broadly explores the persistence of intersectional inequalities through relational dynamics including networks and collaborative teams, and she is currently working on a book manuscript on gendered and racialized networks in the tech sector. Her work has appeared in Gender & Society and the Journal of Contemporary Ethnography, and has been recognized by the American Sociological Association and Sociologists for Women in Society. </p>
<p>And this episode concludes season 5 of the Technically Human podcast. We’ll be on hiatus for the next few weeks. Please stay tuned, and join us for an exciting new season of the show when we return in the middle of July to bring you interviews with Silicon Valley writer Dan Lyons on satire, ethics, and tech culture, sociologist Dr. Anthony Hatch on biology, tech, and prisons, and many more exciting conversations. See you in July!</p>
<p>Podcast produced by Matt Perry and Ana Marsh.</p>
<p>Podcast art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I talk to Dr. Ethel Mickey about tech's "pipeline problem." We discuss STEM culture in universities, how inequality gets generated through a culture of networking, and Dr. Mickey walks me through the pipeline from campus culture to the tech workforce. </p>
<p>Dr. Ethel Mickey is a sociologist of gender, work, and organizations, with a focus on science & technology settings. She received her PhD from Northeastern University and is currently a postdoc at the University of Massachusetts, Amherst with the NSF-funded UMass ADVANCE program. Her research broadly explores the persistence of intersectional inequalities through relational dynamics including networks and collaborative teams, and she is currently working on a book manuscript on gendered and racialized networks in the tech sector. Her work has appeared in <em>Gender & Society</em> and the <em>Journal of Contemporary Ethnography</em>, and has been recognized by the American Sociological Association and Sociologists for Women in Society. </p>
<p>And this episode concludes season 5 of the Technically Human podcast. We’ll be on hiatus for the next few weeks. Please stay tuned, and join us for an exciting new season of the show when we return in the middle of July to bring you interviews with Silicon Valley writer Dan Lyons on satire, ethics, and tech culture, sociologist Dr. Anthony Hatch on biology, tech, and prisons, and many more exciting conversations. See you in July!</p>
<p>Podcast produced by Matt Perry and Ana Marsh.</p>
<p>Podcast art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="82812143" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/7y6f65/Ethel_Mickey_Podcast_mixdown7ulxq.mp3"/>
        <itunes:summary>In this episode of "Technically Human," I talk to Dr. Ethel Mickey about tech's "pipeline problem." We discuss STEM culture in universities, how inequality gets generated through a culture of networking, and Dr. Mickey walks me through the pipeline from campus culture to the tech workforce.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3450</itunes:duration>
        <itunes:season>5</itunes:season>
        <itunes:episode>46</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I talk to Dr. Ethel Mickey about tech's "pipeline problem." We discuss STEM culture in universities, how inequality gets generated through a culture of networking, and Dr. Mickey walks me through the pipeline from campus culture to the tech workforce.  Dr. Ethel Mickey is a sociologist of gender, work, and organizations, with a focus on science &amp; technology settings. She received her PhD from Northeastern University, and is currently a postdoc at the University of Massachusetts, Amherst with the NSF-funded UMass ADVANCE program. Her research broadly explores the persistence of intersectional inequalities through relational dynamics including networks and collaborative teams, and she is currently working on a book manuscript on gendered and racialized networks in the tech sector. Her work has appeared in Gender &amp; Society, and Journal of Contemporary Ethnography, and has been recognized by the American Sociological Association and Sociologists for Women in Society.  And this episode concludes season 5 of the Technically Human podcast. We’ll be on hiatus for the next few weeks. Please stay tuned, and join us for an exciting new season of the show when we return in the middle of July to bring you interviews with Silicon Valley writer Dan Lyons on satire, ethics, and tech culture, sociologist Dr. Anthony Hatch on biology, tech, and prisons, and many more exciting conversations. See you in July! Podcast produced by Matt Perry and Ana Marsh. Podcast art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Millennial Action Technology: Steven Olikara talks tech and political activism for a new generation of leaders</title>
        <itunes:title>Millennial Action Technology: Steven Olikara talks tech and political activism for a new generation of leaders</itunes:title>
        <link>https://dmdonig.podbean.com/e/millennial-action-technology-steven-olikara-talks-tech-and-political-activism-for-a-new-generation-of-leaders/</link>
                    <comments>https://dmdonig.podbean.com/e/millennial-action-technology-steven-olikara-talks-tech-and-political-activism-for-a-new-generation-of-leaders/#comments</comments>        <pubDate>Fri, 28 May 2021 02:18:10 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/885e3443-e91f-3938-a13c-cb3a5ff797bd</guid>
                                    <description><![CDATA[<p>In this week's episode, I speak to Steven Olikara, founder of the <a href='https://www.millennialaction.org/'>Millennial Action Project (MAP)</a>, the largest nonpartisan organization of young lawmakers in the U.S. Steven and I discuss the role of tech in political activism and the challenges of bipartisanship in a technological age. </p>
<p>Steven Olikara has been named a Global Shaper by the World Economic Forum, a Forbes 30 Under 30 in Law & Policy, and a Forward Under 40 by the Wisconsin Alumni Association. </p>
<p>JUST IN: This week, Steven announced his decision to form an exploratory committee for the U.S. Senate in Wisconsin, with the goal of running as a candidate in the 2022 election.</p>
<p>To learn more about Steven's campaign and his vision for the Senate, grounded in the ideal of dignity for all, visit <a href='http://www.stevenolikara.com'>www.stevenolikara.com</a>.</p>
<p>Podcast produced by Matt Perry and Ana Marsh.</p>
<p>Podcast art by Desi Aleman.</p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this week's episode, I speak to Steven Olikara, founder of the <a href='https://www.millennialaction.org/'>Millennial Action Project (MAP)</a>, the largest nonpartisan organization of young lawmakers in the U.S. Steven and I discuss the role of tech in political activism and the challenges of bipartisanship in a technological age. </p>
<p>Steven Olikara has been named a Global Shaper by the World Economic Forum, a Forbes 30 Under 30 in Law & Policy, and a Forward Under 40 by the Wisconsin Alumni Association. </p>
<p>JUST IN: This week, Steven announced his decision to form an exploratory committee for the U.S. Senate in Wisconsin, with the goal of running as a candidate in the 2022 election.</p>
<p>To learn more about Steven's campaign and his vision for the Senate, grounded in the ideal of dignity for all, visit <a href='http://www.stevenolikara.com'>www.stevenolikara.com</a>.</p>
<p>Podcast produced by Matt Perry and Ana Marsh.</p>
<p>Podcast art by Desi Aleman.</p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure length="105853119" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/p2q3x7/Steven_Olikara_Podcast_mixdown7kdmh.mp3"/>
        <itunes:summary>In this week's episode, I speak to Steven Olikara, founder of the Millennial Action Project (MAP), the largest nonpartisan organization of young lawmakers in the U.S. Steven and I discuss the role of tech in political activism and the challenges of bipartisanship in a technological age. 

JUST IN: This week, Steven announced his decision to form an exploratory committee for the U.S. Senate in Wisconsin, with the goal of running as a candidate in the 2022 election. To learn more about Steven's campaign and his vision for the Senate, grounded in the ideal of dignity for all, visit www.stevenolikara.com.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4410</itunes:duration>
        <itunes:season>5</itunes:season>
        <itunes:episode>45</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this week's episode, I speak to Steven Olikara, founder of the Millennial Action Project (MAP), the largest nonpartisan organization of young lawmakers in the U.S. Steven and I discuss the role of tech in political activism and the challenges of bipartisanship in a technological age. Steven Olikara has been named a Global Shaper by the World Economic Forum, a Forbes 30 Under 30 in Law &amp; Policy, and a Forward Under 40 by the Wisconsin Alumni Association. JUST IN: This week, Steven announced his decision to form an exploratory committee for the U.S. Senate in Wisconsin, with the goal of running as a candidate in the 2022 election. To learn more about Steven's campaign and his vision for the Senate, grounded in the ideal of dignity for all, visit www.stevenolikara.com. Podcast produced by Matt Perry and Ana Marsh. Podcast art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Hard at Work: Sharla Alegria discusses inequality in the tech workforce</title>
        <itunes:title>Hard at Work: Sharla Alegria discusses inequality in the tech workforce</itunes:title>
        <link>https://dmdonig.podbean.com/e/hard-at-work-sharla-alegria-discusses-inequality-in-the-tech-workforce/</link>
                    <comments>https://dmdonig.podbean.com/e/hard-at-work-sharla-alegria-discusses-inequality-in-the-tech-workforce/#comments</comments>        <pubDate>Fri, 21 May 2021 01:32:17 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/9fb062d5-5142-3bb2-ac0c-9ccabc7154f4</guid>
                                    <description><![CDATA[<p>In this episode, I talk to Dr. Sharla Alegria about inequality in the tech workforce. We discuss tech workplace culture, the relationship between ethics and equality, and Sharla explains why robots aren't really taking your job--but a growing culture of inequality might.</p>
<p>Dr. Alegria earned her Ph.D. in Sociology with a certificate in Women, Gender, and Sexuality Studies at the University of Massachusetts Amherst in 2016 and joined the faculty at the University of Toronto in 2019. She teaches classes on work; race, class, and gender; science, knowledge, and technology; and stratification and inequality. Sharla’s research on inequality in the new economy of knowledge-based work examines tech work to understand why women’s representation in computing jobs has decreased since the early 1990s despite public and private sector investment. Beyond tech work, her research examines race and gender inequality in workplaces and institutions invested in diversity and equity. Her award-winning research appears in American Journal of Sociology, Gender & Society, and Ethnic and Racial Studies. </p>
<p>We are currently in the middle of a series of <a href='https://www.etcalpoly.org/events'>live events</a> on ethics and technology, scheduled for the next few weeks. On May 25, we will host a screening of the new documentary, <a href='https://www.codedbias.com/'>Coded Bias</a>, followed by a Q and A with the director, Shalini Kantayya. All events are free, virtual, and open to the public, but space is limited. Check out our “<a href='https://www.etcalpoly.org/events'>Upcoming Events</a>” page for more information about the events, and to reserve your spot.</p>
<p>Podcast produced by Matt Perry and Ana Marsh.</p>
<p>Podcast Art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I talk to Dr. Sharla Alegria about inequality in the tech workforce. We discuss tech workplace culture, the relationship between ethics and equality, and Sharla explains why robots aren't really taking your job--but a growing culture of inequality might.</p>
<p>Dr. Alegria earned her Ph.D. in Sociology with a certificate in Women, Gender, and Sexuality Studies at the University of Massachusetts Amherst in 2016 and joined the faculty at the University of Toronto in 2019. She teaches classes on work; race, class, and gender; science, knowledge, and technology; and stratification and inequality. Sharla’s research on inequality in the new economy of knowledge-based work examines tech work to understand why women’s representation in computing jobs has decreased since the early 1990s despite public and private sector investment. Beyond tech work, her research examines race and gender inequality in workplaces and institutions invested in diversity and equity. Her award-winning research appears in <em>American Journal of Sociology, Gender & Society,</em> and <em>Ethnic and Racial Studies</em>. </p>
<p>We are currently in the middle of a series of <a href='https://www.etcalpoly.org/events'>live events</a> on ethics and technology, scheduled for the next few weeks. On May 25, we will host a screening of the new documentary, <a href='https://www.codedbias.com/'>Coded Bias</a>, followed by a Q and A with the director, Shalini Kantayya. All events are free, virtual, and open to the public, but space is limited. Check out our “<a href='https://www.etcalpoly.org/events'>Upcoming Events</a>” page for more information about the events, and to reserve your spot.</p>
<p>Podcast produced by Matt Perry and Ana Marsh.</p>
<p>Podcast Art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="84250203" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/px3epw/Sharla_Alegria_Podcast_mixdown9k14w.mp3"/>
        <itunes:summary>In this episode, I talk to Dr. Sharla Alegria about inequality in the tech workforce. We discuss tech workplace culture, the relationship between ethics and equality, and Sharla explains why robots aren't really taking your job--but growing culture of inequality might.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3510</itunes:duration>
        <itunes:season>5</itunes:season>
        <itunes:episode>44</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I talk to Dr. Sharla Alegria about inequality in the tech workforce. We discuss tech workplace culture, the relationship between ethics and equality, and Sharla explains why robots aren't really taking your job--but a growing culture of inequality might. Dr. Alegria earned her Ph.D. in Sociology with a certificate in Women, Gender, and Sexuality Studies at the University of Massachusetts Amherst in 2016 and joined the faculty at the University of Toronto in 2019. She teaches classes on work; race, class, and gender; science, knowledge, and technology; and stratification and inequality. Sharla’s research on inequality in the new economy of knowledge-based work examines tech work to understand why women’s representation in computing jobs has decreased since the early 1990s despite public and private sector investment. Beyond tech work, her research examines race and gender inequality in workplaces and institutions invested in diversity and equity. Her award-winning research appears in American Journal of Sociology, Gender &amp; Society, and Ethnic and Racial Studies. We are currently in the middle of a series of live events on ethics and technology, scheduled for the next few weeks. On May 25, we will host a screening of the new documentary, Coded Bias, followed by a Q and A with the director, Shalini Kantayya. All events are free, virtual, and open to the public, but space is limited. Check out our “Upcoming Events” page for more information about the events, and to reserve your spot. Podcast produced by Matt Perry and Ana Marsh. Podcast Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Digital Justice: Tech, policing, and the digital divide with Dr. Rashawn Ray and Dr. Nicol Turner Lee</title>
        <itunes:title>Digital Justice: Tech, policing, and the digital divide with Dr. Rashawn Ray and Dr. Nicol Turner Lee</itunes:title>
        <link>https://dmdonig.podbean.com/e/digital-justice-tech-policing-and-the-digital-divide-with-dr-rashawn-ray-and-dr-nicol-turner-lee/</link>
                    <comments>https://dmdonig.podbean.com/e/digital-justice-tech-policing-and-the-digital-divide-with-dr-rashawn-ray-and-dr-nicol-turner-lee/#comments</comments>        <pubDate>Fri, 14 May 2021 01:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/82fce8e9-6649-3235-b0d0-ce44c03c5e60</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I talk to Dr. Rashawn Ray and Dr. Nicol Turner Lee, both Fellows at the Brookings Institution, about race, tech, policing, and the digital divide. We talk about the role of video technology and social media in police accountability, the dangers of surveillance technologies developed in Silicon Valley when deployed in policing, and the long history--and the consequences--of the digital divide in the context of social equity.</p>
<p><a href='https://www.rashawnray.com/'>Dr. Rashawn Ray</a>, a David M. Rubenstein Fellow in Governance Studies at The Brookings Institution, is Professor of Sociology and Executive Director of the Lab for Applied Social Science Research (LASSR) at the University of Maryland, College Park. Dr. Ray has published over 50 books, articles, and book chapters, and roughly 50 op-eds. Recently, Dr. Ray published  How Families Matter: Simply Complicated Intersections of Race, Gender, and Work (with Pamela Braboy Jackson) and another edition of Race and Ethnic Relations in the 21st Century: History, Theory, Institutions, and Policy, which has been adopted nearly 40 times in college courses. Ray has written for the Washington Post, New York Times, Newsweek,  Business Insider, Huffington Post, and NBC News. </p>
<p><a href='https://www.drnicolspeaks.com/about-me/'>Dr. Nicol Turner Lee</a> is a senior fellow in Governance Studies, the director of the Center for Technology Innovation, and the Co-Editor-In-Chief of TechTank. Dr. Turner Lee researches public policy designed to enable equitable access to technology across the U.S. and to harness its power to create change in communities across the world. Dr. Turner Lee has been cited in the New York Times, Washington Post, San Francisco Chronicle, Communications Daily, Multichannel News, and Washington Informer. She can also be seen or heard on NPR, NBC News, ABC, and more. She has testified before Congress, and she is Chair of the Telecommunications Policy Research Conference (TPRC), which brings together policymakers and academics around significant tech policy issues. Her new book, Digitally Invisible: How the Internet is Creating the New Underclass (Brookings Press, 2021), examines the history, and the consequences, of the digital divide.</p>
<p>And now some exciting news! </p>
<p>We are currently in the middle of a series of <a href='https://www.etcalpoly.org/events'>live events</a> on ethics and technology, scheduled for the next few weeks. Next Tuesday, May 18, I will host a fireside chat with former CIA officer and former NSA advisor to Joe Biden, <a href='https://www.yaeleisenstat.com/'>Yaël Eisenstat</a>, who oversaw Facebook’s Global Elections Integrity Operations for political advertising and has since become one of Facebook’s leading critics. </p>
<p>The following week, on May 25, we will host a screening of the new documentary, <a href='https://www.codedbias.com/'>Coded Bias</a>, followed by a Q and A with the director, Shalini Kantayya. All events are free, virtual, and open to the public, but space is limited. Check out our website, <a href='http://www.etcalpoly.org'>www.etcalpoly.org</a>, for more information about the events, and to reserve your spot.</p>
<p> Hope to see you there! </p>
<p>Podcast produced by Ana Marsh and Matt Perry.
Podcast art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I talk to Dr. Rashawn Ray and Dr. Nicol Turner Lee, both Fellows at the Brookings Institution, about race, tech, policing, and the digital divide. We talk about the role of video technology and social media in police accountability, the dangers of surveillance technologies developed in Silicon Valley when deployed in policing, and the long history--and the consequences--of the digital divide in the context of social equity.</p>
<p><a href='https://www.rashawnray.com/'>Dr. Rashawn Ray</a>, a David M. Rubenstein Fellow in Governance Studies at The Brookings Institution, is Professor of Sociology and Executive Director of the Lab for Applied Social Science Research (LASSR) at the University of Maryland, College Park. Dr. Ray has published over 50 books, articles, and book chapters, and roughly 50 op-eds. Recently, Dr. Ray published  <em>How Families Matter: Simply Complicated Intersections of Race, Gender, and Work </em>(with Pamela Braboy Jackson) and another edition of <em>Race and Ethnic Relations in the 21st Century: History, Theory, Institutions, and Policy</em>, which has been adopted nearly 40 times in college courses. Ray has written for the <em>Washington Post, </em><em>New York Times</em><em>, </em><em>Newsweek</em><em>,  Business Insider</em>, <em>Huffington Post</em>, and NBC News. </p>
<p><a href='https://www.drnicolspeaks.com/about-me/'>Dr. Nicol Turner Lee</a> is a senior fellow in Governance Studies, the director of the Center for Technology Innovation, and the Co-Editor-In-Chief of TechTank. Dr. Turner Lee researches public policy designed to enable equitable access to technology across the U.S. and to harness its power to create change in communities across the world. Dr. Turner Lee has been cited in the New York Times, Washington Post, San Francisco Chronicle, Communications Daily, Multichannel News, and Washington Informer. She can also be seen or heard on NPR, NBC News, ABC, and more. She has testified before Congress, and she is Chair of the Telecommunications Policy Research Conference (TPRC), which brings together policymakers and academics around significant tech policy issues. Her new book, Digitally Invisible: How the Internet is Creating the New Underclass (Brookings Press, 2021), examines the history, and the consequences, of the digital divide.</p>
<p>And now some exciting news! </p>
<p>We are currently in the middle of a series of <a href='https://www.etcalpoly.org/events'>live events</a> on ethics and technology, scheduled for the next few weeks. Next Tuesday, May 18, I will host a fireside chat with former CIA officer and former NSA advisor to Joe Biden, <a href='https://www.yaeleisenstat.com/'>Yaël Eisenstat</a>, who oversaw Facebook’s Global Elections Integrity Operations for political advertising and has since become one of Facebook’s leading critics. </p>
<p>The following week, on May 25, we will host a screening of the new documentary, <a href='https://www.codedbias.com/'>Coded Bias</a>, followed by a Q and A with the director, Shalini Kantayya. All events are free, virtual, and open to the public, but space is limited. Check out our website, <a href='http://www.etcalpoly.org'>www.etcalpoly.org</a>, for more information about the events, and to reserve your spot.</p>
<p> Hope to see you there! </p>
<p>Podcast produced by Ana Marsh and Matt Perry.<br>
Podcast art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="104413629" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/kifryv/Dr_Ray_Dr_Turner_Lee_Podcast_mixdown5ywwo.mp3"/>
        <itunes:summary>In this episode of "Technically Human," I talk to Dr. Rashawn Ray and Dr. Nicol Turner Lee, both Fellows at the Brookings Institution, about race, tech, policing, and the digital divide. We talk about the role of video technology and social media in police accountability, the dangers of surveillance technologies developed in Silicon Valley when deployed in policing, and the long history--and the consequences--of the digital divide in the context of social equity.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4350</itunes:duration>
        <itunes:season>5</itunes:season>
        <itunes:episode>43</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I talk to Dr. Rashawn Ray and Dr. Nicol Turner Lee, both Fellows at the Brookings Institution, about race, tech, policing, and the digital divide. We talk about the role of video technology and social media in police accountability, the dangers of surveillance technologies developed in Silicon Valley when deployed in policing, and the long history--and the consequences--of the digital divide in the context of social equity. Dr. Rashawn Ray, a David M. Rubenstein Fellow in Governance Studies at The Brookings Institution, is Professor of Sociology and Executive Director of the Lab for Applied Social Science Research (LASSR) at the University of Maryland, College Park. Dr. Ray has published over 50 books, articles, and book chapters, and roughly 50 op-eds. Recently, Dr. Ray published  How Families Matter: Simply Complicated Intersections of Race, Gender, and Work (with Pamela Braboy Jackson) and another edition of Race and Ethnic Relations in the 21st Century: History, Theory, Institutions, and Policy, which has been adopted nearly 40 times in college courses. Ray has written for the Washington Post, New York Times, Newsweek,  Business Insider, Huffington Post, and NBC News.  Dr. Nicol Turner Lee is a senior fellow in Governance Studies, the director of the Center for Technology Innovation, and the Co-Editor-In-Chief of TechTank. Dr. Turner Lee researches public policy designed to enable equitable access to technology across the U.S. and to harness its power to create change in communities across the world. Dr. Turner Lee has been cited in the New York Times, Washington Post, San Francisco Chronicle, Communications Daily, Multichannel News, and Washington Informer. 
She can also be seen or heard on NPR, NBC News, ABC, and more. She has testified before Congress, and she is Chair of the Telecommunications Policy Research Conference (TPRC), which brings together policymakers and academics around significant tech policy issues. Her new book, Digitally Invisible: How the Internet is Creating the New Underclass (Brookings Press, 2021), examines the history, and the consequences, of the digital divide. And now some exciting news! We are currently in the middle of a series of live events on ethics and technology, scheduled for the next few weeks. Next Tuesday, May 18, I will host a fireside chat with former CIA officer and former NSA advisor to Joe Biden, Yaël Eisenstat, who oversaw Facebook’s Global Elections Integrity Operations for political advertising and has since become one of Facebook’s leading critics. The following week, on May 25, we will host a screening of the new documentary, Coded Bias, followed by a Q and A with the director, Shalini Kantayya. All events are free, virtual, and open to the public, but space is limited. Check out our website, www.etcalpoly.org, for more information about the events, and to reserve your spot. Hope to see you there! Podcast produced by Ana Marsh and Matt Perry. Podcast art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Nintendo Nation: Jeff Ryan on Super Mario, gaming culture, and the ludology of play</title>
        <itunes:title>Nintendo Nation: Jeff Ryan on Super Mario, gaming culture, and the ludology of play</itunes:title>
        <link>https://dmdonig.podbean.com/e/nintendo-nation-jeff-ryan-on-super-mario-gaming-culture-and-the-ludology-of-play/</link>
                    <comments>https://dmdonig.podbean.com/e/nintendo-nation-jeff-ryan-on-super-mario-gaming-culture-and-the-ludology-of-play/#comments</comments>        <pubDate>Fri, 07 May 2021 01:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/76a53ded-2532-33e0-a920-51e167c0c52c</guid>
                                    <description><![CDATA[<p>In this episode, I talk to Jeff Ryan, author of Super Mario: How Nintendo Conquered America. We discuss gaming culture, the gender dynamics of the gaming community, and Jeff defines the key to gaming, in what he calls "the ludology of play."</p>
<p>Jeff Ryan is the author of <a href='https://www.simonandschuster.com/books/A-Mouse-Divided/Jeff-Ryan/9781642930931'>A MOUSE DIVIDED: HOW UB IWERKS BECAME FORGOTTEN...AND WALT DISNEY BECAME UNCLE WALT</a> and <a href='https://www.penguinrandomhouse.com/books/308462/super-mario-by-jeff-ryan/'>SUPER MARIO: HOW NINTENDO CONQUERED AMERICA</a>. He has been published in Salon, Slate, Fast Company, Wired.com, Kotaku, and All Things Considered; and has been featured on NPR’s Marketplace, Time, Forbes, The New York Times, The Economist, The Independent, and Star Talk With Neil DeGrasse Tyson. He lives in Bloomfield, NJ, with his wife and two daughters. A lifelong gamer, he has reviewed over 500 video games and covered four console launches as the games editor for Katrillion, a popular dotcom-era news and entertainment Web site. He swears his books were not undertaken to write off family vacations to Orlando on his taxes.</p>
<p>A note on today’s episode: In recording this episode of “Technically Human,” our human interlocutors encountered some technical interference! None of this at all alters the brilliance of Jeff’s comments.</p>
<p>Some exciting news: We are launching a series of live events on ethics and technology, scheduled for the next few weeks, including an important and urgent conversation with Dr. Rashawn Ray on race, policing, and tech; a screening of the new documentary, Coded Bias, followed by a Q and A with the director, Shalini Kantayya; and a fireside chat with former CIA agent and former NSA advisor to Joe Biden, Yaël Eisenstat, who, in the wake of the 2016 election, oversaw Facebook’s Global Elections Integrity Operations, and has since become one of Facebook’s leading critics.</p>
<p>All events are free and open to the public, but space is limited.</p>
<p>Check out our website, <a href='http://www.etcalpoly.org'>www.etcalpoly.org</a> for more information about the events, and to reserve your spot.</p>
<p>This episode was produced by Ana Marsh and Matt Perry</p>
<p>Podcast art by Desi Aleman</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I talk to Jeff Ryan, author of <em>Super Mario: How Nintendo Conquered America</em>. We discuss gaming culture, the gender dynamics of the gaming community, and Jeff defines the key to gaming in what he calls "the ludology of play."</p>
<p>Jeff Ryan is the author of <a href='https://www.simonandschuster.com/books/A-Mouse-Divided/Jeff-Ryan/9781642930931'>A MOUSE DIVIDED: HOW UB IWERKS BECAME FORGOTTEN...AND WALT DISNEY BECAME UNCLE WALT</a> and <a href='https://www.penguinrandomhouse.com/books/308462/super-mario-by-jeff-ryan/'>SUPER MARIO: HOW NINTENDO CONQUERED AMERICA</a>. He has been published in Salon, Slate, Fast Company, Wired.com, Kotaku, and All Things Considered; and has been featured on NPR’s Marketplace, Time, Forbes, The New York Times, The Economist, The Independent, and Star Talk With Neil DeGrasse Tyson. He lives in Bloomfield, NJ, with his wife and two daughters. A lifelong gamer, he has reviewed over 500 video games and covered four console launches as the games editor for Katrillion, a popular dotcom-era news and entertainment Web site. He swears his books were not undertaken to write off family vacations to Orlando on his taxes.</p>
<p>A note on today’s episode: In recording this episode of “Technically Human,” our human interlocutors encountered some technical interference! None of this at all alters the brilliance of Jeff’s comments.</p>
<p>Some exciting news: We are launching a series of live events on ethics and technology, scheduled for the next few weeks, including an important and urgent conversation with Dr. Rashawn Ray on race, policing, and tech; a screening of the new documentary, Coded Bias, followed by a Q and A with the director, Shalini Kantayya; and a fireside chat with former CIA agent and former NSA advisor to Joe Biden, Yaël Eisenstat, who, in the wake of the 2016 election, oversaw Facebook’s Global Elections Integrity Operations, and has since become one of Facebook’s leading critics.</p>
<p>All events are free and open to the public, but space is limited.</p>
<p>Check out our website, <a href='http://www.etcalpoly.org'>www.etcalpoly.org</a> for more information about the events, and to reserve your spot.</p>
<p>This episode was produced by Ana Marsh and Matt Perry</p>
<p>Podcast art by Desi Aleman</p>
]]></content:encoded>
                                    
        <enclosure length="69854973" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/edx9ma/Jeff_Ryan_Podcast_mixdown19vfow.mp3"/>
        <itunes:summary>In this episode, I talk to Jeff Ryan, author of Super Mario: How Nintendo Conquered America. We discuss gaming culture, the gender dynamics of the gaming community, and Jeff defines the key to gaming in what he calls "the ludology of play."</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2910</itunes:duration>
        <itunes:season>5</itunes:season>
        <itunes:episode>42</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I talk to Jeff Ryan, author of Super Mario: How Nintendo Conquered America. We discuss gaming culture, the gender dynamics of the gaming community, and Jeff defines the key to gaming in what he calls "the ludology of play." Jeff Ryan is the author of A MOUSE DIVIDED: HOW UB IWERKS BECAME FORGOTTEN...AND WALT DISNEY BECAME UNCLE WALT and SUPER MARIO: HOW NINTENDO CONQUERED AMERICA. He has been published in Salon, Slate, Fast Company, Wired.com, Kotaku, and All Things Considered; and has been featured on NPR’s Marketplace, Time, Forbes, The New York Times, The Economist, The Independent, and Star Talk With Neil DeGrasse Tyson. He lives in Bloomfield, NJ, with his wife and two daughters. A lifelong gamer, he has reviewed over 500 video games and covered four console launches as the games editor for Katrillion, a popular dotcom-era news and entertainment Web site. He swears his books were not undertaken to write off family vacations to Orlando on his taxes. A note on today’s episode: In recording this episode of “Technically Human,” our human interlocutors encountered some technical interference! None of this at all alters the brilliance of Jeff’s comments. Some exciting news: We are launching a series of live events on ethics and technology, scheduled for the next few weeks, including an important and urgent conversation with Dr. Rashawn Ray on race, policing, and tech; a screening of the new documentary, Coded Bias, followed by a Q and A with the director, Shalini Kantayya; and a fireside chat with former CIA agent and former NSA advisor to Joe Biden, Yaël Eisenstat, who, in the wake of the 2016 election, oversaw Facebook’s Global Elections Integrity Operations, and has since become one of Facebook’s leading critics. All events are free and open to the public, but space is limited. 
Check out our website, www.etcalpoly.org for more information about the events, and to reserve your spot. This episode was produced by Ana Marsh and Matt Perry Podcast art by Desi Aleman</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Climate, Chemicals, Carcinogens, Cancer: Dr. Sandra Steingraber challenges the system</title>
        <itunes:title>Climate, Chemicals, Carcinogens, Cancer: Dr. Sandra Steingraber challenges the system</itunes:title>
        <link>https://dmdonig.podbean.com/e/climate-chemicals-carcinogens-cancer-drsandra-steingraber-challenges-the-system/</link>
                    <comments>https://dmdonig.podbean.com/e/climate-chemicals-carcinogens-cancer-drsandra-steingraber-challenges-the-system/#comments</comments>        <pubDate>Fri, 30 Apr 2021 01:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/17dc1553-fb56-3017-b31c-00104955e322</guid>
                                    <description><![CDATA[<p>In this episode, I speak to Dr. Sandra Steingraber. We discuss the links between environmental destruction, contamination of vital resources, and the grave dangers that fracking technology poses to human health. Dr. Steingraber explains the link between environmental justice and social justice, and we talk about the state of collaboration across fields and areas of expertise as universities increasingly turn into what she calls "Disaster Capitalism."</p>
<p>Biologist, author, and cancer survivor Dr. Sandra Steingraber, Ph.D. writes about climate change, ecology, and the links between human health and the environment. She has been named a Woman of the Year by Ms. Magazine, a Person of the Year by Treehugger, and one of 25 “Visionaries Who Are Changing Your World” by the Utne Reader. She is the recipient of the biennial Rachel Carson Leadership Award and the Jenifer Altman Foundation’s Altman Award for “the inspiring and poetic use of science to elucidate the causes of cancer.” Steingraber received a Hero Award from the Breast Cancer Fund and the Environmental Health Champion Award from Physicians for Social Responsibility, Los Angeles. She has testified in the European Parliament, at the European Commission, before the President’s Cancer Panel, and has participated in briefings to Congress, the Environmental Protection Agency, and before United Nations delegates in Geneva, Switzerland.</p>
<p>This episode was produced by Matt Perry and Ana Marsh.</p>
<p>Podcast art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I speak to Dr. Sandra Steingraber. We discuss the links between environmental destruction, contamination of vital resources, and the grave dangers that fracking technology poses to human health. Dr. Steingraber explains the link between environmental justice and social justice, and we talk about the state of collaboration across fields and areas of expertise as universities increasingly turn into what she calls "Disaster Capitalism."</p>
<p>Biologist, author, and cancer survivor Dr. Sandra Steingraber, Ph.D. writes about climate change, ecology, and the links between human health and the environment. She has been named a Woman of the Year by Ms. Magazine, a Person of the Year by Treehugger, and one of 25 “Visionaries Who Are Changing Your World” by the Utne Reader. She is the recipient of the biennial Rachel Carson Leadership Award and the Jenifer Altman Foundation’s Altman Award for “the inspiring and poetic use of science to elucidate the causes of cancer.” Steingraber received a Hero Award from the Breast Cancer Fund and the Environmental Health Champion Award from Physicians for Social Responsibility, Los Angeles. She has testified in the European Parliament, at the European Commission, before the President’s Cancer Panel, and has participated in briefings to Congress, the Environmental Protection Agency, and before United Nations delegates in Geneva, Switzerland.</p>
<p>This episode was produced by Matt Perry and Ana Marsh.</p>
<p>Podcast art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="86412625" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/26pn25/Sandra_Steingraber_mixdownaoul9.mp3"/>
        <itunes:summary>In this episode, I speak to Dr. Sandra Steingraber. We discuss the links between environmental destruction, contamination of vital resources, and the grave dangers that fracking technology poses to human health. Dr. Steingraber explains the link between environmental justice and social justice, and we talk about the state of collaboration across fields and areas of expertise as universities increasingly turn into what she calls "Disaster Capitalism."</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3600</itunes:duration>
        <itunes:season>5</itunes:season>
        <itunes:episode>41</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I speak to Dr. Sandra Steingraber. We discuss the links between environmental destruction, contamination of vital resources, and the grave dangers that fracking technology poses to human health. Dr. Steingraber explains the link between environmental justice and social justice, and we talk about the state of collaboration across fields and areas of expertise as universities increasingly turn into what she calls "Disaster Capitalism." Biologist, author, and cancer survivor Dr. Sandra Steingraber, Ph.D. writes about climate change, ecology, and the links between human health and the environment. She has been named a Woman of the Year by Ms. Magazine, a Person of the Year by Treehugger, and one of 25 “Visionaries Who Are Changing Your World” by the Utne Reader. She is the recipient of the biennial Rachel Carson Leadership Award and the Jenifer Altman Foundation’s Altman Award for “the inspiring and poetic use of science to elucidate the causes of cancer.” Steingraber received a Hero Award from the Breast Cancer Fund and the Environmental Health Champion Award from Physicians for Social Responsibility, Los Angeles. She has testified in the European Parliament, at the European Commission, before the President’s Cancer Panel, and has participated in briefings to Congress, the Environmental Protection Agency, and before United Nations delegates in Geneva, Switzerland. This episode was produced by Matt Perry and Ana Marsh. Podcast art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Natural Technology: Defining Technology in New Ways with Dr. Timothy Morton</title>
        <itunes:title>Natural Technology: Defining Technology in New Ways with Dr. Timothy Morton</itunes:title>
        <link>https://dmdonig.podbean.com/e/natural-technology-defining-technology-in-new-ways-with-drtimothy-morton/</link>
                    <comments>https://dmdonig.podbean.com/e/natural-technology-defining-technology-in-new-ways-with-drtimothy-morton/#comments</comments>        <pubDate>Fri, 23 Apr 2021 00:45:50 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/823098cc-9d45-32cd-8e2e-f73a3cd34ba5</guid>
                                    <description><![CDATA[<p>In this episode, I sit down with ecocritic Dr. Timothy Morton to talk about the relationship between tech and ecocriticism. We talk about the human relationship to the non-human world, we discuss the ethics of technological production in the age of the Anthropocene, and we analyze how the stories we tell ourselves about tech production as "progress" may bear devastating consequences in the form of environmental destruction. </p>
<p>Timothy Morton is Rita Shea Guffey Chair in English at Rice University. They have collaborated with Laurie Anderson, Björk, Jennifer Walshe, Hrafnhildur Arnadottir, Sabrina Scott, Adam McKay, Jeff Bridges, Justin Guariglia, Olafur Eliasson, and Pharrell Williams. Morton co-wrote and appears in Living in the Future’s Past, a 2018 film about global warming with Jeff Bridges. They are the author of the libretto for the opera Time Time Time by Jennifer Walshe.</p>
<p>Morton has written All Art Is Ecological (Penguin, 2021), Spacecraft (Bloomsbury, 2021), Being Ecological (Penguin, 2018), Humankind: Solidarity with Nonhuman People (Verso, 2017), Dark Ecology: For a Logic of Future Coexistence (Columbia, 2016), Nothing: Three Inquiries in Buddhism (Chicago, 2015), Hyperobjects: Philosophy and Ecology after the End of the World (Minnesota, 2013), Realist Magic: Objects, Ontology, Causality (Open Humanities, 2013), The Ecological Thought (Harvard, 2010), Ecology without Nature (Harvard, 2007), 8 other books and 250 essays on philosophy, ecology, literature, music, art, architecture, design and food. Morton’s work has been translated into 10 languages. In 2014 they gave the Wellek Lectures in Theory. </p>
<p>This episode was produced by Matt Perry and Ana Marsh.</p>
<p>Podcast art by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I sit down with ecocritic Dr. Timothy Morton to talk about the relationship between tech and ecocriticism. We talk about the human relationship to the non-human world, we discuss the ethics of technological production in the age of the Anthropocene, and we analyze how the stories we tell ourselves about tech production as "progress" may bear devastating consequences in the form of environmental destruction. </p>
<p>Timothy Morton is Rita Shea Guffey Chair in English at Rice University. They have collaborated with Laurie Anderson, Björk, Jennifer Walshe, Hrafnhildur Arnadottir, Sabrina Scott, Adam McKay, Jeff Bridges, Justin Guariglia, Olafur Eliasson, and Pharrell Williams. Morton co-wrote and appears in <em>Living in the Future’s Past</em>, a 2018 film about global warming with Jeff Bridges. They are the author of the libretto for the opera <em>Time Time Time</em> by Jennifer Walshe.</p>
<p>Morton has written <em>All Art Is Ecological </em>(Penguin, 2021), <em>Spacecraft</em> (Bloomsbury, 2021), <em>Being Ecological</em> (Penguin, 2018), <em>Humankind: Solidarity with Nonhuman People</em> (Verso, 2017), <em>Dark Ecology: For a Logic of Future Coexistence</em> (Columbia, 2016), <em>Nothing: Three Inquiries in Buddhism</em> (Chicago, 2015), <em>Hyperobjects: Philosophy and Ecology after the End of the World</em> (Minnesota, 2013), <em>Realist Magic: Objects, Ontology, Causality</em> (Open Humanities, 2013), <em>The Ecological Thought </em>(Harvard, 2010), <em>Ecology without Nature</em> (Harvard, 2007), 8 other books and 250 essays on philosophy, ecology, literature, music, art, architecture, design and food. Morton’s work has been translated into 10 languages. In 2014 they gave the Wellek Lectures in Theory. </p>
<p>This episode was produced by Matt Perry and Ana Marsh.</p>
<p>Podcast art by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="94332625" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/68wqrf/Tim_Morton_Podcast_mixdown7irfa.mp3"/>
        <itunes:summary>In this episode, I sit down with ecocritic Dr. Timothy Morton to talk about the relationship between tech and ecocriticism. We talk about the human relationship to the non-human world, we discuss the ethics of technological production in the age of the Anthropocene, and we analyze how the stories we tell ourselves about tech production as "progress" may bear devastating consequences in the form of environmental destruction.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>true</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3930</itunes:duration>
        <itunes:season>5</itunes:season>
        <itunes:episode>40</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I sit down with ecocritic Dr. Timothy Morton to talk about the relationship between tech and ecocriticism. We talk about the human relationship to the non-human world, we discuss the ethics of technological production in the age of the Anthropocene, and we analyze how the stories we tell ourselves about tech production as "progress" may bear devastating consequences in the form of environmental destruction.  Timothy Morton is Rita Shea Guffey Chair in English at Rice University. They have collaborated with Laurie Anderson, Björk, Jennifer Walshe, Hrafnhildur Arnadottir, Sabrina Scott, Adam McKay, Jeff Bridges, Justin Guariglia, Olafur Eliasson, and Pharrell Williams. Morton co-wrote and appears in Living in the Future’s Past, a 2018 film about global warming with Jeff Bridges. They are the author of the libretto for the opera Time Time Time by Jennifer Walshe. Morton has written All Art Is Ecological (Penguin, 2021), Spacecraft (Bloomsbury, 2021), Being Ecological (Penguin, 2018), Humankind: Solidarity with Nonhuman People (Verso, 2017), Dark Ecology: For a Logic of Future Coexistence (Columbia, 2016), Nothing: Three Inquiries in Buddhism (Chicago, 2015), Hyperobjects: Philosophy and Ecology after the End of the World (Minnesota, 2013), Realist Magic: Objects, Ontology, Causality (Open Humanities, 2013), The Ecological Thought (Harvard, 2010), Ecology without Nature (Harvard, 2007), 8 other books and 250 essays on philosophy, ecology, literature, music, art, architecture, design and food. Morton’s work has been translated into 10 languages. In 2014 they gave the Wellek Lectures in Theory.  This episode was produced by Matt Perry and Ana Marsh. Podcast art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Persons and Things: ThingLogix CTO Rob Rastovich talks human values and the Internet of Things</title>
        <itunes:title>Persons and Things: ThingLogix CTO Rob Rastovich talks human values and the Internet of Things</itunes:title>
        <link>https://dmdonig.podbean.com/e/persons-and-things-thinglogix-cto-rob-rastovich-talk-human-values-and-the-internet-of-things/</link>
                    <comments>https://dmdonig.podbean.com/e/persons-and-things-thinglogix-cto-rob-rastovich-talk-human-values-and-the-internet-of-things/#comments</comments>        <pubDate>Fri, 16 Apr 2021 01:35:36 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/8d07ce20-a7e3-344e-9f68-dcca71700b4a</guid>
                                    <description><![CDATA[<p>In this episode, I sit down with Rob Rastovich, CTO of ThingLogix to talk about the Internet of Things. We discuss the problem of privacy in an age where all of our things talk to one another--and where tech companies are listening in to our conversations with our devices. Rob addresses some of the ethical critiques that have emerged about IoT, and I ask him about how he understands the relationship between his work as a technologist, anchored in the digital world of tech, and his work as a rancher, anchored in the very physical world of non-human animals, plants, and land.</p>
<p>Rob Rastovich is the Chief Technology Officer of <a href='https://www.thinglogix.com/'>ThingLogix</a>, and an expert on the Internet of Things, or IoT. He has been actively involved in technology for nearly 30 years, from building a top 10 e-commerce site in a time when e-commerce was still in its infancy to establishing Amazon’s AWS IoT.

ThingLogix was awarded the 2018 IoT Platforms Leadership Award, and has become an advanced tier technology partner for Amazon Web Services. When he’s not at the forefront of IoT, Rob can be found maintaining his cattle ranch in Central Oregon.</p>
<p>Episode produced by Ana Marsh and Matt Perry.</p>
<p>Artwork by Desi Aleman.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I sit down with Rob Rastovich, CTO of ThingLogix to talk about the Internet of Things. We discuss the problem of privacy in an age where all of our things talk to one another--and where tech companies are listening in to our conversations with our devices. Rob addresses some of the ethical critiques that have emerged about IoT, and I ask him about how he understands the relationship between his work as a technologist, anchored in the digital world of tech, and his work as a rancher, anchored in the very physical world of non-human animals, plants, and land.</p>
<p>Rob Rastovich is the Chief Technology Officer of <a href='https://www.thinglogix.com/'>ThingLogix</a>, and an expert on the Internet of Things, or IoT. He has been actively involved in technology for nearly 30 years, from building a top 10 e-commerce site in a time when e-commerce was still in its infancy to establishing Amazon’s AWS IoT.<br>
<br>
ThingLogix was awarded the 2018 IoT Platforms Leadership Award, and has become an advanced tier technology partner for Amazon Web Services. When he’s not at the forefront of IoT, Rob can be found maintaining his cattle ranch in Central Oregon.</p>
<p>Episode produced by Ana Marsh and Matt Perry.</p>
<p>Artwork by Desi Aleman.</p>
]]></content:encoded>
                                    
        <enclosure length="101541541" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/jgyqqy/Rob_Rastovich_Podcast_mixdown8q7zk.mp3"/>
        <itunes:summary>In this episode, I sit down with Rob Rastovich, CTO of ThingLogix to talk about the Internet of Things. We discuss the problem of privacy in an age where all of our things talk to one another--and where tech companies are listening in to our conversations with our devices. Rob addresses some of the ethical critiques that have emerged about IoT, and I ask him about how he understands the relationship between his work as a technologist, anchored in the digital world of tech, and his work as a rancher, anchored in the very physical world of non-human animals, plants, and land.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>true</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4230</itunes:duration>
        <itunes:season>5</itunes:season>
        <itunes:episode>39</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I sit down with Rob Rastovich, CTO of ThingLogix to talk about the Internet of Things. We discuss the problem of privacy in an age where all of our things talk to one another--and where tech companies are listening in to our conversations with our devices. Rob addresses some of the ethical critiques that have emerged about IoT, and I ask him about how he understands the relationship between his work as a technologist, anchored in the digital world of tech, and his work as a rancher, anchored in the very physical world of non-human animals, plants, and land. Rob Rastovich is the Chief Technology Officer of ThingLogix, and an expert on the Internet of Things, or IoT. He has been actively involved in technology for nearly 30 years, from building a top 10 e-commerce site in a time when e-commerce was still in its infancy to establishing Amazon’s AWS IoT. ThingLogix was awarded the 2018 IoT Platforms Leadership Award, and has become an advanced tier technology partner for Amazon Web Services. When he’s not at the forefront of IoT, Rob can be found maintaining his cattle ranch in Central Oregon. Episode produced by Ana Marsh and Matt Perry. Artwork by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Body Technology: Disability and Technology Part 2 with Paralympian Ezra Frech</title>
        <itunes:title>Body Technology: Disability and Technology Part 2 with Paralympian Ezra Frech</itunes:title>
        <link>https://dmdonig.podbean.com/e/body-technology-disability-and-technology-part-2-with-paralympian-ezra-frech/</link>
                    <comments>https://dmdonig.podbean.com/e/body-technology-disability-and-technology-part-2-with-paralympian-ezra-frech/#comments</comments>        <pubDate>Fri, 09 Apr 2021 09:50:04 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/6b73f024-77d4-3749-87b0-adf8e8a1ad02</guid>
                                    <description><![CDATA[<p>In this series, we talk about adaptive technologies and physical disability. In this episode of the series, Paralympian Ezra Frech joins us to discuss the intersection of the disability community and technology. We discuss disability in the space of intersectionality, the significance of sports for the disability community, and why designing for a diversity of bodies, with equity and empathy, makes a difference.</p>
<p>Ezra Frech is an American Paralympic athlete who competes in high jump, long jump, and sprinting events at the international level. Ezra was born with congenital limb differences, missing his left knee, left shin bone, and fingers on his left hand, and has used a running blade since he was 4 years old. </p>
<p>In 2019 Ezra made the US Paralympic Track and Field Team and, as the youngest athlete on the team at 14 years old, competed in three international events, including the Junior World Para-Athletics Championships, where he won three medals, the <a href='https://en.wikipedia.org/wiki/Parapan_American_Games'>Parapan American Games</a> where he won two silver medals, and the World Para-Athletics Championships, where he placed in the top 8 in all three of his events and was the youngest athlete out of 1,400 competitors. He’s slated to compete for Team USA in the upcoming 2021 Tokyo Paralympic Games.</p>
<p>He is an advocate for disability rights, and the inspiration behind and co-founder of <a href='http://www.angelcitygames.org'>Angel City Sports</a>, a high-growth, high-impact non-profit organization dedicated to providing the joy of sports to children and adults with physical disabilities. </p>
<p>Episode produced by Matt Perry and Ana Marsh.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this series, we talk about adaptive technologies and physical disability. In this episode of the series, Paralympian Ezra Frech joins us to discuss the intersection of the disability community and technology. We discuss disability in the space of intersectionality, the significance of sports for the disability community, and why designing for a diversity of bodies, with equity and empathy, makes a difference.</p>
<p>Ezra Frech is an American Paralympic athlete who competes in high jump, long jump, and sprinting events at the international level. Ezra was born with congenital limb differences, missing his left knee, left shin bone, and fingers on his left hand, and has used a running blade since he was 4 years old. </p>
<p>In 2019 Ezra made the US Paralympic Track and Field Team and, as the youngest athlete on the team at 14 years old, competed in three international events, including the Junior World Para-Athletics Championships, where he won three medals, the <a href='https://en.wikipedia.org/wiki/Parapan_American_Games'>Parapan American Games</a> where he won two silver medals, and the World Para-Athletics Championships, where he placed in the top 8 in all three of his events and was the youngest athlete out of 1,400 competitors. He’s slated to compete for Team USA in the upcoming 2021 Tokyo Paralympic Games.</p>
<p>He is an advocate for disability rights, and the inspiration behind and co-founder of <a href='http://www.angelcitygames.org'>Angel City Sports</a>, a high-growth, high-impact non-profit organization dedicated to providing the joy of sports to children and adults with physical disabilities. </p>
<p>Episode produced by Matt Perry and Ana Marsh.</p>
]]></content:encoded>
                                    
        <enclosure length="69132623" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/qus5cs/Ezra_Frech_Pocast_mixdown9y2fj.mp3"/>
        <itunes:summary><![CDATA[In this series, we talk about adaptive technologies and physical disability. In this episode of the series, Paralympian Ezra Frech joins us to discuss the intersection of the disability community and technology. We discuss disability in the space of intersectionality, the significance of sports for the disability community, and why designing for a diversity of bodies, with equity and empathy, makes a difference.
Ezra Frech is an American Paralympic athlete who competes in high jump, long jump, and sprinting events at the international level. Ezra was born with congenital limb differences, missing his left knee, left shin bone, and fingers on his left hand, and has used a running blade since he was 4 years old. 
In 2019 Ezra made the US Paralympic Track and Field Team and, as the youngest athlete on the team at 14 years old, competed in three international events, including the Junior World Para-Athletics Championships, where he won three medals, the Parapan American Games where he won two silver medals, and the World Para-Athletics Championships, where he placed in the top 8 in all three of his events and was the youngest athlete out of 1,400 competitors. He’s slated to compete for Team USA in the upcoming 2021 Tokyo Paralympic Games.
He is an advocate for disability rights, and the inspiration behind and co-founder of Angel City Sports, a high-growth, high-impact non-profit organization dedicated to providing the joy of sports to children and adults with physical disabilities. 
Episode produced by Matt Perry and Ana Marsh.]]></itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2880</itunes:duration>
                <itunes:episode>38</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this series, we talk about adaptive technologies and physical disability. In this episode of the series, Paralympian Ezra Frech joins us to discuss the disability community at the intersection of technology. We discuss disability in the space of intersectionality, the significance of sports for the disability community, and why designing for a diversity of bodies, with equity and empathy, makes a difference. Ezra Frech is an American Paralympic athlete who competes in high jump, long jump, and sprint events at the international level. Ezra was born with congenital limb differences, missing his left knee, left shin bone, and fingers on his left hand, and has used a running blade since he was 4 years old. In 2019 Ezra made the US Paralympic Track and Field Team and, as the youngest athlete on the team at 14 years old, competed in three international events: the Junior World Para-Athletics Championships, where he won three medals; the Parapan American Games, where he won two silver medals; and the World Para-Athletics Championships, where he placed in the top 8 in all three of his events and was the youngest athlete out of 1,400 competitors. He’s slated to compete for Team USA in the upcoming 2021 Tokyo Paralympic Games. He is an advocate for disability rights, and the inspiration behind and co-founder of Angel City Sports, a high-growth, high-impact non-profit organization dedicated to providing the joy of sports to children and adults with physical disabilities. Episode produced by Matt Perry and Ana Marsh.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Body Technology: Disability and Technology Part 1 with Clayton Frech</title>
        <itunes:title>Body Technology: Disability and Technology Part 1 with Clayton Frech</itunes:title>
        <link>https://dmdonig.podbean.com/e/body-technology-disability-and-technology-part-1-with-clayton-frech/</link>
                    <comments>https://dmdonig.podbean.com/e/body-technology-disability-and-technology-part-1-with-clayton-frech/#comments</comments>        <pubDate>Fri, 02 Apr 2021 01:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/d39d2bd7-694b-36cf-b5a2-7015d4435a35</guid>
                                    <description><![CDATA[<p>In this series, we talk about adaptive technologies and physical disability. Clayton Frech, the founder and CEO of Angel City Sports, joins us to discuss the disability community at the intersection of technology. We discuss disability in the space of intersectionality, the significance of sports for the disability community, and why designing for a diversity of bodies, with equity and empathy, makes a difference.</p>
<p>Over the last twenty-five years, Clayton Frech has held leadership roles in the business, government, and non-profit sectors. He became involved in the disability community when his first son, Ezra, was born missing his left knee and left fibula and with only one finger on his left hand.  Following Ezra’s passion for sports, Mr. Frech identified major gaps in access to sports programming for athletes with physical disabilities in the U.S.  In 2013, with the help of friends and family, he set out to address these gaps, and in 2015, he produced the first Angel City Games, which is now the largest Paralympic competition in the country, and the West Coast’s most prestigious Paralympic event. </p>
<p> In 2015, Mr. Frech started <a href='http://angelcitysports.org/'>Angel City Sports</a> to address inequities in access to sport for kids and adults living with physical disabilities. In addition to serving as a strategic advisor to a number of small and mid-sized companies, he recently launched <a href='https://www.facebook.com/AmplaInstitute/'>Ampla Institute</a>, a career development and planning firm dedicated to helping people find their purpose and optimize their career potential.</p>
<p>Stay tuned for next week’s episode, where we talk to Ezra Frech about running on a blade as a US Paralympic athlete headed to Tokyo for the Paralympic Games.</p>
<p>Episode produced by Matt Perry and Ana Marsh.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this series, we talk about adaptive technologies and physical disability. Clayton Frech, the founder and CEO of Angel City Sports, joins us to discuss the disability community at the intersection of technology. We discuss disability in the space of intersectionality, the significance of sports for the disability community, and why designing for a diversity of bodies, with equity and empathy, makes a difference.</p>
<p>Over the last twenty-five years, Clayton Frech has held leadership roles in the business, government, and non-profit sectors. He became involved in the disability community when his first son, Ezra, was born missing his left knee and left fibula and with only one finger on his left hand.  Following Ezra’s passion for sports, Mr. Frech identified major gaps in access to sports programming for athletes with physical disabilities in the U.S.  In 2013, with the help of friends and family, he set out to address these gaps, and in 2015, he produced the first Angel City Games, which is now the largest Paralympic competition in the country, and the West Coast’s most prestigious Paralympic event. </p>
<p>In 2015, Mr. Frech started <a href='http://angelcitysports.org/'>Angel City Sports</a> to address inequities in access to sport for kids and adults living with physical disabilities. In addition to serving as a strategic advisor to a number of small and mid-sized companies, he recently launched <a href='https://www.facebook.com/AmplaInstitute/'>Ampla Institute</a>, a career development and planning firm dedicated to helping people find their purpose and optimize their career potential.</p>
<p>Stay tuned for next week’s episode, where we talk to Ezra Frech about running on a blade as a US Paralympic athlete headed to Tokyo for the Paralympic Games.</p>
<p>Episode produced by Matt Perry and Ana Marsh.</p>
]]></content:encoded>
                                    
        <enclosure length="67689715" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/skp833/Clayton_Frech_Podcast_mixdown9xpxn.mp3"/>
        <itunes:summary>In this series, we talk about adaptive technologies and physical disability. Clayton Frech, the founder and CEO of Angel City Sports, joins us to discuss the disability community at the intersection of technology. We discuss disability in the space of intersectionality, the significance of sports for the disability community, and why designing for a diversity of bodies, with equity and empathy, makes a difference.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2820</itunes:duration>
        <itunes:season>5</itunes:season>
        <itunes:episode>37</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this series, we talk about adaptive technologies and physical disability. Clayton Frech, the founder and CEO of Angel City Sports, joins us to discuss the disability community at the intersection of technology. We discuss disability in the space of intersectionality, the significance of sports for the disability community, and why designing for a diversity of bodies, with equity and empathy, makes a difference. Over the last twenty-five years, Clayton Frech has held leadership roles in the business, government, and non-profit sectors. He became involved in the disability community when his first son, Ezra, was born missing his left knee and left fibula and with only one finger on his left hand. Following Ezra’s passion for sports, Mr. Frech identified major gaps in access to sports programming for athletes with physical disabilities in the U.S. In 2013, with the help of friends and family, he set out to address these gaps, and in 2015, he produced the first Angel City Games, which is now the largest Paralympic competition in the country, and the West Coast’s most prestigious Paralympic event. In 2015, Mr. Frech started Angel City Sports to address inequities in access to sport for kids and adults living with physical disabilities. In addition to serving as a strategic advisor to a number of small and mid-sized companies, he recently launched Ampla Institute, a career development and planning firm dedicated to helping people find their purpose and optimize their career potential. Stay tuned for next week’s episode, where we talk to Ezra Frech about running on a blade as a US Paralympic athlete headed to Tokyo for the Paralympic Games. Episode produced by Matt Perry and Ana Marsh.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Haley Pavone Reinvents the Heel: Fashion is an Ethics and Equity Issue</title>
        <itunes:title>Haley Pavone Reinvents the Heel: Fashion is an Ethics and Equity Issue</itunes:title>
        <link>https://dmdonig.podbean.com/e/haley-pavone-reinvents-the-heel-fashion-is-an-ethics-and-equity-issue/</link>
                    <comments>https://dmdonig.podbean.com/e/haley-pavone-reinvents-the-heel-fashion-is-an-ethics-and-equity-issue/#comments</comments>        <pubDate>Fri, 12 Mar 2021 01:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/8f64e4ae-a23b-3c42-95f2-3b8ec0dd7fed</guid>
                                    <description><![CDATA[<p>When Haley Pavone proposed a convertible high-heel-to-flat shoe, industry experts told her it was impossible. When she successfully engineered a shoe that could convert from a flat to a heel, venture capitalists often couldn't see the value--or even the problem. Fashion footwear for women is an industry developed by and run by men. In this episode, Haley talks about equity and inclusion in fashion and explains how she engineered the first convertible flat-to-heel shoe. We talk about empathic and humane design, feminism and entrepreneurship, and why ethical technology requires us to think about how we'd walk a mile in someone else's shoes.</p>
<p>Haley Pavone is the Founder & CEO of <a href='https://pashionfootwear.com/?gclid=Cj0KCQiAnKeCBhDPARIsAFDTLTLllDqtXuBZQ0_Xtme5oLsmOAep0PTT0WOxr02PCClC8v2mVdLpSj0aAsFtEALw_wcB'>Pashion Footwear</a>. She is a Cal Poly graduate in Business and Entrepreneurship, and an alumna of the University's Center for Innovation and Entrepreneurship. She has been profiled by Forbes and Businesswire, and recently appeared on the critically acclaimed and multi-Emmy® Award-winning entrepreneurial-themed ABC reality show “Shark Tank.” She’s passionate about empathic and humane design, and building collectively and collaboratively, with insight, inclusion, and compassion.</p>
<p> </p>
<p>Art by Desi Aleman<br>
Produced by Matt Perry</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>When Haley Pavone proposed a convertible high-heel-to-flat shoe, industry experts told her it was impossible. When she successfully engineered a shoe that could convert from a flat to a heel, venture capitalists often couldn't see the value--or even the problem. Fashion footwear for women is an industry developed by and run by men. In this episode, Haley talks about equity and inclusion in fashion and explains how she engineered the first convertible flat-to-heel shoe. We talk about empathic and humane design, feminism and entrepreneurship, and why ethical technology requires us to think about how we'd walk a mile in someone else's shoes.</p>
<p>Haley Pavone is the Founder & CEO of <a href='https://pashionfootwear.com/?gclid=Cj0KCQiAnKeCBhDPARIsAFDTLTLllDqtXuBZQ0_Xtme5oLsmOAep0PTT0WOxr02PCClC8v2mVdLpSj0aAsFtEALw_wcB'>Pashion Footwear</a>. She is a Cal Poly graduate in Business and Entrepreneurship, and an alumna of the University's Center for Innovation and Entrepreneurship. She has been profiled by Forbes and Businesswire, and recently appeared on the critically acclaimed and multi-Emmy® Award-winning entrepreneurial-themed ABC reality show “Shark Tank.” She’s passionate about empathic and humane design, and building collectively and collaboratively, with insight, inclusion, and compassion.</p>
<p> </p>
<p>Art by Desi Aleman<br>
Produced by Matt Perry</p>
]]></content:encoded>
                                    
        <enclosure length="81372629" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/xux2dy/Haley_Pavone_Podcast_mixdown8spzy.mp3"/>
        <itunes:summary>When Haley Pavone proposed a convertible high-heel-to-flat shoe, industry experts told her it was impossible. When she successfully engineered a shoe that could convert from a flat to a heel, venture capitalists often couldn't see the value--or even the problem. Fashion footwear for women is an industry developed by and run by men. In this episode, Haley talks about equity and inclusion in fashion, and explains how she engineered the first convertible flat-to-heel shoe. We talk about empathic and humane design, feminism and entrepreneurship, and why ethical technology requires us to think about how we'd walk a mile in someone else's shoes.

Haley Pavone is the Founder &amp; CEO of Pashion Footwear. She is a Cal Poly graduate in Business and Entrepreneurship, and an alumna of the University's Center for Innovation and Entrepreneurship. She has been profiled by Forbes and Businesswire, and recently appeared on the critically acclaimed and multi-Emmy® Award-winning entrepreneurial-themed ABC reality show “Shark Tank.” She’s passionate about empathic and humane design, and building collectively and collaboratively, with insight, inclusion, and compassion.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3390</itunes:duration>
        <itunes:season>4</itunes:season>
        <itunes:episode>36</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>When Haley Pavone proposed a convertible high-heel-to-flat shoe, industry experts told her it was impossible. When she successfully engineered a shoe that could convert from a flat to a heel, venture capitalists often couldn't see the value--or even the problem. Fashion footwear for women is an industry developed by and run by men. In this episode, Haley talks about equity and inclusion in fashion and explains how she engineered the first convertible flat-to-heel shoe. We talk about empathic and humane design, feminism and entrepreneurship, and why ethical technology requires us to think about how we'd walk a mile in someone else's shoes. Haley Pavone is the Founder &amp; CEO of Pashion Footwear. She is a Cal Poly graduate in Business and Entrepreneurship, and an alumna of the University's Center for Innovation and Entrepreneurship. She has been profiled by Forbes and Businesswire, and recently appeared on the critically acclaimed and multi-Emmy® Award-winning entrepreneurial-themed ABC reality show “Shark Tank.” She’s passionate about empathic and humane design, and building collectively and collaboratively, with insight, inclusion, and compassion. Art by Desi Aleman Produced by Matt Perry</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>World Building: John Maeda designs the future of art, tech, and architecture</title>
        <itunes:title>World Building: John Maeda designs the future of art, tech, and architecture</itunes:title>
        <link>https://dmdonig.podbean.com/e/world-buildingjohn-maeda-designs-the-future-of-art-tech-and-architecture/</link>
                    <comments>https://dmdonig.podbean.com/e/world-buildingjohn-maeda-designs-the-future-of-art-tech-and-architecture/#comments</comments>        <pubDate>Fri, 05 Mar 2021 01:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/d61a9a4e-9e99-31d3-8666-e933aa303169</guid>
                                    <description><![CDATA[<p>In this episode, I give my mic over to Ana Marsh and Matt Perry, two producers on the show, for an interview with John Maeda.</p>
<p>Dr. John Maeda is an American technologist and product experience leader who is known around the world for building bridges between business, engineering, and design—and his dedication to working inclusively. He is the SVP Chief Customer Experience Officer at Everbridge, where he works on the future of Critical Event Management technologies for saving lives and keeping businesses and society running.</p>
<p>He is an MIT-trained computer scientist who blends that training with an MBA. He is the author of five books, including the new How To Speak Machine and the bestselling Laws of Simplicity. Among his MANY leadership positions, he serves on the Board of Directors at Sonos and the Smithsonian Design Museum, he is the former President/CEO of the Rhode Island School of Design (RISD), and he is a Partner at the venture capital firm Kleiner Perkins in Silicon Valley.</p>
<p>During his early career, Dr. Maeda was an MIT research professor in computational design whose work is represented in the permanent collection of the Museum of Modern Art. He is also a recipient of the White House’s National Design Award. He has spoken all over the world, from Davos to Beijing to São Paulo to New York, and his TED talks have received millions of views.</p>
<p>To quote WIRED Magazine, “Maeda is to design what Warren Buffett is to finance.”</p>
<p>Today’s hosts, Ana Marsh and Matt Perry, are producers on the Technically Human podcast.</p>
<p>Ana Marsh is a fourth-year computer science student at Cal Poly. She is graduating in the Spring of 2021 and plans to start full-time at Microsoft in the Fall. She has a deep interest in ethical technology, cultivated through her coursework in computer science and the University’s new technically human course, part of the Cal Poly ethical technology initiative. Matt Perry is a fifth-year architecture student at Cal Poly from Las Vegas, NV. Now in the final year of his degree, he is doing research on ephemeral architecture and designing for the human experience, while exploring the future of architecture. He hopes to spend his time designing architecture with the human experience at the forefront of design.</p>
<p>Ana and Matt talk about what it means to blend tech and art, how we can think about the future of humane design, and how we can make tech great again.</p>
<p>Art by Desi Aleman<br>
Produced by Matt Perry</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I give my mic over to Ana Marsh and Matt Perry, two producers on the show, for an interview with John Maeda.</p>
<p>Dr. John Maeda is an American technologist and product experience leader who is known around the world for building bridges between business, engineering, and design—and his dedication to working inclusively. He is the SVP Chief Customer Experience Officer at Everbridge, where he works on the future of Critical Event Management technologies for saving lives and keeping businesses and society running.</p>
<p>He is an MIT-trained computer scientist who blends that training with an MBA. He is the author of five books, including the new <em>How To Speak Machine</em> and the bestselling <em>Laws of Simplicity</em>. Among his MANY leadership positions, he serves on the Board of Directors at Sonos and the Smithsonian Design Museum, he is the former President/CEO of the Rhode Island School of Design (RISD), and he is a Partner at the venture capital firm Kleiner Perkins in Silicon Valley.</p>
<p>During his early career, Dr. Maeda was an MIT research professor in computational design whose work is represented in the permanent collection of the Museum of Modern Art. He is also a recipient of the White House’s National Design Award. He has spoken all over the world, from Davos to Beijing to São Paulo to New York, and his TED talks have received millions of views.</p>
<p>To quote WIRED Magazine, “Maeda is to design what Warren Buffett is to finance.”</p>
<p>Today’s hosts, Ana Marsh and Matt Perry, are producers on the Technically Human podcast.</p>
<p>Ana Marsh is a fourth-year computer science student at Cal Poly. She is graduating in the Spring of 2021 and plans to start full-time at Microsoft in the Fall. She has a deep interest in ethical technology, cultivated through her coursework in computer science and the University’s new technically human course, part of the Cal Poly ethical technology initiative. Matt Perry is a fifth-year architecture student at Cal Poly from Las Vegas, NV. Now in the final year of his degree, he is doing research on ephemeral architecture and designing for the human experience, while exploring the future of architecture. He hopes to spend his time designing architecture with the human experience at the forefront of design.</p>
<p>Ana and Matt talk about what it means to blend tech and art, how we can think about the future of humane design, and how we can make tech great again.<br>
<br>
Art by Desi Aleman<br>
Produced by Matt Perry</p>
]]></content:encoded>
                                    
        <enclosure length="70569709" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/2cy9er/John_Maeda_Podcast_mixdownbi78c.mp3"/>
        <itunes:summary>In this episode, I give my mic over to Ana Marsh and Matt Perry, two producers on the show, for an interview with John Maeda.

Dr. John Maeda is an American technologist and product experience leader who is known around the world for building bridges between business, engineering, and design—and his dedication to working inclusively. He serves on the board of Directors at Sonos and the Smithsonian Design Museum, he is the former President/CEO of Rhode Island School of Design (RISD), and he is a Partner at Kleiner Perkins venture capital in Silicon Valley.

To quote WIRED Magazine, “Maeda is to design what Warren Buffett is to finance.”

Ana and Matt talk about what it means to blend tech and art, how we can think about the future of humane design, and how we can make tech great again.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2940</itunes:duration>
        <itunes:season>4</itunes:season>
        <itunes:episode>35</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I give my mic over to Ana Marsh and Matt Perry, two producers on the show, for an interview with John Maeda. Dr. John Maeda is an American technologist and product experience leader who is known around the world for building bridges between business, engineering, and design—and his dedication to working inclusively. He is the SVP Chief Customer Experience Officer at Everbridge, where he works on the future of Critical Event Management technologies for saving lives and keeping businesses and society running. He is an MIT-trained computer scientist, who blends his training as a computer scientist with an MBA. He is the author of five books including the new How To Speak Machine and the bestselling Laws of Simplicity. Among his MANY leadership positions, he serves on the board of Directors at Sonos and the Smithsonian Design Museum, he is the former President/CEO of Rhode Island School of Design (RISD), and he is a Partner at Kleiner Perkins venture capital in Silicon Valley. During his early career, Dr. Maeda was an MIT research professor in computational design,  represented in the permanent collection of the Museum of Modern Art. He is also a recipient of the White House’s National Design Award. He has appeared as a speaker all over the world, from Davos to Beijing to São Paulo to New York, and his TED talks have received millions of views.  To quote WIRED Magazine, “Maeda is to design what Warren Buffett is to finance.” Today’s hosts, Ana Marsh and Matt Perry, are producers on the Technically Human podcast. Ana Marsh is a fourth-year computer science student at Cal Poly. She is graduating in the Spring of 2021 and plans to start full-time at Microsoft in the Fall. 
She has a deep interest in ethical technology, cultivated through her coursework in computer science and the University’s new technically human course, part of the Cal Poly ethical technology initiative. Matt Perry is a fifth-year architecture student at Cal Poly from Las Vegas, NV. Now in the final year of his degree, he is doing research on ephemeral architecture and designing for the human experience, while exploring the future of architecture. He hopes to spend his time designing architecture with the human experience at the forefront of design. Ana and Matt talk about what it means to blend tech and art, how we can think about the future of humane design, and how we can make tech great again. Art by Desi Aleman Produced by Matt Perry</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Dr. Mark Z. Jacobson Revolutionizes Climate Science: How tech can save the world from climate change and what YOU can do to help</title>
        <itunes:title>Dr. Mark Z. Jacobson Revolutionizes Climate Science: How tech can save the world from climate change and what YOU can do to help</itunes:title>
        <link>https://dmdonig.podbean.com/e/dr-mark-z-jacobson-revolutionizes-climate-science-how-tech-can-save-the-world-from-climate-change-and-what-you-can-do-to-help/</link>
                    <comments>https://dmdonig.podbean.com/e/dr-mark-z-jacobson-revolutionizes-climate-science-how-tech-can-save-the-world-from-climate-change-and-what-you-can-do-to-help/#comments</comments>        <pubDate>Fri, 26 Feb 2021 01:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/8edde9ff-9d70-3325-9ba2-8ce079ed58f4</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I sit down with Dr. Mark Z. Jacobson, one of the world's leading experts in climate science. </p>
<p>We talk about the technologies that can prevent environmental destruction, and how tech innovation can drive a clean energy vision for the future. Mark explains why we already have the science and tech to build this future, and how--with the political and social will--we can create a world powered by renewable energy--not in a distant future, but NOW. </p>
<p>Mark Z. Jacobson is Director of the Atmosphere/Energy Program and Professor of Civil and Environmental Engineering at Stanford University. He seeks to understand air pollution and global warming problems, and to develop large-scale clean, renewable energy solutions to these major and urgent problems. His most recent book, published by Cambridge University Press, is titled <a href='https://web.stanford.edu/group/efmh/jacobson/WWSBook/WWSBook.html'>100 Percent Clean, Renewable Energy and Storage for Everything</a>. The book is the culmination of Dr. Jacobson’s life's work on transitioning the world to 100% clean, renewable energy, and it examines the technologies, economics, and social/political aspects of that transition.</p>
<p>On February 9, as part of the Joint Declaration of the Global 100% Renewable Energy Strategy Group, Dr. Jacobson joined other leading climate scientists and experts to propose a 10-point declaration to transform the world’s energy supply to 100% renewable energy. The declaration will be published specifically in support of President Biden’s climate change agenda for the United States.</p>
<p>To support the transformation to renewable energy, sign your name to the declaration at <a href='http://www.global100restrategygroup.org'>www.global100restrategygroup.org</a>.</p>
<p>This episode was produced by Matt Perry.<br>
Podcast art by Desi Aleman</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I sit down with Dr. Mark Z. Jacobson, one of the world's leading experts in climate science. </p>
<p>We talk about the technologies that can prevent environmental destruction, and how tech innovation can drive a clean energy vision for the future. Mark explains why we already have the science and tech to build this future, and how--with the political and social will--we can create a world powered by renewable energy--not in a distant future, but NOW. </p>
<p>Mark Z. Jacobson is Director of the Atmosphere/Energy Program and Professor of Civil and Environmental Engineering at Stanford University. He seeks to understand air pollution and global warming problems, and to develop large-scale clean, renewable energy solutions to these major and urgent problems. His most recent book, published by Cambridge University Press, is titled <a href='https://web.stanford.edu/group/efmh/jacobson/WWSBook/WWSBook.html'><em>100 Percent Clean, Renewable Energy and Storage for Everything</em></a>. The book is the culmination of Dr. Jacobson’s life's work on transitioning the world to 100% clean, renewable energy, and it examines the technologies, economics, and social/political aspects of that transition.</p>
<p>On February 9, as part of the Joint Declaration of the Global 100% Renewable Energy Strategy Group, Dr. Jacobson joined other leading climate scientists and experts to propose a 10-point declaration to transform the world’s energy supply to 100% renewable energy. The declaration will be published specifically in support of President Biden’s climate change agenda for the United States.</p>
<p>To support the transformation to renewable energy, sign your name to the declaration at <a href='http://www.global100restrategygroup.org'>www.global100restrategygroup.org</a>.</p>
<p>This episode was produced by Matt Perry.<br>
Podcast art by Desi Aleman</p>
]]></content:encoded>
                                    
        <enclosure length="91461001" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/2ewzki/Mark_Jacobson_Podcast_mixdownatwek.mp3"/>
        <itunes:summary><![CDATA[In this episode of "Technically Human," I sit down with Dr. Mark Z. Jacobson, one of the world's leading experts in climate science. 
We talk about the technologies that can prevent environmental destruction, and how tech innovation can drive a clean energy vision for the future. Mark explains why we already have the science and tech to build this future, and how--with the political and social will--we can create a world powered by renewable energy--not in a distant future, but NOW. 
Mark Z. Jacobson is Director of the Atmosphere/Energy Program and Professor of Civil and Environmental Engineering at Stanford University. He seeks to understand air pollution and global warming problems, and to develop large-scale clean, renewable energy solutions to these major and urgent problems. His most recent book, published by Cambridge University Press, is titled 100 Percent Clean, Renewable Energy and Storage for Everything. The book is the culmination of Dr. Jacobson’s life's work on transitioning the world to 100% clean, renewable energy, and it examines the technologies, economics, and social/political aspects of that transition.
On February 9, as part of the Joint Declaration of the Global 100% Renewable Energy Strategy Group, Dr. Jacobson joined other leading climate scientists and experts to propose a 10-point declaration to transform the world’s energy supply to 100% renewable energy. The statement is being published specifically in support of President Biden’s climate change agenda for the United States.
To support the transformation to renewable energy, sign your name to the declaration at www.global100restrategygroup.org.
This episode was produced by Matt Perry. Podcast art by Desi Aleman.]]></itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3810</itunes:duration>
        <itunes:season>4</itunes:season>
        <itunes:episode>34</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I sit down with Dr. Mark Z. Jacobson, one of the world's leading experts in climate science. We talk about the technologies that can prevent environmental destruction, and how tech innovation can drive a clean energy vision for the future. Mark explains why we already have the science and tech to build this future, and how--with the political and social will--we can create a world powered by renewable energy--not in a distant future, but NOW. Mark Z. Jacobson is Director of the Atmosphere/Energy Program and Professor of Civil and Environmental Engineering at Stanford University. He seeks to understand air pollution and global warming problems, and to develop large-scale clean, renewable energy solutions to these major and urgent problems. His most recent book, published by Cambridge University Press, is titled 100 Percent Clean, Renewable Energy and Storage for Everything. The book is the culmination of Dr. Jacobson’s life's work on transitioning the world to 100% clean, renewable energy, and it examines the technologies, economics, and social/political aspects of that transition. On February 9, as part of the Joint Declaration of the Global 100% Renewable Energy Strategy Group, Dr. Jacobson joined other leading climate scientists and experts to propose a 10-point declaration to transform the world’s energy supply to 100% renewable energy. The statement is being published specifically in support of President Biden’s climate change agenda for the United States. To support the transformation to renewable energy, sign your name to the declaration at www.global100restrategygroup.org. This episode was produced by Matt Perry. Podcast art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Virtually Human: Living in Jaron Lanier’s virtual reality</title>
        <itunes:title>Virtually Human: Living in Jaron Lanier’s virtual reality</itunes:title>
        <link>https://dmdonig.podbean.com/e/virtually-human-living-in-jaron-lanier-s-virtual-reality/</link>
                    <comments>https://dmdonig.podbean.com/e/virtually-human-living-in-jaron-lanier-s-virtual-reality/#comments</comments>        <pubDate>Fri, 19 Feb 2021 01:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/c04dd127-772e-394f-bf23-31ba6e8287a8</guid>
                                    <description><![CDATA[<p>In this episode of the “Technically Human” podcast, I sit down with Jaron Lanier, the creator of virtual reality. Jaron and I discuss the meaning, and the future, of reality in an increasingly virtual world, and we talk about what virtual reality was in its early stages. Jaron outlines his concerns and critique of technological culture, and he explains why the kind of behavior modification and manipulation engineered by social media platforms amounts to what he calls “Behaviors of Users Modified, and Made into an Empire for Rent,” or BUMMER.</p>
<p>Jaron Lanier is the founder of the field of virtual reality. Since 2009, he has worked at Microsoft Research as an Interdisciplinary Scientist in a role called “The Octopus” (which stands for Office of the Chief Technology Officer Prime Unifying Scientist).</p>
<p>In 2010, Lanier was named to the Time 100 list of most influential people. In 2018, Lanier was named one of the 25 most influential people in the previous 25 years of tech history by Wired Magazine, and one of the 100 top public intellectuals by Foreign Policy Magazine. His books include the bestsellers “You Are Not a Gadget: A Manifesto,” “Who Owns the Future?,” and “Ten Arguments for Deleting Your Social Media Accounts Right Now.” His writing appears in The New York Times, Discover, The Wall Street Journal, Forbes, Harper's Magazine, The Atlantic, Wired Magazine (where he was a founding contributing editor), and Scientific American. He has appeared on TV shows such as The View, PBS NewsHour, The Colbert Report, Nightline, and Charlie Rose, and has been profiled on the front pages of The Wall Street Journal and The New York Times multiple times. He regularly serves as a creative consultant for movies, including Minority Report and The Circle.</p>
<p>He has received honorary doctorates from the New Jersey Institute of Technology and Franklin and Marshall College, was the recipient of CMU's Watson award in 2001, was a finalist for the first Edge of Computation Award in 2005, and received a Lifetime Career Award from the IEEE in 2009 for contributions to Virtual Reality.</p>
<p>Jaron Lanier is also a musician and artist. He has been active in the world of new "classical" music since the late '70s and writes chamber and orchestral works. He is a pianist and a specialist in unusual and historical musical instruments; he maintains one of the largest and most varied collections of actively played instruments in the world.</p>
<p>Produced by Matt Perry<br>
Art by Desi Aleman</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of the “Technically Human” podcast, I sit down with Jaron Lanier, the creator of virtual reality. Jaron and I discuss the meaning, and the future, of reality in an increasingly virtual world, and we talk about what virtual reality was in its early stages. Jaron outlines his concerns and critique of technological culture, and he explains why the kind of behavior modification and manipulation engineered by social media platforms amounts to what he calls “Behaviors of Users Modified, and Made into an Empire for Rent,” or BUMMER.</p>
<p>Jaron Lanier is the founder of the field of virtual reality. Since 2009, he has worked at Microsoft Research as an Interdisciplinary Scientist in a role called “The Octopus” (which stands for Office of the Chief Technology Officer Prime Unifying Scientist).</p>
<p>In 2010, Lanier was named to the <em>Time 100</em> list of most influential people. In 2018, Lanier was named one of the 25 most influential people in the previous 25 years of tech history by Wired Magazine, and one of the 100 top public intellectuals by Foreign Policy Magazine. His books include the bestsellers “You Are Not a Gadget: A Manifesto,” “Who Owns the Future?,” and “Ten Arguments for Deleting Your Social Media Accounts Right Now.” His writing appears in The New York Times, Discover, The Wall Street Journal, Forbes, Harper's Magazine, The Atlantic, Wired Magazine (where he was a founding contributing editor), and Scientific American. He has appeared on TV shows such as The View, PBS NewsHour, The Colbert Report, Nightline, and Charlie Rose, and has been profiled on the front pages of The Wall Street Journal and The New York Times multiple times. He regularly serves as a creative consultant for movies, including Minority Report and The Circle.</p>
<p>He has received honorary doctorates from the New Jersey Institute of Technology and Franklin and Marshall College, was the recipient of CMU's Watson award in 2001, was a finalist for the first Edge of Computation Award in 2005, and received a Lifetime Career Award from the IEEE in 2009 for contributions to Virtual Reality.<br>
<br>
Jaron Lanier is also a musician and artist. He has been active in the world of new "classical" music since the late '70s and writes chamber and orchestral works. He is a pianist and a specialist in unusual and historical musical instruments; he maintains one of the largest and most varied collections of actively played instruments in the world.</p>
<p>Produced by Matt Perry<br>
Art by Desi Aleman</p>
]]></content:encoded>
                                    
        <enclosure length="79211657" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/74ktan/Jaron_Lanier_Podcast_mixdown80d6i.mp3"/>
        <itunes:summary>In this episode of the “Technically Human” podcast, I sit down with Jaron Lanier, the creator of virtual reality. Jaron and I discuss the meaning, and the future, of reality in an increasingly virtual world, and we talk about what virtual reality was in its early stages. Jaron outlines his concerns and critique of technological culture, and he explains why the kind of behavior modification and manipulation engineered by social media platforms amounts to what he calls “Behaviors of Users Modified, and Made into an Empire for Rent,” or BUMMER.

Jaron Lanier is the founder of the field of virtual reality. Since 2009, he has worked at Microsoft Research as an Interdisciplinary Scientist in a role called “The Octopus” (which stands for Office of the Chief Technology Officer Prime Unifying Scientist).</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3300</itunes:duration>
        <itunes:season>4</itunes:season>
        <itunes:episode>33</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of the “Technically Human” podcast, I sit down with Jaron Lanier, the creator of virtual reality. Jaron and I discuss the meaning, and the future, of reality in an increasingly virtual world, and we talk about what virtual reality was in its early stages. Jaron outlines his concerns and critique of technological culture, and he explains why the kind of behavior modification and manipulation engineered by social media platforms amounts to what he calls “Behaviors of Users Modified, and Made into an Empire for Rent,” or BUMMER. Jaron Lanier is the founder of the field of virtual reality. Since 2009, he has worked at Microsoft Research as an Interdisciplinary Scientist in a role called “The Octopus” (which stands for Office of the Chief Technology Officer Prime Unifying Scientist). In 2010, Lanier was named to the Time 100 list of most influential people. In 2018, Lanier was named one of the 25 most influential people in the previous 25 years of tech history by Wired Magazine, and one of the 100 top public intellectuals by Foreign Policy Magazine. His books include the bestsellers “You Are Not a Gadget: A Manifesto,” “Who Owns the Future?,” and “Ten Arguments for Deleting Your Social Media Accounts Right Now.” His writing appears in The New York Times, Discover, The Wall Street Journal, Forbes, Harper's Magazine, The Atlantic, Wired Magazine (where he was a founding contributing editor), and Scientific American. He has appeared on TV shows such as The View, PBS NewsHour, The Colbert Report, Nightline, and Charlie Rose, and has been profiled on the front pages of The Wall Street Journal and The New York Times multiple times. He regularly serves as a creative consultant for movies, including Minority Report and The Circle.
He has received honorary doctorates from the New Jersey Institute of Technology and Franklin and Marshall College, was the recipient of CMU's Watson award in 2001, was a finalist for the first Edge of Computation Award in 2005, and received a Lifetime Career Award from the IEEE in 2009 for contributions to Virtual Reality. Jaron Lanier is also a musician and artist. He has been active in the world of new "classical" music since the late '70s and writes chamber and orchestral works. He is a pianist and a specialist in unusual and historical musical instruments; he maintains one of the largest and most varied collections of actively played instruments in the world. Produced by Matt Perry. Art by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The "Changing Minds" Series: Episode 3 with Minds CEO Bill Ottman</title>
        <itunes:title>The "Changing Minds" Series: Episode 3 with Minds CEO Bill Ottman</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-changing-minds-series-episode-3-with-minds-ceo-bill-ottman/</link>
                    <comments>https://dmdonig.podbean.com/e/the-changing-minds-series-episode-3-with-minds-ceo-bill-ottman/#comments</comments>        <pubDate>Fri, 12 Feb 2021 02:59:54 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/92899ee5-0464-3728-9101-79d743276351</guid>
                                    <description><![CDATA[<p>We are back with our third and final episode in the “Changing Minds” series!</p>
<p>Across the 3 episodes, we focus on civil discourse, and we ask what it means to engage in dialogue with people with whom we disagree, sometimes deeply.  In the first two episodes, I spoke with Daryl Davis about racism, and how he envisions the possibility of changing the minds of those who believe, and participate in, white supremacist and separatist movements.</p>
<p>In our third and final episode of the “Changing Minds” series, I sit down with Bill Ottman. Bill Ottman is the founder of Minds.com, a new community-owned, “open source” social media network that prizes privacy, transparency, and open exchange. We explore the advantages, and the challenges, of unfettered free speech, we talk about the relationship between tech and civil discourse, and Bill talks about his vision of a social media ecosystem that can help pave the way toward creating a healthier and more vibrant national conversation—not in spite of our differences and distances, but because of them.</p>
<p>In thinking about the ethics of technology, and in particular, its relationship to our moment of political, cultural, and ideological polarization, the ethics of technology extend far beyond how we use tech. Social media offers the potential for new connections, or new levels of disconnect and partisanship. The ethics and intentions we bring to social media matter, and our approach starts far before we ever sit down at our computer to respond to a Facebook post, or broadcast our views on Twitter. They start with how we imagine, and practice, civil discourse, how we think about meeting other folks where they are, considering the journey that led them to believe as they do.</p>
<p>Here's part 3, with Bill Ottman.</p>
<p>Produced by Matt Perry<br>
Artwork by Desi Aleman</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>We are back with our third and final episode in the “Changing Minds” series!</p>
<p>Across the 3 episodes, we focus on civil discourse, and we ask what it means to engage in dialogue with people with whom we disagree, sometimes deeply.  In the first two episodes, I spoke with Daryl Davis about racism, and how he envisions the possibility of changing the minds of those who believe, and participate in, white supremacist and separatist movements.</p>
<p>In our third and final episode of the “Changing Minds” series, I sit down with Bill Ottman. Bill Ottman is the founder of Minds.com, a new community-owned, “open source” social media network that prizes privacy, transparency, and open exchange. We explore the advantages, and the challenges, of unfettered free speech, we talk about the relationship between tech and civil discourse, and Bill talks about his vision of a social media ecosystem that can help pave the way toward creating a healthier and more vibrant national conversation—not in spite of our differences and distances, but because of them.</p>
<p>In thinking about the ethics of technology, and in particular, its relationship to our moment of political, cultural, and ideological polarization, the ethics of technology extend far beyond how we use tech. Social media offers the potential for new connections, or new levels of disconnect and partisanship. The ethics and intentions we bring to social media matter, and our approach starts far before we ever sit down at our computer to respond to a Facebook post, or broadcast our views on Twitter. They start with how we imagine, and practice, civil discourse, how we think about meeting other folks where they are, considering the journey that led them to believe as they do.</p>
<p>Here's part 3, with Bill Ottman.</p>
<p>Produced by Matt Perry<br>
Artwork by Desi Aleman</p>
]]></content:encoded>
                                    
        <enclosure length="138251406" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/ixvke4/Bill_Ottman_Podcast_mixdown6zo0n.mp3"/>
        <itunes:summary>We are back with our third and final episode in the “Changing Minds” series!

Across the 3 episodes, we focus on civil discourse, and we ask what it means to engage in dialogue with people with whom we disagree, sometimes deeply.  In the first two episodes, I spoke with Daryl Davis about racism, and how he envisions the possibility of changing the minds of those who believe, and participate in, white supremacist and separatist movements.

Bill Ottman is the founder of Minds.com, a new community-owned, “open source” social media network that prizes privacy, transparency, and open exchange. We explore the advantages, and the challenges, of unfettered free speech, we talk about the relationship between tech and civil discourse, and Bill talks about his vision of a social media ecosystem that can help pave the way toward creating a healthier and more vibrant national conversation—not in spite of our differences and distances, but because of them.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>5760</itunes:duration>
        <itunes:season>4</itunes:season>
        <itunes:episode>32</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>We are back with our third and final episode in the “Changing Minds” series! Across the 3 episodes, we focus on civil discourse, and we ask what it means to engage in dialogue with people with whom we disagree, sometimes deeply. In the first two episodes, I spoke with Daryl Davis about racism, and how he envisions the possibility of changing the minds of those who believe, and participate in, white supremacist and separatist movements. In our third and final episode of the “Changing Minds” series, I sit down with Bill Ottman. Bill Ottman is the founder of Minds.com, a new community-owned, “open source” social media network that prizes privacy, transparency, and open exchange. We explore the advantages, and the challenges, of unfettered free speech, we talk about the relationship between tech and civil discourse, and Bill talks about his vision of a social media ecosystem that can help pave the way toward creating a healthier and more vibrant national conversation—not in spite of our differences and distances, but because of them. In thinking about the ethics of technology, and in particular, its relationship to our moment of political, cultural, and ideological polarization, the ethics of technology extend far beyond how we use tech. Social media offers the potential for new connections, or new levels of disconnect and partisanship. The ethics and intentions we bring to social media matter, and our approach starts far before we ever sit down at our computer to respond to a Facebook post, or broadcast our views on Twitter. They start with how we imagine, and practice, civil discourse, how we think about meeting other folks where they are, considering the journey that led them to believe as they do. 
Here's part 3, with Bill Ottman. Produced by Matt Perry. Artwork by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The "Changing Minds" Series: Episode 2 with Daryl Davis</title>
        <itunes:title>The "Changing Minds" Series: Episode 2 with Daryl Davis</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-changing-minds-series-episode-2-with-daryl-davis/</link>
                    <comments>https://dmdonig.podbean.com/e/the-changing-minds-series-episode-2-with-daryl-davis/#comments</comments>        <pubDate>Fri, 05 Feb 2021 01:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/5c7a1636-bbd2-3ab0-af74-b43f32ac6514</guid>
                                    <description><![CDATA[<p>Welcome back to part 2 of our 3-episode series on Changing Minds. In this series, we’re doing something a little bit different. The three episodes of the series focus on the theme of changing minds: what it means to engage in dialogue with people with whom we disagree, sometimes deeply, and the importance of civil discourse, particularly in this deeply polarized national moment. In the first episode, I spoke with Daryl Davis about racism, and how he envisions the possibility of changing the minds of those who believe, and participate in, white supremacist and separatist movements. He talked to me about his work and his views on navigating this particularly fraught moment.</p>
<p>Daryl Davis is a Black singer and author who has helped over 200 members of the KKK leave the organization, simply by befriending them and letting them know who he is. He favors reaching out over censorship and has created a deradicalization movement at <a href='http://change.minds.com'>change.minds.com</a> to help people connect in a civil way online.</p>
<p>In thinking about the ethics of technology, and in particular, its relationship to our moment of political, cultural, and ideological polarization, the ethics of technology extend far beyond how we use tech. Social media offers the potential for new connections, or new levels of disconnect and partisanship. The ethics and intentions we bring to social media matter, and our approach starts far before we ever sit down at our computer to respond to a Facebook post, or broadcast our views on Twitter. They start with how we imagine, and practice, civil discourse, how we think about meeting other folks where they are, considering the journey that led them to believe as they do.</p>
<p>In my conversations with Daryl, we explore what those ethics can look like, and how they can come to transform our approach to engaging in dialogue with distant others. Distant others can mean geographical distance. It can also mean political distance, ideological distance, or cultural difference. Daryl’s work, and his activism, shows an important alternative to the discord that dominates our current conversation and points us to the possibility of ethical engagement across that distance.</p>
<p>Next week, I sit down with Bill Ottman, the CEO of Minds.com, a social media platform that provides an alternative to Facebook, and that seeks to prioritize privacy, transparency, and open exchange. Building on my conversations with Daryl in the first two episodes, Bill and I explore the relationship between tech and civil discourse, and ways that we all can be part of creating a healthier and more vibrant national conversation—not in spite of our differences and distances, but because of them. </p>
<p>Here's part 2 of my conversation with Daryl.</p>
<p>Produced by Matt Perry</p>
<p>Artwork by Desi Aleman</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Welcome back to part 2 of our 3-episode series on Changing Minds. In this series, we’re doing something a little bit different. The three episodes of the series focus on the theme of changing minds: what it means to engage in dialogue with people with whom we disagree, sometimes deeply, and the importance of civil discourse, particularly in this deeply polarized national moment. In the first episode, I spoke with Daryl Davis about racism, and how he envisions the possibility of changing the minds of those who believe, and participate in, white supremacist and separatist movements. He talked to me about his work and his views on navigating this particularly fraught moment.</p>
<p>Daryl Davis is a Black singer and author who has helped over 200 members of the KKK leave the organization, simply by befriending them and letting them know who he is. He favors reaching out over censorship and has created a deradicalization movement at <a href='http://change.minds.com'>change.minds.com</a> to help people connect in a civil way online.</p>
<p>In thinking about the ethics of technology, and in particular, its relationship to our moment of political, cultural, and ideological polarization, the ethics of technology extend far beyond how we use tech. Social media offers the potential for new connections, or new levels of disconnect and partisanship. The ethics and intentions we bring to social media matter, and our approach starts far before we ever sit down at our computer to respond to a Facebook post, or broadcast our views on Twitter. They start with how we imagine, and practice, civil discourse, how we think about meeting other folks where they are, considering the journey that led them to believe as they do.</p>
<p>In my conversations with Daryl, we explore what those ethics can look like, and how they can come to transform our approach to engaging in dialogue with distant others. Distant others can mean geographical distance. It can also mean political distance, ideological distance, or cultural difference. Daryl’s work, and his activism, shows an important alternative to the discord that dominates our current conversation and points us to the possibility of ethical engagement across that distance.</p>
<p>Next week, I sit down with Bill Ottman, the CEO of Minds.com, a social media platform that provides an alternative to Facebook, and that seeks to prioritize privacy, transparency, and open exchange. Building on my conversations with Daryl in the first two episodes, Bill and I explore the relationship between tech and civil discourse, and ways that we all can be part of creating a healthier and more vibrant national conversation—not in spite of our differences and distances, but because of them. </p>
<p>Here's part 2 of my conversation with Daryl.</p>
<p>Produced by Matt Perry</p>
<p>Artwork by Desi Aleman</p>
]]></content:encoded>
                                    
        <enclosure length="103691667" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/n2djtd/Daryl_Davis_Podcast_Pt_2_mixdownbkn7h.mp3"/>
        <itunes:summary>It's part 2 of our 3-episode series on Changing Minds. The three episodes of the series focus on the theme of changing minds: what it means to engage in dialogue with people with whom we disagree, sometimes deeply, and the importance of civil discourse, particularly in this deeply polarized national moment. In my conversations with Daryl, we explore what those ethics can look like, and how they can come to transform our approach to engaging in dialogue with distant others.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4320</itunes:duration>
        <itunes:season>4</itunes:season>
        <itunes:episode>31</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Welcome back to part 2 of our 3-episode series on Changing Minds. In this series, we’re doing something a little bit different. The three episodes of the series focus on the theme of changing minds: what it means to engage in dialogue with people with whom we disagree, sometimes deeply, and the importance of civil discourse, particularly in this deeply polarized national moment. In the first episode, I spoke with Daryl Davis about racism, and how he envisions the possibility of changing the minds of those who believe, and participate in, white supremacist and separatist movements. He talked to me about his work and his views on navigating this particularly fraught moment. Daryl Davis is a Black singer and author who has helped over 200 members of the KKK leave the organization, simply by befriending them and letting them know who he is. He favors reaching out over censorship and has created a deradicalization movement at change.minds.com to help people connect in a civil way online. In thinking about the ethics of technology, and in particular, its relationship to our moment of political, cultural, and ideological polarization, the ethics of technology extend far beyond how we use tech. Social media offers the potential for new connections, or new levels of disconnect and partisanship. The ethics and intentions we bring to social media matter, and our approach starts far before we ever sit down at our computer to respond to a Facebook post, or broadcast our views on Twitter. They start with how we imagine, and practice, civil discourse, how we think about meeting other folks where they are, considering the journey that led them to believe as they do. In my conversations with Daryl, we explore what those ethics can look like, and how they can come to transform our approach to engaging in dialogue with distant others. 
Distant others can mean geographical distance. It can also mean political distance, ideological distance, or cultural difference. Daryl’s work, and his activism, shows an important alternative to the discord that dominates our current conversation and points us to the possibility of ethical engagement across that distance. Next week, I sit down with Bill Ottman, the CEO of Minds.com, a social media platform that provides an alternative to Facebook, and that seeks to prioritize privacy, transparency, and open exchange. Building on my conversations with Daryl in the first two episodes, Bill and I explore the relationship between tech and civil discourse, and ways that we all can be part of creating a healthier and more vibrant national conversation—not in spite of our differences and distances, but because of them. Here's part 2 of my conversation with Daryl. Produced by Matt Perry. Artwork by Desi Aleman.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The "Changing Minds" Series: Episode 1 with Daryl Davis</title>
        <itunes:title>The "Changing Minds" Series: Episode 1 with Daryl Davis</itunes:title>
        <link>https://dmdonig.podbean.com/e/changing-minds-episode-1/</link>
                    <comments>https://dmdonig.podbean.com/e/changing-minds-episode-1/#comments</comments>        <pubDate>Fri, 29 Jan 2021 09:52:34 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/fdb5f5cf-5d4a-3fef-ba72-f7871151840a</guid>
                                    <description><![CDATA[<p>For the next few weeks, we’re going to be doing something a little bit different. The next three episodes of the series focus on the theme of changing minds: what it means to engage in dialogue with people with whom we disagree, sometimes deeply, and the importance of civil discourse, particularly in this deeply polarized national moment. In the first two episodes, I speak with Daryl Davis about racism, and how he envisions the possibility of changing the minds of those who believe in, and participate in, white supremacist and separatist movements.</p>
<p>Daryl Davis is a Black singer and author who has persuaded over 200 members of the KKK to leave the organization, simply by befriending them and letting them know who he is. He favors outreach over censorship, and has created a deradicalization movement at <a href='https://www.google.com/url?q=http://change.minds.com&sa=D&source=hangouts&ust=1611693206616000&usg=AFQjCNHUpAziSQuCktLEbz2okaOR-D3e6A'>change.minds.com</a> to help people connect in a civil way online.</p>
<p>Across these two episodes, I talk to Daryl about his work and his views on navigating this particularly fraught moment. The ethics of technology, particularly in relation to our moment of political, cultural, and ideological polarization, extend far beyond how we use tech. Those ethics start far before we ever sit down at our computer to respond to a Facebook post, or broadcast our views on Twitter. They start with how we imagine, and practice, civil discourse. In my conversations with Daryl, we explore what those ethics can look like, and how they can come to transform our approach to engaging in dialogue with distant others. Distant others can mean geographical distance. It can also mean political distance, ideological distance, or cultural difference. Daryl’s work and activism show an important alternative to the discord that dominates our current conversation, and point us to the possibility of ethical engagement across that distance.</p>
<p>In the third of these episodes, I speak to Bill Ottman, the CEO of Minds.com, a social media platform that provides an alternative to Facebook, and that seeks to prioritize privacy, transparency, and open exchange. Between these conversations, and across these three episodes, we explore the relationship between tech and civil discourse, and ways that we all can be part of creating a healthier and more vibrant national conversation—not in spite of our differences and distances, but because of them.</p>
<p> </p>
<p>Produced by Matt Perry</p>
<p>Artwork by Desi Aleman</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>For the next few weeks, we’re going to be doing something a little bit different. The next three episodes of the series focus on the theme of changing minds: what it means to engage in dialogue with people with whom we disagree, sometimes deeply, and the importance of civil discourse, particularly in this deeply polarized national moment. In the first two episodes, I speak with Daryl Davis about racism, and how he envisions the possibility of changing the minds of those who believe in, and participate in, white supremacist and separatist movements.</p>
<p>Daryl Davis is a Black singer and author who has persuaded over 200 members of the KKK to leave the organization, simply by befriending them and letting them know who he is. He favors outreach over censorship, and has created a deradicalization movement at <a href='https://www.google.com/url?q=http://change.minds.com&sa=D&source=hangouts&ust=1611693206616000&usg=AFQjCNHUpAziSQuCktLEbz2okaOR-D3e6A'>change.minds.com</a> to help people connect in a civil way online.</p>
<p>Across these two episodes, I talk to Daryl about his work and his views on navigating this particularly fraught moment. The ethics of technology, particularly in relation to our moment of political, cultural, and ideological polarization, extend far beyond how we use tech. Those ethics start far before we ever sit down at our computer to respond to a Facebook post, or broadcast our views on Twitter. They start with how we imagine, and practice, civil discourse. In my conversations with Daryl, we explore what those ethics can look like, and how they can come to transform our approach to engaging in dialogue with distant others. Distant others can mean geographical distance. It can also mean political distance, ideological distance, or cultural difference. Daryl’s work and activism show an important alternative to the discord that dominates our current conversation, and point us to the possibility of ethical engagement across that distance.</p>
<p>In the third of these episodes, I speak to Bill Ottman, the CEO of Minds.com, a social media platform that provides an alternative to Facebook, and that seeks to prioritize privacy, transparency, and open exchange. Between these conversations, and across these three episodes, we explore the relationship between tech and civil discourse, and ways that we all can be part of creating a healthier and more vibrant national conversation—not in spite of our differences and distances, but because of them.</p>
<p> </p>
<p>Produced by Matt Perry</p>
<p>Artwork by Desi Aleman</p>
]]></content:encoded>
                                    
        <enclosure length="100114705" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/24ibzx/Daryl_Davis_Podcast_PT1_mixdownfixed39z3jy.mp3"/>
        <itunes:summary>For the next few weeks, we’re going to be doing something a little bit different. The next three episodes of the series focus on the theme of changing minds: what it means to engage in dialogue with people with whom we disagree, sometimes deeply, and the importance of civil discourse, particularly in this deeply polarized national moment. In the first two episodes, I speak with Daryl Davis about racism, and how he envisions the possibility of changing the minds of those who believe in, and participate in, white supremacist and separatist movements.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4171</itunes:duration>
        <itunes:season>4</itunes:season>
        <itunes:episode>30</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>For the next few weeks, we’re going to be doing something a little bit different. The next three episodes of the series focus on the theme of changing minds: what it means to engage in dialogue with people with whom we disagree, sometimes deeply, and the importance of civil discourse, particularly in this deeply polarized national moment. In the first two episodes, I speak with Daryl Davis about racism, and how he envisions the possibility of changing the minds of those who believe in, and participate in, white supremacist and separatist movements. Daryl Davis is a Black singer and author who has persuaded over 200 members of the KKK to leave the organization, simply by befriending them and letting them know who he is. He favors outreach over censorship, and has created a deradicalization movement at change.minds.com to help people connect in a civil way online. Across these two episodes, I talk to Daryl about his work and his views on navigating this particularly fraught moment. The ethics of technology, particularly in relation to our moment of political, cultural, and ideological polarization, extend far beyond how we use tech. Those ethics start far before we ever sit down at our computer to respond to a Facebook post, or broadcast our views on Twitter. They start with how we imagine, and practice, civil discourse. In my conversations with Daryl, we explore what those ethics can look like, and how they can come to transform our approach to engaging in dialogue with distant others. Distant others can mean geographical distance. It can also mean political distance, ideological distance, or cultural difference. Daryl’s work and activism show an important alternative to the discord that dominates our current conversation, and point us to the possibility of ethical engagement across that distance. 
In the third of these episodes, I speak to Bill Ottman, the CEO of Minds.com, a social media platform that provides an alternative to Facebook, and that seeks to prioritize privacy, transparency, and open exchange. Between these conversations, and across these three episodes, we explore the relationship between tech and civil discourse, and ways that we all can be part of creating a healthier and more vibrant national conversation—not in spite of our differences and distances, but because of them. Produced by Matt Perry Artwork by Desi Aleman</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Protecting Our Tech: Dr. Bruce DeBruhl breaks down cybersecurity and what that means for the world, the country, and you</title>
        <itunes:title>Protecting Our Tech: Dr. Bruce DeBruhl breaks down cybersecurity and what that means for the world, the country, and you</itunes:title>
        <link>https://dmdonig.podbean.com/e/protecting-our-tech-bruce-debruhl-talks-about-cyber-security-and-what-that-means-for-the-world-the-country-and-you/</link>
                    <comments>https://dmdonig.podbean.com/e/protecting-our-tech-bruce-debruhl-talks-about-cyber-security-and-what-that-means-for-the-world-the-country-and-you/#comments</comments>        <pubDate>Fri, 22 Jan 2021 01:26:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/4dad2ac6-f569-3ea7-b5de-50dc86f9a072</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I sit down with Dr. Bruce DeBruhl to talk about cybersecurity and the ethics of privacy in our digitally connected world. We discuss the changing concept of privacy as our tech becomes increasingly integrated into the most intimate reaches of our lives, Bruce narrates the history of cybersecurity, and we consider how our colleges are preparing the next generation of tech workers to think about protecting our data and the intimate information we generate each and every day.

Dr. Bruce DeBruhl is an Associate Professor at California Polytechnic State University, where most of his teaching focuses on cybersecurity and privacy education. His educational goal is to develop opportunities for diverse students to get hands-on experience with security and privacy. Dr. DeBruhl’s research interests include wireless security, cyber-physical security, location privacy, and automotive security. In 2020, he was nominated for the Outstanding Faculty Scholarship Award for the California State University system, and he co-leads the Transforming Access to Cybersecurity in California through a Strategic Research Initiative, awarded by Cal Poly. The initiative aims to address the fundamental problems of access to cybersecurity training and to cybersecurity services, both by developing holistic cybersecurity training and education, and by developing ways to provide cybersecurity in underserved communities.</p>
<p> </p>
<p>Produced by Matt Perry
Podcast art designed by Desi Aleman
Music by Bensound</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I sit down with Dr. Bruce DeBruhl to talk about cybersecurity and the ethics of privacy in our digitally connected world. We discuss the changing concept of privacy as our tech becomes increasingly integrated into the most intimate reaches of our lives, Bruce narrates the history of cybersecurity, and we consider how our colleges are preparing the next generation of tech workers to think about protecting our data and the intimate information we generate each and every day.<br>
<br>
Dr. Bruce DeBruhl is an Associate Professor at California Polytechnic State University, where most of his teaching focuses on cybersecurity and privacy education. His educational goal is to develop opportunities for diverse students to get hands-on experience with security and privacy. Dr. DeBruhl’s research interests include wireless security, cyber-physical security, location privacy, and automotive security. In 2020, he was nominated for the Outstanding Faculty Scholarship Award for the California State University system, and he co-leads the Transforming Access to Cybersecurity in California through a Strategic Research Initiative, awarded by Cal Poly. The initiative aims to address the fundamental problems of access to cybersecurity training and to cybersecurity services, both by developing holistic cybersecurity training and education, and by developing ways to provide cybersecurity in underserved communities.</p>
<p> </p>
<p>Produced by Matt Perry<br>
Podcast art designed by Desi Aleman<br>
Music by Bensound</p>
]]></content:encoded>
                                    
        <enclosure length="84969715" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/9dy3if/Bruce_Debruhl_Podcast_mixdown9nuaf.mp3"/>
        <itunes:summary>In this episode of "Technically Human," I sit down with Dr. Bruce DeBruhl to talk about cybersecurity and the ethics of privacy in our digitally connected world. We discuss the changing concept of privacy as our tech becomes increasingly integrated into the most intimate reaches of our lives, Bruce narrates the history of cybersecurity, and we consider how our colleges are preparing the next generation of tech workers to think about protecting our data and the intimate information we generate each and every day.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3540</itunes:duration>
        <itunes:season>4</itunes:season>
        <itunes:episode>29</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I sit down with Dr. Bruce DeBruhl to talk about cybersecurity and the ethics of privacy in our digitally connected world. We discuss the changing concept of privacy as our tech becomes increasingly integrated into the most intimate reaches of our lives, Bruce narrates the history of cybersecurity, and we consider how our colleges are preparing the next generation of tech workers to think about protecting our data and the intimate information we generate each and every day. Dr. Bruce DeBruhl is an Associate Professor at California Polytechnic State University, where most of his teaching focuses on cybersecurity and privacy education. His educational goal is to develop opportunities for diverse students to get hands-on experience with security and privacy. Dr. DeBruhl’s research interests include wireless security, cyber-physical security, location privacy, and automotive security. In 2020, he was nominated for the Outstanding Faculty Scholarship Award for the California State University system, and he co-leads the Transforming Access to Cybersecurity in California through a Strategic Research Initiative, awarded by Cal Poly. The initiative aims to address the fundamental problems of access to cybersecurity training and to cybersecurity services, both by developing holistic cybersecurity training and education, and by developing ways to provide cybersecurity in underserved communities.   Produced by Matt Perry Podcast art designed by Desi Aleman Music by Bensound</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The American Dream Goes Digital: The myths and technologies that bind us with Dr. Julie Albright</title>
        <itunes:title>The American Dream Goes Digital: The myths and technologies that bind us with Dr. Julie Albright</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-american-dream-goes-digital-the-myths-and-technologies-that-bind-us-with-dr-julie-albright/</link>
                    <comments>https://dmdonig.podbean.com/e/the-american-dream-goes-digital-the-myths-and-technologies-that-bind-us-with-dr-julie-albright/#comments</comments>        <pubDate>Fri, 15 Jan 2021 01:00:00 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/593dfc13-5f50-3cff-a11c-d5c963b1d7bf</guid>
                                    <description><![CDATA[<p>In this episode of Technically Human, I sit down with Dr. Julie Albright to talk about her new book: Left to Their Own Devices: How Digital Natives Are Reshaping the American Dream. We talk about the way that digital culture is changing the American Dream for the next generation, we discuss how the internet is changing political culture, and Julie explains how our connections to our devices are changing the way we seek partnerships, form relationships, and how romance has been gamified in our world of online dating.</p>
<p><a href='http://www.drjuliealbright.com/'>Dr. Julie Albright</a> is a sociologist specializing in digital culture and communications. She has a master’s degree in Social and Systemic Studies and a dual doctorate in Sociology and Marriage and Family Therapy from the University of Southern California. Dr. Albright is currently a lecturer in the departments of Applied Psychology and Engineering at USC, where she teaches master’s-level courses on the Psychology of Interactive Technologies and Sustainable Infrastructure.</p>
<p>Dr. Albright’s research has focused on the growing intersection of technology and social/behavioral systems.</p>
<p>She is also a sought-after keynote speaker, and has given talks for major data center and energy conferences including SAP for Utilities, IBM Global, Data Center Dynamics, and the Dept. of Defense. She has appeared as an expert on national media including the Today Show, CNN, NBC Nightly News, CBS, the Wall Street Journal, the New York Times, NPR, and many others.</p>
<p>Her new book, <a href='https://www.amazon.com/Left-Their-Own-Devices-Reshaping/dp/1633884449'>Left to Their Own Devices: How Digital Natives Are Reshaping the American Dream</a> (Random House/Prometheus Press), investigates the impacts of mobile, social, and digital technologies on society.</p>
<p>
This episode was produced by Matt Perry.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of Technically Human, I sit down with Dr. Julie Albright to talk about her new book: <em>Left to Their Own Devices: How Digital Natives Are Reshaping the American Dream. </em>We talk about the way that digital culture is changing the American Dream for the next generation, we discuss how the internet is changing political culture, and Julie explains how our connections to our devices are changing the way we seek partnerships, form relationships, and how romance has been gamified in our world of online dating.</p>
<p><a href='http://www.drjuliealbright.com/'>Dr. Julie Albright</a> is a sociologist specializing in digital culture and communications. She has a master’s degree in Social and Systemic Studies and a dual doctorate in Sociology and Marriage and Family Therapy from the University of Southern California. Dr. Albright is currently a lecturer in the departments of Applied Psychology and Engineering at USC, where she teaches master’s-level courses on the Psychology of Interactive Technologies and Sustainable Infrastructure.</p>
<p>Dr. Albright’s research has focused on the growing intersection of technology and social/behavioral systems.</p>
<p>She is also a sought-after keynote speaker, and has given talks for major data center and energy conferences including SAP for Utilities, IBM Global, Data Center Dynamics, and the Dept. of Defense. She has appeared as an expert on national media including the Today Show, CNN, NBC Nightly News, CBS, the Wall Street Journal, the New York Times, NPR, and many others.</p>
<p>Her new book, <a href='https://www.amazon.com/Left-Their-Own-Devices-Reshaping/dp/1633884449'><em>Left to Their Own Devices: How Digital Natives Are Reshaping the American Dream</em></a> (Random House/Prometheus Press), investigates the impacts of mobile, social, and digital technologies on society.</p>
<p><br>
This episode was produced by Matt Perry.</p>
]]></content:encoded>
                                    
        <enclosure length="107716635" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/e26ebu/Julie_Albright_Podcast_mixdownbf7zj.mp3"/>
        <itunes:summary>In this episode of Technically Human, I sit down with Dr. Julie Albright to talk about her new book: Left to Their Own Devices: How Digital Natives Are Reshaping the American Dream. We talk about the way that digital culture is changing the American Dream for the next generation, we discuss how the internet is changing political culture, and Julie explains how our connections to our devices are changing the way we seek partnerships, form relationships, and how romance has been gamified in our world of online dating.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3504</itunes:duration>
        <itunes:season>4</itunes:season>
        <itunes:episode>28</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of Technically Human, I sit down with Dr. Julie Albright to talk about her new book: Left to Their Own Devices: How Digital Natives Are Reshaping the American Dream. We talk about the way that digital culture is changing the American Dream for the next generation, we discuss how the internet is changing political culture, and Julie explains how our connections to our devices are changing the way we seek partnerships, form relationships, and how romance has been gamified in our world of online dating. Dr. Julie Albright is a sociologist specializing in digital culture and communications. She has a master’s degree in Social and Systemic Studies and a dual doctorate in Sociology and Marriage and Family Therapy from the University of Southern California. Dr. Albright is currently a lecturer in the departments of Applied Psychology and Engineering at USC, where she teaches master’s-level courses on the Psychology of Interactive Technologies and Sustainable Infrastructure. Dr. Albright’s research has focused on the growing intersection of technology and social/behavioral systems. She is also a sought-after keynote speaker, and has given talks for major data center and energy conferences including SAP for Utilities, IBM Global, Data Center Dynamics, and the Dept. of Defense. She has appeared as an expert on national media including the Today Show, CNN, NBC Nightly News, CBS, the Wall Street Journal, the New York Times, NPR, and many others. Her new book, Left to Their Own Devices: How Digital Natives Are Reshaping the American Dream (Random House/Prometheus Press), investigates the impacts of mobile, social, and digital technologies on society. This episode was produced by Matt Perry.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>How Tech is Changing Democracy Around the Globe: Mohamed Abubakr on democratic revolutions, here and abroad</title>
        <itunes:title>How Tech is Changing Democracy Around the Globe: Mohamed Abubakr on democratic revolutions, here and abroad</itunes:title>
        <link>https://dmdonig.podbean.com/e/how-tech-is-changing-democracy-around-the-globe-mohamed-abubakr-on-democratic-revolutions-here-and-abroad/</link>
                    <comments>https://dmdonig.podbean.com/e/how-tech-is-changing-democracy-around-the-globe-mohamed-abubakr-on-democratic-revolutions-here-and-abroad/#comments</comments>        <pubDate>Fri, 08 Jan 2021 00:37:46 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/ec2b1251-3781-31c1-9098-94f1abbcc7b7</guid>
                                    <description><![CDATA[<p>To kick off the 4th season of Technically Human, I sit down with Mohamed Abubakr, the president of the African and Middle Eastern Leadership Project (AMEL), to talk about the present and the future state of democracy worldwide.</p>
<p>We discuss the role of social media in mobilizing democratic movements, including anti-autocratic movements in the Middle East and Africa, and its simultaneous role in the collapse of democratic norms in the United States. We talk about the future of both democracy and online networks, what it means to know of the lives of others virtually, and how global connection creates new possibilities for revolution.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>To kick off the 4th season of Technically Human, I sit down with Mohamed Abubakr, the president of the African and Middle Eastern Leadership Project (AMEL), to talk about the present and the future state of democracy worldwide.</p>
<p>We discuss the role of social media in mobilizing democratic movements, including anti-autocratic movements in the Middle East and Africa, and its simultaneous role in the collapse of democratic norms in the United States. We talk about the future of both democracy and online networks, what it means to know of the lives of others virtually, and how global connection creates new possibilities for revolution.</p>
]]></content:encoded>
                                    
        <enclosure length="84134167" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/ymnxgf/Mohamed_Abubaker_Episode_mixdown628u5.mp3"/>
        <itunes:summary>To kick off the 4th season of Technically Human, I sit down with Mohamed Abubakr, the president of the African and Middle Eastern Leadership Project (AMEL), to talk about the present and the future state of democracy worldwide.

We discuss the role of social media in mobilizing democratic movements, including anti-autocratic movements in the Middle East and Africa, and its simultaneous role in the collapse of democratic norms in the United States. We talk about the future of both democracy and online networks, what it means to know of the lives of others virtually, and how global connection creates new possibilities for revolution.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3504</itunes:duration>
        <itunes:season>4</itunes:season>
        <itunes:episode>27</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>To kick off the 4th season of Technically Human, I sit down with Mohamed Abubakr, the president of the African and Middle Eastern Leadership Project (AMEL), to talk about the present and the future state of democracy worldwide. We discuss the role of social media in mobilizing democratic movements, including anti-autocratic movements in the Middle East and Africa, and its simultaneous role in the collapse of democratic norms in the United States. We talk about the future of both democracy and online networks, what it means to know of the lives of others virtually, and how global connection creates new possibilities for revolution.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Tech Stands Up: Brad Taylor builds the new technological revolution</title>
        <itunes:title>Tech Stands Up: Brad Taylor builds the new technological revolution</itunes:title>
        <link>https://dmdonig.podbean.com/e/technically/</link>
                    <comments>https://dmdonig.podbean.com/e/technically/#comments</comments>        <pubDate>Fri, 20 Nov 2020 14:37:47 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/f8c3f5b5-fb57-379a-b829-a5f79860fb3b</guid>
                                    <description><![CDATA[<p>In 2017, Brad Taylor founded "Tech Stands Up," bringing together thousands of technologists to call for change, social activism, and socially responsible leadership in the tech industry, staging a rally on Pi Day (March 14) in the wake of the election of Donald Trump.</p>
<p>In this episode of "Technically Human," I speak to Brad about what motivated his decision to stand up. We talk about how technologists can leverage their position to demand social change, we discuss the challenges of speaking up, and Brad talks about how he envisions the world that he hopes his kids, and the next generation, will inherit.</p>
<p>Brad Taylor is the founder of the "Tech Stands Up" movement. He is an engineer, founder, and father who has worked in Silicon Valley for the last 15 years building digital marketing tools for some of the world's largest brands.</p>
<p>Brad hosts the <a href='https://www.techstandsup.org/podcast'>Tech Stands Up Podcast</a>, a series dedicated to the intersection of technology and civic engagement at the local, state, and federal levels. He discusses the extraordinary impact new technologies have on our society and how we can solve some of the most challenging problems we face today.</p>
<p>This episode was produced by Matthew Perry.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In 2017, Brad Taylor founded "Tech Stands Up," bringing together thousands of technologists to call for change, social activism, and socially responsible leadership in the tech industry, staging a rally on Pi Day (March 14) in the wake of the election of Donald Trump.</p>
<p>In this episode of "Technically Human," I speak to Brad about what motivated his decision to stand up. We talk about how technologists can leverage their position to demand social change, we discuss the challenges of speaking up, and Brad talks about how he envisions the world that he hopes his kids, and the next generation, will inherit.</p>
<p>Brad Taylor is the founder of the "Tech Stands Up" movement. He is an engineer, founder, and father who has worked in Silicon Valley for the last 15 years building digital marketing tools for some of the world's largest brands.</p>
<p>Brad hosts the <a href='https://www.techstandsup.org/podcast'>Tech Stands Up Podcast</a>, a series dedicated to the intersection of technology and civic engagement at the local, state, and federal levels. He discusses the extraordinary impact new technologies have on our society and how we can solve some of the most challenging problems we face today.</p>
<p>This episode was produced by Matthew Perry.</p>
]]></content:encoded>
                                    
        <enclosure length="100214805" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/rm8mfj/Brad_Taylor_Podcast_mixdown78747.mp3"/>
        <itunes:summary>In 2017, Brad Taylor founded "Tech Stands Up," bringing together thousands of technologists to call for change, social activism, and socially responsible leadership in the tech industry, staging a rally on Pi Day (March 14) in the wake of the election of Donald Trump. In this episode of "Technically Human," I speak to Brad about what motivated his decision to stand up. We talk about how technologists can leverage their position to demand social change, we discuss the challenges of speaking up, and Brad talks about how he envisions the world that he hopes his kids, and the next generation, will inherit.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4175</itunes:duration>
        <itunes:season>3</itunes:season>
        <itunes:episode>26</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In 2017, Brad Taylor founded "Tech Stands Up," bringing together thousands of technologists to call for change, social activism, and socially responsible leadership in the tech industry, staging a rally on Pi Day (March 14) in the wake of the election of Donald Trump. In this episode of "Technically Human," I speak to Brad about what motivated his decision to stand up. We talk about how technologists can leverage their position to demand social change, we discuss the challenges of speaking up, and Brad talks about how he envisions the world that he hopes his kids, and the next generation, will inherit. Brad Taylor is the founder of the "Tech Stands Up" movement. He is an engineer, founder, and father who has worked in Silicon Valley for the last 15 years building digital marketing tools for some of the world's largest brands. Brad hosts the Tech Stands Up Podcast, a series dedicated to the intersection of technology and civic engagement at the local, state, and federal levels. He discusses the extraordinary impact new technologies have on our society and how we can solve some of the most challenging problems we face today. This episode was produced by Matthew Perry.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Active Imagination: Malka Older talks humanitarianism, science fiction, and the future of democracy</title>
        <itunes:title>Active Imagination: Malka Older talks humanitarianism, science fiction, and the future of democracy</itunes:title>
        <link>https://dmdonig.podbean.com/e/active-imagination-malka-older-talks-humanitarianism-science-fiction-and-the-future-of-democracy/</link>
                    <comments>https://dmdonig.podbean.com/e/active-imagination-malka-older-talks-humanitarianism-science-fiction-and-the-future-of-democracy/#comments</comments>        <pubDate>Fri, 13 Nov 2020 10:07:36 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/a747030e-bac6-35bd-891c-ccb1b68b83d9</guid>
                                    <description><![CDATA[<p>Malka Older moves between the practice of on-the-ground disaster relief work and the practice of in-the-mind science fiction writing. She brings her experience of real-life dystopias into dialogue with the imagined dystopias of science fiction.</p>
<p>In the episode, I ask Malka about how these areas of her work blend together, how she understands the boundaries and the interplay between the real and the imaginary, and the role that science fiction plays in our understanding of tech—its promises and its perils.</p>
<p>We talk about the use of science fiction in helping us avoid disasters, the relationship between tech, science fiction, and democracy, and Malka shares her predictions about the future of tech, democracy, and human rights as we move deeper into the 21st century.</p>
<p>Malka Older is a writer, academic, and aid worker. She is currently a Faculty Associate at <a href='https://sfis.asu.edu/school-future-innovation-society'>Arizona State University’s School for the Future of Innovation in Society</a> and an Associate Researcher at the <a href='http://cso.edu/home.asp'>Centre de Sociologie des Organisations</a>. Her science-fiction political thriller <em>Infomocracy</em> was named one of the best books of 2016 by Kirkus, Book Riot, and the Washington Post. She has written opinion pieces for the New York Times, The Nation, Foreign Policy, and NBC Think.</p>
<p>She has more than a decade of experience in humanitarian aid and development, ranging from field-level experience as a Head of Office in Darfur to supporting global programs and agency-wide strategy as a disaster risk reduction technical specialist. Her doctoral work on the sociology of organizations at the Institut d’Études Politiques de Paris (Sciences Po), completed in 2019, explored the dynamics of multi-level governance and disaster response using the cases of Hurricane Katrina and the Japan tsunami of 2011. As part of this work she has been selected as a visiting scholar at Columbia University, on an Alliance Grant, and at the Fletcher School of International Affairs at Tufts University. She has an undergraduate degree in literature from Harvard and a Master’s in international relations and economics from the School of Advanced International Studies (SAIS) at Johns Hopkins University.</p>
<p>She was named Senior Fellow for Technology and Risk at the Carnegie Council for Ethics in International Affairs for 2015, and has conducted research for the French Institut de Radioprotection et de Sûreté Nucléaire (IRSN) on the human and organizational factors involved in the Fukushima Dai-Ichi crisis. Her research interests include intra-governmental relations in crises; the paradox of well-funded disaster responses; measurement and evaluation of disaster responses; and the effects of competition among actors in humanitarian aid.</p>
<p>This episode was produced by Matthew Perry.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Malka Older moves between the practice of on-the-ground disaster relief work and the practice of in-the-mind science fiction writing. She brings her experience of real-life dystopias into dialogue with the imagined dystopias of science fiction.</p>
<p>In the episode, I ask Malka about how these areas of her work blend together, how she understands the boundaries and the interplay between the real and the imaginary, and the role that science fiction plays in our understanding of tech—its promises and its perils.</p>
<p>We talk about the use of science fiction in helping us avoid disasters, the relationship between tech, science fiction, and democracy, and Malka shares her predictions about the future of tech, democracy, and human rights as we move deeper into the 21st century.</p>
<p>Malka Older is a writer, academic, and aid worker. She is currently a Faculty Associate at <a href='https://sfis.asu.edu/school-future-innovation-society'>Arizona State University’s School for the Future of Innovation in Society</a> and an Associate Researcher at the <a href='http://cso.edu/home.asp'>Centre de Sociologie des Organisations</a>. Her science-fiction political thriller <em>Infomocracy</em> was named one of the best books of 2016 by Kirkus, Book Riot, and the Washington Post. She has written opinion pieces for the New York Times, The Nation, Foreign Policy, and NBC Think.</p>
<p>She has more than a decade of experience in humanitarian aid and development, ranging from field-level experience as a Head of Office in Darfur to supporting global programs and agency-wide strategy as a disaster risk reduction technical specialist. Her doctoral work on the sociology of organizations at the Institut d’Études Politiques de Paris (Sciences Po), completed in 2019, explored the dynamics of multi-level governance and disaster response using the cases of Hurricane Katrina and the Japan tsunami of 2011. As part of this work she has been selected as a visiting scholar at Columbia University, on an Alliance Grant, and at the Fletcher School of International Affairs at Tufts University. She has an undergraduate degree in literature from Harvard and a Master’s in international relations and economics from the School of Advanced International Studies (SAIS) at Johns Hopkins University.</p>
<p>She was named Senior Fellow for Technology and Risk at the Carnegie Council for Ethics in International Affairs for 2015, and has conducted research for the French Institut de Radioprotection et de Sûreté Nucléaire (IRSN) on the human and organizational factors involved in the Fukushima Dai-Ichi crisis. Her research interests include intra-governmental relations in crises; the paradox of well-funded disaster responses; measurement and evaluation of disaster responses; and the effects of competition among actors in humanitarian aid.</p>
<p>This episode was produced by Matthew Perry.</p>
]]></content:encoded>
                                    
        <enclosure length="108032525" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/yhyck3/Malka_Older_Podcast_1__mixdown78p6w.mp3"/>
        <itunes:summary>Malka Older moves between the practice of on-the-ground disaster relief work and the practice of in-the-mind science fiction writing. She brings her experience of real-life dystopias into dialogue with the imagined dystopias of science fiction. In the episode, I ask Malka about how these areas of her work blend together, how she understands the boundaries and the interplay between the real and the imaginary, and the role that science fiction plays in our understanding of tech—its promises and its perils.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4500</itunes:duration>
        <itunes:season>3</itunes:season>
        <itunes:episode>25</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Malka Older moves between the practice of on-the-ground disaster relief work and the practice of in-the-mind science fiction writing. She brings her experience of real-life dystopias into dialogue with the imagined dystopias of science fiction. In the episode, I ask Malka about how these areas of her work blend together, how she understands the boundaries and the interplay between the real and the imaginary, and the role that science fiction plays in our understanding of tech—its promises and its perils. We talk about the use of science fiction in helping us avoid disasters, the relationship between tech, science fiction, and democracy, and Malka shares her predictions about the future of tech, democracy, and human rights as we move deeper into the 21st century. Malka Older is a writer, academic, and aid worker. She is currently a Faculty Associate at Arizona State University’s School for the Future of Innovation in Society and an Associate Researcher at the Centre de Sociologie des Organisations. Her science-fiction political thriller Infomocracy was named one of the best books of 2016 by Kirkus, Book Riot, and the Washington Post. She has written opinion pieces for the New York Times, The Nation, Foreign Policy, and NBC Think. She has more than a decade of experience in humanitarian aid and development, ranging from field-level experience as a Head of Office in Darfur to supporting global programs and agency-wide strategy as a disaster risk reduction technical specialist. Her doctoral work on the sociology of organizations at the Institut d’Études Politiques de Paris (Sciences Po), completed in 2019, explored the dynamics of multi-level governance and disaster response using the cases of Hurricane Katrina and the Japan tsunami of 2011. 
As part of this work she has been selected as a visiting scholar at Columbia University, on an Alliance Grant, and at the Fletcher School of International Affairs at Tufts University. She has an undergraduate degree in literature from Harvard and a Master’s in international relations and economics from the School of Advanced International Studies (SAIS) at Johns Hopkins University. She was named Senior Fellow for Technology and Risk at the Carnegie Council for Ethics in International Affairs for 2015, and has conducted research for the French Institut de Radioprotection et de Sûreté Nucléaire (IRSN) on the human and organizational factors involved in the Fukushima Dai-Ichi crisis. Her research interests include intra-governmental relations in crises; the paradox of well-funded disaster responses; measurement and evaluation of disaster responses; and the effects of competition among actors in humanitarian aid. This episode was produced by Matthew Perry.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Block Power: Marcus Miller on mobilizing Black voters, the 2020 Election and grassroots organizing in the age of tech</title>
        <itunes:title>Block Power: Marcus Miller on mobilizing Black voters, the 2020 Election and grassroots organizing in the age of tech</itunes:title>
        <link>https://dmdonig.podbean.com/e/test-file-1604688485/</link>
                    <comments>https://dmdonig.podbean.com/e/test-file-1604688485/#comments</comments>        <pubDate>Fri, 06 Nov 2020 10:49:33 -0800</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/69e78e5d-db7a-3dd1-9573-779d1c881632</guid>
                                    <description><![CDATA[<p>As the counting continues, it becomes clear that the 2020 election may be decided by a handful of voters in a few key states. In the months leading up to the election, Block Power, a nonprofit focused on uplifting Black voices by engaging new and infrequent Black voters in the electoral process, mobilized over 200,000 voters through a grassroots organizing campaign that took place almost exclusively online.

In this episode, Marcus Miller, a lead organizer in the Block Power grassroots movement, talks to me about how Block Power successfully built a major voter mobilization strategy that would help decide the future of the country. We talk about the ethics and stakes of representation, particularly for historically marginalized communities, the history of Black voter suppression and the challenges of mobilizing Black voters, the symbolism of this election falling on the 150th anniversary of the 15th Amendment, which granted African-American men the right to vote, and how a team of humanists and technologists built a movement.</p>
<p><a href='https://blockpower.us/'>Block Power</a> is a nonprofit focused on uplifting Black voices by engaging new and infrequent Black voters in the electoral process. Their approach enlists tech-based strategies, including a software platform that makes grassroots get-out-the-vote efforts more efficient, effective, and data-driven.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>As the counting continues, it becomes clear that the 2020 election may be decided by a handful of voters in a few key states. In the months leading up to the election, Block Power, a nonprofit focused on uplifting Black voices by engaging new and infrequent Black voters in the electoral process, mobilized over 200,000 voters through a grassroots organizing campaign that took place almost exclusively online.<br>
<br>
In this episode, Marcus Miller, a lead organizer in the Block Power grassroots movement, talks to me about how Block Power successfully built a major voter mobilization strategy that would help decide the future of the country. We talk about the ethics and stakes of representation, particularly for historically marginalized communities, the history of Black voter suppression and the challenges of mobilizing Black voters, the symbolism of this election falling on the 150th anniversary of the 15th Amendment, which granted African-American men the right to vote, and how a team of humanists and technologists built a movement.</p>
<p><a href='https://blockpower.us/'>Block Power</a> is a nonprofit focused on uplifting Black voices by engaging new and infrequent Black voters in the electoral process. Their approach enlists tech-based strategies, including a software platform that makes grassroots get-out-the-vote efforts more efficient, effective, and data-driven.</p>
]]></content:encoded>
                                    
        <enclosure length="111396729" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/wvpp2u/Block_PowerMarchs_Miller_4_mixdown6mn4z.mp3"/>
        <itunes:summary>As the counting continues, it becomes clear that the 2020 election may be decided by a handful of voters in a few key states. In the months leading up to the election, Block Power, a nonprofit focused on uplifting Black voices by engaging new and infrequent Black voters in the electoral process, mobilized over 200,000 voters through a grassroots organizing campaign that took place almost exclusively online.

In this episode, Marcus Miller, a lead organizer in the Block Power grassroots movement, talks to me about how Block Power successfully built a major voter mobilization strategy that would help decide the future of the country. We talk about the ethics and stakes of representation, particularly for historically marginalized communities, the history of Black voter suppression and the challenges of mobilizing Black voters, the symbolism of this election falling on the 150th anniversary of the 15th Amendment, which granted African-American men the right to vote, and how a team of humanists and technologists built a movement.

Block Power is a nonprofit focused on uplifting Black voices by engaging new and infrequent Black voters in the electoral process. Their approach enlists tech-based strategies, including a software platform that makes grassroots get-out-the-vote efforts more efficient, effective, and data-driven.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4640</itunes:duration>
        <itunes:season>3</itunes:season>
        <itunes:episode>24</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>As the counting continues, it becomes clear that the 2020 election may be decided by a handful of voters in a few key states. In the months leading up to the election, Block Power, a nonprofit focused on uplifting Black voices by engaging new and infrequent Black voters in the electoral process, mobilized over 200,000 voters through a grassroots organizing campaign that took place almost exclusively online. In this episode, Marcus Miller, a lead organizer in the Block Power grassroots movement, talks to me about how Block Power successfully built a major voter mobilization strategy that would help decide the future of the country. We talk about the ethics and stakes of representation, particularly for historically marginalized communities, the history of Black voter suppression and the challenges of mobilizing Black voters, the symbolism of this election falling on the 150th anniversary of the 15th Amendment, which granted African-American men the right to vote, and how a team of humanists and technologists built a movement. Block Power is a nonprofit focused on uplifting Black voices by engaging new and infrequent Black voters in the electoral process. Their approach enlists tech-based strategies, including a software platform that makes grassroots get-out-the-vote efforts more efficient, effective, and data-driven.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Server Technology: Ret. Col. Robert Gordon III on tech and service</title>
        <itunes:title>Server Technology: Ret. Col. Robert Gordon III on tech and service</itunes:title>
        <link>https://dmdonig.podbean.com/e/server-technology-lt-rob-gordon-iii-on-tech-and-service/</link>
                    <comments>https://dmdonig.podbean.com/e/server-technology-lt-rob-gordon-iii-on-tech-and-service/#comments</comments>        <pubDate>Fri, 30 Oct 2020 00:46:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/e8af1390-eaa3-3af3-92f4-8cafae48b6c6</guid>
                                    <description><![CDATA[<p>What is the relationship between technological production and the United States military? What can career technologists learn from service careers? And how can entrepreneurs stay the course of their vision of “doing good,” and also “do well?”</p>
<p>In this episode, I sit down with Ret. Col. Rob Gordon III, now the Chief Growth Officer of SBG Solutions, to talk about what the tech sphere can learn from the culture and values of national service.</p>
<p>Robert L. Gordon III has extensive senior management and cross-sector experience in the military, government, high tech, and nonprofit sectors. He is currently the Chief Growth Officer of SBG Technology Solutions, leading SBG's growth and strategy portfolio to expand capabilities in national security and health among SBG’s government clientele.</p>
<p>In 2010, Rob was appointed the Deputy Under Secretary of Defense for Military Community and Family Policy in the Obama Administration, where Rob was responsible for defense-wide policy, program execution and oversight of more than $20 billion of the Department of Defense’s (DoD) worldwide community and family programs and initiatives affecting over four million active-duty service members and family members, and two million retirees. Rob also led the effort to revitalize 160 public schools on US-based military installations – a $900 million initiative. For his Pentagon service, Rob was awarded the Secretary of Defense Medal for Outstanding Public Service.</p>
<p>He is an advisor to several technology startups, and is on the advisory council of Princeton University’s School for Public and International Affairs. Among Rob’s awards and recognitions, he is the recipient of the Bernard Gill Urban Service-Learning Leadership Award from the National Youth Leadership Council; Princeton University's Edward P. Bullard Distinguished Alumnus Award; two awards of the Honorable Order of Saint Barbara; and the Franklin Award by the National Conference on Citizenship.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>What is the relationship between technological production and the United States military? What can career technologists learn from service careers? And how can entrepreneurs stay the course of their vision of “doing good,” and also “do well?”</p>
<p>In this episode, I sit down with Ret. Col. Rob Gordon III, now the Chief Growth Officer of SBG Solutions, to talk about what the tech sphere can learn from the culture and values of national service.</p>
<p>Robert L. Gordon III has extensive senior management and cross-sector experience in the military, government, high tech, and nonprofit sectors. He is currently the Chief Growth Officer of SBG Technology Solutions, leading SBG's growth and strategy portfolio to expand capabilities in national security and health among SBG’s government clientele.</p>
<p>In 2010, Rob was appointed the Deputy Under Secretary of Defense for Military Community and Family Policy in the Obama Administration, where Rob was responsible for defense-wide policy, program execution and oversight of more than $20 billion of the Department of Defense’s (DoD) worldwide community and family programs and initiatives affecting over four million active-duty service members and family members, and two million retirees. Rob also led the effort to revitalize 160 public schools on US-based military installations – a $900 million initiative. For his Pentagon service, Rob was awarded the Secretary of Defense Medal for Outstanding Public Service.</p>
<p>He is an advisor to several technology startups, and is on the advisory council of Princeton University’s School for Public and International Affairs. Among Rob’s awards and recognitions, he is the recipient of the Bernard Gill Urban Service-Learning Leadership Award from the National Youth Leadership Council; Princeton University's Edward P. Bullard Distinguished Alumnus Award; two awards of the Honorable Order of Saint Barbara; and the Franklin Award by the National Conference on Citizenship.</p>
]]></content:encoded>
                                    
        <enclosure length="88108307" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/y9hmbq/Rob_Gordon_Podcast_FINAL_mixdown6bb0o.mp3"/>
        <itunes:summary>What is the relationship between technological production and the United States military? What can career technologists learn from service careers? And how can entrepreneurs stay the course of their vision of “doing good,” and also “do well?”

In this episode, I sit down with Ret. Col. Rob Gordon III, now the Chief Growth Officer of SBG Solutions to talk about what the tech sphere can learn from the culture and values of national service.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3669</itunes:duration>
        <itunes:season>3</itunes:season>
        <itunes:episode>23</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>What is the relationship between technological production and the United States military? What can career technologists learn from service careers? And how can entrepreneurs stay the course of their vision of “doing good,” and also “do well?” In this episode, I sit down with Ret. Col. Rob Gordon III, now the Chief Growth Officer of SBG Solutions, to talk about what the tech sphere can learn from the culture and values of national service. Robert L. Gordon III has extensive senior management and cross-sector experience in the military, government, high tech, and nonprofit sectors. He is currently the Chief Growth Officer of SBG Technology Solutions, leading SBG's growth and strategy portfolio to expand capabilities in national security and health among SBG’s government clientele. In 2010, Rob was appointed the Deputy Under Secretary of Defense for Military Community and Family Policy in the Obama Administration, where Rob was responsible for defense-wide policy, program execution and oversight of more than $20 billion of the Department of Defense’s (DoD) worldwide community and family programs and initiatives affecting over four million active-duty service members and family members, and two million retirees. Rob also led the effort to revitalize 160 public schools on US-based military installations – a $900 million initiative. For his Pentagon service, Rob was awarded the Secretary of Defense Medal for Outstanding Public Service. He is an advisor to several technology startups, and is on the advisory council of Princeton University’s School for Public and International Affairs. Among Rob’s awards and recognitions, he is the recipient of the Bernard Gill Urban Service-Learning Leadership Award from the National Youth Leadership Council; Princeton University's Edward P. 
Bullard Distinguished Alumnus Award; two awards of the Honorable Order of Saint Barbara; and the Franklin Award by the National Conference on Citizenship.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Impact of Impact: Ethical and socially responsible tech investing</title>
        <itunes:title>The Impact of Impact: Ethical and socially responsible tech investing</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-impact-of-impact-ethical-and-socially-responsible-tech-investing/</link>
                    <comments>https://dmdonig.podbean.com/e/the-impact-of-impact-ethical-and-socially-responsible-tech-investing/#comments</comments>        <pubDate>Fri, 23 Oct 2020 00:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/896d095e-5b51-3c49-a0dc-c663a269156e</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I talk to Todd Johnson about socially responsible and impact investing. Todd explains why investment is a critical point in the ecosystem of tech production and culture, and he argues that socially responsible investing is not just a growing dimension of the investment market: it is, rather, the future of investing.</p>
<p>We talk about how entrepreneurs can stay ethically motivated while growing, we discuss the challenges of mission drift, and we ask what it means to invest in the future.</p>
<p>R. Todd Johnson is a leader, counselor, advisor, lawyer and mentor. iPar, built by the CAPROCK Group, is a multi-family office that deploys more than $1 billion for impact. During his 29 years as a lawyer, partner and leader at Jones Day, Todd founded the Firm’s Northern California presence and served as the founder and Global Head for its Renewable Energy and Sustainability practice, where he served companies, funds, family offices, multi-family offices and nonprofits focused on renewable energy, sustainability, and models designed to help our planet and its people to flourish.</p>
<p>Todd serves on the Boards of Directors of Activate Global (a nonprofit working to activate the world’s innovators to redefine how science serves society and sustains the planet, offering fellowships with the Lawrence Berkeley National Laboratory and Lincoln Labs at MIT) and ImpactAlpha (providing investment news for a sustainable edge).</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I talk to Todd Johnson about socially responsible and impact investing. Todd explains why investment is a critical point in the ecosystem of tech production and culture, and he argues that socially responsible investing is not just a growing dimension of the investment market: it is, rather, the future of investing.</p>
<p>We talk about how entrepreneurs can stay ethically motivated while growing, we discuss the challenges of mission drift, and we ask what it means to invest in the future.</p>
<p>R. Todd Johnson is a leader, counselor, advisor, lawyer and mentor. iPar, built by the CAPROCK Group, is a multi-family office that deploys more than $1 billion for impact. During his 29 years as a lawyer, partner and leader at Jones Day, Todd founded the Firm’s Northern California presence and served as the founder and Global Head for its Renewable Energy and Sustainability practice, where he served companies, funds, family offices, multi-family offices and nonprofits focused on renewable energy, sustainability, and models designed to help our planet and its people to flourish.</p>
<p>Todd serves on the Boards of Directors of Activate Global (a nonprofit working to activate the world’s innovators to redefine how science serves society and sustains the planet, offering fellowships with the Lawrence Berkeley National Laboratory and Lincoln Labs at MIT) and ImpactAlpha (providing investment news for a sustainable edge).</p>
]]></content:encoded>
                                    
        <enclosure length="85473931" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/s724xt/Todd_Johnson_Podcastau87z.mp3"/>
        <itunes:summary>In this episode of "Technically Human," I talk to Todd Johnson about socially responsible and impact investing. Todd explains why investment is a critical point in the ecosystem of tech production and culture, and he argues that socially responsible investing is not just a growing dimension of the investment market: it is, rather, the future of investing.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3560</itunes:duration>
        <itunes:season>3</itunes:season>
        <itunes:episode>22</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I talk to Todd Johnson about socially responsible and impact investing. Todd explains why investment is a critical point in the ecosystem of tech production and culture, and he argues that socially responsible investing is not just a growing dimension of the investment market: it is, rather, the future of investing. We talk about how entrepreneurs can stay ethically motivated while growing, we discuss the challenges of mission drift, and we ask what it means to invest in the future. R. Todd Johnson is a leader, counselor, advisor, lawyer and mentor. iPar, built by the CAPROCK Group, is a multi-family office that deploys more than $1 billion for impact. During his 29 years as a lawyer, partner and leader at Jones Day, Todd founded the Firm’s Northern California presence and served as the founder and Global Head for its Renewable Energy and Sustainability practice, where he served companies, funds, family offices, multi-family offices and nonprofits focused on renewable energy, sustainability, and models designed to help our planet and its people to flourish. Todd serves on the Boards of Directors of Activate Global (a nonprofit working to activate the world’s innovators to redefine how science serves society and sustains the planet, offering fellowships with the Lawrence Berkeley National Laboratory and Lincoln Labs at MIT) and ImpactAlpha (providing investment news for a sustainable edge).</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Technically Legal: Professor Jeff Ward explores the relationship between law and tech</title>
        <itunes:title>Technically Legal: Professor Jeff Ward explores the relationship between law and tech</itunes:title>
        <link>https://dmdonig.podbean.com/e/technically-legal-professor-jeff-ward-explores-the-relationship-between-law-and-tech/</link>
                    <comments>https://dmdonig.podbean.com/e/technically-legal-professor-jeff-ward-explores-the-relationship-between-law-and-tech/#comments</comments>        <pubDate>Fri, 16 Oct 2020 01:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/6f9f80a9-3fb6-3440-bc93-3c0ad716a6fe</guid>
                                    <description><![CDATA[<p>It's our 21st episode of Technically Human, and we're celebrating by going legal!</p>
<p>In the episode, I sit down with Professor Jeff Ward, the Associate Dean of Technology and Innovation and the Director of Duke’s Center on Law & Technology (DCLT), which coordinates Duke’s leadership at the intersection of law and technology with programs such as the <a href='http://www.dukelawtechlab.com/'>Duke Law Tech Lab</a>, a pre-accelerator for legal technology companies, and the Access Tech Tools initiative, a program to help students and Duke’s community partners to employ human-centered design thinking and available technologies to create tools to enhance access to legal services.</p>
<p>Jeff and I talk about the relationship between ethics and the law, we talk about the role that legal practitioners play in tech, and Jeff explains how and why we need new legal frames to govern tech culture—and what we need to know as technologies emerge with the ability to govern legal structures. How can we design laws to protect us from some of the unintended and destructive consequences of technology? Could we one day have AI as our judges and juries? And how can we train the next generation of lawyers to engage with the ethics and justice in the tech sphere?</p>
<p>This episode of Technically Human was produced by Matthew Perry.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>It's our 21st episode of Technically Human, and we're celebrating by going legal!</p>
<p>In the episode, I sit down with Professor Jeff Ward, the Associate Dean of Technology and Innovation and the Director of Duke’s Center on Law & Technology (DCLT), which coordinates Duke’s leadership at the intersection of law and technology with programs such as the <a href='http://www.dukelawtechlab.com/'>Duke Law Tech Lab</a>, a pre-accelerator for legal technology companies, and the Access Tech Tools initiative, a program to help students and Duke’s community partners to employ human-centered design thinking and available technologies to create tools to enhance access to legal services.</p>
<p>Jeff and I talk about the relationship between ethics and the law, we talk about the role that legal practitioners play in tech, and Jeff explains how and why we need new legal frames to govern tech culture—and what we need to know as technologies emerge with the ability to govern legal structures. How can we design laws to protect us from some of the unintended and destructive consequences of technology? Could we one day have AI as our judges and juries? And how can we train the next generation of lawyers to engage with the ethics and justice in the tech sphere?</p>
<p>This episode of Technically Human was produced by Matthew Perry.</p>
]]></content:encoded>
                                    
        <enclosure length="113122875" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/kqjjts/Jeff_Ward_Podcast_mixdown73w9i.mp3"/>
        <itunes:summary>It's our 21st episode of Technically Human, and we're celebrating by going legal!
In the episode, I sit down with Professor Jeff Ward, the Associate Dean of Technology and Innovation and the Director of Duke’s Center on Law &amp; Technology (DCLT).

How can we design laws to protect us from some of the unintended and destructive consequences of technology? Could we one day have AI as our judges and juries? And how can we train the next generation of lawyers to engage with the ethics and justice in the tech sphere?</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4713</itunes:duration>
        <itunes:season>3</itunes:season>
        <itunes:episode>21</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>It's our 21st episode of Technically Human, and we're celebrating by going legal! In the episode, I sit down with Professor Jeff Ward, the Associate Dean of Technology and Innovation and the Director of Duke’s Center on Law &amp; Technology (DCLT), which coordinates Duke’s leadership at the intersection of law and technology with programs such as the Duke Law Tech Lab, a pre-accelerator for legal technology companies, and the Access Tech Tools initiative, a program to help students and Duke’s community partners to employ human-centered design thinking and available technologies to create tools to enhance access to legal services. Jeff and I talk about the relationship between ethics and the law, we talk about the role that legal practitioners play in tech, and Jeff explains how and why we need new legal frames to govern tech culture—and what we need to know as technologies emerge with the ability to govern legal structures. How can we design laws to protect us from some of the unintended and destructive consequences of technology? Could we one day have AI as our judges and juries? And how can we train the next generation of lawyers to engage with the ethics and justice in the tech sphere? This episode of Technically Human was produced by Matthew Perry</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Of the people, by the people, for the people: Public Interest Technology with Hana Schank</title>
        <itunes:title>Of the people, by the people, for the people: Public Interest Technology with Hana Schank</itunes:title>
        <link>https://dmdonig.podbean.com/e/of-the-people-by-the-people-for-the-people-public-interest-technology-with-hana-schank/</link>
                    <comments>https://dmdonig.podbean.com/e/of-the-people-by-the-people-for-the-people-public-interest-technology-with-hana-schank/#comments</comments>        <pubDate>Fri, 09 Oct 2020 09:38:40 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/330dfda9-99c2-308b-91ce-75ddd49eff30</guid>
                                    <description><![CDATA[<p>In this episode of “Technically Human,” I talk to Hana Schank, Director of Strategy for Public Interest Technology at New America. We discuss a new vision for technologists working in the public sector, we discuss the relationship between ethical technology and public interest technology, and Hana persuades me that the future of the tech workforce is in solving public problems.</p>
<p><a href='https://www.newamerica.org/our-people/hana-schank/'>Hana Schank </a>is the Director of Strategy for <a href='https://www.newamerica.org/pit/'>Public Interest Technology </a>at New America, where she works to develop the public interest technology field via research, storytelling and fostering connections. She founded and edits The Commons, a publication for people working in and around government innovation efforts.</p>
<p>Previously, as a part of the United States Digital Service, Schank was a director with the Department of Homeland Security, where she worked with TSA and Customs and Border Protection to improve the air travel experience. In the private sector, Schank founded and ran CollectiveUX, a user experience consultancy, for over a decade, working with startups, Fortune 500 companies, and governmental organizations to research and design human-centered products.</p>
<p>In addition to her research and design work, Schank is a frequent contributor to the New York Times, the Washington Post, the Atlantic, and the author of three works of nonfiction. She is a graduate of Northwestern University, and holds an MFA in nonfiction writing from Columbia University. Schank lives in Brooklyn, N.Y. with her husband and two children.</p>
<p>Cal Poly joined the <a href='https://www.newamerica.org/pit/university-network/'>Public Interest Technology University Network</a> in 2019.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of “Technically Human,” I talk to Hana Schank, Director of Strategy for Public Interest Technology at New America. We discuss a new vision for technologists working in the public sector, we discuss the relationship between ethical technology and public interest technology, and Hana persuades me that the future of the tech workforce is in solving public problems.</p>
<p><a href='https://www.newamerica.org/our-people/hana-schank/'>Hana Schank </a>is the Director of Strategy for <a href='https://www.newamerica.org/pit/'>Public Interest Technology </a>at New America, where she works to develop the public interest technology field via research, storytelling and fostering connections. She founded and edits The Commons, a publication for people working in and around government innovation efforts.</p>
<p>Previously, as a part of the United States Digital Service, Schank was a director with the Department of Homeland Security, where she worked with TSA and Customs and Border Protection to improve the air travel experience. In the private sector, Schank founded and ran CollectiveUX, a user experience consultancy, for over a decade, working with startups, Fortune 500 companies, and governmental organizations to research and design human-centered products.</p>
<p>In addition to her research and design work, Schank is a frequent contributor to the <em>New York Times</em>, the <em>Washington Post</em>, the <em>Atlantic</em>, and the author of three works of nonfiction. She is a graduate of Northwestern University, and holds an MFA in nonfiction writing from Columbia University. Schank lives in Brooklyn, N.Y. with her husband and two children.</p>
<p>Cal Poly joined the <a href='https://www.newamerica.org/pit/university-network/'>Public Interest Technology University Network</a> in 2019.</p>
]]></content:encoded>
                                    
        <enclosure length="93866511" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/n6afiz/Hana_Schank_Podcast_mixdowna3f4x.mp3"/>
        <itunes:summary>In this episode of “Technically Human,” I talk to Hana Schank, Director of Strategy for Public Interest Technology at New America. We discuss a new vision for technologists working in the public sector, we discuss the relationship between ethical technology and public interest technology, and Hana persuades me that the future of the tech workforce is in solving public problems.

Hana Schank is the Director of Strategy for Public Interest Technology at New America, where she works to develop the public interest technology field via research, storytelling and fostering connections. She founded and edits The Commons, a publication for people working in and around government innovation efforts.

Cal Poly joined the Public Interest Technology University Network in 2019.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3910</itunes:duration>
        <itunes:season>3</itunes:season>
        <itunes:episode>20</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of “Technically Human,” I talk to Hana Schank, Director of Strategy for Public Interest Technology at New America. We discuss a new vision for technologists working in the public sector, we discuss the relationship between ethical technology and public interest technology, and Hana persuades me that the future of the tech workforce is in solving public problems. Hana Schank is the Director of Strategy for Public Interest Technology at New America, where she works to develop the public interest technology field via research, storytelling and fostering connections. She founded and edits The Commons, a publication for people working in and around government innovation efforts. Previously, as a part of the United States Digital Service, Schank was a director with the Department of Homeland Security, where she worked with TSA and Customs and Border Protection to improve the air travel experience. In the private sector, Schank founded and ran CollectiveUX, a user experience consultancy, for over a decade, working with startups, Fortune 500 companies, and governmental organizations to research and design human-centered products. In addition to her research and design work, Schank is a frequent contributor to the New York Times, the Washington Post, the Atlantic, and the author of three works of nonfiction. She is a graduate of Northwestern University, and holds an MFA in nonfiction writing from Columbia University. Schank lives in Brooklyn, N.Y. with her husband and two children. Cal Poly joined the Public Interest Technology University Network in 2019.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Ethics of the Algorithm: Digital innovation and humanistic computation with Dr. Todd Presner</title>
        <itunes:title>The Ethics of the Algorithm: Digital innovation and humanistic computation with Dr. Todd Presner</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-ethics-of-the-algorithm/</link>
                    <comments>https://dmdonig.podbean.com/e/the-ethics-of-the-algorithm/#comments</comments>        <pubDate>Fri, 02 Oct 2020 02:00:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/6c40a9da-dd89-3dfb-991b-9f0d4cc72efb</guid>
                                    <description><![CDATA[<p>In this episode of “Technically Human,” I sit down with Dr. Todd Presner to talk about ethics, algorithms, and the future of digital innovation. We discuss the need for technologists and humanists to work collaboratively together across disciplinary divides and specializations to solve complex problems, we discuss the consequences of automating the status quo, and we grapple with the ethical questions that algorithms evoke. How do we make algorithms accountable to the public? Just because we can automate something, should we? And how can we imagine differently, toward better possibilities, toward a world that we all want to live in, and in which we can all live generatively?</p>
<p>Professor Presner is the Chair of UCLA’s <a href='http://dh.ucla.edu/'>Digital Humanities Program</a> and the Ross Professor of Germanic Languages and Comparative Literature.</p>
<p>His work at the intersection of tech and ethics includes <a href='http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=13036'>Digital_Humanities</a> (published by MIT Press, 2012), co-authored with Anne Burdick, Johanna Drucker, Peter Lunenfeld, and Jeffrey Schnapp, which proposes a critical-theoretical exploration of the emerging field of digital humanities, and <a href='http://www.hup.harvard.edu/catalog.php?isbn=9780674725348'>HyperCities: Thick Mapping in the Digital Humanities</a> (Harvard University Press, 2014), with David Shepard and Yoh Kawano, which explores digital cultural mapping using the HyperCities project, awarded the “digital media and learning” prize by the MacArthur Foundation/HASTAC in 2008.</p>
<p>Since 2018, Dr. Presner has been the Associate Dean of Digital Innovation in the Division of Humanities and Adviser to the Vice-Chancellor of Research for Humanities, Arts, and Social Sciences research.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of “Technically Human,” I sit down with Dr. Todd Presner to talk about ethics, algorithms, and the future of digital innovation. We discuss the need for technologists and humanists to work collaboratively across disciplinary divides and specializations to solve complex problems, we discuss the consequences of automating the status quo, and we grapple with the ethical questions that algorithms evoke. How do we make algorithms accountable to the public? Just because we <em>can</em> automate something, <em>should </em>we? And how can we imagine differently, toward better possibilities, toward a world that we all want to live in, and in which we can all live generatively?</p>
<p>Professor Presner is the Chair of UCLA’s <a href='http://dh.ucla.edu/'>Digital Humanities Program</a> and the Ross Professor of Germanic Languages and Comparative Literature.</p>
<p>His work at the intersection of tech and ethics includes <a href='http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=13036'><em>Digital_Humanities</em></a> (published by MIT Press, 2012), co-authored with Anne Burdick, Johanna Drucker, Peter Lunenfeld, and Jeffrey Schnapp, which proposes a critical-theoretical exploration of the emerging field of digital humanities, and <a href='http://www.hup.harvard.edu/catalog.php?isbn=9780674725348'><em>HyperCities: Thick Mapping in the Digital Humanities</em></a> (Harvard University Press, 2014), with David Shepard and Yoh Kawano, which explores digital cultural mapping using the HyperCities project, awarded the “digital media and learning” prize by the MacArthur Foundation/HASTAC in 2008.</p>
<p>Since 2018, Dr. Presner has been the Associate Dean of Digital Innovation in the Division of Humanities and Adviser to the Vice-Chancellor of Research for Humanities, Arts, and Social Sciences research.</p>
]]></content:encoded>
                                    
        <enclosure length="81372841" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/7dvy6a/Todd_Presner_Podcast_mixdown8k8np.mp3"/>
        <itunes:summary>In this episode of “Technically Human,” I sit down with Dr. Todd Presner to talk about ethics, algorithms, and the future of digital innovation. We discuss the need for technologists and humanists to work collaboratively across disciplinary divides and specializations to solve complex problems, we discuss the consequences of automating the status quo, and we grapple with the ethical questions that algorithms evoke. How do we make algorithms accountable to the public? Just because we can automate something, should we? And how can we imagine differently, toward better possibilities, toward a world that we all want to live in, and in which we can all live generatively?

Professor Presner is the Chair of UCLA’s Digital Humanities Program and the Ross Professor of Germanic Languages and Comparative Literature.

His work at the intersection of tech and ethics includes Digital_Humanities (published by MIT Press, 2012), co-authored with Anne Burdick, Johanna Drucker, Peter Lunenfeld, and Jeffrey Schnapp, which proposes a critical-theoretical exploration of the emerging field of digital humanities, and HyperCities: Thick Mapping in the Digital Humanities (Harvard University Press, 2014), with David Shepard and Yoh Kawano, which explores digital cultural mapping using the HyperCities project, awarded the “digital media and learning” prize by the MacArthur Foundation/HASTAC in 2008.

Since 2018, Dr. Presner has been the Associate Dean of Digital Innovation in the Division of Humanities and Adviser to the Vice-Chancellor of Research for Humanities, Arts, and Social Sciences research.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3390</itunes:duration>
        <itunes:season>3</itunes:season>
        <itunes:episode>19</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of “Technically Human,” I sit down with Dr. Todd Presner to talk about ethics, algorithms, and the future of digital innovation. We discuss the need for technologists and humanists to work collaboratively across disciplinary divides and specializations to solve complex problems, we discuss the consequences of automating the status quo, and we grapple with the ethical questions that algorithms evoke. How do we make algorithms accountable to the public? Just because we can automate something, should we? And how can we imagine differently, toward better possibilities, toward a world that we all want to live in, and in which we can all live generatively? Professor Presner is the Chair of UCLA’s Digital Humanities Program and the Ross Professor of Germanic Languages and Comparative Literature. His work at the intersection of tech and ethics includes Digital_Humanities (published by MIT Press, 2012), co-authored with Anne Burdick, Johanna Drucker, Peter Lunenfeld, and Jeffrey Schnapp, which proposes a critical-theoretical exploration of the emerging field of digital humanities, and HyperCities: Thick Mapping in the Digital Humanities (Harvard University Press, 2014), with David Shepard and Yoh Kawano, which explores digital cultural mapping using the HyperCities project, awarded the “digital media and learning” prize by the MacArthur Foundation/HASTAC in 2008. Since 2018, Dr. Presner has been the Associate Dean of Digital Innovation in the Division of Humanities and Adviser to the Vice-Chancellor of Research for Humanities, Arts, and Social Sciences research.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Way Way Back Machine: A Dive into the Archive with Dr. Jason Lustig</title>
        <itunes:title>The Way Way Back Machine: A Dive into the Archive with Dr. Jason Lustig</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-way-way-back-machine-a-dive-into-the-archive-with-dr-jason-lustig/</link>
                    <comments>https://dmdonig.podbean.com/e/the-way-way-back-machine-a-dive-into-the-archive-with-dr-jason-lustig/#comments</comments>        <pubDate>Fri, 25 Sep 2020 17:07:08 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/0c6ccbf2-92a0-3847-9d65-241b76f22c29</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I talk to Dr. Jason Lustig about the concept of the archive, and how we might understand its history in a digital and virtual context.</p>
<p>We tend to believe that the internet stores all, but does it? What do we gain, and what do we lose, in an internet age that proposes to keep even the smallest details of our lives? Who owns our data? Do we have the right to be forgotten? And who gets to decide what information about ourselves lives on forever?</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I talk to Dr. Jason Lustig about the concept of the archive, and how we might understand its history in a digital and virtual context.</p>
<p>We tend to believe that the internet stores all, but does it? What do we gain, and what do we lose, in an internet age that proposes to keep even the smallest details of our lives? Who owns our data? Do we have the right to be forgotten? And who gets to decide what information about ourselves lives on forever?</p>
]]></content:encoded>
                                    
        <enclosure length="85807891" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/ka5q2j/Untitled_Session_4_mixdowna7i27.mp3"/>
        <itunes:summary>In this episode of "Technically Human," I talk to Dr. Jason Lustig about the concept of the archive, and how we might understand its history in a digital and virtual context. We tend to believe that the internet stores all, but does it? What do we gain, and what do we lose, in an internet age that proposes to keep even the smallest details of our lives? Who owns our data? Do we have the right to be forgotten? And who gets to decide what information about ourselves lives on forever?</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3574</itunes:duration>
        <itunes:season>3</itunes:season>
        <itunes:episode>18</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I talk to Dr. Jason Lustig about the concept of the archive, and how we might understand its history in a digital and virtual context.   We tend to believe that the internet stores all, but does it? What do we gain, and what do we lose, in an internet age that proposes to keep even the smallest details of our lives? Who owns our data? Do we have the right to be forgotten? And who gets to decide what information about ourselves lives on forever?</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>PODCAST TAKEOVER SERIES: Episode 3</title>
        <itunes:title>PODCAST TAKEOVER SERIES: Episode 3</itunes:title>
        <link>https://dmdonig.podbean.com/e/podcast-takeover-episode-3/</link>
                    <comments>https://dmdonig.podbean.com/e/podcast-takeover-episode-3/#comments</comments>        <pubDate>Fri, 18 Sep 2020 14:24:43 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/6e0f682d-3499-332f-9258-4f833b9be88a</guid>
                                    <description><![CDATA[<p>In the final episode of the Podcast Takeover Series, "School is Out," I get real about the move to online classes with Dr. Shira Lee Katz, Emily Bowden, and Erin Jeffs. In the first segment of the episode, we discuss how Coursera, a company that offers online education to distance learners, is leading the move in online education, and we talk about the advantages and disadvantages of virtual education.  </p>
<p>In the second segment of the episode, Erin and Emily discuss their experience of virtual classes at Cal Poly, and we talk about what we gain, and what we lose, when we can't meet for classes in person.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In the final episode of the Podcast Takeover Series, "School is Out," I get real about the move to online classes with Dr. Shira Lee Katz, Emily Bowden, and Erin Jeffs. In the first segment of the episode, we discuss how Coursera, a company that offers online education to distance learners, is leading the move in online education, and we talk about the advantages and disadvantages of virtual education.  </p>
<p>In the second segment of the episode, Erin and Emily discuss their experience of virtual classes at Cal Poly, and we talk about what we gain, and what we lose, when we can't meet for classes in person.</p>
]]></content:encoded>
                                    
        <enclosure length="87375667" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/4nvjgt/Shira_Sept_18_mixdown8hj1p.mp3"/>
        <itunes:summary>In the final episode of the Podcast Takeover Series, "School is Out," I get real about the move to online classes with Dr. Shira Lee Katz, Emily Bowden, and Erin Jeffs. In the first segment of the episode, we discuss how Coursera, a company that offers online education to distance learners, is leading the move in online education, and we talk about the advantages and disadvantages of virtual education.  

In the second segment of the episode, Erin and Emily discuss their experience of virtual classes at Cal Poly, and we talk about what we gain, and what we lose, when we can't meet for classes in person.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3640</itunes:duration>
        <itunes:season>2</itunes:season>
        <itunes:episode>17</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In the final episode of the Podcast Takeover Series, "School is Out," I get real about the move to online classes with Dr. Shira Lee Katz, Emily Bowden, and Erin Jeffs. In the first segment of the episode, we discuss how Coursera, a company that offers online education to distance learners, is leading the move in online education, and we talk about the advantages and disadvantages of virtual education.   In the second segment of the episode, Erin and Emily discuss their experience of virtual classes at Cal Poly, and we talk about what we gain, and what we lose, when we can't meet for classes in person.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>PODCAST TAKEOVER SERIES: Episode 2</title>
        <itunes:title>PODCAST TAKEOVER SERIES: Episode 2</itunes:title>
        <link>https://dmdonig.podbean.com/e/podcast-takeover-series-episode-2/</link>
                    <comments>https://dmdonig.podbean.com/e/podcast-takeover-series-episode-2/#comments</comments>        <pubDate>Sun, 13 Sep 2020 09:49:22 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/edb5a7d1-5df8-3def-9de7-bc1e961a651b</guid>
                                    <description><![CDATA[<p>It's a podcast takeover! In this series of episodes, I give my mic over to the next generation of humanists and technologists at Cal Poly who represent the future of ethical technology. Over the next hour, we will hear from the Summer 2020 “Technically Human” class. They have worked together to present to you their thinking about some of the most important and urgent issues in ethical technology.</p>
<p>In our second episode of the podcast takeover series, we are delving into the depths of technological futurity. Over the next hour, we’ll imagine possible utopian and dystopian technological futures, and explore how tech creates, and warns about, the possibilities that it engineers. We will take a trip into the intergalactic realm to think about how science fiction depicts the interaction between intelligent life elsewhere in the galaxy and human…well, we’ll call it intelligence for now. And we will think about how the next generation of technologists are understanding the futures they will help to build.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>It's a podcast takeover! In this series of episodes, I give my mic over to the next generation of humanists and technologists at Cal Poly who represent the future of ethical technology. Over the next hour, we will hear from the Summer 2020 “Technically Human” class. They have worked together to present to you their thinking about some of the most important and urgent issues in ethical technology.</p>
<p>In our second episode of the podcast takeover series, we are delving into the depths of technological futurity. Over the next hour, we’ll imagine possible utopian and dystopian technological futures, and explore how tech creates, and warns about, the possibilities that it engineers. We will take a trip into the intergalactic realm to think about how science fiction depicts the interaction between intelligent life elsewhere in the galaxy and human…well, we’ll call it intelligence for now. And we will think about how the next generation of technologists are understanding the futures they will help to build.</p>
]]></content:encoded>
                                    
        <enclosure length="77607393" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/en2chg/Podcast_Takeover_Series_2_FINAL6yzxb.mp3"/>
        <itunes:summary>It's a podcast takeover! In this series of episodes, I give my mic over to the next generation of humanists and technologists at Cal Poly who represent the future of ethical technology. Over the next hour, we will hear from the Summer 2020 “Technically Human” class. They have worked together to present to you their thinking about some of the most important and urgent issues in ethical technology.

In our second episode of the podcast takeover series, we are delving into the depths of technological futurity. Over the next hour, we’ll imagine possible utopian and dystopian technological futures, and explore how tech creates, and warns about, the possibilities that it engineers. We will take a trip into the intergalactic realm to think about how science fiction depicts the interaction between intelligent life elsewhere in the galaxy and human…well, we’ll call it intelligence for now. And we will think about how the next generation of technologists are understanding the futures they will help to build.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3233</itunes:duration>
        <itunes:season>2</itunes:season>
        <itunes:episode>16</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>It's a podcast takeover! In this series of episodes, I give my mic over to the next generation of humanists and technologists at Cal Poly who represent the future of ethical technology. Over the next hour, we will hear from the Summer 2020 “Technically Human” class. They have worked together to present to you their thinking about some of the most important and urgent issues in ethical technology. In our second episode of the podcast takeover series, we are delving into the depths of technological futurity. Over the next hour, we’ll imagine possible utopian and dystopian technological futures, and explore how tech creates, and warns about, the possibilities that it engineers. We will take a trip into the intergalactic realm to think about how science fiction depicts the interaction between intelligent life elsewhere in the galaxy and human…well, we’ll call it intelligence for now. And we will think about how the next generation of technologists are understanding the futures they will help to build.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>PODCAST TAKEOVER SERIES: Episode 1</title>
        <itunes:title>PODCAST TAKEOVER SERIES: Episode 1</itunes:title>
        <link>https://dmdonig.podbean.com/e/podcast-takeover-series-episode-1/</link>
                    <comments>https://dmdonig.podbean.com/e/podcast-takeover-series-episode-1/#comments</comments>        <pubDate>Fri, 28 Aug 2020 13:25:19 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/cd5bd58f-bdb3-3f95-a78d-f1b69fbdd8b0</guid>
                                    <description><![CDATA[<p>It's a podcast takeover! In this series of episodes, I give my mic over to the next generation of humanists and technologists at Cal Poly who represent the future of ethical technology. Over the next hour, we will hear from the Summer 2020 “Technically Human” class. They have worked together to present to you their thinking about some of the most important and urgent issues in ethical technology.</p>
<p>In this week's episode, we’ll hear them talk about the history and future of AI, the dangers and promises of Transplant Tech, and how we might understand the ethical concept of the “good” and its relationship to technology. They’ll discuss their vision for ethical technology, the history of these technological developments, and their concerns regarding the present and future of technological innovations.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>It's a podcast takeover! In this series of episodes, I give my mic over to the next generation of humanists and technologists at Cal Poly who represent the future of ethical technology. Over the next hour, we will hear from the Summer 2020 “Technically Human” class. They have worked together to present to you their thinking about some of the most important and urgent issues in ethical technology.</p>
<p>In this week's episode, we’ll hear them talk about the history and future of AI, the dangers and promises of Transplant Tech, and how we might understand the ethical concept of the “good” and its relationship to technology. They’ll discuss their vision for ethical technology, the history of these technological developments, and their concerns regarding the present and future of technological innovations.</p>
]]></content:encoded>
                                    
        <enclosure length="78743646" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/5a5qgr/Takeover_Series_Episode_1b5ro5.mp3"/>
        <itunes:summary>It's a podcast takeover! In this series of episodes, I give my mic over to the next generation of humanists and technologists at Cal Poly who represent the future of ethical technology. Over the next hour, we will hear from the Summer 2020 “Technically Human” class. They have worked together to present to you their thinking about some of the most important and urgent issues in ethical technology.

In this week's episode, we’ll hear them talk about the history and future of AI, the dangers and promises of Transplant Tech, and how we might understand the ethical concept of the “good” and its relationship to technology. They’ll discuss their vision for ethical technology, the history of these technological developments, and their concerns regarding the present and future of technological innovations.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3280</itunes:duration>
        <itunes:season>2</itunes:season>
        <itunes:episode>15</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>It's a podcast takeover! In this series of episodes, I give my mic over to the next generation of humanists and technologists at Cal Poly who represent the future of ethical technology. Over the next hour, we will hear from the Summer 2020 “Technically Human” class. They have worked together to present to you their thinking about some of the most important and urgent issues in ethical technology. In this week's episode, we’ll hear them talk about the history and future of AI, the dangers and promises of Transplant Tech, and how we might understand the ethical concept of the “good” and its relationship to technology. They’ll discuss their vision for ethical technology, the history of these technological developments, and their concerns regarding the present and future of technological innovations.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Tech Stands Up: Talking tech leadership with Dex Hunter-Torricke</title>
        <itunes:title>Tech Stands Up: Talking tech leadership with Dex Hunter-Torricke</itunes:title>
        <link>https://dmdonig.podbean.com/e/tech-stands-up-talking-tech-leadership-with-dex-hunter-torricke/</link>
                    <comments>https://dmdonig.podbean.com/e/tech-stands-up-talking-tech-leadership-with-dex-hunter-torricke/#comments</comments>        <pubDate>Fri, 21 Aug 2020 10:18:15 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/c0026bb9-8fc3-38d6-9c56-4c773189d606</guid>
                                    <description><![CDATA[<p>In this episode of “Technically Human,” I sit down with Dex Hunter-Torricke. We talk about Dex’s move from working at the United Nations to working with some of the biggest names in tech, the global implications of social media-driven connectivity, and whether we should really “move fast and break things.”</p>
<p>Dex Hunter-Torricke is head of communications for the Oversight Board, the new independent body that will be making binding decisions on Facebook and Instagram’s most challenging content issues. During his career, Dex has served in a string of high-profile roles across the tech and policy worlds, including as head of communications for SpaceX, head of executive communications for Facebook – including four years as speechwriter for Mark Zuckerberg – and as Google’s first executive speechwriter, where he worked with Eric Schmidt and Larry Page. Before that, he was a speechwriter for the office of UN Secretary-General Ban Ki-moon. </p>
<p>In 2016, a week after the US election, Dex left his job at SpaceX to spend the next 18 months working with leaders on social and political causes, including advising political leaders and candidates in Europe and the US. Dex is a New York Times-bestselling ghostwriter and frequent public speaker on technology issues.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of “Technically Human,” I sit down with Dex Hunter-Torricke. We talk about Dex’s move from working at the United Nations to working with some of the biggest names in tech, the global implications of social media-driven connectivity, and whether we should really “move fast and break things.”</p>
<p>Dex Hunter-Torricke is head of communications for the Oversight Board, the new independent body that will be making binding decisions on Facebook and Instagram’s most challenging content issues. During his career, Dex has served in a string of high-profile roles across the tech and policy worlds, including as head of communications for SpaceX, head of executive communications for Facebook – including four years as speechwriter for Mark Zuckerberg – and as Google’s first executive speechwriter, where he worked with Eric Schmidt and Larry Page. Before that, he was a speechwriter for the office of UN Secretary-General Ban Ki-moon. </p>
<p>In 2016, a week after the US election, Dex left his job at SpaceX to spend the next 18 months working with leaders on social and political causes, including advising political leaders and candidates in Europe and the US. Dex is a New York Times-bestselling ghostwriter and frequent public speaker on technology issues.</p>
]]></content:encoded>
                                    
        <enclosure length="79789601" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/kbqy34/Dex_Torricke_Hunter_Podcast_FInal8xn53.mp3"/>
        <itunes:summary>In this episode of “Technically Human,” I sit down with Dex Hunter-Torricke. We talk about Dex’s move from working at the United Nations to working with some of the biggest names in tech, the global implications of social media-driven connectivity, and whether we should really “move fast and break things.”

Dex Hunter-Torricke is head of communications for the Oversight Board, the new independent body that will be making binding decisions on Facebook and Instagram’s most challenging content issues. During his career, Dex has served in a string of high-profile roles across the tech and policy worlds, including as head of communications for SpaceX, head of executive communications for Facebook – including four years as speechwriter for Mark Zuckerberg – and as Google’s first executive speechwriter, where he worked with Eric Schmidt and Larry Page. Before that, he was a speechwriter for the office of UN Secretary-General Ban Ki-moon. 

In 2016, a week after the US election, Dex left his job at SpaceX to spend the next 18 months working with leaders on social and political causes, including advising political leaders and candidates in Europe and the US. Dex is a New York Times-bestselling ghostwriter and frequent public speaker on technology issues.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3324</itunes:duration>
        <itunes:season>2</itunes:season>
        <itunes:episode>14</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of “Technically Human,” I sit down with Dex Hunter-Torricke. We talk about Dex’s move from working at the United Nations to working with some of the biggest names in tech, the global implications of social media-driven connectivity, and whether we should really “move fast and break things.” Dex Hunter-Torricke is head of communications for the Oversight Board, the new independent body that will be making binding decisions on Facebook and Instagram’s most challenging content issues. During his career, Dex has served in a string of high-profile roles across the tech and policy worlds, including as head of communications for SpaceX, head of executive communications for Facebook – including four years as speechwriter for Mark Zuckerberg – and as Google’s first executive speechwriter, where he worked with Eric Schmidt and Larry Page. Before that, he was a speechwriter for the office of UN Secretary-General Ban Ki-moon. In 2016, a week after the US election, Dex left his job at SpaceX to spend the next 18 months working with leaders on social and political causes, including advising political leaders and candidates in Europe and the US. Dex is a New York Times-bestselling ghostwriter and frequent public speaker on technology issues.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Biotechnically Human: George Estreich on disability, biotechnology, and how technologies are defining who counts as "human"</title>
        <itunes:title>Biotechnically Human: George Estreich on disability, biotechnology, and how technologies are defining who counts as "human"</itunes:title>
        <link>https://dmdonig.podbean.com/e/biotechnically-human-george-estreich-on-disability-biotechnology-and-how-technologies-define-who-counts-as-human/</link>
                    <comments>https://dmdonig.podbean.com/e/biotechnically-human-george-estreich-on-disability-biotechnology-and-how-technologies-define-who-counts-as-human/#comments</comments>        <pubDate>Fri, 14 Aug 2020 10:00:16 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/f27b6807-f347-36a8-8fe8-d84c37360a95</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I speak to author George Estreich about the intersection of biotechnology and disability. We discuss the ways in which biotechnology is changing the nature of "the human," the ways in which technology defines and determines our understanding of disability, and George talks about what it means to write about disability and technology at the intersection of the personal and the political.</p>
<p>George Estreich's publications include a book of poems, Textbook Illustrations of the Human Body, which won the Gorsline Prize from Cloudbank Books; the Oregon Book Award-winning memoir The Shape of the Eye; and Fables and Futures: Biotechnology, Disability, and the Stories We Tell Ourselves, which NPR's Science Friday named a Best Science Book of 2019. Estreich has also published prose in The New York Times, Salon, The American Medical Association Journal of Ethics, Tin House, Essay Daily, and McSweeney’s Internet Tendency. He lives in Corvallis, Oregon, with his family, where he teaches in Oregon State’s MFA program in Creative Nonfiction. You can read more about his work at <a href='http://georgeestreich.com/'>georgeestreich.com</a>.</p>
<p>This episode of "Technically Human" was produced by Emily Bowden and Erin Jeffs.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I speak to author George Estreich about the intersection of biotechnology and disability. We discuss the ways in which biotechnology is changing the nature of "the human," the ways in which technology defines and determines our understanding of disability, and George talks about what it means to write about disability and technology at the intersection of the personal and the political.</p>
<p>George Estreich's publications include a book of poems, <em>Textbook Illustrations of the Human Body</em>, which won the Gorsline Prize from Cloudbank Books; the Oregon Book Award-winning memoir <em>The Shape of the Eye</em>; and <em>Fables and Futures: Biotechnology, Disability, and the Stories We Tell Ourselves</em>, which NPR's <em>Science Friday</em> named a Best Science Book of 2019. Estreich has also published prose in <em>The New York Times, Salon, The American Medical Association Journal of Ethics, Tin House, Essay Daily, </em>and <em>McSweeney’s Internet Tendency</em>. He lives in Corvallis, Oregon, with his family, where he teaches in Oregon State’s MFA program in Creative Nonfiction. You can read more about his work at <a href='http://georgeestreich.com/'>georgeestreich.com</a>.</p>
<p>This episode of "Technically Human" was produced by Emily Bowden and Erin Jeffs.</p>
]]></content:encoded>
                                    
        <enclosure length="94526266" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/59sawe/George_Estreich_Podcast_2_7-297oqgm.mp3"/>
        <itunes:summary>In this episode of "Technically Human," I speak to author George Estreich about the intersection of biotechnology and disability. We discuss the ways in which biotechnology is changing the nature of "the human," the ways in which technology defines and determines our understanding of disability, and George talks about what it means to write about disability and technology at the intersection of the personal and the political.

George Estreich's publications include a book of poems, Textbook Illustrations of the Human Body, which won the Gorsline Prize from Cloudbank Books; the Oregon Book Award-winning memoir The Shape of the Eye; and Fables and Futures: Biotechnology, Disability, and the Stories We Tell Ourselves, which NPR's Science Friday named a Best Science Book of 2019. Estreich has also published prose in The New York Times, Salon, The American Medical Association Journal of Ethics, Tin House, Essay Daily, and McSweeney’s Internet Tendency. He lives in Corvallis, Oregon, with his family, where he teaches in Oregon State’s MFA program in Creative Nonfiction. You can read more about his work at georgeestreich.com.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3936</itunes:duration>
        <itunes:season>2</itunes:season>
        <itunes:episode>13</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I speak to author George Estreich about the intersection of biotechnology and disability. We discuss the ways in which biotechnology is changing the nature of "the human," the ways in which technology defines and determines our understanding of disability, and George talks about what it means to write about disability and technology at the intersection of the personal and the political. George Estreich's publications include a book of poems, Textbook Illustrations of the Human Body, which won the Gorsline Prize from Cloudbank Books; the Oregon Book Award-winning memoir The Shape of the Eye; and Fables and Futures: Biotechnology, Disability, and the Stories We Tell Ourselves, which NPR's Science Friday named a Best Science Book of 2019. Estreich has also published prose in The New York Times, Salon, The American Medical Association Journal of Ethics, Tin House, Essay Daily, and McSweeney’s Internet Tendency. He lives in Corvallis, Oregon, with his family, where he teaches in Oregon State’s MFA program in Creative Nonfiction. You can read more about his work at georgeestreich.com. This episode of "Technically Human" was produced by Emily Bowden and Erin Jeffs.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Linking In: HireClub Founder Ketan Anjaria breaks down getting hired in the tech industry</title>
        <itunes:title>Linking In: HireClub Founder Ketan Anjaria breaks down getting hired in the tech industry</itunes:title>
        <link>https://dmdonig.podbean.com/e/linking-inhireclub-founder-ketan-anjaria-breaks-down-getting-hired-in-the-tech-industry/</link>
                    <comments>https://dmdonig.podbean.com/e/linking-inhireclub-founder-ketan-anjaria-breaks-down-getting-hired-in-the-tech-industry/#comments</comments>        <pubDate>Fri, 07 Aug 2020 10:19:52 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/4fec02a9-ee55-3820-87e6-e7ac06836b27</guid>
                                    <description><![CDATA[<p>In this episode of Technically Human, I talk to Ketan Anjaria, the founder of HireClub, a social network that enlists media platforms and leverages network connections to help job seekers find jobs. Anchored in SF, in the heart of the tech industry, Ketan outlines the logic behind hiring, discusses the gaps between academic and practical preparation, and explains how the tech industry's hiring practices shape and determine the outcomes of the tech products that a global public interacts with on a daily basis.

Learn more about Ketan's work and HireClub: <a href='https://hireclub.com/'>https://hireclub.com/</a></p>
<p>This week's episode was produced and edited by Emily Bowden and Erin Jeffs.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of Technically Human, I talk to Ketan Anjaria, the founder of HireClub, a social network that enlists media platforms and leverages network connections to help job seekers find jobs. Anchored in SF, in the heart of the tech industry, Ketan outlines the logic behind hiring, discusses the gaps between academic and practical preparation, and explains how the tech industry's hiring practices shape and determine the outcomes of the tech products that a global public interacts with on a daily basis.<br>
<br>
Learn more about Ketan's work and HireClub: <a href='https://hireclub.com/'>https://hireclub.com/</a></p>
<p>This week's episode was produced and edited by Emily Bowden and Erin Jeffs.</p>
]]></content:encoded>
                                    
        <enclosure length="103211461" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/gs5min/KetanAnjiara_01_mixdown.mp3"/>
        <itunes:summary>In this episode of Technically Human, I talk to Ketan Anjaria, the founder of HireClub, a social network that enlists media platforms and leverages network connections to help job seekers find jobs. Anchored in SF, in the heart of the tech industry, Ketan outlines the logic behind hiring, discusses the gaps between academic and practical preparation, and explains how the tech industry's hiring practices shape and determine the outcomes of the tech products that a global public interacts with on a daily basis.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4299</itunes:duration>
        <itunes:season>2</itunes:season>
        <itunes:episode>12</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of Technically Human, I talk to Ketan Anjaria, the founder of HireClub, a social network that enlists media platforms and leverages network connections to help job seekers find jobs. Anchored in SF, in the heart of the tech industry, Ketan outlines the logic behind hiring, discusses the gaps between academic and practical preparation, and explains how the tech industry's hiring practices shape and determine the outcomes of the tech products that a global public interacts with on a daily basis. Learn more about Ketan's work and HireClub: https://hireclub.com/ This week's episode was produced and edited by Emily Bowden and Erin Jeffs.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>California Dreaming: Silicon Valley's Moral Vision</title>
        <itunes:title>California Dreaming: Silicon Valley's Moral Vision</itunes:title>
        <link>https://dmdonig.podbean.com/e/california-dreaming-silicon-valleys-moral-vision/</link>
                    <comments>https://dmdonig.podbean.com/e/california-dreaming-silicon-valleys-moral-vision/#comments</comments>        <pubDate>Thu, 30 Jul 2020 23:18:35 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/bbccd10f-c4e8-3805-87bf-ac92cedf3ec2</guid>
                                    <description><![CDATA[<p>Welcome back to Season 2 of the "Technically Human" podcast! In the first episode of this season, I talk to Dr. Morgan Ames about the concept of utopia and Silicon Valley's moral vision. We discuss the ideas and frictions at the heart of Silicon Valley's growth, we talk about the inequalities at the heart of technological culture, and Morgan describes how Silicon Valley became THE Valley.</p>
<p>Dr. Ames is an assistant adjunct professor in the <a href='http://ischool.berkeley.edu/'>School of Information</a> at the University of California, Berkeley, where she teaches in <a href='http://datascience.berkeley.edu/'>Data Science</a> and administers the <a href='http://cstms.berkeley.edu/teaching/de-in-sts/'>Designated Emphasis in Science and Technology Studies</a> in affiliation with the <a href='http://cstms.berkeley.edu/'>Center for Science, Technology, Medicine and Society</a>. She is also affiliated with the <a href='http://afog.berkeley.edu/'>Algorithmic Fairness and Opacity Working Group</a>, the <a href='http://ctsp.berkeley.edu/'>Center for Science, Technology, Society and Policy</a>, and the <a href='http://bids.berkeley.edu/'>Berkeley Institute of Data Science</a>. Her research has been funded by the National Science Foundation (NSF), Intel, and other organizations, and she has been invited to <a href='http://morganya.org/cv.morganya.org'>present her work</a> at conferences around the world, including South by Southwest (SXSW).</p>
<p><a href='http://morganya.org/research.html'>Her next project</a> explores the role that utopianism plays in discourses around childhood, education, and 'development' in two geographically overlapping but culturally divided worlds: developer culture of Silicon Valley and the working-class and immigrant communities in the San Francisco Bay Area. </p>
<p>This week's episode was produced and edited by Emily Bowden and Erin Jeffs.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>Welcome back to Season 2 of the "Technically Human" podcast! In the first episode of this season, I talk to Dr. Morgan Ames about the concept of utopia and Silicon Valley's moral vision. We discuss the ideas and frictions at the heart of Silicon Valley's growth, we talk about the inequalities at the heart of technological culture, and Morgan describes how Silicon Valley became THE Valley.</p>
<p>Dr. Ames is an assistant adjunct professor in the <a href='http://ischool.berkeley.edu/'>School of Information</a> at the University of California, Berkeley, where she teaches in <a href='http://datascience.berkeley.edu/'>Data Science</a> and administers the <a href='http://cstms.berkeley.edu/teaching/de-in-sts/'>Designated Emphasis in Science and Technology Studies</a> in affiliation with the <a href='http://cstms.berkeley.edu/'>Center for Science, Technology, Medicine and Society</a>. She is also affiliated with the <a href='http://afog.berkeley.edu/'>Algorithmic Fairness and Opacity Working Group</a>, the <a href='http://ctsp.berkeley.edu/'>Center for Science, Technology, Society and Policy</a>, and the <a href='http://bids.berkeley.edu/'>Berkeley Institute of Data Science</a>. Her research has been funded by the National Science Foundation (NSF), Intel, and other organizations, and she has been invited to <a href='http://morganya.org/cv.morganya.org'>present her work</a> at conferences around the world, including South by Southwest (SXSW).</p>
<p> </p>
<p><a href='http://morganya.org/research.html'>Her next project</a> explores the role that utopianism plays in discourses around childhood, education, and 'development' in two geographically overlapping but culturally divided worlds: developer culture of Silicon Valley and the working-class and immigrant communities in the San Francisco Bay Area. </p>
<p> </p>
<p>This week's episode was produced and edited by Emily Bowden and Erin Jeffs.</p>
]]></content:encoded>
                                    
        <enclosure length="85717831" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/jzd64y/morgan_ames_podcast_final_mixdownaws50.mp3"/>
        <itunes:summary>In this episode of "Technically Human," I talk to Dr. Morgan Ames about the concept of utopia and Silicon Valley's moral vision. We discuss the ideas and frictions at the heart of Silicon Valley's growth, we talk about the inequalities at the heart of technological culture, and Morgan describes how Silicon Valley became THE Valley.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3570</itunes:duration>
        <itunes:season>2</itunes:season>
        <itunes:episode>11</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>Welcome back to Season 2 of the "Technically Human" podcast! In the first episode of this season, I talk to Dr. Morgan Ames about the concept of utopia and Silicon Valley's moral vision. We discuss the ideas and frictions at the heart of Silicon Valley's growth, we talk about the inequalities at the heart of technological culture, and Morgan describes how Silicon Valley became THE Valley.   Dr. Ames is an assistant adjunct professor in the School of Information at the University of California, Berkeley, where she teaches in Data Science and administers the Designated Emphasis in Science and Technology Studies in affiliation with the Center for Science, Technology, Medicine and Society. She is also affiliated with the Algorithmic Fairness and Opacity Working Group, the Center for Science, Technology, Society and Policy, and the Berkeley Institute of Data Science. Her research has been funded by the National Science Foundation (NSF), Intel, and other organizations, and she has been invited to present her work at conferences around the world, including South by Southwest (SXSW).   Her next project explores the role that utopianism plays in discourses around childhood, education, and 'development' in two geographically overlapping but culturally divided worlds: developer culture of Silicon Valley and the working-class and immigrant communities in the San Francisco Bay Area.    This week's episode was produced and edited by Emily Bowden and Erin Jeffs.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Next Generation of Technologists: a roundtable with the future of ethical tech</title>
        <itunes:title>The Next Generation of Technologists: a roundtable with the future of ethical tech</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-next-generation-of-technologists-a-roundtable-with-the-future-of-ethical-tech/</link>
                    <comments>https://dmdonig.podbean.com/e/the-next-generation-of-technologists-a-roundtable-with-the-future-of-ethical-tech/#comments</comments>        <pubDate>Sat, 20 Jun 2020 01:56:00 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/fb4202ff-523e-5f66-a422-286f1441199e</guid>
                                    <description><![CDATA[<p>In this episode, I speak with three Cal Poly undergraduates from my class on ethical tech. We discuss their vision for ethical technology, we talk about their concerns about the present and future of human values and tech, and they tell me about their quarter of distance learning, and what it has meant for them to live in the time of COVID.</p>
<p>Erin Jeffs, Nick Bell, and Geoff Sanhueza are undergraduate students at Cal Poly, working in different majors across campus, from architecture to computer science to animal science. We spent the last quarter getting to know each other in my class on ethical technology, where we thought together about how we might envision a more ethical, more equitable future of technological production and ideation.</p>
<p>They join me in this episode to share their insights about, and their vision for, ethical technology as they navigate the move from student to practitioner.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I speak with three Cal Poly undergraduates from my class on ethical tech. We discuss their vision for ethical technology, we talk about their concerns about the present and future of human values and tech, and they tell me about their quarter of distance learning, and what it has meant for them to live in the time of COVID.</p>
<p>Erin Jeffs, Nick Bell, and Geoff Sanhueza are undergraduate students at Cal Poly, working in different majors across campus, from architecture to computer science to animal science. We spent the last quarter getting to know each other in my class on ethical technology, where we thought together about how we might envision a more ethical, more equitable future of technological production and ideation.</p>
<p>They join me in this episode to share their insights about, and their vision for, ethical technology as they navigate the move from student to practitioner.</p>
]]></content:encoded>
                                    
        <enclosure length="88936535" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/2tdbve/Erin_Geoff_Nick_Podcast_mixdown_9zazq.mp3"/>
        <itunes:summary>In this episode, I speak with three Cal Poly undergraduates from my class on ethical tech. We discuss their vision for ethical technology, we talk about their concerns about the present and future of human values and tech, and they tell me about their quarter of distance learning, and what it has meant for them to live in the time of COVID.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3705</itunes:duration>
        <itunes:season>1</itunes:season>
        <itunes:episode>10</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I speak with three Cal Poly undergraduates from my class on ethical tech. We discuss their vision for ethical technology, we talk about their concerns about the present and future of human values and tech, and they tell me about their quarter of distance learning, and what it has meant for them to live in the time of COVID. Erin Jeffs, Nick Bell, and Geoff Sanhueza are undergraduate students at Cal Poly, working in different majors across campus, from architecture to computer science to animal science. We spent the last quarter getting to know each other in my class on ethical technology, where we thought together about how we might envision a more ethical, more equitable future of technological production and ideation. They join me in this episode to share their insights about, and their vision for, ethical technology as they navigate the move from student to practitioner.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Science as a Human Endeavor: Carl Zimmer explains the meaning of life, and what it means to write about it</title>
        <itunes:title>Science as a Human Endeavor: Carl Zimmer explains the meaning of life, and what it means to write about it</itunes:title>
        <link>https://dmdonig.podbean.com/e/science-as-a-human-endeavor-carl-zimmer-explains-the-meaning-of-life-and-what-it-means-to-write-about-it/</link>
                    <comments>https://dmdonig.podbean.com/e/science-as-a-human-endeavor-carl-zimmer-explains-the-meaning-of-life-and-what-it-means-to-write-about-it/#comments</comments>        <pubDate>Thu, 11 Jun 2020 23:19:38 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/03fe3007-994f-56fa-b6ee-30eea233864b</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I speak with Carl Zimmer, who reports from the frontiers of biology, where scientists are expanding our understanding of life.</p>
<p>Drawing from his experience hosting the popular podcast "What is Life?" Carl explains the meaning of life to me, and he talks about what it means to write about it. We talk about the challenges of writing and reading about science, and how we should read articles about the most urgent scientific concern of our moment: Coronavirus.</p>
<p>Zimmer is a popular speaker at universities, medical schools, museums, and festivals, and he teaches workshops and seminars at Yale. His column Matter appears weekly in The New York Times, and he is the author of thirteen books about science, including, most recently, She Has Her Mother’s Laugh: The Power, Perversions, and Potential of Heredity.</p>
<p>He is, to his knowledge, the only writer after whom a species of tapeworm has been named.</p>
<p> </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I speak with Carl Zimmer, who reports from the frontiers of biology, where scientists are expanding our understanding of life.</p>
<p>Drawing from his experience hosting the popular podcast "What is Life?" Carl explains the meaning of life to me, and he talks about what it means to write about it. We talk about the challenges of writing and reading about science, and how we should read articles about the most urgent scientific concern of our moment: Coronavirus.</p>
<p>Zimmer is a popular speaker at universities, medical schools, museums, and festivals, and he teaches workshops and seminars at Yale. His column <em>Matter</em> appears weekly in <em>The New York Times</em>, and he is the author of thirteen books about science, including, most recently, <em>She Has Her Mother’s Laugh: The Power, Perversions, and Potential of Heredity</em>.</p>
<p>He is, to his knowledge, the only writer after whom a species of tapeworm has been named.</p>
<p> </p>
]]></content:encoded>
                                    
        <enclosure length="68570109" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/hp41ve/Carl_Zimmer_Podcast_2_b7x41.mp3"/>
        <itunes:summary>In this episode of "Technically Human," I speak with Carl Zimmer, who reports from the frontiers of biology, where scientists are expanding our understanding of life.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>2856</itunes:duration>
        <itunes:season>1</itunes:season>
        <itunes:episode>9</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I speak with Carl Zimmer, who reports from the frontiers of biology, where scientists are expanding our understanding of life. Drawing from his experience hosting the popular podcast "What is Life?" Carl explains the meaning of life to me, and he talks about what it means to write about it. We talk about the challenges of writing and reading about science, and how we should read articles about the most urgent scientific concern of our moment: Coronavirus. Zimmer is a popular speaker at universities, medical schools, museums, and festivals, and he teaches workshops and seminars at Yale. His column Matter appears weekly in The New York Times, and he is the author of thirteen books about science, including, most recently, She Has Her Mother’s Laugh: The Power, Perversions, and Potential of Heredity. He is, to his knowledge, the only writer after whom a species of tapeworm has been named.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Making up our minds: how AI is rewiring our brains with Professor De Kai</title>
        <itunes:title>Making up our minds: how AI is rewiring our brains with Professor De Kai</itunes:title>
        <link>https://dmdonig.podbean.com/e/making-up-our-minds-how-ai-is-rewiring-our-brains-with-professor-de-kai/</link>
                    <comments>https://dmdonig.podbean.com/e/making-up-our-minds-how-ai-is-rewiring-our-brains-with-professor-de-kai/#comments</comments>        <pubDate>Fri, 05 Jun 2020 09:43:36 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/5bfcee0b-1baf-5093-972e-8e5cf12eae76</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I speak to Professor De Kai at Hong Kong University. De Kai is one of eight members of Google's AI Ethics Council and is listed as one of Hong Kong's 100 most influential figures. We debate whether laptops and lapdogs have souls, De Kai tells us why President Obama is retweeting his recent work on mask simulations in the context of the coronavirus epidemic, and we discuss the possibility that an AI encoded with human biases will drive extremism to the point of civilization's collapse.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I speak to Professor De Kai at Hong Kong University. De Kai is one of eight members of Google's AI Ethics Council and is listed as one of Hong Kong's 100 most influential figures. We debate whether laptops and lapdogs have souls, De Kai tells us why President Obama is retweeting his recent work on mask simulations in the context of the coronavirus epidemic, and we discuss the possibility that an AI encoded with human biases will drive extremism to the point of civilization's collapse.</p>
]]></content:encoded>
                                    
        <enclosure length="81841423" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/2f3c6g/De_Kai_FINAL_JUNE_5_mixdown_ail3f.mp3"/>
        <itunes:summary><![CDATA[In this episode of "Technically Human," I speak to Professor De Kai at Hong Kong University. De Kai is one of eight members of Google's AI Ethics Council and is listed as one of Hong Kong's 100 most influential figures. We debate whether laptops and lapdogs have souls, De Kai tells us why President Obama is retweeting his recent work on mask simulations in the context of the coronavirus epidemic, and we discuss the possibility that an AI encoded with human biases will drive extremism to the point of civilization's collapse.]]></itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3408</itunes:duration>
        <itunes:season>1</itunes:season>
        <itunes:episode>8</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I speak to Professor De Kai at Hong Kong University. De Kai is one of eight members of Google's AI Ethics Council and is listed as one of Hong Kong's 100 most influential figures. We debate whether laptops and lapdogs have souls, De Kai tells us why President Obama is retweeting his recent work on mask simulations in the context of the coronavirus epidemic, and we discuss the possibility that an AI encoded with human biases will drive extremism to the point of civilization's collapse.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Meaning of Life: Professor Arthur Caplan Discusses Medical Tech and the Rise of Bioethics </title>
        <itunes:title>The Meaning of Life: Professor Arthur Caplan Discusses Medical Tech and the Rise of Bioethics </itunes:title>
        <link>https://dmdonig.podbean.com/e/the-meaning-of-life-professor-arthur-caplan-discusses-medical-tech-and-the-rise-of-bioethics/</link>
                    <comments>https://dmdonig.podbean.com/e/the-meaning-of-life-professor-arthur-caplan-discusses-medical-tech-and-the-rise-of-bioethics/#comments</comments>        <pubDate>Thu, 28 May 2020 23:29:25 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/ad9aed3f-3397-5319-9a8c-d8820353b27e</guid>
                                    <description><![CDATA[<p>In this episode, I speak with Professor Arthur L. Caplan, the Drs. William F. and Virginia Connolly Mitty Professor at the NYU School of Medicine in New York City and the founding head of NYU’s Division of Medical Ethics.</p>
<p>We talk about brain death, moral worth, the ethics of the non-human, and the concept of the "self" as humans increasingly turn our bodies and biology over to technological interventions. Dr. Caplan discusses medical privacy as the right to know becomes increasingly in tension with the right to privacy, how the practice of medicine interacts with humanist practices, and what is keeping him up at night.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I speak with Professor Arthur L. Caplan, the Drs. William F. and Virginia Connolly Mitty Professor at the NYU School of Medicine in New York City and the founding head of NYU’s Division of Medical Ethics.</p>
<p>We talk about brain death, moral worth, the ethics of the non-human, and the concept of the "self" as humans increasingly turn our bodies and biology over to technological interventions. Dr. Caplan discusses medical privacy as the right to know becomes increasingly in tension with the right to privacy, how the practice of medicine interacts with humanist practices, and what is keeping him up at night.</p>
]]></content:encoded>
                                    
        <enclosure length="83641575" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/zpegbt/Art_Caplan_Podcast_FINAL_9r61k.mp3"/>
        <itunes:summary><![CDATA[In this episode, I speak with Professor Arthur L. Caplan, the Drs. William F. and Virginia Connolly Mitty Professor at the NYU School of Medicine in New York City and the founding head of NYU’s Division of Medical Ethics.
We talk about brain death, moral worth, the ethics of the non-human, and the concept of the "self" as humans increasingly turn our bodies and biology over to technological interventions. Dr. Caplan discusses medical privacy as the right to know becomes increasingly in tension with the right to privacy, how the practice of medicine interacts with humanist practices, and what is keeping him up at night.]]></itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3484</itunes:duration>
        <itunes:season>1</itunes:season>
        <itunes:episode>7</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I speak with Professor Arthur L. Caplan, the Drs. William F. and Virginia Connolly Mitty Professor at the NYU School of Medicine in New York City and the founding head of NYU’s Division of Medical Ethics.  We talk about brain death, moral worth, the ethics of the non-human, and the concept of the "self" as humans increasingly turn our bodies and biology over to technological interventions. Dr. Caplan discusses medical privacy as the right to know becomes increasingly in tension with the right to privacy, how the practice of medicine interacts with humanist practices, and what is keeping him up at night.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Cultural Revolution: Chris Ategeka calls for a paradigm shift in tech</title>
        <itunes:title>Cultural Revolution: Chris Ategeka calls for a paradigm shift in tech</itunes:title>
        <link>https://dmdonig.podbean.com/e/cultural-revolution-chris-ategeka-calls-for-a-paradigm-shift-in-tech/</link>
                    <comments>https://dmdonig.podbean.com/e/cultural-revolution-chris-ategeka-calls-for-a-paradigm-shift-in-tech/#comments</comments>        <pubDate>Thu, 21 May 2020 22:15:26 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/532167c5-f456-53cf-b663-3f6be610a293</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I speak with Chris Ategeka, the CEO of UCOT, the Center for the Unintended Consequences of Technology. Chris talks about what led him to build UCOT, what drives unethical tech, and how the destructive consequences in technological culture and products may be less unintended than willfully ignored. Chris and I talk about the relationship between diversity in tech culture on the one hand, and equity in tech products and outcomes on the other. Finally, Chris makes a case for why staying optimistic in this moment is not a choice--for him, it is an ethical mandate. </p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I speak with Chris Ategeka, the CEO of UCOT, the Center for the Unintended Consequences of Technology. Chris talks about what led him to build UCOT, what drives unethical tech, and how the destructive consequences in technological culture and products may be less unintended than willfully ignored. Chris and I talk about the relationship between diversity in tech culture on the one hand, and equity in tech products and outcomes on the other. Finally, Chris makes a case for why staying optimistic in this moment is not a choice--for him, it is an ethical mandate. </p>
]]></content:encoded>
                                    
        <enclosure length="75301329" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/m6q7ne/ChrisAtegekaPodcastMay22_mixdown8k88y.mp3"/>
        <itunes:summary><![CDATA[In this episode of "Technically Human," I speak with Chris Ategeka, the CEO of UCOT, the Center for the Unintended Consequences of Technology. Chris talks about what led him to build UCOT, what drives unethical tech, and how the destructive consequences in technological culture and products may be less unintended than willfully ignored. Chris and I talk about the relationship between diversity in tech culture on the one hand, and equity in tech products and outcomes on the other. Finally, Chris makes a case for why staying optimistic in this moment is not a choice--for him, it is an ethical mandate. ]]></itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3137</itunes:duration>
        <itunes:season>1</itunes:season>
        <itunes:episode>6</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I speak with Chris Ategeka, the CEO of UCOT, the Center for the Unintended Consequences of Technology. Chris talks about what led him to build UCOT, what drives unethical tech, and how the destructive consequences in technological culture and products may be less unintended than willfully ignored. Chris and I talk about the relationship between diversity in tech culture on the one hand, and equity in tech products and outcomes on the other. Finally, Chris makes a case for why staying optimistic in this moment is not a choice--for him, it is an ethical mandate. </itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Data Dystopia: Dave Eggers Discusses Digital Human Rights </title>
        <itunes:title>Data Dystopia: Dave Eggers Discusses Digital Human Rights </itunes:title>
        <link>https://dmdonig.podbean.com/e/data-dystopia-dave-eggers-discusses-digital-human-rights/</link>
                    <comments>https://dmdonig.podbean.com/e/data-dystopia-dave-eggers-discusses-digital-human-rights/#comments</comments>        <pubDate>Fri, 15 May 2020 09:10:49 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/be582583-22c8-52ba-bf0a-f2e88da00172</guid>
                                    <description><![CDATA[<p>In this episode of "Technically Human," I talk to author Dave Eggers about his novel The Circle. We discuss the growth of digital tracking, the evolution of Silicon Valley culture, and the idea that people under surveillance are not free. Dave discusses the role of, and possibilities for, art, literature, and satire in creating change, and he tells me why he is optimistic about the next generation of students creating powerful, lasting change.</p>
<p>Dave Eggers is the author of The Circle, A Heartbreaking Work of Staggering Genius, What is the What, A Hologram for the King, and The Lifters, among many other books.</p>
<p>He is the founder of McSweeney’s, which publishes literature, satire, and "Voice of Witness," a nonprofit book series that uses oral history to illuminate human rights crises around the world.</p>
<p>Eggers is the co-founder of 826 National, a network of youth writing and tutoring centers around the United States. Realizing the need for greater college access for low-income students, Eggers founded ScholarMatch, a nonprofit organization designed to connect students with resources, schools and donors to make college possible.</p>
<p>McSweeneys: <a href='https://www.mcsweeneys.net/pages/about-dave-eggers'>https://www.mcsweeneys.net/pages/about-dave-eggers</a></p>
<p>ScholarMatch: <a href='https://scholarmatch.org/'>https://scholarmatch.org/</a></p>
<p>826 National: <a href='https://826national.org/'>https://826national.org/</a></p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of "Technically Human," I talk to author Dave Eggers about his novel <em>The Circle</em>. We discuss the growth of digital tracking, the evolution of Silicon Valley culture, and the idea that people under surveillance are not free. Dave discusses the role of, and possibilities for, art, literature, and satire in creating change, and he tells me why he is optimistic about the next generation of students creating powerful, lasting change.</p>
<p>Dave Eggers is the author of <em>The Circle, A Heartbreaking Work of Staggering Genius, What is the What, A Hologram for the King,</em> and <em>The Lifters</em>, among many other books.</p>
<p>He is the founder of McSweeney’s, which publishes literature, satire, and "Voice of Witness," a nonprofit book series that uses oral history to illuminate human rights crises around the world.</p>
<p>Eggers is the co-founder of 826 National, a network of youth writing and tutoring centers around the United States. Realizing the need for greater college access for low-income students, Eggers founded ScholarMatch, a nonprofit organization designed to connect students with resources, schools and donors to make college possible.</p>
<p>McSweeneys: <a href='https://www.mcsweeneys.net/pages/about-dave-eggers'>https://www.mcsweeneys.net/pages/about-dave-eggers</a></p>
<p>ScholarMatch: <a href='https://scholarmatch.org/'>https://scholarmatch.org/</a></p>
<p>826 National: <a href='https://826national.org/'>https://826national.org/</a></p>
]]></content:encoded>
                                    
        <enclosure length="86663750" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/iw8b91/DaveEggersPodcatApril30_mixdown76m8p.mp3"/>
        <itunes:summary><![CDATA[In this episode of "Technically Human," I talk to author Dave Eggers about his novel The Circle. We discuss the growth of digital tracking, the evolution of Silicon Valley culture, and the idea that people under surveillance are not free. Dave discusses the role of, and possibilities for, art, literature, and satire in creating change, and he tells me why he is optimistic about the next generation of students creating powerful, lasting change.
Dave Eggers is the author of The Circle, A Heartbreaking Work of Staggering Genius, What is the What, A Hologram for the King, and The Lifters, among many other books.
He is the founder of McSweeney’s, which publishes literature, satire, and "Voice of Witness," a nonprofit book series that uses oral history to illuminate human rights crises around the world.
Eggers is the co-founder of 826 National, a network of youth writing and tutoring centers around the United States. Realizing the need for greater college access for low-income students, Eggers founded ScholarMatch, a nonprofit organization designed to connect students with resources, schools and donors to make college possible.
McSweeneys: https://www.mcsweeneys.net/pages/about-dave-eggers
ScholarMatch: https://scholarmatch.org/
826 National: https://826national.org/]]></itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3610</itunes:duration>
        <itunes:season>1</itunes:season>
        <itunes:episode>5</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of "Technically Human," I talk to author Dave Eggers about his novel The Circle. We discuss the growth of digital tracking, the evolution of Silicon Valley culture, and the idea that people under surveillance are not free. Dave discusses the role of, and possibilities for, art, literature, and satire in creating change, and he tells me why he is optimistic about the next generation of students creating powerful, lasting change. Dave Eggers is the author of The Circle, A Heartbreaking Work of Staggering Genius, What is the What, A Hologram for the King, and The Lifters, among many other books. He is the founder of McSweeney’s, which publishes literature, satire, and "Voice of Witness," a nonprofit book series that uses oral history to illuminate human rights crises around the world. Eggers is the co-founder of 826 National, a network of youth writing and tutoring centers around the United States. Realizing the need for greater college access for low-income students, Eggers founded ScholarMatch, a nonprofit organization designed to connect students with resources, schools and donors to make college possible. McSweeneys: https://www.mcsweeneys.net/pages/about-dave-eggers ScholarMatch: https://scholarmatch.org/ 826 National: https://826national.org/</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Tech Represents: Black Millennials go digital</title>
        <itunes:title>Tech Represents: Black Millennials go digital</itunes:title>
        <link>https://dmdonig.podbean.com/e/tech-represents-black-millenials-go-digital/</link>
                    <comments>https://dmdonig.podbean.com/e/tech-represents-black-millenials-go-digital/#comments</comments>        <pubDate>Fri, 08 May 2020 08:32:19 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/18fab336-886a-5f15-bb7f-6daa5cf4ce13</guid>
                                    <description><![CDATA[<p>In this episode, I speak to Aaron Samuels, the COO of Blavity, one of the largest digital communities for Black Millennials. We talk about the importance of diversifying perspectives inside and outside the tech sphere, the role of narrative in establishing identity, and how Aaron negotiates the boundaries between multiple identities: Black/Jewish, Humanist/Technologist, and Digital/Embodied existence.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I speak to Aaron Samuels, the COO of Blavity, one of the largest digital communities for Black Millennials. We talk about the importance of diversifying perspectives inside and outside the tech sphere, the role of narrative in establishing identity, and how Aaron negotiates the boundaries between multiple identities: Black/Jewish, Humanist/Technologist, and Digital/Embodied existence.</p>
]]></content:encoded>
                                    
        <enclosure length="109741285" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/th5xtk/Aaron_Samuels_Podcast_May_5.mp3"/>
        <itunes:summary><![CDATA[In this episode, I speak to Aaron Samuels, the COO of Blavity, one of the largest digital communities for Black Millennials. We talk about the importance of diversifying perspectives inside and outside the tech sphere, the role of narrative in establishing identity, and how Aaron negotiates the boundaries between multiple identities: Black/Jewish, Humanist/Technologist, and Digital/Embodied existence.]]></itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4571</itunes:duration>
        <itunes:season>1</itunes:season>
        <itunes:episode>4</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I speak to Aaron Samuels, the COO of Blavity, one of the largest digital communities for Black Millennials. We talk about the importance of diversifying perspectives inside and outside the tech sphere, the role of narrative in establishing identity, and how Aaron negotiates the boundaries between multiple identities: Black/Jewish, Humanist/Technologist, and Digital/Embodied existence.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Ethical Tech Goes to College: Humane technology on campus</title>
        <itunes:title>Ethical Tech Goes to College: Humane technology on campus</itunes:title>
        <link>https://dmdonig.podbean.com/e/ethical-tech-goes-to-college-humane-technology-on-campus/</link>
                    <comments>https://dmdonig.podbean.com/e/ethical-tech-goes-to-college-humane-technology-on-campus/#comments</comments>        <pubDate>Fri, 01 May 2020 09:23:28 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/497815bc-98e6-5c79-adfe-e9016c60227f</guid>
                                    <description><![CDATA[<p>In this episode of Technically Human, I speak with Dr. Matthew Harsh, an associate professor of science, technology, and society studies at California Polytechnic State University in San Luis Obispo. We talk about Matt's vision for creating a new curriculum for ethical technology on college campuses, the importance of addressing deep issues of social equity in tech, and Matt tells me why he's optimistic about the next generation of technologists.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of Technically Human, I speak with Dr. Matthew Harsh, an associate professor of science, technology, and society studies at California Polytechnic State University in San Luis Obispo. We talk about Matt's vision for creating a new curriculum for ethical technology on college campuses, the importance of addressing deep issues of social equity in tech, and Matt tells me why he's optimistic about the next generation of technologists.</p>
]]></content:encoded>
                                    
        <enclosure length="78459914" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/nhx88z/Matt_Harsh_April_23_Podcast_FINAL_FOR_RELEASE.mp3"/>
        <itunes:summary><![CDATA[In this episode of Technically Human, I speak with Dr. Matthew Harsh, an associate professor of science, technology, and society studies at California Polytechnic State University in San Luis Obispo. We talk about Matt's vision for creating a new curriculum for ethical technology on college campuses, the importance of addressing deep issues of social equity in tech, and Matt tells me why he's optimistic about the next generation of technologists.]]></itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3375</itunes:duration>
        <itunes:season>1</itunes:season>
        <itunes:episode>3</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of Technically Human, I speak with Dr. Matthew Harsh, an associate professor of science, technology, and society studies at California Polytechnic State University in San Luis Obispo. We talk about Matt's vision for creating a new curriculum for ethical technology on college campuses, the importance of addressing deep issues of social equity in tech, and Matt tells me why he's optimistic about the next generation of technologists.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>The Good Place: talking ethical tech and philosophies of "the good" with Dr. Ryan Jenkins</title>
        <itunes:title>The Good Place: talking ethical tech and philosophies of "the good" with Dr. Ryan Jenkins</itunes:title>
        <link>https://dmdonig.podbean.com/e/the-good-place-talking-ethical-tech-and-philosophies-of-the-good-with-dr-ryan-jenkins/</link>
                    <comments>https://dmdonig.podbean.com/e/the-good-place-talking-ethical-tech-and-philosophies-of-the-good-with-dr-ryan-jenkins/#comments</comments>        <pubDate>Fri, 24 Apr 2020 12:30:41 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/b0f62624-41f1-5897-9cad-6e8a0d3b0a3b</guid>
                                    <description><![CDATA[<p>In this episode of Technically Human, I speak with Dr. Ryan Jenkins, an associate professor of philosophy and a senior fellow at the Ethics + Emerging Sciences Group at California Polytechnic State University in San Luis Obispo. We talk about the value of philosophy, we debate deontological vs. consequentialist ethics, and Ryan answers the age-old question of whether Google is making us stupid.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode of Technically Human, I speak with Dr. Ryan Jenkins, an associate professor of philosophy and a senior fellow at the Ethics + Emerging Sciences Group at California Polytechnic State University in San Luis Obispo. We talk about the value of philosophy, we debate deontological vs. consequentialist ethics, and Ryan answers the age-old question of whether Google is making us stupid.</p>
]]></content:encoded>
                                    
        <enclosure length="97574102" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/73qp5b/RYAN_INTERVIEW_EDITEDmp3.mp3"/>
        <itunes:summary><![CDATA[In this episode of Technically Human, I speak with Dr. Ryan Jenkins, an associate professor of philosophy and a senior fellow at the Ethics + Emerging Sciences Group at California Polytechnic State University in San Luis Obispo. We talk about the value of philosophy, we debate deontological vs. consequentialist ethics, and Ryan answers the age-old question of whether Google is making us stupid.]]></itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>4064</itunes:duration>
        <itunes:season>1</itunes:season>
        <itunes:episode>2</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
            <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode of Technically Human, I speak with Dr. Ryan Jenkins, an associate professor of philosophy and a senior fellow at the Ethics + Emerging Sciences Group at California Polytechnic State University in San Luis Obispo. We talk about the value of philosophy, we debate deontological vs. consequentialist ethics, and Ryan answers the age-old question of whether Google is making us stupid.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
    <item>
        <title>Phoning Home: Using social media to help end homelessness</title>
        <itunes:title>Phoning Home: Using social media to help end homelessness</itunes:title>
        <link>https://dmdonig.podbean.com/e/phoning-home-using-social-media-to-help-end-homelessness/</link>
                    <comments>https://dmdonig.podbean.com/e/phoning-home-using-social-media-to-help-end-homelessness/#comments</comments>        <pubDate>Fri, 17 Apr 2020 16:51:15 -0700</pubDate>
        <guid isPermaLink="false">dmdonig.podbean.com/c5202558-2976-5ea1-b6ca-dc7f048fe066</guid>
                                    <description><![CDATA[<p>In this episode, I speak with Kevin Adler and Jessica Donig from Miracle Messages, an SF-based organization that uses social media platforms to help people experiencing homelessness connect with their loved ones.</p>
]]></description>
                                                            <content:encoded><![CDATA[<p>In this episode, I speak with Kevin Adler and Jessica Donig from Miracle Messages, an SF-based organization that uses social media platforms to help people experiencing homelessness connect with their loved ones.</p>
]]></content:encoded>
                                    
        <enclosure length="77268419" type="audio/mpeg" url="https://mcdn.podbean.com/mf/web/aeais5/Jess_and_Kevin_Miracle_Messages.mp3"/>
        <itunes:summary>In this episode, I speak with Kevin Adler and Jessica Donig from Miracle Messages, an SF-based organization that uses social media platforms to help people experiencing homelessness connect with their loved ones.</itunes:summary>
        <itunes:author>dmdonig</itunes:author>
        <itunes:explicit>false</itunes:explicit>
        <itunes:block>No</itunes:block>
        <itunes:duration>3219</itunes:duration>
        <itunes:season>1</itunes:season>
        <itunes:episode>1</itunes:episode>
        <itunes:episodeType>full</itunes:episodeType>
        <itunes:image href="https://pbcdn1.podbean.com/imglogo/ep-logo/pbblog6889828/Screen_Shot_2020-04-16_at_11_21_07_PM.png"/>    <author>Deb Donig, ddonig@calpoly.edu (Deb Donig)</author><itunes:subtitle>In this episode, I speak with Kevin Adler and Jessica Donig from Miracle Messages, an SF-based organization that uses social media platforms to help people experiencing homelessness connect with their loved ones.</itunes:subtitle><itunes:keywords>tech,ethics,education,human,Silicon,Valley,science,fiction</itunes:keywords></item>
</channel>
</rss>