<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Talking Machines</title>
    <description>
      <![CDATA[Talking Machines is your window into the world of machine learning. Your hosts, Katherine Gorman and Neil Lawrence, bring you clear conversations with experts in the field, insightful discussions of industry news, and useful answers to your questions. Machine learning is changing the questions we can ask of the world around us; here we explore how to ask the best questions and what to do with the answers.]]>
    </description>
    <managingEditor>TheTalkingMachines@gmail.com (Katherine Gorman)</managingEditor>
    <atom:link href="https://rss.art19.com/talking-machines" rel="self" type="application/rss+xml"/>
    <link>https://art19.com/shows/talking-machines</link>
    <itunes:new-feed-url>https://rss.art19.com/talking-machines</itunes:new-feed-url>
    <itunes:owner>
      <itunes:name>Katherine Gorman</itunes:name>
      <itunes:email>TheTalkingMachines@gmail.com</itunes:email>
    </itunes:owner>
    <itunes:author>Tote Bag Productions</itunes:author>
    <itunes:summary>
      <![CDATA[Talking Machines is your window into the world of machine learning. Your hosts, Katherine Gorman and Neil Lawrence, bring you clear conversations with experts in the field, insightful discussions of industry news, and useful answers to your questions. Machine learning is changing the questions we can ask of the world around us; here we explore how to ask the best questions and what to do with the answers.]]>
    </itunes:summary>
    <language>en</language>
    <itunes:explicit>no</itunes:explicit>
    <itunes:category text="Technology">
      <itunes:category text="Tech News"/>
    </itunes:category>
    <itunes:keywords>computer science,ML,AIML,research,AI,artificial intelligence,networks,deep,programming,Intelligence,artificial,computers,learning,machine</itunes:keywords>
    <itunes:type>episodic</itunes:type>
    <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
    <image>
      <url>https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg</url>
      <link>https://art19.com/shows/talking-machines</link>
      <title>Talking Machines</title>
    </image>
    <item>
      <title>The Pace of Change and The Public View of ML</title>
      <description>
        <![CDATA[<p>In episode ten of season three we talk about the rate of change <a href="http://www.bbc.com/news/business-40673694" target="_blank">(prompted by Tim Harford)</a>, take a listener question about the power of kernels, and talk with <a href="https://royalsociety.org/people/peter-donnelly-11348/" target="_blank">Peter Donnelly</a> in his capacity with the <a href="https://royalsociety.org/about-us/committees/machine-learning-working-group/" target="_blank">Royal Society's Machine Learning Working Group</a> about the work they've done on the <a href="https://royalsociety.org/~/media/policy/projects/machine-learning/publications/machine-learning-report.pdf" target="_blank">public's views on AI and ML</a>. </p>]]>
      </description>
      <itunes:title>The Pace of Change and The Public View of ML</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>10</itunes:episode>
      <itunes:summary>In episode ten of season three we talk about the rate of change (prompted by Tim Harford), take a listener question about the power of kernels, and talk with Peter Donnelly in his capacity with the Royal Society's Machine Learning Working Group about the work they've done on the public's views on AI and ML. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>In episode ten of season three we talk about the rate of change <a href="http://www.bbc.com/news/business-40673694" target="_blank">(prompted by Tim Harford)</a>, take a listener question about the power of kernels, and talk with <a href="https://royalsociety.org/people/peter-donnelly-11348/" target="_blank">Peter Donnelly</a> in his capacity with the <a href="https://royalsociety.org/about-us/committees/machine-learning-working-group/" target="_blank">Royal Society's Machine Learning Working Group</a> about the work they've done on the <a href="https://royalsociety.org/~/media/policy/projects/machine-learning/publications/machine-learning-report.pdf" target="_blank">public's views on AI and ML</a>. </p>]]>
      </content:encoded>
      <guid isPermaLink="false">gid://art19-episode-locator/V0/ClpB5T0za57YefM_YqQ_r5uTUGS9cZE__CLxjRq9Wdk</guid>
      <pubDate>Thu, 05 Oct 2017 05:02:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,research,aiml,AI,artificial intelligence,networks,intelligence,programming,machine,computers,ML,computer science,deep,learning</itunes:keywords>
      <itunes:duration>00:44:12</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/1b72a61d-287b-4d89-99f9-43e04383cd07.mp3" type="audio/mpeg" length="38596022"/>
    </item>
    <item>
      <title>The Long View and Learning in Person </title>
      <description>
        <![CDATA[<p>In episode nine of season three we chat about the difference between models and algorithms, take a listener question about summer schools and learning in person as opposed to learning digitally, and we chat with <a href="http://air.ug/~jquinn/" target="_blank">John Quinn</a> of the&nbsp;<a href="http://unglobalpulse.org/" target="_blank">United Nations Global Pulse</a>&nbsp;lab in Kampala, Uganda and&nbsp;<a href="http://mak.ac.ug/" target="_blank">Makerere University</a>'s&nbsp;<a href="http://air.ug/" target="_blank">Artificial Intelligence Research</a>&nbsp;group.</p>]]>
      </description>
      <itunes:title>The Long View and Learning in Person </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>9</itunes:episode>
      <itunes:summary>In episode nine of season three we chat about the difference between models and algorithms, take a listener question about summer schools and learning in person as opposed to learning digitally, and we chat with John Quinn of the United Nations Global Pulse lab in Kampala, Uganda and Makerere University's Artificial Intelligence Research group.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In episode nine of season three we chat about the difference between models and algorithms, take a listener question about summer schools and learning in person as opposed to learning digitally, and we chat with <a href="http://air.ug/~jquinn/" target="_blank">John Quinn</a> of the&nbsp;<a href="http://unglobalpulse.org/" target="_blank">United Nations Global Pulse</a>&nbsp;lab in Kampala, Uganda and&nbsp;<a href="http://mak.ac.ug/" target="_blank">Makerere University</a>'s&nbsp;<a href="http://air.ug/" target="_blank">Artificial Intelligence Research</a>&nbsp;group.</p>]]>
      </content:encoded>
      <guid isPermaLink="false">gid://art19-episode-locator/V0/1AqUMESyw6HBTGIoK2marmdOqpkqqhrU1fR6S4t6LVQ</guid>
      <pubDate>Thu, 21 Sep 2017 16:52:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,research,aiml,AI,artificial intelligence,networks,intelligence,programming,machine,computers,computer science,deep,learning,ml</itunes:keywords>
      <itunes:duration>01:09:50</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/d2aff642-30a1-44c5-a222-2179d10bf366.mp3" type="audio/mpeg" length="63201280"/>
    </item>
    <item>
      <title>Machine Learning in the Field and Bayesian Baked Goods </title>
      <description>
        <![CDATA[<p>In episode eight of season three we return to the epic (or maybe not so epic) clash between frequentists and bayesians, take a listener question about the ethical questions generators of machine learning should be asking of themselves (not just their tools) and we hear a conversation with <a href="http://air.ug/~emwebaze/" target="_blank">Ernest Mwebaze</a> of Makerere University.  </p>]]>
      </description>
      <itunes:title>Machine Learning in the Field and Bayesian Baked Goods </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode eight of season three we return to the epic (or maybe not so epic) clash between frequentists and bayesians, take a listener question about the ethical questions generators of machine learning should be asking of themselves (not just their tools) and we hear a conversation with Ernest Mwebaze of Makerere University.  </itunes:summary>
      <content:encoded>
        <![CDATA[<p>In episode eight of season three we return to the epic (or maybe not so epic) clash between frequentists and bayesians, take a listener question about the ethical questions generators of machine learning should be asking of themselves (not just their tools) and we hear a conversation with <a href="http://air.ug/~emwebaze/" target="_blank">Ernest Mwebaze</a> of Makerere University.  </p>]]>
      </content:encoded>
      <guid isPermaLink="false">gid://art19-episode-locator/V0/UJIu5oKMDRG5HQYTwFbXhg1W6U3ekh6_AUFD4Mrxq6g</guid>
      <pubDate>Fri, 08 Sep 2017 01:40:14 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,research,aiml,AI,artificial intelligence,networks,intelligence,programming,machine,computers,computer science,deep,learning,ml</itunes:keywords>
      <itunes:duration>01:03:39</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/371c8737-ad08-4e85-8e91-44ca984d5cc4.mp3" type="audio/mpeg" length="57279216"/>
    </item>
    <item>
      <title>Data Science Africa with Dina Machuve</title>
      <description>
        <![CDATA[<p>In episode seven of season three we take a minute to break away from our regular format and feature a conversation with Dina Machuve of the Nelson Mandela African Institute of Science and Technology. We cover everything from her work to how cell phone access has changed data patterns. We got to talk with her at the <a href="http://www.datascienceafrica.org/" target="_blank">Data Science Africa</a> conference and workshop.</p>]]>
      </description>
      <itunes:title>Data Science Africa with Dina Machuve</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode seven of season three we take a minute to break away from our regular format and feature a conversation with Dina Machuve of the Nelson Mandela African Institute of Science and Technology. We cover everything from her work to how cell phone access has changed data patterns. We got to talk with her at the Data Science Africa conference and workshop.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In episode seven of season three we take a minute to break away from our regular format and feature a conversation with Dina Machuve of the Nelson Mandela African Institute of Science and Technology. We cover everything from her work to how cell phone access has changed data patterns. We got to talk with her at the <a href="http://www.datascienceafrica.org/" target="_blank">Data Science Africa</a> conference and workshop.</p>]]>
      </content:encoded>
      <guid isPermaLink="false">gid://art19-episode-locator/V0/BgNcFvjVFxBfI4GgXlK8niXK67N7oWhCNaTFadLa5WM</guid>
      <pubDate>Thu, 10 Aug 2017 23:33:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,research,aiml,AI,artificial intelligence,networks,intelligence,programming,machine,computers,computer science,deep,learning,ml</itunes:keywords>
      <itunes:duration>00:52:13</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/1070de4c-cb2b-41d8-aa75-3e025e8da322.mp3" type="audio/mpeg" length="46296084"/>
    </item>
    <item>
      <title>The Church of Bayes and Collecting Data</title>
      <description>
        <![CDATA[<p>In episode six of season three we chat about the difference between frequentists and Bayesians, take a listener question about techniques for panel data, and have an interview with <a href="http://www2.stat.duke.edu/~kheller/" target="_blank">Katherine Heller of Duke</a>.</p>]]>
      </description>
      <itunes:title>The Church of Bayes and Collecting Data</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode six of season three we chat about the difference between frequentists and Bayesians, take a listener question about techniques for panel data, and have an interview with Katherine Heller of Duke.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In episode six of season three we chat about the difference between frequentists and Bayesians, take a listener question about techniques for panel data, and have an interview with <a href="http://www2.stat.duke.edu/~kheller/" target="_blank">Katherine Heller of Duke</a>.</p>]]>
      </content:encoded>
      <guid isPermaLink="false">gid://art19-episode-locator/V0/Druw9YOMBaBeoy4Px1exEJV8AI5DQdMew628dpHMWX8</guid>
      <pubDate>Fri, 28 Jul 2017 00:05:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,research,aiml,AI,artificial intelligence,networks,intelligence,programming,machine,computers,duke,computer science,deep,learning,collecting data,ml,healthcare,health</itunes:keywords>
      <itunes:duration>00:53:36</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/9a71efcf-7df6-49c9-aee7-bb7724ec5cf0.mp3" type="audio/mpeg" length="47626449"/>
    </item>
    <item>
      <title>Getting a Start in ML and Applied AI at Facebook</title>
      <description>
        <![CDATA[<p>In episode five of season three we compare and contrast AI and data science, take a listener question about getting started in machine learning, and listen to an interview with <a href="http://quinonero.net/" target="_blank">Joaquin Quiñonero Candela</a>.</p><p>For a great place to get started with foundational ideas in ML, take a look at <a href="https://www.coursera.org/learn/machine-learning" target="_blank">Andrew Ng’s course on Coursera</a>. Then check out <a href="https://www.coursera.org/learn/probabilistic-graphical-models" target="_blank">Daphne Koller’s course</a>.</p><p>Talking Machines is now working with <a href="http://www.midroll.com/" target="_blank">Midroll</a> to source and organize sponsors for our show. In order to find sponsors who are a good fit for us, and of worth to you, we’re surveying our listeners.</p><p>If you’d like to help us get a better idea of who makes up the Talking Machines community, take the survey at <a href="http://podsurvey.com/MACHINES" target="_blank">http://podsurvey.com/MACHINES</a>.</p>]]>
      </description>
      <itunes:title>Getting a Start in ML and Applied AI at Facebook</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode five of season three we compare and contrast AI and data science, take a listener question about getting started in machine learning, and listen to an interview with Joaquin Quiñonero Candela.

For a great place to get started with foundational ideas in ML, take a look at Andrew Ng’s course on Coursera. Then check out Daphne Koller’s course.

Talking Machines is now working with Midroll to source and organize sponsors for our show. In order to find sponsors who are a good fit for us, and of worth to you, we’re surveying our listeners.

If you’d like to help us get a better idea of who makes up the Talking Machines community, take the survey at http://podsurvey.com/MACHINES.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In episode five of season three we compare and contrast AI and data science, take a listener question about getting started in machine learning, and listen to an interview with <a href="http://quinonero.net/" target="_blank">Joaquin Quiñonero Candela</a>.</p><p>For a great place to get started with foundational ideas in ML, take a look at <a href="https://www.coursera.org/learn/machine-learning" target="_blank">Andrew Ng’s course on Coursera</a>. Then check out <a href="https://www.coursera.org/learn/probabilistic-graphical-models" target="_blank">Daphne Koller’s course</a>.</p><p>Talking Machines is now working with <a href="http://www.midroll.com/" target="_blank">Midroll</a> to source and organize sponsors for our show. In order to find sponsors who are a good fit for us, and of worth to you, we’re surveying our listeners.</p><p>If you’d like to help us get a better idea of who makes up the Talking Machines community, take the survey at <a href="http://podsurvey.com/MACHINES" target="_blank">http://podsurvey.com/MACHINES</a>.</p>]]>
      </content:encoded>
      <guid isPermaLink="false">gid://art19-episode-locator/V0/ciwzYimlqMC--U095ONsYqQYFyUp3Q_x0xmgKUWxugA</guid>
      <pubDate>Thu, 13 Jul 2017 23:14:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,research,aiml,AI,artificial intelligence,networks,intelligence,programming,machine,computers,computer science,deep,learning,ml</itunes:keywords>
      <itunes:duration>01:01:47</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/ab11e91e-10e9-4281-aca1-e9bb697402b5.mp3" type="audio/mpeg" length="55476140"/>
    </item>
    <item>
      <title>Bias Variance Dilemma for Humans and the Arm Farm</title>
      <description>
        <![CDATA[<p>In episode four of season three Neil introduces us to the ideas behind the bias variance dilemma (and how we can think about it in our daily lives). Plus, we answer a listener question about how to make sure your neural networks don't get fooled. Our guest for this episode is <a href="https://research.google.com/pubs/jeff.html" target="_blank">Jeff Dean</a>, Google Senior Fellow in the Research Group, where he leads the Google Brain project. We talk about a closet full of robot arms (the arm farm!), image recognition for <a href="https://research.google.com/teams/brain/healthcare/" target="_blank">diabetic retinopathy</a>, and <a href="https://research.googleblog.com/2016/10/equality-of-opportunity-in-machine.html" target="_blank">equality in data and the community</a>.</p><p>Fun Fact: <a href="http://www.cs.toronto.edu/~hinton/" target="_blank">Geoff Hinton</a>’s <a href="https://books.google.com/books?id=8Jx3DAAAQBAJ&amp;pg=PA42&amp;lpg=PA42&amp;dq=Geoff+Hinton+charles+howard+hinton&amp;source=bl&amp;ots=3Bfoq4dxNh&amp;sig=B_psJkcAsvE0O40i9V19SCk29Eo&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwi4pq2uvejUAhXHGz4KHapRBboQ6AEISDAG#v=onepage&amp;q=Geoff%20Hinton%20charles%20howard%20hinton&amp;f=false" target="_blank">distant relative</a> <a href="https://books.google.ca/books?id=txIQAAAAYAAJ" target="_blank">invented the word tesseract</a>. (How cool is that. Seriously.)</p>]]>
      </description>
      <itunes:title>Bias Variance Dilemma for Humans and the Arm Farm</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode four of season three Neil introduces us to the ideas behind the bias variance dilemma (and how we can think about it in our daily lives). Plus, we answer a listener question about how to make sure your neural networks don't get fooled. Our guest for this episode is Jeff Dean, Google Senior Fellow in the Research Group, where he leads the Google Brain project. We talk about a closet full of robot arms (the arm farm!), image recognition for diabetic retinopathy, and equality in data and the community.

Fun Fact: Geoff Hinton’s distant relative invented the word tesseract. (How cool is that. Seriously.)</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In episode four of season three Neil introduces us to the ideas behind the bias variance dilemma (and how we can think about it in our daily lives). Plus, we answer a listener question about how to make sure your neural networks don't get fooled. Our guest for this episode is <a href="https://research.google.com/pubs/jeff.html" target="_blank">Jeff Dean</a>, Google Senior Fellow in the Research Group, where he leads the Google Brain project. We talk about a closet full of robot arms (the arm farm!), image recognition for <a href="https://research.google.com/teams/brain/healthcare/" target="_blank">diabetic retinopathy</a>, and <a href="https://research.googleblog.com/2016/10/equality-of-opportunity-in-machine.html" target="_blank">equality in data and the community</a>.</p><p>Fun Fact: <a href="http://www.cs.toronto.edu/~hinton/" target="_blank">Geoff Hinton</a>’s <a href="https://books.google.com/books?id=8Jx3DAAAQBAJ&amp;pg=PA42&amp;lpg=PA42&amp;dq=Geoff+Hinton+charles+howard+hinton&amp;source=bl&amp;ots=3Bfoq4dxNh&amp;sig=B_psJkcAsvE0O40i9V19SCk29Eo&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwi4pq2uvejUAhXHGz4KHapRBboQ6AEISDAG#v=onepage&amp;q=Geoff%20Hinton%20charles%20howard%20hinton&amp;f=false" target="_blank">distant relative</a> <a href="https://books.google.ca/books?id=txIQAAAAYAAJ" target="_blank">invented the word tesseract</a>. (How cool is that. Seriously.)</p>]]>
      </content:encoded>
      <guid isPermaLink="false">gid://art19-episode-locator/V0/5JHKuGX4O3AHl4fh3hE4n9GV2W0cqDnqolIbZgUMfH0</guid>
      <pubDate>Thu, 29 Jun 2017 16:51:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,research,aiml,AI,artificial intelligence,networks,intelligence,programming,machine,computers,computer science,deep,learning,ml</itunes:keywords>
      <itunes:duration>00:54:10</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/58aaf562-b735-427e-a3fd-d4fc57798030.mp3" type="audio/mpeg" length="48163526"/>
    </item>
    <item>
      <title>Overfitting and Asking Ecological Questions with ML</title>
      <description>
        <![CDATA[In episode three of season three of Talking Machines we dive into overfitting, take a listener question about unbalanced data, and talk with Professor (Emeritus) Tom Dietterich from Oregon State University.]]>
      </description>
      <itunes:title>Overfitting and Asking Ecological Questions with ML</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode three of season three of Talking Machines we dive into overfitting, take a listener question about unbalanced data, and talk with Professor (Emeritus) Tom Dietterich from Oregon State University.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode three of season three of Talking Machines we dive into overfitting, take a listener question about unbalanced data, and talk with Professor (Emeritus) Tom Dietterich from Oregon State University.]]>
      </content:encoded>
      <guid isPermaLink="false">gid://art19-episode-locator/V0/s4-Sxf9xi30ajcs8QMxOSsqZdsSPzqSKKEyhYiN93QE</guid>
      <pubDate>Thu, 15 Jun 2017 19:28:14 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,research,aiml,AI,artificial intelligence,networks,intelligence,programming,machine,computers,computer science,deep,learning,ml</itunes:keywords>
      <itunes:duration>00:45:29</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/5f1141c9-57ce-44ba-8175-a5c3048b9f4e.mp3" type="audio/mpeg" length="39829838"/>
    </item>
    <item>
      <title>Graphons and "Inferencing"</title>
      <description>
        <![CDATA[In episode two of season three Neil takes us through the basics of dropout, we chat about the definition of inference (it's more about context than you think!), and hear an interview with Jennifer Chayes of Microsoft.]]>
      </description>
      <itunes:title>Graphons and "Inferencing"</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode two of season three Neil takes us through the basics of dropout, we chat about the definition of inference (it's more about context than you think!), and hear an interview with Jennifer Chayes of Microsoft.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode two of season three Neil takes us through the basics of dropout, we chat about the definition of inference (it's more about context than you think!), and hear an interview with Jennifer Chayes of Microsoft.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:5926f191be65943e98d14d03</guid>
      <pubDate>Thu, 25 May 2017 15:00:27 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,research,aiml,ai,artificial intelligence,networks,intelligence,programming,machine,computers,computer science,deep,learning,ml</itunes:keywords>
      <itunes:duration>00:43:41</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/011aca5d-00eb-42b8-abd9-16cc9700630b.mp3" type="audio/mpeg" length="40029622"/>
    </item>
    <item>
      <title>Hosts of Talking Machines: Neil Lawrence and Ryan Adams</title>
      <description>
        <![CDATA[Talking Machines is entering its third season and going through some changes. Our founding host Ryan is moving on, and in his place Neil Lawrence of Amazon is taking over as co-host. We say thank you and goodbye to Ryan with an interview about his work.]]>
      </description>
      <itunes:title>Hosts of Talking Machines: Neil Lawrence and Ryan Adams</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>Talking Machines is entering its third season and going through some changes. Our founding host Ryan is moving on, and in his place Neil Lawrence of Amazon is taking over as co-host. We say thank you and goodbye to Ryan with an interview about his work.</itunes:summary>
      <content:encoded>
        <![CDATA[Talking Machines is entering its third season and going through some changes. Our founding host Ryan is moving on, and in his place Neil Lawrence of Amazon is taking over as co-host. We say thank you and goodbye to Ryan with an interview about his work.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:5903434e2994cae8da7e4a35</guid>
      <pubDate>Thu, 27 Apr 2017 13:27:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>lawrence,artificial,neil,AI,artificial intelligence,networks,intelligence,seas,adams,programming,machine,computers,ML,research,AIML,computer science,deep,learning,amazon,ryan</itunes:keywords>
      <itunes:duration>00:35:36</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/5e04c7b2-e539-419e-9498-bcd686159d71.mp3" type="audio/mpeg" length="32269374"/>
    </item>
    <item>
      <title>ANGLICAN and Probabilistic Programming</title>
      <description>
        <![CDATA[In episode seventeen of season two we get an introduction to Min Hashing, talk with Frank Wood, the creator of ANGLICAN, about probabilistic programming and his new company, INVREA, and take a listener question about how to choose an architecture when using a neural network.]]>
      </description>
      <itunes:title>ANGLICAN and Probabilistic Programming</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode seventeen of season two we get an introduction to Min Hashing, talk with Frank Wood, the creator of ANGLICAN, about probabilistic programming and his new company, INVREA, and take a listener question about how to choose an architecture when using a neural network.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode seventeen of season two we get an introduction to Min Hashing, talk with Frank Wood, the creator of ANGLICAN, about probabilistic programming and his new company, INVREA, and take a listener question about how to choose an architecture when using a neural network.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:581372ae5016e1262b70ff82</guid>
      <pubDate>Thu, 01 Sep 2016 15:45:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,intelligence,programming,machine,computers,anglican,probabilistic,ML,research,AIML,computer science,deep,learning</itunes:keywords>
      <itunes:duration>00:46:13</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/d59ba33f-b94d-4c29-8a0e-675355e410d4.mp3" type="audio/mpeg" length="42460891"/>
    </item>
    <item>
      <title>Eric Lander and Restricted Boltzmann Machines</title>
      <description>
        <![CDATA[In episode sixteen of season two, we get an introduction to Restricted Boltzmann Machines, we take a listener question about tuning hyperparameters, plus we talk with Eric Lander of the Broad Institute.]]>
      </description>
      <itunes:title>Eric Lander and Restricted Boltzmann Machines</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode sixteen of season two, we get an introduction to Restricted Boltzmann Machines, we take a listener question about tuning hyperparameters, plus we talk with Eric Lander of the Broad Institute.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode sixteen of season two, we get an introduction to Restricted Boltzmann Machines, we take a listener question about tuning hyperparameters, plus we talk with Eric Lander of the Broad Institute.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:57d1a1e744024389e5c8c2a1</guid>
      <pubDate>Thu, 18 Aug 2016 17:37:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:55:57</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/b9bc5a6a-048f-455e-9ea9-aa27dcf236af.mp3" type="audio/mpeg" length="51797263"/>
    </item>
    <item>
      <title>Generative Art and Hamiltonian Monte Carlo</title>
      <description>
        <![CDATA[In episode fifteen of season two, we talk about Hamiltonian Monte Carlo, we take a listener question about unbalanced data, plus we talk with Doug Eck of Google’s Magenta project.  ]]>
      </description>
      <itunes:title>Generative Art and Hamiltonian Monte Carlo</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode fifteen of season two, we talk about Hamiltonian Monte Carlo, we take a listener question about unbalanced data, plus we talk with Doug Eck of Google’s Magenta project.  </itunes:summary>
      <content:encoded>
        <![CDATA[In episode fifteen of season two, we talk about Hamiltonian Monte Carlo, we take a listener question about unbalanced data, plus we talk with Doug Eck of Google’s Magenta project.  ]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:57b324f944024312ebd52ac2</guid>
      <pubDate>Thu, 04 Aug 2016 14:36:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:49:02</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/92cecb7f-c7c3-4ca1-b755-2c827e973b88.mp3" type="audio/mpeg" length="45168431"/>
    </item>
    <item>
      <title>Perturb-and-MAP and Machine Learning in the Flint Water Crisis</title>
      <description>
        <![CDATA[In episode fourteen of season two, we talk about Perturb-and-MAP, we take a listener question about classic artificial intelligence ideas being used in modern machine learning, plus we talk with Jake Abernethy of the University of Michigan about municipal data and his work on the Flint water crisis.]]>
      </description>
      <itunes:title>Perturb-and-MAP and Machine Learning in the Flint Water Crisis</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode fourteen of season two, we talk about Perturb-and-MAP, we take a listener question about classic artificial intelligence ideas being used in modern machine learning, plus we talk with Jake Abernethy of the University of Michigan about municipal data and his work on the Flint water crisis.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode fourteen of season two, we talk about Perturb-and-MAP, we take a listener question about classic artificial intelligence ideas being used in modern machine learning, plus we talk with Jake Abernethy of the University of Michigan about municipal data and his work on the Flint water crisis.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:579887e420099ea1919a7910</guid>
      <pubDate>Thu, 21 Jul 2016 10:07:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:40:26</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/c187a884-7879-46b5-910b-e60cfa75dd0b.mp3" type="audio/mpeg" length="36900362"/>
    </item>
    <item>
      <title>Automatic Translation and t-SNE</title>
      <description>
        <![CDATA[In episode thirteen of season two, we talk about t-Distributed Stochastic Neighbor Embedding (t-SNE), we take a listener question about statistical physics, plus we talk with Hal Daume of the University of Maryland (who is a great follow on Twitter).]]>
      </description>
      <itunes:title>Automatic Translation and t-SNE</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode thirteen of season two, we talk about t-Distributed Stochastic Neighbor Embedding (t-SNE), we take a listener question about statistical physics, plus we talk with Hal Daume of the University of Maryland (who is a great follow on Twitter).</itunes:summary>
      <content:encoded>
        <![CDATA[In episode thirteen of season two, we talk about t-Distributed Stochastic Neighbor Embedding (t-SNE), we take a listener question about statistical physics, plus we talk with Hal Daume of the University of Maryland (who is a great follow on Twitter).]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:577fcfb72994caa793123909</guid>
      <pubDate>Thu, 07 Jul 2016 16:07:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:34:01</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/9ea725fb-4c71-45ef-9632-a41f8616d5cd.mp3" type="audio/mpeg" length="30752182"/>
    </item>
    <item>
      <title>Fantasizing Cats and Data Numbers</title>
      <description>
        <![CDATA[In episode twelve of season two, we talk about generative adversarial networks, we take a listener question about using machine learning to improve or create products, plus we talk with Iain Murray of the University of Edinburgh.]]>
      </description>
      <itunes:title>Fantasizing Cats and Data Numbers</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode twelve of season two, we talk about generative adversarial networks, we take a listener question about using machine learning to improve or create products, plus we talk with Iain Murray of the University of Edinburgh.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode twelve of season two, we talk about generative adversarial networks, we take a listener question about using machine learning to improve or create products, plus we talk with Iain Murray of the University of Edinburgh.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:5766cd5e15d5db346159fde6</guid>
      <pubDate>Thu, 16 Jun 2016 16:50:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:51:13</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/251f3845-c9f4-4559-861b-695c08cf88f6.mp3" type="audio/mpeg" length="47249867"/>
    </item>
    <item>
      <title>Spark and ICML</title>
      <description>
        <![CDATA[In episode eleven of season two, we talk about the machine learning toolkit Spark, we take a listener question about the differences between the NIPS and ICML conferences, plus we talk with Sinead Williamson of The University of Texas at Austin.]]>
      </description>
      <itunes:title>Spark and ICML</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode eleven of season two, we talk about the machine learning toolkit Spark, we take a listener question about the differences between the NIPS and ICML conferences, plus we talk with Sinead Williamson of The University of Texas at Austin.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode eleven of season two, we talk about the machine learning toolkit Spark, we take a listener question about the differences between the NIPS and ICML conferences, plus we talk with Sinead Williamson of The University of Texas at Austin.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:57530db8d210b8643f8a6dea</guid>
      <pubDate>Thu, 02 Jun 2016 17:19:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:41:01</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/dc456be5-247d-41ca-a826-81f9e7a0968b.mp3" type="audio/mpeg" length="37467115"/>
    </item>
    <item>
      <title>Computational Learning Theory and Machine Learning for Understanding Cells</title>
      <description>
        <![CDATA[In episode ten of season two, we talk about Computational Learning Theory and Probably Approximately Correct Learning originated by Professor Leslie Valiant of SEAS at Harvard, we take a listener question about generative systems, plus we talk with Aviv Regev, Chair of the Faculty and Director of the Klarman Cell Observatory and the Cell Circuits Program at the Broad Institute.]]>
      </description>
      <itunes:title>Computational Learning Theory and Machine Learning for Understanding Cells</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode ten of season two, we talk about Computational Learning Theory and Probably Approximately Correct Learning originated by Professor Leslie Valiant of SEAS at Harvard, we take a listener question about generative systems, plus we talk with Aviv Regev, Chair of the Faculty and Director of the Klarman Cell Observatory and the Cell Circuits Program at the Broad Institute.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode ten of season two, we talk about Computational Learning Theory and Probably Approximately Correct Learning originated by Professor Leslie Valiant of SEAS at Harvard, we take a listener question about generative systems, plus we talk with Aviv Regev, Chair of the Faculty and Director of the Klarman Cell Observatory and the Cell Circuits Program at the Broad Institute.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:5741bde122482e19cfd88c80</guid>
      <pubDate>Thu, 19 May 2016 14:10:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:42:47</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/33ab857b-b700-4ec4-a6e7-313109998328.mp3" type="audio/mpeg" length="39161939"/>
    </item>
    <item>
      <title>Sparse Coding and MADBITS</title>
      <description>
        <![CDATA[In episode nine of season two, we talk about sparse coding and take a listener question about the next big demonstration for AI after AlphaGo. Plus we talk with Clement Farabet about MADBITS and the work he’s doing at Twitter Cortex.]]>
      </description>
      <itunes:title>Sparse Coding and MADBITS</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode nine of season two, we talk about sparse coding and take a listener question about the next big demonstration for AI after AlphaGo. Plus we talk with Clement Farabet about MADBITS and the work he’s doing at Twitter Cortex.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode nine of season two, we talk about sparse coding and take a listener question about the next big demonstration for AI after AlphaGo. Plus we talk with Clement Farabet about MADBITS and the work he’s doing at Twitter Cortex.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:572b7e097da24f177d5f7095</guid>
      <pubDate>Thu, 05 May 2016 17:08:22 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:43:25</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/8d9c3c86-5891-4938-8075-6f4e12cba3c7.mp3" type="audio/mpeg" length="39775921"/>
    </item>
    <item>
      <title>Remembering David MacKay</title>
      <description>
        <![CDATA[Recently Professor David MacKay passed away. We’ll spend this episode talking about his extensive body of work and its impacts. We’ll also talk with Philipp Hennig, a research group leader at the Max Planck Institute for Intelligent Systems, who trained in Professor MacKay’s group (with Ryan).]]>
      </description>
      <itunes:title>Remembering David MacKay</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>Recently Professor David MacKay passed away. We’ll spend this episode talking about his extensive body of work and its impacts. We’ll also talk with Philipp Hennig, a research group leader at the Max Planck Institute for Intelligent Systems, who trained in Professor MacKay’s group (with Ryan).</itunes:summary>
      <content:encoded>
        <![CDATA[Recently Professor David MacKay passed away. We’ll spend this episode talking about his extensive body of work and its impacts. We’ll also talk with Philipp Hennig, a research group leader at the Max Planck Institute for Intelligent Systems, who trained in Professor MacKay’s group (with Ryan).]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:571a1a20cf80a18454cc2cce</guid>
      <pubDate>Thu, 21 Apr 2016 12:12:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:55:15</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/1efbf12f-ddbd-45ae-9f69-a25c09d277a6.mp3" type="audio/mpeg" length="51134798"/>
    </item>
    <item>
      <title>Machine Learning and Society</title>
      <description>
        <![CDATA[Episode seven of season two is a little different from our usual episodes. Ryan and Katherine just returned from a conference where they got to talk with Neil Lawrence of the University of Sheffield about some of the larger issues surrounding machine learning and society. They discuss anthropomorphic intelligence, data ownership, and the ability to empathize. The entire episode is given over to this conversation in hopes that it will spur more discussion of these important issues as the field continues to grow.]]>
      </description>
      <itunes:title>Machine Learning and Society</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>Episode seven of season two is a little different from our usual episodes. Ryan and Katherine just returned from a conference where they got to talk with Neil Lawrence of the University of Sheffield about some of the larger issues surrounding machine learning and society. They discuss anthropomorphic intelligence, data ownership, and the ability to empathize. The entire episode is given over to this conversation in hopes that it will spur more discussion of these important issues as the field continues to grow.</itunes:summary>
      <content:encoded>
        <![CDATA[Episode seven of season two is a little different from our usual episodes. Ryan and Katherine just returned from a conference where they got to talk with Neil Lawrence of the University of Sheffield about some of the larger issues surrounding machine learning and society. They discuss anthropomorphic intelligence, data ownership, and the ability to empathize. The entire episode is given over to this conversation in hopes that it will spur more discussion of these important issues as the field continues to grow.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:57072197f699bbcfa155060a</guid>
      <pubDate>Fri, 08 Apr 2016 03:13:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,intelligence,programming,machine,computers,ML,research ,AIML,computer science,deep,learning</itunes:keywords>
      <itunes:duration>00:50:27</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/18cc3602-2228-4004-b016-77e2f063e8f1.mp3" type="audio/mpeg" length="46513423"/>
    </item>
    <item>
      <title>Software and Statistics for Machine Learning</title>
      <description>
        <![CDATA[In episode six of season two, we talk about how to build software for machine learning (and what the roadblocks are), we take a listener question about how to start exploring a new dataset, plus, we talk with Rob Tibshirani of Stanford University.]]>
      </description>
      <itunes:title>Software and Statistics for Machine Learning</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode six of season two, we talk about how to build software for machine learning (and what the roadblocks are), we take a listener question about how to start exploring a new dataset, plus, we talk with Rob Tibshirani of Stanford University.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode six of season two, we talk about how to build software for machine learning (and what the roadblocks are), we take a listener question about how to start exploring a new dataset, plus, we talk with Rob Tibshirani of Stanford University.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:56f67d759f72666afb4f8e7f</guid>
      <pubDate>Thu, 24 Mar 2016 12:15:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,intelligence,programming,machine,computers,ML,research ,AIML,computer science,deep,learning</itunes:keywords>
      <itunes:duration>00:41:07</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/bdc6169f-82a4-4e1e-badf-f5c4d03d4f43.mp3" type="audio/mpeg" length="37566589"/>
    </item>
    <item>
      <title>Machine Learning in Healthcare and The AlphaGo Matches</title>
      <description>
        <![CDATA[In episode five of season two, Ryan walks us through variational inference, and we put some listener questions about Go and how to play it to Andy Okun, president of the American Go Association (who is in Seoul, South Korea, watching the Lee Sedol/AlphaGo games). Plus we hear from Suchi Saria of Johns Hopkins about applying machine learning to understanding health care data.]]>
      </description>
      <itunes:title>Machine Learning in Healthcare and The AlphaGo Matches</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode five of season two, Ryan walks us through variational inference, and we put some listener questions about Go and how to play it to Andy Okun, president of the American Go Association (who is in Seoul, South Korea, watching the Lee Sedol/AlphaGo games). Plus we hear from Suchi Saria of Johns Hopkins about applying machine learning to understanding health care data.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode five of season two, Ryan walks us through variational inference, and we put some listener questions about Go and how to play it to Andy Okun, president of the American Go Association (who is in Seoul, South Korea, watching the Lee Sedol/AlphaGo games). Plus we hear from Suchi Saria of Johns Hopkins about applying machine learning to understanding health care data.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:56e19d1ef699bbc1ef82817d</guid>
      <pubDate>Thu, 10 Mar 2016 16:30:33 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,intelligence,programming,machine,computers,ML,research ,AIML,computer science,deep,learning</itunes:keywords>
      <itunes:duration>00:50:31</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/55565f11-cf32-44b9-acd5-337a86fcd0f9.mp3" type="audio/mpeg" length="46588656"/>
    </item>
    <item>
      <title>AI Safety and The Legacy of Bletchley Park</title>
      <description>
        <![CDATA[In episode four of season two, we talk about some of the major issues in AI safety (and how they’re not really that different from the questions we ask whenever we create a new tool). One place you can go for other opinions on AI safety is the Future of Life Institute. We take a listener question about time series, and we talk with Nick Patterson of the Broad Institute about everything from ancient DNA to Alan Turing. If you're as excited about AlphaGo playing Lee Sedol as Nick is, you can get details on the match on DeepMind's YouTube channel March 5th through the 15th.]]>
      </description>
      <itunes:title>AI Safety and The Legacy of Bletchley Park</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode four of season two, we talk about some of the major issues in AI safety (and how they’re not really that different from the questions we ask whenever we create a new tool). One place you can go for other opinions on AI safety is the Future of Life Institute. We take a listener question about time series, and we talk with Nick Patterson of the Broad Institute about everything from ancient DNA to Alan Turing. If you're as excited about AlphaGo playing Lee Sedol as Nick is, you can get details on the match on DeepMind's YouTube channel March 5th through the 15th.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode four of season two, we talk about some of the major issues in AI safety (and how they’re not really that different from the questions we ask whenever we create a new tool). One place you can go for other opinions on AI safety is the Future of Life Institute. We take a listener question about time series, and we talk with Nick Patterson of the Broad Institute about everything from ancient DNA to Alan Turing. If you're as excited about AlphaGo playing Lee Sedol as Nick is, you can get details on the match on DeepMind's YouTube channel March 5th through the 15th.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:56d06e4a45bf211cff51e652</guid>
      <pubDate>Thu, 25 Feb 2016 15:24:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:50:55</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/a2c6e267-918b-477c-a8c9-d1d7f362aa3d.mp3" type="audio/mpeg" length="46967327"/>
    </item>
    <item>
      <title>Robotics and Machine Learning Music Videos</title>
      <description>
        <![CDATA[In episode three of season two, Ryan walks us through the AlphaGo results and takes a listener question about using Gaussian processes for classification. Plus we talk with Michael Littman of Brown University about his work, robots, and making music videos. Also not to be missed: Michael’s appearance in the recent TurboTax ad!]]>
      </description>
      <itunes:title>Robotics and Machine Learning Music Videos</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode three of season two, Ryan walks us through the AlphaGo results and takes a listener question about using Gaussian processes for classification. Plus we talk with Michael Littman of Brown University about his work, robots, and making music videos. Also not to be missed: Michael’s appearance in the recent TurboTax ad!</itunes:summary>
      <content:encoded>
        <![CDATA[In episode three of season two, Ryan walks us through the AlphaGo results and takes a listener question about using Gaussian processes for classification. Plus we talk with Michael Littman of Brown University about his work, robots, and making music videos. Also not to be missed: Michael’s appearance in the recent TurboTax ad!]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:56bdfa66746fb9f48e7574e7</guid>
      <pubDate>Thu, 11 Feb 2016 16:00:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,intelligence,programming,machine,computers,ML,research ,AIML,computer science,deep,learning</itunes:keywords>
      <itunes:duration>00:42:07</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/0dd8715d-f555-4a43-bbeb-6c9e12c21d0e.mp3" type="audio/mpeg" length="38522880"/>
    </item>
    <item>
      <title>OpenAI and Gaussian Processes</title>
      <description>
        <![CDATA[In episode two of season two, Ryan introduces us to Gaussian processes, and we take a listener question on K-means. Plus, we talk with Ilya Sutskever, the director of research for OpenAI. (For more from Ilya, you can listen to our season one interview with him.)]]>
      </description>
      <itunes:title>OpenAI and Gaussian Processes</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode two of season two, Ryan introduces us to Gaussian processes, and we take a listener question on K-means. Plus, we talk with Ilya Sutskever, the director of research for OpenAI. (For more from Ilya, you can listen to our season one interview with him.)</itunes:summary>
      <content:encoded>
        <![CDATA[In episode two of season two, Ryan introduces us to Gaussian processes, and we take a listener question on K-means. Plus, we talk with Ilya Sutskever, the director of research for OpenAI. (For more from Ilya, you can listen to our season one interview with him.)]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:56aa5a71fb36b15e28b8cded</guid>
      <pubDate>Thu, 28 Jan 2016 18:20:06 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:37:29</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/384d08ac-0792-4e35-8b61-9aa011edda65.mp3" type="audio/mpeg" length="34070778"/>
    </item>
    <item>
      <title>Real Human Actions and Women in Machine Learning</title>
      <description>
        <![CDATA[In episode one of season two, we celebrate the 10th anniversary of Women in Machine Learning (WiML) with its co-founder (and our guest host for this episode) Hanna Wallach of Microsoft Research. Hanna and Jenn Wortman Vaughan, who also helped to found the event, tell us about how the 2015 event went. Lillian Lee (Cornell), Raia Hadsell (Google DeepMind), Been Kim (AI2/University of Washington), and Corinna Cortes (Google Research) gave invited talks at the 2015 event. WiML also released a directory of women in machine learning; if you’d like to be listed, want to find a collaborator, or are looking for an expert to take part in an event, it’s an excellent resource. Plus, we talk with Jenn Wortman Vaughan about the research she is doing at Microsoft Research, which examines the assumptions we make about how humans actually act and uses that to inform our thinking about our interactions with computers. Want to learn more about the talks at WiML 2015? Here are the slides from each speaker: Lillian Lee, Corinna Cortes, Raia Hadsell, Been Kim.]]>
      </description>
      <itunes:title>Real Human Actions and Women in Machine Learning</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode one of season two, we celebrate the 10th anniversary of Women in Machine Learning (WiML) with its co-founder (and our guest host for this episode) Hanna Wallach of Microsoft Research. Hanna and Jenn Wortman Vaughan, who also helped to found the event, tell us about how the 2015 event went. Lillian Lee (Cornell), Raia Hadsell (Google DeepMind), Been Kim (AI2/University of Washington), and Corinna Cortes (Google Research) gave invited talks at the 2015 event. WiML also released a directory of women in machine learning; if you’d like to be listed, want to find a collaborator, or are looking for an expert to take part in an event, it’s an excellent resource. Plus, we talk with Jenn Wortman Vaughan about the research she is doing at Microsoft Research, which examines the assumptions we make about how humans actually act and uses that to inform our thinking about our interactions with computers. Want to learn more about the talks at WiML 2015? Here are the slides from each speaker: Lillian Lee, Corinna Cortes, Raia Hadsell, Been Kim.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode one of season two, we celebrate the 10th anniversary of Women in Machine Learning (WiML) with its co-founder (and our guest host for this episode) Hanna Wallach of Microsoft Research. Hanna and Jenn Wortman Vaughan, who also helped to found the event, tell us about how the 2015 event went. Lillian Lee (Cornell), Raia Hadsell (Google DeepMind), Been Kim (AI2/University of Washington), and Corinna Cortes (Google Research) gave invited talks at the 2015 event. WiML also released a directory of women in machine learning; if you’d like to be listed, want to find a collaborator, or are looking for an expert to take part in an event, it’s an excellent resource. Plus, we talk with Jenn Wortman Vaughan about the research she is doing at Microsoft Research, which examines the assumptions we make about how humans actually act and uses that to inform our thinking about our interactions with computers. Want to learn more about the talks at WiML 2015? Here are the slides from each speaker: Lillian Lee, Corinna Cortes, Raia Hadsell, Been Kim.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:5698d9851f4039a738f2aefd</guid>
      <pubDate>Thu, 14 Jan 2016 11:35:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,wallach,intelligence,programming,machine,computers,hanna,ML,research ,AIML,computer science,deep,science,learning</itunes:keywords>
      <itunes:duration>01:01:31</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/a74ce734-930f-48d2-bc29-cd2aaba8faf2.mp3" type="audio/mpeg" length="57148395"/>
    </item>
    <item>
      <title>Open Source Releases and The End of Season One</title>
      <description>
        <![CDATA[In episode twenty-four we talk with Ben Vigoda about his work in probabilistic programming (everything from his thesis to his new company). Ryan talks about TensorFlow and Autograd for Torch, some open source tools that have recently been released. Plus, we take a listener question about the biggest thing in machine learning this year. This is the last episode in season one. We want to thank all our wonderful listeners for supporting the show, asking us questions, and making season two possible! We’ll be back in early January with the beginning of season two!]]>
      </description>
      <itunes:title>Open Source Releases and The End of Season One</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode twenty-four we talk with Ben Vigoda about his work in probabilistic programming (everything from his thesis to his new company). Ryan talks about TensorFlow and Autograd for Torch, some open source tools that have recently been released. Plus, we take a listener question about the biggest thing in machine learning this year. This is the last episode in season one. We want to thank all our wonderful listeners for supporting the show, asking us questions, and making season two possible! We’ll be back in early January with the beginning of season two!</itunes:summary>
      <content:encoded>
        <![CDATA[In episode twenty-four we talk with Ben Vigoda about his work in probabilistic programming (everything from his thesis to his new company). Ryan talks about TensorFlow and Autograd for Torch, some open source tools that have recently been released. Plus, we take a listener question about the biggest thing in machine learning this year. This is the last episode in season one. We want to thank all our wonderful listeners for supporting the show, asking us questions, and making season two possible! We’ll be back in early January with the beginning of season two!]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:56522452e4b0e332af413d3b</guid>
      <pubDate>Sun, 22 Nov 2015 20:37:47 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:42:40</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/90e77e48-01ff-4af9-b5ae-11c1380106bb.mp3" type="audio/mpeg" length="39044075"/>
    </item>
    <item>
      <title>Probabilistic Programming and Digital Humanities</title>
      <description>
        <![CDATA[In episode 23 we talk with David Mimno of Cornell University about his work in the digital humanities (and explore what machine learning can tell us about lady zombie ghosts and huge bodies of literature). Ryan introduces us to probabilistic programming, and we take a listener question about knowledge transfer between math and machine learning.]]>
      </description>
      <itunes:title>Probabilistic Programming and Digital Humanities</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode 23 we talk with David Mimno of Cornell University about his work in the digital humanities (and explore what machine learning can tell us about lady zombie ghosts and huge bodies of literature). Ryan introduces us to probabilistic programming, and we take a listener question about knowledge transfer between math and machine learning.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode 23 we talk with David Mimno of Cornell University about his work in the digital humanities (and explore what machine learning can tell us about lady zombie ghosts and huge bodies of literature). Ryan introduces us to probabilistic programming, and we take a listener question about knowledge transfer between math and machine learning.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:563bcca5e4b08fe45c333134</guid>
      <pubDate>Thu, 05 Nov 2015 21:45:18 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:50:12</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/0d2f028e-5a3f-4aa4-b753-3ae6d12f3384.mp3" type="audio/mpeg" length="46274351"/>
    </item>
    <item>
      <title>Workshops at NIPS and Crowdsourcing in Machine Learning</title>
      <description>
        <![CDATA[In episode twenty two we talk with Adam Kalai of Microsoft Research New England about his work using crowdsourcing in Machine Learning, the language made of shapes of words, and New England Machine Learning Day. We take a look at the workshops being presented at NIPS this year, and we take a listener question about changing the number of features your data has.]]>
      </description>
      <itunes:title>Workshops at NIPS and Crowdsourcing in Machine Learning</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode twenty two we talk with Adam Kalai of Microsoft Research New England about his work using crowdsourcing in Machine Learning, the language made of shapes of words, and New England Machine Learning Day. We take a look at the workshops being presented at NIPS this year, and we take a listener question about changing the number of features your data has.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode twenty two we talk with Adam Kalai of Microsoft Research New England about his work using crowdsourcing in Machine Learning, the language made of shapes of words, and New England Machine Learning Day. We take a look at the workshops being presented at NIPS this year, and we take a listener question about changing the number of features your data has.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:562a2d15e4b050fa52c68e51</guid>
      <pubDate>Thu, 22 Oct 2015 12:53:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:49:45</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/a00055fa-13c2-452b-8f16-7a619d73a010.mp3" type="audio/mpeg" length="45840927"/>
    </item>
    <item>
      <title>Machine Learning Mastery and Cancer Clusters</title>
      <description>
        <![CDATA[In episode twenty-one we talk with Quaid Morris of the University of Toronto, who is using machine learning to find a better way to treat cancers. Ryan introduces us to expectation maximization, and we take a listener question about how to master machine learning.]]>
      </description>
      <itunes:title>Machine Learning Mastery and Cancer Clusters</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode twenty-one we talk with Quaid Morris of the University of Toronto, who is using machine learning to find a better way to treat cancers. Ryan introduces us to expectation maximization, and we take a listener question about how to master machine learning.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode twenty-one we talk with Quaid Morris of the University of Toronto, who is using machine learning to find a better way to treat cancers. Ryan introduces us to expectation maximization, and we take a listener question about how to master machine learning.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:5617bce9e4b05d708cf3ed47</guid>
      <pubDate>Thu, 08 Oct 2015 13:30:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:28:44</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/bddb8d59-0980-4c2c-a61e-be21d0903f87.mp3" type="audio/mpeg" length="25671053"/>
    </item>
    <item>
      <title>Data from Video Games and The Master Algorithm</title>
      <description>
        <![CDATA[In episode 20 we chat with Pedro Domingos of the University of Washington, who has just published the book The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. We get some insight into linear dynamical systems, which the Datta Lab at Harvard Medical School is doing some interesting work with. Plus, we take a listener question about using video games to generate labeled data (spoiler alert: it's an awesome idea!). We're in the final hours of our Fundraising Campaign and we need your help!]]>
      </description>
      <itunes:title>Data from Video Games and The Master Algorithm</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode 20 we chat with Pedro Domingos of the University of Washington, who has just published the book The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. We get some insight into linear dynamical systems, which the Datta Lab at Harvard Medical School is doing some interesting work with. Plus, we take a listener question about using video games to generate labeled data (spoiler alert: it's an awesome idea!). We're in the final hours of our Fundraising Campaign and we need your help!</itunes:summary>
      <content:encoded>
        <![CDATA[In episode 20 we chat with Pedro Domingos of the University of Washington, who has just published the book The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. We get some insight into linear dynamical systems, which the Datta Lab at Harvard Medical School is doing some interesting work with. Plus, we take a listener question about using video games to generate labeled data (spoiler alert: it's an awesome idea!). We're in the final hours of our Fundraising Campaign and we need your help!]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:5604650ce4b094761a54a38d</guid>
      <pubDate>Thu, 24 Sep 2015 21:55:35 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,intelligence,programming,machine,computers,ML,research ,AIML,computer science,deep,learning</itunes:keywords>
      <itunes:duration>00:48:17</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/1f97c278-b9c8-49f3-ba75-07b6bbc6f276.mp3" type="audio/mpeg" length="44440764"/>
    </item>
    <item>
      <title>Strong AI and Autoencoders</title>
      <description>
        <![CDATA[In episode nineteen we chat with Hugo Larochelle about his work on unsupervised learning, the International Conference on Learning Representations (ICLR), and his teaching style. His YouTube courses are not to be missed, and his Twitter feed @Hugo_Larochelle is a great source for paper reviews. Ryan introduces us to autoencoders (for more, turn to the work of Richard Zemel), plus we tackle the question of what is standing in the way of strong AI. Talking Machines is beginning development of season two! We need your help! Donate now on Kickstarter.]]>
      </description>
      <itunes:title>Strong AI and Autoencoders</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode nineteen we chat with Hugo Larochelle about his work on unsupervised learning, the International Conference on Learning Representations (ICLR), and his teaching style. His YouTube courses are not to be missed, and his Twitter feed @Hugo_Larochelle is a great source for paper reviews. Ryan introduces us to autoencoders (for more, turn to the work of Richard Zemel), plus we tackle the question of what is standing in the way of strong AI. Talking Machines is beginning development of season two! We need your help! Donate now on Kickstarter.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode nineteen we chat with Hugo Larochelle about his work on unsupervised learning, the International Conference on Learning Representations (ICLR), and his teaching style. His YouTube courses are not to be missed, and his Twitter feed @Hugo_Larochelle is a great source for paper reviews. Ryan introduces us to autoencoders (for more, turn to the work of Richard Zemel), plus we tackle the question of what is standing in the way of strong AI. Talking Machines is beginning development of season two! We need your help! Donate now on Kickstarter.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:55f1fc7ee4b0060469e69f8d</guid>
      <pubDate>Thu, 10 Sep 2015 17:00:00 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:38:03</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/3478b014-8833-45c2-b14d-5a78abbe71db.mp3" type="audio/mpeg" length="34623738"/>
    </item>
    <item>
      <title>Active Learning and Machine Learning in Neuroscience</title>
      <description>
        <![CDATA[In episode eighteen we talk with Sham Kakade, of Microsoft Research New England, about his expansive work, which touches on everything from neuroscience to theoretical machine learning. Ryan introduces us to active learning (great tutorial here), and we take a question on evolutionary algorithms. Today we're announcing that season two of Talking Machines is moving into development, but we need your help! In order to raise funds, we've opened the show up to sponsorship and started a Kickstarter, and we've got some great nerd cred prizes to thank you with. But more than just getting you a totally sweet mug, your donation will fuel journalism about the reality of scientific research, something that is unfortunately hard to find. Lend a hand if you can!]]>
      </description>
      <itunes:title>Active Learning and Machine Learning in Neuroscience</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode eighteen we talk with Sham Kakade, of Microsoft Research New England, about his expansive work, which touches on everything from neuroscience to theoretical machine learning. Ryan introduces us to active learning (great tutorial here), and we take a question on evolutionary algorithms. Today we're announcing that season two of Talking Machines is moving into development, but we need your help! In order to raise funds, we've opened the show up to sponsorship and started a Kickstarter, and we've got some great nerd cred prizes to thank you with. But more than just getting you a totally sweet mug, your donation will fuel journalism about the reality of scientific research, something that is unfortunately hard to find. Lend a hand if you can!</itunes:summary>
      <content:encoded>
        <![CDATA[In episode eighteen we talk with Sham Kakade, of Microsoft Research New England, about his expansive work, which touches on everything from neuroscience to theoretical machine learning. Ryan introduces us to active learning (great tutorial here), and we take a question on evolutionary algorithms. Today we're announcing that season two of Talking Machines is moving into development, but we need your help! In order to raise funds, we've opened the show up to sponsorship and started a Kickstarter, and we've got some great nerd cred prizes to thank you with. But more than just getting you a totally sweet mug, your donation will fuel journalism about the reality of scientific research, something that is unfortunately hard to find. Lend a hand if you can!]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:55df2604e4b0f4a748236288</guid>
      <pubDate>Thu, 27 Aug 2015 15:12:45 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:55:49</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/d9019c77-5367-492c-a8f4-9a8443579b52.mp3" type="audio/mpeg" length="51677309"/>
    </item>
    <item>
      <title>Machine Learning in Biology and Getting into Grad School</title>
      <description>
        <![CDATA[In episode seventeen we talk with Jennifer Listgarten of Microsoft Research New England about her work using machine learning to answer questions in biology. Recently, with her collaborator Nicolo Fusi, she used machine learning to make CRISPR more efficient and to correct for latent population structure in GWAS studies. We take a question from a listener about the development of computational biology, and Ryan gives us some great advice on how to get into grad school. (Spoiler alert: apply to the lab, not the program.)]]>
      </description>
      <itunes:title>Machine Learning in Biology and Getting into Grad School</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode seventeen we talk with Jennifer Listgarten of Microsoft Research New England about her work using machine learning to answer questions in biology. Recently, with her collaborator Nicolo Fusi, she used machine learning to make CRISPR more efficient and to correct for latent population structure in GWAS studies. We take a question from a listener about the development of computational biology, and Ryan gives us some great advice on how to get into grad school. (Spoiler alert: apply to the lab, not the program.)</itunes:summary>
      <content:encoded>
        <![CDATA[In episode seventeen we talk with Jennifer Listgarten of Microsoft Research New England about her work using machine learning to answer questions in biology. Recently, with her collaborator Nicolo Fusi, she used machine learning to make CRISPR more efficient and to correct for latent population structure in GWAS studies. We take a question from a listener about the development of computational biology, and Ryan gives us some great advice on how to get into grad school. (Spoiler alert: apply to the lab, not the program.)]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:55cccd6fe4b0f1c896833517</guid>
      <pubDate>Thu, 13 Aug 2015 17:07:47 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:50:26</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/632ffc91-f94b-4997-9b62-8d2c03840c21.mp3" type="audio/mpeg" length="46502138"/>
    </item>
    <item>
      <title>Machine Learning for Sports and Real Time Predictions</title>
      <description>
        <![CDATA[In episode sixteen we chat with Danny Tarlow of Microsoft Research Cambridge (in the UK, not MA). Danny (along with Chris Maddison and Tom Minka) won best paper at NIPS 2014 for his paper A* Sampling. We talk with him about his work in applying machine learning to sports and politics. Plus, we take a listener question on making real-time predictions using machine learning, and we demystify backpropagation. You can use Torch, Theano, or Autograd to explore backprop more.]]>
      </description>
      <itunes:title>Machine Learning for Sports and Real Time Predictions</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode sixteen we chat with Danny Tarlow of Microsoft Research Cambridge (in the UK, not MA). Danny (along with Chris Maddison and Tom Minka) won best paper at NIPS 2014 for his paper A* Sampling. We talk with him about his work in applying machine learning to sports and politics. Plus, we take a listener question on making real-time predictions using machine learning, and we demystify backpropagation. You can use Torch, Theano, or Autograd to explore backprop more.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode sixteen we chat with Danny Tarlow of Microsoft Research Cambridge (in the UK, not MA). Danny (along with Chris Maddison and Tom Minka) won best paper at NIPS 2014 for his paper A* Sampling. We talk with him about his work in applying machine learning to sports and politics. Plus, we take a listener question on making real-time predictions using machine learning, and we demystify backpropagation. You can use Torch, Theano, or Autograd to explore backprop more.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:55ba3ccae4b023ae03fee069</guid>
      <pubDate>Thu, 30 Jul 2015 15:06:54 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,sports</itunes:keywords>
      <itunes:duration>00:31:08</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/21fd7f11-2743-4634-8f89-7f1691e73ab4.mp3" type="audio/mpeg" length="27983621"/>
    </item>
    <item>
      <title>Really Really Big Data and Machine Learning in Business</title>
      <description>
        <![CDATA[In episode fifteen we talk with Max Welling, of the University of Amsterdam and the University of California, Irvine, about his work with extremely large data, and about big business and machine learning. Max was program co-chair for NIPS in 2013, when Mark Zuckerberg visited the conference, an event which Max wrote very thoughtfully about. We also take a listener question about the relationship between machine learning and artificial intelligence. Plus, we get an introduction to change point detection. For more on change point detection, check out the work of Paul Fearnhead of Lancaster University. Ryan also has a paper on the topic from way back when.]]>
      </description>
      <itunes:title>Really Really Big Data and Machine Learning in Business</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode fifteen we talk with Max Welling, of the University of Amsterdam and the University of California, Irvine, about his work with extremely large data, and about big business and machine learning. Max was program co-chair for NIPS in 2013, when Mark Zuckerberg visited the conference, an event which Max wrote very thoughtfully about. We also take a listener question about the relationship between machine learning and artificial intelligence. Plus, we get an introduction to change point detection. For more on change point detection, check out the work of Paul Fearnhead of Lancaster University. Ryan also has a paper on the topic from way back when.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode fifteen we talk with Max Welling, of the University of Amsterdam and the University of California, Irvine, about his work with extremely large data, and about big business and machine learning. Max was program co-chair for NIPS in 2013, when Mark Zuckerberg visited the conference, an event which Max wrote very thoughtfully about. We also take a listener question about the relationship between machine learning and artificial intelligence. Plus, we get an introduction to change point detection. For more on change point detection, check out the work of Paul Fearnhead of Lancaster University. Ryan also has a paper on the topic from way back when.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:55a7e088e4b05d53975c14bc</guid>
      <pubDate>Thu, 16 Jul 2015 16:57:42 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:25:46</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/70bd202a-802f-47c8-bbd7-a88e6eb7f696.mp3" type="audio/mpeg" length="22821407"/>
    </item>
    <item>
      <title>Solving Intelligence and Machine Learning Fundamentals</title>
      <description>
        <![CDATA[In episode fourteen we talk with Nando de Freitas. He’s a professor of Computer Science at the University of Oxford and a senior staff research scientist at Google DeepMind. Right now he’s focusing on solving intelligence. (No biggie.) Ryan introduces us to anchor words and how they can help us expand our ability to explore topic models. Plus, we take a question about the fundamentals of tackling a problem with machine learning.]]>
      </description>
      <itunes:title>Solving Intelligence and Machine Learning Fundamentals</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode fourteen we talk with Nando de Freitas. He’s a professor of Computer Science at the University of Oxford and a senior staff research scientist at Google DeepMind. Right now he’s focusing on solving intelligence. (No biggie.) Ryan introduces us to anchor words and how they can help us expand our ability to explore topic models. Plus, we take a question about the fundamentals of tackling a problem with machine learning.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode fourteen we talk with Nando de Freitas. He’s a professor of Computer Science at the University of Oxford and a senior staff research scientist at Google DeepMind. Right now he’s focusing on solving intelligence. (No biggie.) Ryan introduces us to anchor words and how they can help us expand our ability to explore topic models. Plus, we take a question about the fundamentals of tackling a problem with machine learning.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:5595ac6fe4b01a6af5864836</guid>
      <pubDate>Thu, 02 Jul 2015 21:31:12 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:32:11</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/2f2de4bd-708b-4cd7-87b3-1a4c3b0f3205.mp3" type="audio/mpeg" length="28981289"/>
    </item>
    <item>
      <title>Working With Data and Machine Learning in Advertising</title>
      <description>
        <![CDATA[In episode thirteen we talk with Claudia Perlich, Chief Scientist at Dstillery. We talk about her work using machine learning in digital advertising and her approach to data in competitions. We take a look at information leakage in competitions after the ImageNet Challenge this year. The New York Times covered the events, and Neil Lawrence has been writing thoughtfully about it and its impact. Plus, we take a listener question about trends in data size.]]>
      </description>
      <itunes:title>Working With Data and Machine Learning in Advertising</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode thirteen we talk with Claudia Perlich, Chief Scientist at Dstillery. We talk about her work using machine learning in digital advertising and her approach to data in competitions. We take a look at information leakage in competitions after the ImageNet Challenge this year. The New York Times covered the events, and Neil Lawrence has been writing thoughtfully about it and its impact. Plus, we take a listener question about trends in data size.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode thirteen we talk with Claudia Perlich, Chief Scientist at Dstillery. We talk about her work using machine learning in digital advertising and her approach to data in competitions. We take a look at information leakage in competitions after the ImageNet Challenge this year. The New York Times covered the events, and Neil Lawrence has been writing thoughtfully about it and its impact. Plus, we take a listener question about trends in data size.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:5582ed2ae4b0a05cee6fafaa</guid>
      <pubDate>Thu, 18 Jun 2015 16:35:42 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:41:11</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/6930e0b7-8e7d-410c-a8ce-9ca450bbdda5.mp3" type="audio/mpeg" length="37632209"/>
    </item>
    <item>
      <title>The Economic Impact of Machine Learning and Using The Kernel Trick on Big Data</title>
      <description>
        <![CDATA[In episode twelve we talk with Andrew Ng, Chief Scientist at Baidu, about how speech recognition is going to explode the way we use mobile devices and his approach to working on the problem. We also discuss why we need to prepare for the economic impacts of machine learning. We’re introduced to Random Features for Large-Scale Kernel Machines, and talk about how this twist on the kernel trick can help you dig into big data. Plus, we take a listener question about the scale of computing power in machine learning.]]>
      </description>
      <itunes:title>The Economic Impact of Machine Learning and Using The Kernel Trick on Big Data</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode twelve we talk with Andrew Ng, Chief Scientist at Baidu, about how speech recognition is going to explode the way we use mobile devices and his approach to working on the problem. We also discuss why we need to prepare for the economic impacts of machine learning. We’re introduced to Random Features for Large-Scale Kernel Machines, and talk about how this twist on the kernel trick can help you dig into big data. Plus, we take a listener question about the scale of computing power in machine learning.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode twelve we talk with Andrew Ng, Chief Scientist at Baidu, about how speech recognition is going to explode the way we use mobile devices and his approach to working on the problem. We also discuss why we need to prepare for the economic impacts of machine learning. We’re introduced to Random Features for Large-Scale Kernel Machines, and talk about how this twist on the kernel trick can help you dig into big data. Plus, we take a listener question about the scale of computing power in machine learning.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:5570570be4b0f24e595941be</guid>
      <pubDate>Thu, 04 Jun 2015 13:57:10 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,andrew,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ng,baidu,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:42:36</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/6f95a426-f529-407c-977f-22b5e6be6684.mp3" type="audio/mpeg" length="38986396"/>
    </item>
    <item>
      <title>How We Think About Privacy and Finding Features in Black Boxes</title>
      <description>
        <![CDATA[In episode eleven we chat with Neil Lawrence from the University of Sheffield. We talk about the problems of privacy in the age of machine learning, the responsibilities that come with using ML tools and making data more open. We learn about the Markov decision process (and what happens when you use it in the real world and it becomes a partially observable Markov decision process) and take a listener question about finding insights into features in the black boxes of deep learning.]]>
      </description>
      <itunes:title>How We Think About Privacy and Finding Features in Black Boxes</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode eleven we chat with Neil Lawrence from the University of Sheffield. We talk about the problems of privacy in the age of machine learning, the responsibilities that come with using ML tools and making data more open. We learn about the Markov decision process (and what happens when you use it in the real world and it becomes a partially observable Markov decision process) and take a listener question about finding insights into features in the black boxes of deep learning.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode eleven we chat with Neil Lawrence from the University of Sheffield. We talk about the problems of privacy in the age of machine learning, the responsibilities that come with using ML tools and making data more open. We learn about the Markov decision process (and what happens when you use it in the real world and it becomes a partially observable Markov decision process) and take a listener question about finding insights into features in the black boxes of deep learning.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:555e3366e4b0534bc827a2c8</guid>
      <pubDate>Thu, 21 May 2015 19:46:15 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:35:43</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/6d6c8e87-bce1-45b3-8b37-0d0ac624dc07.mp3" type="audio/mpeg" length="32374282"/>
    </item>
    <item>
      <title>Interdisciplinary Data and Helping Humans Be Creative</title>
      <description>
        <![CDATA[In episode ten we talk with David Blei of Columbia University. We talk about his work on latent Dirichlet allocation, topic models, the PhD program in data that he’s helping to create at Columbia, and why exploring data is inherently multidisciplinary. We learn about Markov chain Monte Carlo and take a listener question about how machine learning can make humans more creative.]]>
      </description>
      <itunes:title>Interdisciplinary Data and Helping Humans Be Creative</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode ten we talk with David Blei of Columbia University. We talk about his work on latent Dirichlet allocation, topic models, the PhD program in data that he’s helping to create at Columbia, and why exploring data is inherently multidisciplinary. We learn about Markov chain Monte Carlo and take a listener question about how machine learning can make humans more creative.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode ten we talk with David Blei of Columbia University. We talk about his work on latent Dirichlet allocation, topic models, the PhD program in data that he’s helping to create at Columbia, and why exploring data is inherently multidisciplinary. We learn about Markov chain Monte Carlo and take a listener question about how machine learning can make humans more creative.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:554b91e6e4b029f0ef39eaae</guid>
      <pubDate>Thu, 07 May 2015 16:32:54 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:36:17</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/414d1919-3012-436e-b5b4-5fdc5d0b30e6.mp3" type="audio/mpeg" length="32913867"/>
    </item>
    <item>
      <title>Starting Simple and Machine Learning in Meds</title>
      <description>
        <![CDATA[In episode nine we talk with George Dahl, of the University of Toronto, about his work on the Merck molecular activity challenge on Kaggle and speech recognition. George recently defended his thesis successfully at the end of March 2015. (Congrats, George!) We learn about how networks and graphs can help us understand latent properties of relationships, and we take a listener question about just how you find the right algorithm to solve a problem. (Spoiler: start simple.)]]>
      </description>
      <itunes:title>Starting Simple and Machine Learning in Meds</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode nine we talk with George Dahl, of the University of Toronto, about his work on the Merck molecular activity challenge on Kaggle and speech recognition. George recently defended his thesis successfully at the end of March 2015. (Congrats, George!) We learn about how networks and graphs can help us understand latent properties of relationships, and we take a listener question about just how you find the right algorithm to solve a problem. (Spoiler: start simple.)</itunes:summary>
      <content:encoded>
        <![CDATA[In episode nine we talk with George Dahl, of the University of Toronto, about his work on the Merck molecular activity challenge on Kaggle and speech recognition. George recently defended his thesis successfully at the end of March 2015. (Congrats, George!) We learn about how networks and graphs can help us understand latent properties of relationships, and we take a listener question about just how you find the right algorithm to solve a problem. (Spoiler: start simple.)]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:55390165e4b0a334dca4b458</guid>
      <pubDate>Thu, 23 Apr 2015 14:31:58 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:40:24</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/34eac9ff-916c-4fe0-86a5-5a7a4ee68b1c.mp3" type="audio/mpeg" length="36874449"/>
    </item>
    <item>
      <title>Spinning Programming Plates and Creative Algorithms</title>
      <description>
        <![CDATA[On episode eight we talk with Charles Sutton, a professor in the School of Informatics at the University of Edinburgh, about computer programming and using machine learning to better understand how it’s done well. Ryan introduces us to collaborative filtering, a process that helps to make predictions about taste. Netflix and Amazon use it to recommend movies and items, and it's the process that the Netflix Prize competition helped to hone. Plus, we take a listener question on creativity in algorithms.]]>
      </description>
      <itunes:title>Spinning Programming Plates and Creative Algorithms</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>On episode eight we talk with Charles Sutton, a professor in the School of Informatics at the University of Edinburgh, about computer programming and using machine learning to better understand how it’s done well. Ryan introduces us to collaborative filtering, a process that helps to make predictions about taste. Netflix and Amazon use it to recommend movies and items, and it's the process that the Netflix Prize competition helped to hone. Plus, we take a listener question on creativity in algorithms.</itunes:summary>
      <content:encoded>
        <![CDATA[On episode eight we talk with Charles Sutton, a professor in the School of Informatics at the University of Edinburgh, about computer programming and using machine learning to better understand how it’s done well. Ryan introduces us to collaborative filtering, a process that helps to make predictions about taste. Netflix and Amazon use it to recommend movies and items, and it's the process that the Netflix Prize competition helped to hone. Plus, we take a listener question on creativity in algorithms.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:55265d0de4b0d5d2f33b7eed</guid>
      <pubDate>Thu, 09 Apr 2015 11:18:47 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,seas,adams,programming,machine,computers,ML,research ,AIML,computer science,deep,science,learning,ryan</itunes:keywords>
      <itunes:duration>00:37:18</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/4f992043-6043-4004-b208-cf562e48744d.mp3" type="audio/mpeg" length="33899415"/>
    </item>
    <item>
      <title>The Automatic Statistician and Electrified Meat</title>
      <description>
        <![CDATA[In episode seven of Talking Machines we talk with Zoubin Ghahramani, professor of Information Engineering in the Department of Engineering at the University of Cambridge. His project, The Automatic Statistician, aims to use machine learning to take raw data and give you statistical reports and natural language summaries of the trends that data shows. We get really hungry exploring Bayesian non-parametrics through the stories of the Chinese Restaurant Process and the Indian Buffet Process (but remember, there’s no free lunch). Plus, we take a listener question about how much we should rely on ourselves, and our ideas about what intelligence in electrified meat looks like, when we try to build machine intelligences.]]>
      </description>
      <itunes:title>The Automatic Statistician and Electrified Meat</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode seven of Talking Machines we talk with Zoubin Ghahramani, professor of Information Engineering in the Department of Engineering at the University of Cambridge. His project, The Automatic Statistician, aims to use machine learning to take raw data and give you statistical reports and natural language summaries of the trends that data shows. We get really hungry exploring Bayesian non-parametrics through the stories of the Chinese Restaurant Process and the Indian Buffet Process (but remember, there’s no free lunch). Plus, we take a listener question about how much we should rely on ourselves, and our ideas about what intelligence in electrified meat looks like, when we try to build machine intelligences.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode seven of Talking Machines we talk with Zoubin Ghahramani, professor of Information Engineering in the Department of Engineering at the University of Cambridge. His project, The Automatic Statistician, aims to use machine learning to take raw data and give you statistical reports and natural language summaries of the trends that data shows. We get really hungry exploring Bayesian non-parametrics through the stories of the Chinese Restaurant Process and the Indian Buffet Process (but remember, there’s no free lunch). Plus, we take a listener question about how much we should rely on ourselves, and our ideas about what intelligence in electrified meat looks like, when we try to build machine intelligences.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:55140923e4b0da0f3583f777</guid>
      <pubDate>Thu, 26 Mar 2015 14:15:03 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,computer,intelligence,programming,machine,computers,ML,research ,AIML,computer science,deep,learning</itunes:keywords>
      <itunes:duration>00:47:40</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/dfd0ebff-3d80-4d10-8ed3-acd0759e355d.mp3" type="audio/mpeg" length="43850605"/>
    </item>
    <item>
      <title>The Future of Machine Learning from the Inside Out</title>
      <description>
        <![CDATA[We hear the second part of our conversation with Geoffrey Hinton (Google and University of Toronto), Yoshua Bengio (University of Montreal) and Yann LeCun (Facebook and NYU). They talk with us about the history (and future) of research on neural nets. We explore how to use Determinantal Point Processes. Alex Kulesza and Ben Taskar (who passed away recently) have done some really exciting work in this area; for more on DPPs, check out their paper on the topic. Also, we take a listener question about whether machine learning is just function approximation (spoiler alert: it is, and then again, it isn’t).]]>
      </description>
      <itunes:title>The Future of Machine Learning from the Inside Out</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>We hear the second part of our conversation with Geoffrey Hinton (Google and University of Toronto), Yoshua Bengio (University of Montreal) and Yann LeCun (Facebook and NYU). They talk with us about the history (and future) of research on neural nets. We explore how to use Determinantal Point Processes. Alex Kulesza and Ben Taskar (who passed away recently) have done some really exciting work in this area; for more on DPPs, check out their paper on the topic. Also, we take a listener question about whether machine learning is just function approximation (spoiler alert: it is, and then again, it isn’t).</itunes:summary>
      <content:encoded>
        <![CDATA[We hear the second part of our conversation with Geoffrey Hinton (Google and University of Toronto), Yoshua Bengio (University of Montreal) and Yann LeCun (Facebook and NYU). They talk with us about the history (and future) of research on neural nets. We explore how to use Determinantal Point Processes. Alex Kulesza and Ben Taskar (who passed away recently) have done some really exciting work in this area; for more on DPPs, check out their paper on the topic. Also, we take a listener question about whether machine learning is just function approximation (spoiler alert: it is, and then again, it isn’t).]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:55035608e4b04b97e275b4ab</guid>
      <pubDate>Fri, 13 Mar 2015 22:16:51 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,intelligence,programming,machine,computers,ML,research ,AIML,computer science,deep,learning</itunes:keywords>
      <itunes:duration>00:30:14</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/c412d84e-36bd-4c92-8154-ec3114a343ea.mp3" type="audio/mpeg" length="27116773"/>
    </item>
    <item>
      <title>The History of Machine Learning from the Inside Out</title>
      <description>
        <![CDATA[In episode five of Talking Machines, we hear the first part of our conversation with Geoffrey Hinton (Google and University of Toronto), Yoshua Bengio (University of Montreal) and Yann LeCun (Facebook and NYU). Ryan introduces us to the ideas in tensor factorization methods for learning latent variable models (which is both a tongue twister and one of the new tools in ML). To find out more on the topic, the paper Tensor decompositions for learning latent variable models is a good place to start. You can also take a look at the work of Daniel Hsu, Animashree Anandkumar and Sham M. Kakade. Plus, we take a listener question about just where statistics stops and machine learning begins.]]>
      </description>
      <itunes:title>The History of Machine Learning from the Inside Out</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode five of Talking Machines, we hear the first part of our conversation with Geoffrey Hinton (Google and University of Toronto), Yoshua Bengio (University of Montreal) and Yann LeCun (Facebook and NYU). Ryan introduces us to the ideas in tensor factorization methods for learning latent variable models (which is both a tongue twister and one of the new tools in ML). To find out more on the topic, the paper Tensor decompositions for learning latent variable models is a good place to start. You can also take a look at the work of Daniel Hsu, Animashree Anandkumar and Sham M. Kakade. Plus, we take a listener question about just where statistics stops and machine learning begins.</itunes:summary>
      <content:encoded>
        <![CDATA[In episode five of Talking Machines, we hear the first part of our conversation with Geoffrey Hinton (Google and University of Toronto), Yoshua Bengio (University of Montreal) and Yann LeCun (Facebook and NYU). Ryan introduces us to the ideas in tensor factorization methods for learning latent variable models (which is both a tongue twister and one of the new tools in ML). To find out more on the topic, the paper Tensor decompositions for learning latent variable models is a good place to start. You can also take a look at the work of Daniel Hsu, Animashree Anandkumar and Sham M. Kakade. Plus, we take a listener question about just where statistics stops and machine learning begins.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:54ef4512e4b0b5ba2b979706</guid>
      <pubDate>Thu, 26 Feb 2015 16:24:21 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,intelligence,programming,machine,computers,ML,research ,AIML,computer science,deep,learning</itunes:keywords>
      <itunes:duration>00:34:36</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/2120d438-ffd5-40f4-b04b-4b70f474d83a.mp3" type="audio/mpeg" length="31305978"/>
    </item>
    <item>
      <title>Using Models in the Wild and Women in Machine Learning</title>
      <description>
        <![CDATA[In episode four we talk with Hanna Wallach, of Microsoft Research. She's also a professor in the Department of Computer Science, University of Massachusetts Amherst and one of the founders of Women in Machine Learning (better known as WiML). We take a listener question about scalability and the size of data sets. And Ryan takes us through topic modeling using Latent Dirichlet allocation (say that five times fast). ]]>
      </description>
      <itunes:title>Using Models in the Wild and Women in Machine Learning</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In episode four we talk with Hanna Wallach, of Microsoft Research. She's also a professor in the Department of Computer Science, University of Massachusetts Amherst and one of the founders of Women in Machine Learning (better known as WiML). We take a listener question about scalability and the size of data sets. And Ryan takes us through topic modeling using Latent Dirichlet allocation (say that five times fast). </itunes:summary>
      <content:encoded>
        <![CDATA[In episode four we talk with Hanna Wallach, of Microsoft Research. She's also a professor in the Department of Computer Science, University of Massachusetts Amherst and one of the founders of Women in Machine Learning (better known as WiML). We take a listener question about scalability and the size of data sets. And Ryan takes us through topic modeling using Latent Dirichlet allocation (say that five times fast). ]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:54dcc315e4b082ba49b00032</guid>
      <pubDate>Thu, 12 Feb 2015 15:40:05 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,intelligence,programming,machine,computers,wiml,ML,research ,AIML,computer science,deep,learning</itunes:keywords>
      <itunes:duration>00:47:06</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/e747beba-c211-462b-a279-f8e6d8a07811.mp3" type="audio/mpeg" length="43303497"/>
    </item>
    <item>
      <title>Common Sense Problems and Learning about Machine Learning</title>
      <description>
        <![CDATA[On episode three of Talking Machines we sit down with Kevin Murphy, who is currently a research scientist at Google. We talk with him about the work he’s doing there on the Knowledge Vault, his textbook, Machine Learning: A Probabilistic Perspective (and its arch nemesis, which we won’t link to), and how to learn about machine learning (Metacademy is a great place to start). We tackle a listener question about the dream of a one-step solution to strong Artificial Intelligence and whether Deep Neural Networks might be it. Plus, Ryan introduces us to a new way of thinking about questions in machine learning from Yoshua Bengio’s lab at the University of Montreal, outlined in their new paper, Identifying and attacking the saddle point problem in high-dimensional non-convex optimization, and Katherine brings up Facebook’s release of open source machine learning tools and we talk about what it might mean. If you want to explore some open source tools for machine learning, we also recommend giving these a try: the Super big list of ML Open Source Projects, Torch, the Gaussian Process Machine Learning Toolbox, PyMC, Mallet, Stan, Weka, Theano, Caffe, and Spearmint.]]>
      </description>
      <itunes:title>Common Sense Problems and Learning about Machine Learning</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>On episode three of Talking Machines we sit down with Kevin Murphy, currently a research scientist at Google. We talk with him about the work he’s doing there on the Knowledge Vault, his textbook, Machine Learning: A Probabilistic Perspective (and its arch-nemesis, which we won’t link to), and how to learn about machine learning (Metacademy is a great place to start). We tackle a listener question about the dream of a one-step solution to strong Artificial Intelligence and whether Deep Neural Networks might be it. Plus, Ryan introduces us to a new way of thinking about questions in machine learning from Yoshua Bengio’s lab at the University of Montreal, outlined in their new paper, Identifying and attacking the saddle point problem in high-dimensional non-convex optimization, and Katherine brings up Facebook’s release of open source machine learning tools and we talk about what it might mean. If you want to explore some open source tools for machine learning, we also recommend giving these a try: the super big list of ML Open Source Projects, Torch, the Gaussian Process Machine Learning Toolbox, PyMC, Mallet, Stan, Weka, Theano, Caffe, and Spearmint.</itunes:summary>
      <content:encoded>
        <![CDATA[On episode three of Talking Machines we sit down with Kevin Murphy, currently a research scientist at Google. We talk with him about the work he’s doing there on the Knowledge Vault, his textbook, Machine Learning: A Probabilistic Perspective (and its arch-nemesis, which we won’t link to), and how to learn about machine learning (Metacademy is a great place to start). We tackle a listener question about the dream of a one-step solution to strong Artificial Intelligence and whether Deep Neural Networks might be it. Plus, Ryan introduces us to a new way of thinking about questions in machine learning from Yoshua Bengio’s lab at the University of Montreal, outlined in their new paper, Identifying and attacking the saddle point problem in high-dimensional non-convex optimization, and Katherine brings up Facebook’s release of open source machine learning tools and we talk about what it might mean. If you want to explore some open source tools for machine learning, we also recommend giving these a try: the super big list of ML Open Source Projects, Torch, the Gaussian Process Machine Learning Toolbox, PyMC, Mallet, Stan, Weka, Theano, Caffe, and Spearmint.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:54ca2403e4b00f96979bb4ed</guid>
      <pubDate>Thu, 29 Jan 2015 14:26:11 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,nets,AI,artificial intelligence,networks,intelligence,programming,machine,computers,ML,research ,AIML,computer science,deep,learning</itunes:keywords>
      <itunes:duration>00:42:55</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/e1157f4d-857c-49fb-abbc-7e65721e96c9.mp3" type="audio/mpeg" length="39289835"/>
    </item>
    <item>
      <title>Machine Learning and Magical Thinking</title>
      <description>
        <![CDATA[Today on Talking Machines we hear from Google researcher Ilya Sutskever about his work, how he became interested in machine learning, and why it takes a little bit of magical thinking. We take your questions, and explore where the line between human programming and computer learning actually is. And we sift through some news from the field: Ryan explains the concepts behind one of the best papers at NIPS this year, A* Sampling, and Katherine brings up an open letter about research priorities and ethical questions that was recently published.]]>
      </description>
      <itunes:title>Machine Learning and Magical Thinking</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>Today on Talking Machines we hear from Google researcher Ilya Sutskever about his work, how he became interested in machine learning, and why it takes a little bit of magical thinking. We take your questions, and explore where the line between human programming and computer learning actually is. And we sift through some news from the field: Ryan explains the concepts behind one of the best papers at NIPS this year, A* Sampling, and Katherine brings up an open letter about research priorities and ethical questions that was recently published.</itunes:summary>
      <content:encoded>
        <![CDATA[Today on Talking Machines we hear from Google researcher Ilya Sutskever about his work, how he became interested in machine learning, and why it takes a little bit of magical thinking. We take your questions, and explore where the line between human programming and computer learning actually is. And we sift through some news from the field: Ryan explains the concepts behind one of the best papers at NIPS this year, A* Sampling, and Katherine brings up an open letter about research priorities and ethical questions that was recently published.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:54b7b6bee4b0c2baff848162</guid>
      <pubDate>Thu, 15 Jan 2015 13:52:38 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,nets,AI,artificial intelligence,networks,intelligence,programming,machine,computers,ML,research ,AIML,computer science,deep,learning</itunes:keywords>
      <itunes:duration>00:37:10</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/a21ff47b-c84a-4783-b917-bb455b958b46.mp3" type="audio/mpeg" length="33771520"/>
    </item>
    <item>
      <title>Hello World!</title>
      <description>
        <![CDATA[In the first episode of Talking Machines we meet our hosts, Katherine Gorman (nerd, journalist) and Ryan Adams (nerd, Harvard computer science professor), and explore some of the interviews you'll be able to hear this season. Today we hear some short clips on big issues; we'll get technical, but today is all about introductions. We start with Kevin Murphy of Google talking about his textbook, which has become a standard in the field. Then we turn to Hanna Wallach of Microsoft Research NYC and UMass Amherst and hear about the founding of WiML (Women in Machine Learning). Next we discuss academia's relationship with business with Max Welling from the University of Amsterdam, program co-chair of the 2013 NIPS conference (Neural Information Processing Systems). Finally, we sit down with three pillars of the field, Yann LeCun, Yoshua Bengio, and Geoff Hinton, to hear about where the field has been and where it might be headed.]]>
      </description>
      <itunes:title>Hello World!</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:summary>In the first episode of Talking Machines we meet our hosts, Katherine Gorman (nerd, journalist) and Ryan Adams (nerd, Harvard computer science professor), and explore some of the interviews you'll be able to hear this season. Today we hear some short clips on big issues; we'll get technical, but today is all about introductions. We start with Kevin Murphy of Google talking about his textbook, which has become a standard in the field. Then we turn to Hanna Wallach of Microsoft Research NYC and UMass Amherst and hear about the founding of WiML (Women in Machine Learning). Next we discuss academia's relationship with business with Max Welling from the University of Amsterdam, program co-chair of the 2013 NIPS conference (Neural Information Processing Systems). Finally, we sit down with three pillars of the field, Yann LeCun, Yoshua Bengio, and Geoff Hinton, to hear about where the field has been and where it might be headed.</itunes:summary>
      <content:encoded>
        <![CDATA[In the first episode of Talking Machines we meet our hosts, Katherine Gorman (nerd, journalist) and Ryan Adams (nerd, Harvard computer science professor), and explore some of the interviews you'll be able to hear this season. Today we hear some short clips on big issues; we'll get technical, but today is all about introductions. We start with Kevin Murphy of Google talking about his textbook, which has become a standard in the field. Then we turn to Hanna Wallach of Microsoft Research NYC and UMass Amherst and hear about the founding of WiML (Women in Machine Learning). Next we discuss academia's relationship with business with Max Welling from the University of Amsterdam, program co-chair of the 2013 NIPS conference (Neural Information Processing Systems). Finally, we sit down with three pillars of the field, Yann LeCun, Yoshua Bengio, and Geoff Hinton, to hear about where the field has been and where it might be headed.]]>
      </content:encoded>
      <guid isPermaLink="false">54a56ccbe4b0ab38fed9fc81:54a57bffe4b0fe1194167e61:54a57c45e4b039f26fef1c30</guid>
      <pubDate>Thu, 01 Jan 2015 18:09:14 -0000</pubDate>
      <itunes:explicit>no</itunes:explicit>
      <itunes:image href="https://dfkfj8j276wwv.cloudfront.net/images/f2/97/18/9d/f297189d-131a-4848-929f-2895e7073ca5/7070183d90d8b92c09986003e93048acc0b49c6b65a5a802d4f4da5c6af734d7041aa8be1bdf1a4b2b3d1b277cab6c9147648384231a9227545999ba540e4830.jpeg"/>
      <itunes:keywords>artificial,AI,artificial intelligence,networks,intelligence,programming,machine,computers,ML,research ,AIML,computer science,deep,learning</itunes:keywords>
      <itunes:duration>00:43:28</itunes:duration>
      <enclosure url="http://rss.art19.com/episodes/b0e22076-2815-4952-a5d0-3da44401cd03.mp3" type="audio/mpeg" length="39816881"/>
    </item>
  </channel>
</rss>
