<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
  <channel>
    <atom:link href="https://feeds.megaphone.fm/MLN2155636147" rel="self" type="application/rss+xml"/>
    <title>The TWIML AI Podcast (formerly This Week in Machine Learning &amp; Artificial Intelligence)</title>
    <link>https://twimlai.com</link>
    <language>en</language>
    <copyright>All rights reserved</copyright>
    <description>Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers and tech-savvy business and IT leaders. Hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science and more.</description>
    <image>
      <url>https://megaphone.imgix.net/podcasts/35230150-ee98-11eb-ad1a-b38cbabcd053/image/TWIML_AI_Podcast_Official_Cover_Art_1400px.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress</url>
      <title>The TWIML AI Podcast (formerly This Week in Machine Learning &amp; Artificial Intelligence)</title>
      <link>https://twimlai.com</link>
    </image>
    <itunes:explicit>no</itunes:explicit>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the worlds of machine learning and artificial intelligence. We discuss the latest developments in research, technology, and business, and explore interesting projects from across the web.</itunes:subtitle>
    <itunes:author>Sam Charrington</itunes:author>
    <itunes:summary>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the worlds of machine learning and artificial intelligence. We discuss the latest developments in research, technology, and business, and explore interesting projects from across the web.&#13;
&#13;
Technologies covered include: machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, big data and more.</itunes:summary>
    <content:encoded>
      <![CDATA[<p>Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers and tech-savvy business and IT leaders. Hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science and more.</p>]]>
    </content:encoded>
    <itunes:owner>
      <itunes:name>TWIML</itunes:name>
      <itunes:email>team@twimlai.com</itunes:email>
    </itunes:owner>
    <itunes:image href="https://s3.amazonaws.com/twimlai-img/twimlai_logo_1500x1500.png"/>
    <itunes:new-feed-url>https://feeds.megaphone.fm/MLN2155636147</itunes:new-feed-url>
    <itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords>
    <itunes:category text="Technology">
      <itunes:category text="Tech News"/>
    </itunes:category>
    <item>
      <title>AI Trends 2026: OpenClaw Agents, Reasoning LLMs, and More with Sebastian Raschka - #762</title>
      <link>https://twimlai.com/podcast/twimlai/ai-trends-2026-openclaw-agents-reasoning-llms</link>
      <description>In this episode, Sebastian Raschka, independent LLM researcher and author, joins us to break down how the LLM landscape has changed over the past year and what is likely to matter most in 2026. We discuss the shift from raw model scaling to reasoning-focused post-training, inference-time techniques, and better tool integration. Sebastian explains why methods like self-consistency, self-refinement, and verifiable-reward reinforcement learning have become central to progress in domains like math and coding, and where those approaches still fall short. We also explore agentic workflows in practice, including where multi-agent systems add real value and where reliability constraints still dominate system design. The conversation covers architecture trends such as mixture-of-experts, attention efficiency strategies, and the practical impact of long-context models, alongside persistent challenges like continual learning. We close with Sebastian’s perspective on maintaining strong coding fundamentals in the age of AI assistants and a preview of his new book, Build A Reasoning Model (From Scratch).



The complete show notes for this episode can be found at https://twimlai.com/go/762.</description>
      <pubDate>Thu, 26 Feb 2026 23:52:00 -0000</pubDate>
      <itunes:title>AI Trends 2026: OpenClaw Agents, Reasoning LLMs, and More with Sebastian Raschka</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>762</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/21a9ae12-1364-11f1-aebe-ef6add9ebd14/image/ae78b995304ba8cfdbf9508eb721e3dc.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In this episode, Sebastian Raschka, independent LLM researcher and author, joins us to break down how the LLM landscape has changed over the past year and what is likely to matter most in 2026. We discuss the shift from raw model scaling to reasoning-focused post-training, inference-time techniques, and better tool integration. Sebastian explains why methods like self-consistency, self-refinement, and verifiable-reward reinforcement learning have become central to progress in domains like math and coding, and where those approaches still fall short. We also explore agentic workflows in practice, including where multi-agent systems add real value and where reliability constraints still dominate system design. The conversation covers architecture trends such as mixture-of-experts, attention efficiency strategies, and the practical impact of long-context models, alongside persistent challenges like continual learning. We close with Sebastian’s perspective on maintaining strong coding fundamentals in the age of AI assistants and a preview of his new book, Build A Reasoning Model (From Scratch).



The complete show notes for this episode can be found at https://twimlai.com/go/762.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, Sebastian Raschka, independent LLM researcher and author, joins us to break down how the LLM landscape has changed over the past year and what is likely to matter most in 2026. We discuss the shift from raw model scaling to reasoning-focused post-training, inference-time techniques, and better tool integration. Sebastian explains why methods like self-consistency, self-refinement, and verifiable-reward reinforcement learning have become central to progress in domains like math and coding, and where those approaches still fall short. We also explore agentic workflows in practice, including where multi-agent systems add real value and where reliability constraints still dominate system design. The conversation covers architecture trends such as mixture-of-experts, attention efficiency strategies, and the practical impact of long-context models, alongside persistent challenges like continual learning. We close with Sebastian’s perspective on maintaining strong coding fundamentals in the age of AI assistants and a preview of his new book, Build A Reasoning Model (From Scratch).</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/762"><u>https://twimlai.com/go/762</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>4735</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[21a9ae12-1364-11f1-aebe-ef6add9ebd14]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2274417899.mp3?updated=1772147656"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Evolution of Reasoning in Small Language Models with Yejin Choi - #761</title>
      <link>https://twimlai.com/podcast/twimlai/the-evolution-of-reasoning-in-small-language-models/</link>
      <description>Today, we're joined by Yejin Choi, professor and senior fellow at Stanford University in the Computer Science Department and the Institute for Human-Centered AI (HAI). In this conversation, we explore Yejin’s recent work on making small language models reason more effectively. We discuss how high-quality, diverse data plays a central role in closing the intelligence gap between small and large models, and how combining synthetic data generation, imitation learning, and reinforcement learning can unlock stronger reasoning capabilities in smaller models. Yejin explains the risks of homogeneity in model outputs and mode collapse highlighted in her “Artificial Hivemind” paper, and its impacts on human creativity and knowledge. We also discuss her team's novel approaches, including reinforcement learning as a pre-training objective, where models are incentivized to “think” before predicting the next token, and "Prismatic Synthesis," a gradient-based method for generating diverse synthetic math data while filtering overrepresented examples. Additionally, we cover the societal implications of AI and the concept of pluralistic alignment—ensuring AI reflects the diverse norms and values of humanity. Finally, Yejin shares her mission to democratize AI beyond large organizations and offers her predictions for the coming year.



The complete show notes for this episode can be found at https://twimlai.com/go/761.</description>
      <pubDate>Thu, 29 Jan 2026 21:48:00 -0000</pubDate>
      <itunes:title>The Evolution of Reasoning in Small Language Models with Yejin Choi</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>761</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c04aea68-f5ff-11f0-84ca-7b3e7abcd413/image/9fe5e5e2650e415eb54214def8b6c4db.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Yejin Choi, professor and senior fellow at Stanford University in the Computer Science Department and the Institute for Human-Centered AI (HAI). In this conversation, we explore Yejin’s recent work on making small language models reason more effectively. We discuss how high-quality, diverse data plays a central role in closing the intelligence gap between small and large models, and how combining synthetic data generation, imitation learning, and reinforcement learning can unlock stronger reasoning capabilities in smaller models. Yejin explains the risks of homogeneity in model outputs and mode collapse highlighted in her “Artificial Hivemind” paper, and its impacts on human creativity and knowledge. We also discuss her team's novel approaches, including reinforcement learning as a pre-training objective, where models are incentivized to “think” before predicting the next token, and "Prismatic Synthesis," a gradient-based method for generating diverse synthetic math data while filtering overrepresented examples. Additionally, we cover the societal implications of AI and the concept of pluralistic alignment—ensuring AI reflects the diverse norms and values of humanity. Finally, Yejin shares her mission to democratize AI beyond large organizations and offers her predictions for the coming year.



The complete show notes for this episode can be found at https://twimlai.com/go/761.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Yejin Choi, professor and senior fellow at Stanford University in the Computer Science Department and the Institute for Human-Centered AI (HAI). In this conversation, we explore Yejin’s recent work on making small language models reason more effectively. We discuss how high-quality, diverse data plays a central role in closing the intelligence gap between small and large models, and how combining synthetic data generation, imitation learning, and reinforcement learning can unlock stronger reasoning capabilities in smaller models. Yejin explains the risks of homogeneity in model outputs and mode collapse highlighted in her “Artificial Hivemind” paper, and its impacts on human creativity and knowledge. We also discuss her team's novel approaches, including reinforcement learning as a pre-training objective, where models are incentivized to “think” before predicting the next token, and "Prismatic Synthesis," a gradient-based method for generating diverse synthetic math data while filtering overrepresented examples. Additionally, we cover the societal implications of AI and the concept of pluralistic alignment—ensuring AI reflects the diverse norms and values of humanity. Finally, Yejin shares her mission to democratize AI beyond large organizations and offers her predictions for the coming year.</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/761"><u>https://twimlai.com/go/761</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3981</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c04aea68-f5ff-11f0-84ca-7b3e7abcd413]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2256483849.mp3?updated=1769723982"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Intelligent Robots in 2026: Are We There Yet? with Nikita Rudin - #760</title>
      <link>https://twimlai.com/podcast/twimlai/intelligent-robots-in-2026-are-we-there-yet/</link>
      <description>Today, we're joined by Nikita Rudin, co-founder and CEO of Flexion Robotics, to discuss the gap between current robotic capabilities and what’s required to deploy fully autonomous robots in the real world. Nikita explains how reinforcement learning and simulation have driven rapid progress in robot locomotion—and why locomotion is still far from “solved.” We dig into the sim-to-real gap, and how adding visual inputs introduces noise and significantly complicates sim-to-real transfer. We also explore the debate between end-to-end models and modular approaches, and why separating locomotion, planning, and semantics remains a pragmatic approach today. Nikita also introduces the concept of "real-to-sim", which uses real-world data to refine simulation parameters for higher-fidelity training, discusses how reinforcement learning, imitation learning, and teleoperation data are combined to train robust policies for both quadruped and humanoid robots, and introduces Flexion's hierarchical approach, which pairs pre-trained Vision-Language Models (VLMs) for high-level task orchestration with Vision-Language-Action (VLA) models and low-level whole-body trackers. Finally, Nikita shares a behind-the-scenes look at humanoid robot demos and his take on reinforcement learning in simulation versus the real world, discusses the nuances of reward tuning, and offers practical advice for researchers and practitioners looking to get started in robotics today.



The complete show notes for this episode can be found at https://twimlai.com/go/760.</description>
      <pubDate>Thu, 08 Jan 2026 21:27:00 -0000</pubDate>
      <itunes:title>Intelligent Robots in 2026: Are We There Yet? with Nikita Rudin</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>760</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/cb993398-ebe1-11f0-b374-c771cf052009/image/6f527ccd7610e50382fed89307a1d588.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Nikita Rudin, co-founder and CEO of Flexion Robotics, to discuss the gap between current robotic capabilities and what’s required to deploy fully autonomous robots in the real world. Nikita explains how reinforcement learning and simulation have driven rapid progress in robot locomotion—and why locomotion is still far from “solved.” We dig into the sim-to-real gap, and how adding visual inputs introduces noise and significantly complicates sim-to-real transfer. We also explore the debate between end-to-end models and modular approaches, and why separating locomotion, planning, and semantics remains a pragmatic approach today. Nikita also introduces the concept of "real-to-sim", which uses real-world data to refine simulation parameters for higher-fidelity training, discusses how reinforcement learning, imitation learning, and teleoperation data are combined to train robust policies for both quadruped and humanoid robots, and introduces Flexion's hierarchical approach, which pairs pre-trained Vision-Language Models (VLMs) for high-level task orchestration with Vision-Language-Action (VLA) models and low-level whole-body trackers. Finally, Nikita shares a behind-the-scenes look at humanoid robot demos and his take on reinforcement learning in simulation versus the real world, discusses the nuances of reward tuning, and offers practical advice for researchers and practitioners looking to get started in robotics today.



The complete show notes for this episode can be found at https://twimlai.com/go/760.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Nikita Rudin, co-founder and CEO of Flexion Robotics, to discuss the gap between current robotic capabilities and what’s required to deploy fully autonomous robots in the real world. Nikita explains how reinforcement learning and simulation have driven rapid progress in robot locomotion—and why locomotion is still far from “solved.” We dig into the sim-to-real gap, and how adding visual inputs introduces noise and significantly complicates sim-to-real transfer. We also explore the debate between end-to-end models and modular approaches, and why separating locomotion, planning, and semantics remains a pragmatic approach today. Nikita also introduces the concept of "real-to-sim", which uses real-world data to refine simulation parameters for higher-fidelity training, discusses how reinforcement learning, imitation learning, and teleoperation data are combined to train robust policies for both quadruped and humanoid robots, and introduces Flexion's hierarchical approach, which pairs pre-trained Vision-Language Models (VLMs) for high-level task orchestration with Vision-Language-Action (VLA) models and low-level whole-body trackers. Finally, Nikita shares a behind-the-scenes look at humanoid robot demos and his take on reinforcement learning in simulation versus the real world, discusses the nuances of reward tuning, and offers practical advice for researchers and practitioners looking to get started in robotics today.</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/760"><u>https://twimlai.com/go/760</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3997</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cb993398-ebe1-11f0-b374-c771cf052009]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2537286465.mp3?updated=1767908138"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Rethinking Pre-Training for Agentic AI with Aakanksha Chowdhery - #759</title>
      <link>https://twimlai.com/podcast/twimlai/rethinking-pretraining-for-agentic-ai/</link>
      <description>Today, we're joined by Aakanksha Chowdhery, member of technical staff at Reflection, to explore the fundamental shifts required to build true agentic AI. While the industry has largely focused on post-training techniques to improve reasoning, Aakanksha draws on her experience leading pre-training efforts for Google’s PaLM and early Gemini models to argue that pre-training itself must be rethought to move beyond static benchmarks. We explore the limitations of next-token prediction for multi-step workflows and examine how attention mechanisms, loss objectives, and training data must evolve to support long-form reasoning and planning. Aakanksha shares insights on the difference between context retrieval and actual reasoning, the importance of "trajectory" training data, and why scaling remains essential for discovering emergent agentic capabilities like error recovery and dynamic tool learning.



The complete show notes for this episode can be found at https://twimlai.com/go/759.</description>
      <pubDate>Wed, 17 Dec 2025 19:24:00 -0000</pubDate>
      <itunes:title>Rethinking Pre-Training for Agentic AI with Aakanksha Chowdhery</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>759</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/406aa9be-dac0-11f0-b6d1-9f7af6c840cc/image/d03e355721ee2292d3e198a579e4445d.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Aakanksha Chowdhery, member of technical staff at Reflection, to explore the fundamental shifts required to build true agentic AI. While the industry has largely focused on post-training techniques to improve reasoning, Aakanksha draws on her experience leading pre-training efforts for Google’s PaLM and early Gemini models to argue that pre-training itself must be rethought to move beyond static benchmarks. We explore the limitations of next-token prediction for multi-step workflows and examine how attention mechanisms, loss objectives, and training data must evolve to support long-form reasoning and planning. Aakanksha shares insights on the difference between context retrieval and actual reasoning, the importance of "trajectory" training data, and why scaling remains essential for discovering emergent agentic capabilities like error recovery and dynamic tool learning.



The complete show notes for this episode can be found at https://twimlai.com/go/759.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Aakanksha Chowdhery, member of technical staff at Reflection, to explore the fundamental shifts required to build true agentic AI. While the industry has largely focused on post-training techniques to improve reasoning, Aakanksha draws on her experience leading pre-training efforts for Google’s PaLM and early Gemini models to argue that pre-training itself must be rethought to move beyond static benchmarks. We explore the limitations of next-token prediction for multi-step workflows and examine how attention mechanisms, loss objectives, and training data must evolve to support long-form reasoning and planning. Aakanksha shares insights on the difference between context retrieval and actual reasoning, the importance of "trajectory" training data, and why scaling remains essential for discovering emergent agentic capabilities like error recovery and dynamic tool learning.</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/759"><u>https://twimlai.com/go/759</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3174</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[406aa9be-dac0-11f0-b6d1-9f7af6c840cc]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3462034138.mp3?updated=1766003076"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Why Vision Language Models Ignore What They See with Munawar Hayat - #758</title>
      <link>https://twimlai.com/podcast/twimlai/why-vision-language-models-ignore-what-they-see/</link>
      <description>In this episode, we’re joined by Munawar Hayat, researcher at Qualcomm AI Research, to discuss a series of papers presented at NeurIPS 2025 focusing on multimodal and generative AI. We dive into the persistent challenge of object hallucination in Vision-Language Models (VLMs), why models often discard visual information in favor of pre-trained language priors, and how his team used attention-guided alignment to enforce better visual grounding. We also explore a novel approach to generalized contrastive learning designed to solve complex, composed retrieval tasks—such as searching via combined text and image queries—without increasing inference costs. Finally, we cover the difficulties generative models face when rendering multiple human subjects, and the new "MultiHuman Testbench" his team created to measure and mitigate issues like identity leakage and attribute blending. Throughout the discussion, we examine how these innovations align with the need for efficient, on-device AI deployment.



The complete show notes for this episode can be found at https://twimlai.com/go/758.</description>
      <pubDate>Tue, 09 Dec 2025 19:46:00 -0000</pubDate>
      <itunes:title>Why Vision Language Models Ignore What They See with Munawar Hayat</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>758</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/37743782-d532-11f0-b1cb-f3d8510d225d/image/0f314a06c986eec0fd32b2e78bca20b1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In this episode, we’re joined by Munawar Hayat, researcher at Qualcomm AI Research, to discuss a series of papers presented at NeurIPS 2025 focusing on multimodal and generative AI. We dive into the persistent challenge of object hallucination in Vision-Language Models (VLMs), why models often discard visual information in favor of pre-trained language priors, and how his team used attention-guided alignment to enforce better visual grounding. We also explore a novel approach to generalized contrastive learning designed to solve complex, composed retrieval tasks—such as searching via combined text and image queries—without increasing inference costs. Finally, we cover the difficulties generative models face when rendering multiple human subjects, and the new "MultiHuman Testbench" his team created to measure and mitigate issues like identity leakage and attribute blending. Throughout the discussion, we examine how these innovations align with the need for efficient, on-device AI deployment.



The complete show notes for this episode can be found at https://twimlai.com/go/758.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we’re joined by Munawar Hayat, researcher at Qualcomm AI Research, to discuss a series of papers presented at NeurIPS 2025 focusing on multimodal and generative AI. We dive into the persistent challenge of object hallucination in Vision-Language Models (VLMs), why models often discard visual information in favor of pre-trained language priors, and how his team used attention-guided alignment to enforce better visual grounding. We also explore a novel approach to generalized contrastive learning designed to solve complex, composed retrieval tasks—such as searching via combined text and image queries—without increasing inference costs. Finally, we cover the difficulties generative models face when rendering multiple human subjects, and the new "MultiHuman Testbench" his team created to measure and mitigate issues like identity leakage and attribute blending. Throughout the discussion, we examine how these innovations align with the need for efficient, on-device AI deployment.</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/758"><u>https://twimlai.com/go/758</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3460</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[37743782-d532-11f0-b1cb-f3d8510d225d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7251543598.mp3?updated=1765310086"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scaling Agentic Inference Across Heterogeneous Compute with Zain Asgar - #757</title>
      <link>https://twimlai.com/podcast/twimlai/scaling-agentic-inference-across-heterogeneous-compute/</link>
      <description>In this episode, Zain Asgar, co-founder and CEO of Gimlet Labs, joins us to discuss heterogeneous AI inference across diverse hardware. Zain argues that the current industry standard of running all AI workloads on high-end GPUs is unsustainable for agents, which consume significantly more tokens than traditional LLM applications. We explore Gimlet’s approach to heterogeneous inference, which involves disaggregating workloads across a mix of hardware—from H100s to older GPUs and CPUs—to optimize unit economics without sacrificing performance. We dive into their "three-layer cake" architecture: workload disaggregation, a compilation layer that maps models to specific hardware targets, and a novel system that uses LLMs to autonomously rewrite and optimize compute kernels. Finally, we discuss the complexities of networking in heterogeneous environments, the trade-offs between numerical precision and application accuracy, and the future of hardware-aware scheduling.



The complete show notes for this episode can be found at https://twimlai.com/go/757.</description>
      <pubDate>Tue, 02 Dec 2025 22:29:00 -0000</pubDate>
      <itunes:title>Scaling Agentic Inference Across Heterogeneous Compute with Zain Asgar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>757</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/23fdac48-cb0e-11f0-ac58-d3c8d8f2415d/image/f2c932969d121f1e83ea5f9776d71b77.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In this episode, Zain Asgar, co-founder and CEO of Gimlet Labs, joins us to discuss heterogeneous AI inference across diverse hardware. Zain argues that the current industry standard of running all AI workloads on high-end GPUs is unsustainable for agents, which consume significantly more tokens than traditional LLM applications. We explore Gimlet’s approach to heterogeneous inference, which involves disaggregating workloads across a mix of hardware—from H100s to older GPUs and CPUs—to optimize unit economics without sacrificing performance. We dive into their "three-layer cake" architecture: workload disaggregation, a compilation layer that maps models to specific hardware targets, and a novel system that uses LLMs to autonomously rewrite and optimize compute kernels. Finally, we discuss the complexities of networking in heterogeneous environments, the trade-offs between numerical precision and application accuracy, and the future of hardware-aware scheduling.



The complete show notes for this episode can be found at https://twimlai.com/go/757.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, Zain Asgar, co-founder and CEO of Gimlet Labs, joins us to discuss heterogeneous AI inference across diverse hardware. Zain argues that the current industry standard of running all AI workloads on high-end GPUs is unsustainable for agents, which consume significantly more tokens than traditional LLM applications. We explore Gimlet’s approach to heterogeneous inference, which involves disaggregating workloads across a mix of hardware—from H100s to older GPUs and CPUs—to optimize unit economics without sacrificing performance. We dive into their "three-layer cake" architecture: workload disaggregation, a compilation layer that maps models to specific hardware targets, and a novel system that uses LLMs to autonomously rewrite and optimize compute kernels. Finally, we discuss the complexities of networking in heterogeneous environments, the trade-offs between numerical precision and application accuracy, and the future of hardware-aware scheduling.</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/757"><u>https://twimlai.com/go/757</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>2924</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[23fdac48-cb0e-11f0-ac58-d3c8d8f2415d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2686987005.mp3?updated=1764715926"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Proactive Agents for the Web with Devi Parikh - #756</title>
      <link>https://twimlai.com/podcast/twimlai/proactive-agents-for-the-web/</link>
      <description>Today, we're joined by Devi Parikh, co-founder and co-CEO of Yutori, to discuss browser use models and a future where we interact with the web through proactive, autonomous agents. We explore the technical challenges of creating reliable web agents, the advantages of visually-grounded models that operate on screenshots rather than the browser’s more brittle document object model, or DOM, and why this counterintuitive choice has proven far more robust and generalizable for handling complex web interfaces. Devi also shares insights into Yutori’s training pipeline, which has evolved from supervised fine-tuning to include rejection sampling and reinforcement learning. Finally, we discuss how Yutori’s “Scouts” agents orchestrate multiple tools and sub-agents to handle complex queries, the importance of background, "ambient" operation for these systems, and what the path looks like from simple monitoring to full task automation on the web.



The complete show notes for this episode can be found at https://twimlai.com/go/756.</description>
      <pubDate>Wed, 19 Nov 2025 01:49:00 -0000</pubDate>
      <itunes:title>Proactive Agents for the Web with Devi Parikh</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>756</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/1125741e-c4bc-11f0-9627-5778e42202ee/image/f53cc420dfeeb92bc9304289dfa74f31.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Devi Parikh, co-founder and co-CEO of Yutori, to discuss browser use models and a future where we interact with the web through proactive, autonomous agents. We explore the technical challenges of creating reliable web agents, the advantages of visually-grounded models that operate on screenshots rather than the browser’s more brittle document object model, or DOM, and why this counterintuitive choice has proven far more robust and generalizable for handling complex web interfaces. Devi also shares insights into Yutori’s training pipeline, which has evolved from supervised fine-tuning to include rejection sampling and reinforcement learning. Finally, we discuss how Yutori’s “Scouts” agents orchestrate multiple tools and sub-agents to handle complex queries, the importance of background, "ambient" operation for these systems, and what the path looks like from simple monitoring to full task automation on the web.



The complete show notes for this episode can be found at https://twimlai.com/go/756.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Devi Parikh, co-founder and co-CEO of Yutori, to discuss browser use models and a future where we interact with the web through proactive, autonomous agents. We explore the technical challenges of creating reliable web agents, the advantages of visually-grounded models that operate on screenshots rather than the browser’s more brittle document object model, or DOM, and why this counterintuitive choice has proven far more robust and generalizable for handling complex web interfaces. Devi also shares insights into Yutori’s training pipeline, which has evolved from supervised fine-tuning to include rejection sampling and reinforcement learning. Finally, we discuss how Yutori’s “Scouts” agents orchestrate multiple tools and sub-agents to handle complex queries, the importance of background, "ambient" operation for these systems, and what the path looks like from simple monitoring to full task automation on the web.</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/756"><u>https://twimlai.com/go/756</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3364</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1125741e-c4bc-11f0-9627-5778e42202ee]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8999995371.mp3?updated=1763502496"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Orchestration for Smart Cities and the Enterprise with Robin Braun and Luke Norris - #755</title>
      <link>https://twimlai.com/podcast/twimlai/ai-orchestration-for-smart-cities-and-the-enterprise/</link>
      <description>Today, we're joined by Robin Braun, VP of AI business development for hybrid cloud at HPE, and Luke Norris, co-founder and CEO of Kamiwaza, to discuss how AI systems can be used to automate complex workflows and unlock value from legacy enterprise data. Robin and Luke detail high-impact use cases from HPE and Kamiwaza’s collaboration on an “Agentic Smart City” project for Vail, Colorado, including remediation and automation of website accessibility for 508 compliance, digitization and understanding of deed restrictions, and combining contextual information with camera feeds for fire detection and risk assessment. Additionally, we discuss the role of private cloud infrastructure in overcoming challenges like cost, data privacy, and compliance. Robin and Luke also share their lessons learned, including the importance of fresh data, and the value of a "mud puddle by mud puddle" approach in achieving practical AI wins.



The complete show notes for this episode can be found at https://twimlai.com/go/755.</description>
      <pubDate>Wed, 12 Nov 2025 20:05:00 -0000</pubDate>
      <itunes:title>AI Orchestration for Smart Cities and the Enterprise with Robin Braun and Luke Norris</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>755</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/9064a8be-bf42-11f0-ad9f-cfafdc065d55/image/e71baf2708239fd77f74f13c2221cf73.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Robin Braun, VP of AI business development for hybrid cloud at HPE, and Luke Norris, co-founder and CEO of Kamiwaza, to discuss how AI systems can be used to automate complex workflows and unlock value from legacy enterprise data. Robin and Luke detail high-impact use cases from HPE and Kamiwaza’s collaboration on an “Agentic Smart City” project for Vail, Colorado, including remediation and automation of website accessibility for 508 compliance, digitization and understanding of deed restrictions, and combining contextual information with camera feeds for fire detection and risk assessment. Additionally, we discuss the role of private cloud infrastructure in overcoming challenges like cost, data privacy, and compliance. Robin and Luke also share their lessons learned, including the importance of fresh data, and the value of a "mud puddle by mud puddle" approach in achieving practical AI wins.



The complete show notes for this episode can be found at https://twimlai.com/go/755.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Robin Braun, VP of AI business development for hybrid cloud at HPE, and Luke Norris, co-founder and CEO of Kamiwaza, to discuss how AI systems can be used to automate complex workflows and unlock value from legacy enterprise data. Robin and Luke detail high-impact use cases from HPE and Kamiwaza’s collaboration on an “Agentic Smart City” project for Vail, Colorado, including remediation and automation of website accessibility for 508 compliance, digitization and understanding of deed restrictions, and combining contextual information with camera feeds for fire detection and risk assessment. Additionally, we discuss the role of private cloud infrastructure in overcoming challenges like cost, data privacy, and compliance. Robin and Luke also share their lessons learned, including the importance of fresh data, and the value of a "mud puddle by mud puddle" approach in achieving practical AI wins.</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/755"><u>https://twimlai.com/go/755</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3286</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9064a8be-bf42-11f0-ad9f-cfafdc065d55]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7553368301.mp3?updated=1762978440"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building an AI Mathematician with Carina Hong - #754</title>
      <link>https://twimlai.com/podcast/twimlai/building-an-ai-mathematician/</link>
      <description>In this episode, Carina Hong, founder and CEO of Axiom, joins us to discuss her work building an "AI Mathematician." Carina explains why this is a pivotal moment for AI in mathematics, citing a convergence of three key areas: the advanced reasoning capabilities of modern LLMs, the rise of formal proof languages like Lean, and breakthroughs in code generation. We explore the core technical challenges, including the massive data gap between general-purpose code and formal math code, and the difficult problem of "autoformalization," or translating natural language proofs into a machine-verifiable format. Carina also shares Axiom's vision for a self-improving system that uses a self-play loop of conjecturing and proving to discover new mathematical knowledge. Finally, we discuss the broader applications of this technology in areas like formal verification for high-stakes software and hardware.



The complete show notes for this episode can be found at https://twimlai.com/go/754.</description>
      <pubDate>Tue, 04 Nov 2025 21:30:00 -0000</pubDate>
      <itunes:title>Building an AI Mathematician with Carina Hong</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>754</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/e04b8fe2-b9ba-11f0-990d-af2986245169/image/03599e78069cd4f0e23125f76648888b.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In this episode, Carina Hong, founder and CEO of Axiom, joins us to discuss her work building an "AI Mathematician." Carina explains why this is a pivotal moment for AI in mathematics, citing a convergence of three key areas: the advanced reasoning capabilities of modern LLMs, the rise of formal proof languages like Lean, and breakthroughs in code generation. We explore the core technical challenges, including the massive data gap between general-purpose code and formal math code, and the difficult problem of "autoformalization," or translating natural language proofs into a machine-verifiable format. Carina also shares Axiom's vision for a self-improving system that uses a self-play loop of conjecturing and proving to discover new mathematical knowledge. Finally, we discuss the broader applications of this technology in areas like formal verification for high-stakes software and hardware.



The complete show notes for this episode can be found at https://twimlai.com/go/754.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, Carina Hong, founder and CEO of Axiom, joins us to discuss her work building an "AI Mathematician." Carina explains why this is a pivotal moment for AI in mathematics, citing a convergence of three key areas: the advanced reasoning capabilities of modern LLMs, the rise of formal proof languages like Lean, and breakthroughs in code generation. We explore the core technical challenges, including the massive data gap between general-purpose code and formal math code, and the difficult problem of "autoformalization," or translating natural language proofs into a machine-verifiable format. Carina also shares Axiom's vision for a self-improving system that uses a self-play loop of conjecturing and proving to discover new mathematical knowledge. Finally, we discuss the broader applications of this technology in areas like formal verification for high-stakes software and hardware.</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/754"><u>https://twimlai.com/go/754</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3352</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e04b8fe2-b9ba-11f0-990d-af2986245169]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1309151606.mp3?updated=1762355698"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>High-Efficiency Diffusion Models for On-Device Image Generation and Editing with Hung Bui - #753</title>
      <link>https://twimlai.com/podcast/twimlai/high-efficiency-diffusion-models-for-on-device-image-generation-and-editing/</link>
      <description>In this episode, Hung Bui, Technology Vice President at Qualcomm, joins us to explore the latest high-efficiency techniques for running generative AI, particularly diffusion models, on-device. We dive deep into the technical challenges of deploying these models, which are powerful but computationally expensive due to their iterative sampling process. Hung details his team's work on SwiftBrush and SwiftEdit, which enable high-quality text-to-image generation and editing in a single inference step. He explains their novel distillation framework, where a multi-step teacher model guides the training of an efficient, single-step student model. We explore the architecture and training, including the use of a secondary 'coach' network that aligns the student's denoising function with the teacher's, allowing the model to bypass the iterative process entirely. Finally, we discuss how these efficiency breakthroughs pave the way for personalized on-device agents and the challenges of running reasoning models with techniques like inference-time scaling under a fixed compute budget.

The complete show notes for this episode can be found at https://twimlai.com/go/753.</description>
      <pubDate>Tue, 28 Oct 2025 20:26:00 -0000</pubDate>
      <itunes:title>High-Efficiency Diffusion Models for On-Device Image Generation and Editing with Hung Bui</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>753</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d8c8047a-b42f-11f0-9481-a33e51219ade/image/c5094b8ad570d875950e82f4015be357.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In this episode, Hung Bui, Technology Vice President at Qualcomm, joins us to explore the latest high-efficiency techniques for running generative AI, particularly diffusion models, on-device. We dive deep into the technical challenges of deploying these models, which are powerful but computationally expensive due to their iterative sampling process. Hung details his team's work on SwiftBrush and SwiftEdit, which enable high-quality text-to-image generation and editing in a single inference step. He explains their novel distillation framework, where a multi-step teacher model guides the training of an efficient, single-step student model. We explore the architecture and training, including the use of a secondary 'coach' network that aligns the student's denoising function with the teacher's, allowing the model to bypass the iterative process entirely. Finally, we discuss how these efficiency breakthroughs pave the way for personalized on-device agents and the challenges of running reasoning models with techniques like inference-time scaling under a fixed compute budget.

The complete show notes for this episode can be found at https://twimlai.com/go/753.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, Hung Bui, Technology Vice President at Qualcomm, joins us to explore the latest high-efficiency techniques for running generative AI, particularly diffusion models, on-device. We dive deep into the technical challenges of deploying these models, which are powerful but computationally expensive due to their iterative sampling process. Hung details his team's work on SwiftBrush and SwiftEdit, which enable high-quality text-to-image generation and editing in a single inference step. He explains their novel distillation framework, where a multi-step teacher model guides the training of an efficient, single-step student model. We explore the architecture and training, including the use of a secondary 'coach' network that aligns the student's denoising function with the teacher's, allowing the model to bypass the iterative process entirely. Finally, we discuss how these efficiency breakthroughs pave the way for personalized on-device agents and the challenges of running reasoning models with techniques like inference-time scaling under a fixed compute budget.</p>
<p><br>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/753"><u>https://twimlai.com/go/753</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3143</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d8c8047a-b42f-11f0-9481-a33e51219ade]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6593247207.mp3?updated=1761682149"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Vibe Coding's Uncanny Valley with Alexandre Pesant - #752</title>
      <link>https://twimlai.com/podcast/twimlai/vibe-codings-uncanny-valley/</link>
      <description>Today, we're joined by Alexandre Pesant, AI lead at Lovable, to discuss the evolution and practice of vibe coding. Alex shares his take on how AI is enabling a shift in software development from typing characters to expressing intent, creating a new layer of abstraction similar to how high-level code compiles to machine code. We explore the current capabilities and limitations of coding agents, the importance of context engineering, and the practices that separate successful vibe coders from frustrated ones. Alex also shares Lovable’s technical journey, from an early, complex agent architecture that failed, to a simpler workflow-based system, and back again to an agentic approach as foundation models improved. He also details the company's massive scaling challenges—like accidentally taking down GitHub—and makes the case for why robust evaluations and more expressive user interfaces are the most critical components for AI-native development tools to succeed in the near future.

The complete show notes for this episode can be found at https://twimlai.com/go/752.</description>
      <pubDate>Wed, 22 Oct 2025 15:45:00 -0000</pubDate>
      <itunes:title>Vibe Coding's Uncanny Valley with Alexandre Pesant</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>752</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/eeccbe2e-aeb1-11f0-b0bb-8b05df40f43e/image/3d9f13c8726fdb995440b41dd273ee22.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Alexandre Pesant, AI lead at Lovable, to discuss the evolution and practice of vibe coding. Alex shares his take on how AI is enabling a shift in software development from typing characters to expressing intent, creating a new layer of abstraction similar to how high-level code compiles to machine code. We explore the current capabilities and limitations of coding agents, the importance of context engineering, and the practices that separate successful vibe coders from frustrated ones. Alex also shares Lovable’s technical journey, from an early, complex agent architecture that failed, to a simpler workflow-based system, and back again to an agentic approach as foundation models improved. He also details the company's massive scaling challenges—like accidentally taking down GitHub—and makes the case for why robust evaluations and more expressive user interfaces are the most critical components for AI-native development tools to succeed in the near future.

The complete show notes for this episode can be found at https://twimlai.com/go/752.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Alexandre Pesant, AI lead at Lovable, to discuss the evolution and practice of vibe coding. Alex shares his take on how AI is enabling a shift in software development from typing characters to expressing intent, creating a new layer of abstraction similar to how high-level code compiles to machine code. We explore the current capabilities and limitations of coding agents, the importance of context engineering, and the practices that separate successful vibe coders from frustrated ones. Alex also shares Lovable’s technical journey, from an early, complex agent architecture that failed, to a simpler workflow-based system, and back again to an agentic approach as foundation models improved. He also details the company's massive scaling challenges—like accidentally taking down GitHub—and makes the case for why robust evaluations and more expressive user interfaces are the most critical components for AI-native development tools to succeed in the near future.</p>
<p><br>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/752"><u>https://twimlai.com/go/752</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>4356</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[eeccbe2e-aeb1-11f0-b0bb-8b05df40f43e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2745909763.mp3?updated=1761149815"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Dataflow Computing for AI Inference with Kunle Olukotun - #751</title>
      <link>https://twimlai.com/podcast/twimlai/dataflow-computing-for-ai-inference/</link>
      <description>In this episode, we're joined by Kunle Olukotun, professor of electrical engineering and computer science at Stanford University and co-founder and chief technologist at Sambanova Systems, to discuss reconfigurable dataflow architectures for AI inference. Kunle explains the core idea of building computers that are dynamically configured to match the dataflow graph of an AI model, moving beyond the traditional instruction-fetch paradigm of CPUs and GPUs. We explore how this architecture is well-suited for LLM inference, reducing memory bandwidth bottlenecks and improving performance. Kunle reviews how this system also enables efficient multi-model serving and agentic workflows through its large, tiered memory and fast model-switching capabilities. Finally, we discuss his research into future dynamic reconfigurable architectures, and the use of AI agents to build compilers for new hardware.



The complete show notes for this episode can be found at https://twimlai.com/go/751.</description>
      <pubDate>Tue, 14 Oct 2025 19:39:00 -0000</pubDate>
      <itunes:title>Dataflow Computing for AI Inference with Kunle Olukotun</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>751</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d2513fac-a92c-11f0-b5a1-ff89f119b8ad/image/2e081f98cabd8b5c95da210760ef8ee7.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In this episode, we're joined by Kunle Olukotun, professor of electrical engineering and computer science at Stanford University and co-founder and chief technologist at Sambanova Systems, to discuss reconfigurable dataflow architectures for AI inference. Kunle explains the core idea of building computers that are dynamically configured to match the dataflow graph of an AI model, moving beyond the traditional instruction-fetch paradigm of CPUs and GPUs. We explore how this architecture is well-suited for LLM inference, reducing memory bandwidth bottlenecks and improving performance. Kunle reviews how this system also enables efficient multi-model serving and agentic workflows through its large, tiered memory and fast model-switching capabilities. Finally, we discuss his research into future dynamic reconfigurable architectures, and the use of AI agents to build compilers for new hardware.



The complete show notes for this episode can be found at https://twimlai.com/go/751.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we're joined by Kunle Olukotun, professor of electrical engineering and computer science at Stanford University and co-founder and chief technologist at Sambanova Systems, to discuss reconfigurable dataflow architectures for AI inference. Kunle explains the core idea of building computers that are dynamically configured to match the dataflow graph of an AI model, moving beyond the traditional instruction-fetch paradigm of CPUs and GPUs. We explore how this architecture is well-suited for LLM inference, reducing memory bandwidth bottlenecks and improving performance. Kunle reviews how this system also enables efficient multi-model serving and agentic workflows through its large, tiered memory and fast model-switching capabilities. Finally, we discuss his research into future dynamic reconfigurable architectures, and the use of AI agents to build compilers for new hardware.</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/751"><u>https://twimlai.com/go/751</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3457</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d2513fac-a92c-11f0-b5a1-ff89f119b8ad]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9142835882.mp3?updated=1762292412"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Recurrence and Attention for Long-Context Transformers with Jacob Buckman - #750</title>
      <link>https://twimlai.com/podcast/twimlai/recurrence-and-attention-for-long-context-transformers/</link>
      <description>Today, we're joined by Jacob Buckman, co-founder and CEO of Manifest AI, to discuss achieving long context in transformers. We discuss the bottlenecks of scaling context length and recent techniques to overcome them, including windowed attention, grouped query attention, and latent space attention. We explore the idea of weight-state balance and the weight-state FLOP ratio as a way of reasoning about the optimality of compute architectures, and we dig into the Power Retention architecture, which blends the parallelization of attention with the linear scaling of recurrence and promises speedups of &gt;10x during training and &gt;100x during inference. We review Manifest AI’s recent open source projects as well: Vidrial—a custom CUDA framework for building highly optimized GPU kernels in Python, and PowerCoder—a 3B-parameter coding model fine-tuned from StarCoder to use power retention. Our chat also covers the use of metrics like in-context learning curves and negative log likelihood to measure context utility, the implications of scaling laws, and the future of long context lengths in AI applications.

The complete show notes for this episode can be found at https://twimlai.com/go/750.</description>
      <pubDate>Tue, 07 Oct 2025 17:37:00 -0000</pubDate>
      <itunes:title>Recurrence and Attention for Long-Context Transformers with Jacob Buckman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>750</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/66615bf6-a391-11f0-a2a7-6b02f2fc0440/image/952401c5b9c06f52ca0dfbbcfe6cd279.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Jacob Buckman, co-founder and CEO of Manifest AI, to discuss achieving long context in transformers. We examine the bottlenecks of scaling context length and recent techniques to overcome them, including windowed attention, grouped query attention, and latent space attention. We explore the idea of weight-state balance and the weight-state FLOP ratio as a way of reasoning about the optimality of compute architectures, and we dig into the Power Retention architecture, which blends the parallelization of attention with the linear scaling of recurrence and promises speedups of &gt;10x during training and &gt;100x during inference. We review Manifest AI’s recent open source projects as well: Vidrial—a custom CUDA framework for building highly optimized GPU kernels in Python, and PowerCoder—a 3B-parameter coding model fine-tuned from StarCoder to use power retention. Our chat also covers the use of metrics like in-context learning curves and negative log likelihood to measure context utility, the implications of scaling laws, and the future of long context lengths in AI applications.

The complete show notes for this episode can be found at https://twimlai.com/go/750.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Jacob Buckman, co-founder and CEO of Manifest AI, to discuss achieving long context in transformers. We examine the bottlenecks of scaling context length and recent techniques to overcome them, including windowed attention, grouped query attention, and latent space attention. We explore the idea of weight-state balance and the weight-state FLOP ratio as a way of reasoning about the optimality of compute architectures, and we dig into the Power Retention architecture, which blends the parallelization of attention with the linear scaling of recurrence and promises speedups of &gt;10x during training and &gt;100x during inference. We review Manifest AI’s recent open source projects as well: Vidrial—a custom CUDA framework for building highly optimized GPU kernels in Python, and PowerCoder—a 3B-parameter coding model fine-tuned from StarCoder to use power retention. Our chat also covers the use of metrics like in-context learning curves and negative log likelihood to measure context utility, the implications of scaling laws, and the future of long context lengths in AI applications.</p>
<p><br>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/750"><u>https://twimlai.com/go/750</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3443</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[66615bf6-a391-11f0-a2a7-6b02f2fc0440]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7068202936.mp3?updated=1759858524"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Decentralized Future of Private AI with Illia Polosukhin - #749</title>
      <link>https://twimlai.com/podcast/twimlai/the-decentralized-future-of-private-ai/</link>
      <description>In this episode, Illia Polosukhin, a co-author of the seminal "Attention Is All You Need" paper and co-founder of Near AI, joins us to discuss his vision for building private, decentralized, and user-owned AI. Illia shares his unique journey from developing the Transformer architecture at Google to building the NEAR Protocol blockchain to solve global payment challenges, and now applying those decentralized principles back to AI. We explore how Near AI is creating a decentralized cloud that leverages confidential computing, secure enclaves, and the blockchain to protect both user data and proprietary model weights. Illia also shares his three-part approach to fostering trust: open model training to eliminate hidden biases and "sleeper agents," verifiability of inference to ensure the model runs as intended, and formal verification at the invocation layer to enforce composable guarantees on AI agent actions. Finally, Illia shares his perspective on the future of open research, the role of tokenized incentive models, and the need for formal verification in building compliance and user trust.



The complete show notes for this episode can be found at https://twimlai.com/go/749.</description>
      <pubDate>Tue, 30 Sep 2025 16:22:00 -0000</pubDate>
      <itunes:title>The Decentralized Future of Private AI with Illia Polosukhin</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>749</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/831f9e1c-9e12-11f0-884b-e72cdbfb66a3/image/fccca8d2c67aec29cbe36b944434cbc6.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In this episode, Illia Polosukhin, a co-author of the seminal "Attention Is All You Need" paper and co-founder of Near AI, joins us to discuss his vision for building private, decentralized, and user-owned AI. Illia shares his unique journey from developing the Transformer architecture at Google to building the NEAR Protocol blockchain to solve global payment challenges, and now applying those decentralized principles back to AI. We explore how Near AI is creating a decentralized cloud that leverages confidential computing, secure enclaves, and the blockchain to protect both user data and proprietary model weights. Illia also shares his three-part approach to fostering trust: open model training to eliminate hidden biases and "sleeper agents," verifiability of inference to ensure the model runs as intended, and formal verification at the invocation layer to enforce composable guarantees on AI agent actions. Finally, Illia shares his perspective on the future of open research, the role of tokenized incentive models, and the need for formal verification in building compliance and user trust.



The complete show notes for this episode can be found at https://twimlai.com/go/749.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, Illia Polosukhin, a co-author of the seminal "Attention Is All You Need" paper and co-founder of Near AI, joins us to discuss his vision for building private, decentralized, and user-owned AI. Illia shares his unique journey from developing the Transformer architecture at Google to building the NEAR Protocol blockchain to solve global payment challenges, and now applying those decentralized principles back to AI. We explore how Near AI is creating a decentralized cloud that leverages confidential computing, secure enclaves, and the blockchain to protect both user data and proprietary model weights. Illia also shares his three-part approach to fostering trust: open model training to eliminate hidden biases and "sleeper agents," verifiability of inference to ensure the model runs as intended, and formal verification at the invocation layer to enforce composable guarantees on AI agent actions. Finally, Illia shares his perspective on the future of open research, the role of tokenized incentive models, and the need for formal verification in building compliance and user trust.</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/749"><u>https://twimlai.com/go/749</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3903</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[831f9e1c-9e12-11f0-884b-e72cdbfb66a3]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2189764781.mp3?updated=1762292711"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Inside Nano Banana &#127820; and the Future of Vision-Language Models with Oliver Wang - #748</title>
      <link>https://twimlai.com/podcast/twimlai/inside-nano-banana-%f0%9f%8d%8c-and-the-future-of-vision-language-models/</link>
      <description>Today, we’re joined by Oliver Wang, principal scientist at Google DeepMind and tech lead for Gemini 2.5 Flash Image—better known by its code name, “Nano Banana.” We dive into the development and capabilities of this newly released frontier vision-language model, beginning with the broader shift from specialized image generators to general-purpose multimodal agents that can use both visual and textual data for a variety of tasks. Oliver explains how Nano Banana can generate and iteratively edit images while maintaining consistency, and how its integration with Gemini’s world knowledge expands creative and practical use cases. We discuss the tension between aesthetics and accuracy, the relative maturity of image models compared to text-based LLMs, and scaling as a driver of progress. Oliver also shares surprising emergent behaviors, the challenges of evaluating vision-language models, and the risks of training on AI-generated data. Finally, we look ahead to interactive world models and VLMs that may one day “think” and “reason” in images.



The complete show notes for this episode can be found at https://twimlai.com/go/748.</description>
      <pubDate>Tue, 23 Sep 2025 21:45:00 -0000</pubDate>
      <itunes:title>Inside Nano Banana &#127820; and the Future of Vision-Language Models with Oliver Wang</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>748</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4e2479ca-98aa-11f0-8659-ff329450734a/image/1e4173ea17f5eff7d432ca847eff3a9e.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we’re joined by Oliver Wang, principal scientist at Google DeepMind and tech lead for Gemini 2.5 Flash Image—better known by its code name, “Nano Banana.” We dive into the development and capabilities of this newly released frontier vision-language model, beginning with the broader shift from specialized image generators to general-purpose multimodal agents that can use both visual and textual data for a variety of tasks. Oliver explains how Nano Banana can generate and iteratively edit images while maintaining consistency, and how its integration with Gemini’s world knowledge expands creative and practical use cases. We discuss the tension between aesthetics and accuracy, the relative maturity of image models compared to text-based LLMs, and scaling as a driver of progress. Oliver also shares surprising emergent behaviors, the challenges of evaluating vision-language models, and the risks of training on AI-generated data. Finally, we look ahead to interactive world models and VLMs that may one day “think” and “reason” in images.



The complete show notes for this episode can be found at https://twimlai.com/go/748.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we’re joined by Oliver Wang, principal scientist at Google DeepMind and tech lead for Gemini 2.5 Flash Image—better known by its code name, “Nano Banana.” We dive into the development and capabilities of this newly released frontier vision-language model, beginning with the broader shift from specialized image generators to general-purpose multimodal agents that can use both visual and textual data for a variety of tasks. Oliver explains how Nano Banana can generate and iteratively edit images while maintaining consistency, and how its integration with Gemini’s world knowledge expands creative and practical use cases. We discuss the tension between aesthetics and accuracy, the relative maturity of image models compared to text-based LLMs, and scaling as a driver of progress. Oliver also shares surprising emergent behaviors, the challenges of evaluating vision-language models, and the risks of training on AI-generated data. Finally, we look ahead to interactive world models and VLMs that may one day “think” and “reason” in images.</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/748"><u>https://twimlai.com/go/748</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3819</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4e2479ca-98aa-11f0-8659-ff329450734a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7289124073.mp3?updated=1758664779"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Is It Time to Rethink LLM Pre-Training? with Aditi Raghunathan - #747</title>
      <link>https://twimlai.com/podcast/twimlai/is-it-time-to-rethink-llm-pre-training/</link>
      <description>Today, we're joined by Aditi Raghunathan, assistant professor at Carnegie Mellon University, to discuss the limitations of LLMs and how we can build more adaptable and creative models. We dig into her ICML 2025 Outstanding Paper Award winner, “Roll the dice &amp; look before you leap: Going beyond the creative limits of next-token prediction,” which examines why LLMs struggle with generating truly novel ideas. We explore the "Roll the dice" approach, which encourages structured exploration by injecting randomness at the start of generation, and the "Look before you leap" concept, which trains models to take "leaps of thought" using alternative objectives to create more diverse and structured outputs. We also discuss Aditi’s papers exploring the counterintuitive phenomenon of "catastrophic overtraining," where training models on more data improves benchmark performance but degrades their ability to be fine-tuned for new tasks, and dig into her lab's work on creating more controllable and reliable models, including the concept of "memorization sinks," an architectural approach to isolate and enable the targeted unlearning of specific information.



The complete show notes for this episode can be found at https://twimlai.com/go/747.</description>
      <pubDate>Tue, 16 Sep 2025 18:08:00 -0000</pubDate>
      <itunes:title>Is It Time to Rethink LLM Pre-Training? with Aditi Raghunathan</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>747</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/e96b61a4-931a-11f0-a2a8-3bcaf6a01846/image/5a22d10169a909509876fcc98a5d7556.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Aditi Raghunathan, assistant professor at Carnegie Mellon University, to discuss the limitations of LLMs and how we can build more adaptable and creative models. We dig into her ICML 2025 Outstanding Paper Award winner, “Roll the dice &amp; look before you leap: Going beyond the creative limits of next-token prediction,” which examines why LLMs struggle with generating truly novel ideas. We explore the "Roll the dice" approach, which encourages structured exploration by injecting randomness at the start of generation, and the "Look before you leap" concept, which trains models to take "leaps of thought" using alternative objectives to create more diverse and structured outputs. We also discuss Aditi’s papers exploring the counterintuitive phenomenon of "catastrophic overtraining," where training models on more data improves benchmark performance but degrades their ability to be fine-tuned for new tasks, and dig into her lab's work on creating more controllable and reliable models, including the concept of "memorization sinks," an architectural approach to isolate and enable the targeted unlearning of specific information.



The complete show notes for this episode can be found at https://twimlai.com/go/747.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Aditi Raghunathan, assistant professor at Carnegie Mellon University, to discuss the limitations of LLMs and how we can build more adaptable and creative models. We dig into her ICML 2025 Outstanding Paper Award winner, “Roll the dice &amp; look before you leap: Going beyond the creative limits of next-token prediction,” which examines why LLMs struggle with generating truly novel ideas. We explore the "Roll the dice" approach, which encourages structured exploration by injecting randomness at the start of generation, and the "Look before you leap" concept, which trains models to take "leaps of thought" using alternative objectives to create more diverse and structured outputs. We also discuss Aditi’s papers exploring the counterintuitive phenomenon of "catastrophic overtraining," where training models on more data improves benchmark performance but degrades their ability to be fine-tuned for new tasks, and dig into her lab's work on creating more controllable and reliable models, including the concept of "memorization sinks," an architectural approach to isolate and enable the targeted unlearning of specific information.</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/747"><u>https://twimlai.com/go/747</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3506</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e96b61a4-931a-11f0-a2a8-3bcaf6a01846]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5916308473.mp3?updated=1758046985"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building an Immune System for AI Generated Software with Animesh Koratana - #746</title>
      <link>https://twimlai.com/podcast/twimlai/building-an-immune-system-for-ai-generated-software/</link>
      <description>Today, we're joined by Animesh Koratana, founder and CEO of PlayerZero, to discuss his team’s approach to making agentic and AI-assisted coding tools production-ready at scale. Animesh explains how rapid advances in AI-assisted coding have created an “asymmetry” where the speed of code output outpaces the maturity of processes for maintenance and support. We explore PlayerZero’s debugging and code verification platform, which uses code simulations to build a "memory bank" of past bugs and leverages an ensemble of LLMs and agents to proactively simulate and verify changes, predicting potential failures. Animesh also unpacks the underlying technology, including a semantic graph that analyzes code bases, ticketing systems, and telemetry to trace and reason through complex systems, test hypotheses, and apply reinforcement learning techniques to create an “immune system” for software. Finally, Animesh shares his perspective on the future of the software development lifecycle (SDLC), rethinking organizational workflows, and ensuring security as AI-driven tools continue to mature.

The complete show notes for this episode can be found at https://twimlai.com/go/746.</description>
      <pubDate>Tue, 09 Sep 2025 22:18:00 -0000</pubDate>
      <itunes:title>Building an Immune System for AI Generated Software with Animesh Koratana</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>746</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/f6fbd3b2-8dc5-11f0-9233-afac45e8d917/image/142cdc49c3b362e40d953cc5008ba89a.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Animesh Koratana, founder and CEO of PlayerZero, to discuss his team’s approach to making agentic and AI-assisted coding tools production-ready at scale. Animesh explains how rapid advances in AI-assisted coding have created an “asymmetry” where the speed of code output outpaces the maturity of processes for maintenance and support. We explore PlayerZero’s debugging and code verification platform, which uses code simulations to build a "memory bank" of past bugs and leverages an ensemble of LLMs and agents to proactively simulate and verify changes, predicting potential failures. Animesh also unpacks the underlying technology, including a semantic graph that analyzes code bases, ticketing systems, and telemetry to trace and reason through complex systems, test hypotheses, and apply reinforcement learning techniques to create an “immune system” for software. Finally, Animesh shares his perspective on the future of the software development lifecycle (SDLC), rethinking organizational workflows, and ensuring security as AI-driven tools continue to mature.

The complete show notes for this episode can be found at https://twimlai.com/go/746.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Animesh Koratana, founder and CEO of PlayerZero, to discuss his team’s approach to making agentic and AI-assisted coding tools production-ready at scale. Animesh explains how rapid advances in AI-assisted coding have created an “asymmetry” where the speed of code output outpaces the maturity of processes for maintenance and support. We explore PlayerZero’s debugging and code verification platform, which uses code simulations to build a "memory bank" of past bugs and leverages an ensemble of LLMs and agents to proactively simulate and verify changes, predicting potential failures. Animesh also unpacks the underlying technology, including a semantic graph that analyzes code bases, ticketing systems, and telemetry to trace and reason through complex systems, test hypotheses, and apply reinforcement learning techniques to create an “immune system” for software. Finally, Animesh shares his perspective on the future of the software development lifecycle (SDLC), rethinking organizational workflows, and ensuring security as AI-driven tools continue to mature.</p>
<p><br>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/746"><u>https://twimlai.com/go/746</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3911</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f6fbd3b2-8dc5-11f0-9233-afac45e8d917]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6993481718.mp3?updated=1757456846"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Autoformalization and Verifiable Superintelligence with Christian Szegedy - #745</title>
      <link>https://twimlai.com/podcast/twimlai/autoformalization-and-verifiable-superintelligence/</link>
      <description>In this episode, Christian Szegedy, Chief Scientist at Morph Labs, joins us to discuss how the application of formal mathematics and reasoning enables the creation of more robust and safer AI systems. A pioneer behind concepts like the Inception architecture and adversarial examples, Christian now focuses on autoformalization—the AI-driven process of translating mathematical concepts from their human-readable form into rigorously formal, machine-verifiable logic. We explore the critical distinction between the informal reasoning of current LLMs, which can be prone to errors and subversion, and the provably correct reasoning enabled by formal systems. Christian outlines how this approach provides a robust path toward AI safety and also creates the high-quality, verifiable data needed to train models capable of surpassing human scientists in specialized domains. We also delve into his predictions for achieving this superintelligence and his ultimate vision for AI as a tool that helps humanity understand itself.



The complete show notes for this episode can be found at https://twimlai.com/go/745.</description>
      <pubDate>Tue, 02 Sep 2025 20:31:00 -0000</pubDate>
      <itunes:title>Autoformalization and Verifiable Superintelligence with Christian Szegedy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>745</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/aeb3128a-8827-11f0-bde2-038c4673fae8/image/24129a047a43f11c4bf8224488eccc5b.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In this episode, Christian Szegedy, Chief Scientist at Morph Labs, joins us to discuss how the application of formal mathematics and reasoning enables the creation of more robust and safer AI systems. A pioneer behind concepts like the Inception architecture and adversarial examples, Christian now focuses on autoformalization—the AI-driven process of translating mathematical concepts from their human-readable form into rigorously formal, machine-verifiable logic. We explore the critical distinction between the informal reasoning of current LLMs, which can be prone to errors and subversion, and the provably correct reasoning enabled by formal systems. Christian outlines how this approach provides a robust path toward AI safety and also creates the high-quality, verifiable data needed to train models capable of surpassing human scientists in specialized domains. We also delve into his predictions for achieving this superintelligence and his ultimate vision for AI as a tool that helps humanity understand itself.



The complete show notes for this episode can be found at https://twimlai.com/go/745.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, Christian Szegedy, Chief Scientist at Morph Labs, joins us to discuss how the application of formal mathematics and reasoning enables the creation of more robust and safer AI systems. A pioneer behind concepts like the Inception architecture and adversarial examples, Christian now focuses on autoformalization—the AI-driven process of translating mathematical concepts from their human-readable form into rigorously formal, machine-verifiable logic. We explore the critical distinction between the informal reasoning of current LLMs, which can be prone to errors and subversion, and the provably correct reasoning enabled by formal systems. Christian outlines how this approach provides a robust path toward AI safety and also creates the high-quality, verifiable data needed to train models capable of surpassing human scientists in specialized domains. We also delve into his predictions for achieving this superintelligence and his ultimate vision for AI as a tool that helps humanity understand itself.</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/745"><u>https://twimlai.com/go/745</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>4308</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[aeb3128a-8827-11f0-bde2-038c4673fae8]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7915517336.mp3?updated=1756837327"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Multimodal AI Models on Apple Silicon with MLX with Prince Canuma - #744</title>
      <link>https://twimlai.com/podcast/twimlai/multimodal-ai-models-on-apple-silicon-with-mlx/</link>
      <description>Today, we're joined by Prince Canuma, an ML engineer and open-source developer focused on optimizing AI inference on Apple Silicon devices. Prince shares his journey to becoming one of the most prolific contributors to Apple’s MLX ecosystem, having published over 1,000 models and libraries that make open, multimodal AI accessible and performant on Apple devices. We explore his workflow for adapting new models in MLX, the trade-offs between the GPU and the Neural Engine, and how optimization methods like pruning and quantization enhance performance. We also cover his work on "Fusion," a weight-space method for combining model behaviors without retraining, and his popular packages—MLX-Audio, MLX-Embeddings, and MLX-VLM—which streamline the use of MLX across different modalities. Finally, Prince introduces Marvis, a real-time speech-to-speech voice agent, and shares his vision for the future of AI, emphasizing the move toward "media models" that can handle multiple modalities, and more.



The complete show notes for this episode can be found at https://twimlai.com/go/744.</description>
      <pubDate>Tue, 26 Aug 2025 16:55:00 -0000</pubDate>
      <itunes:title>Multimodal AI Models on Apple Silicon with MLX with Prince Canuma</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>744</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/9be320a4-8297-11f0-a7c2-a3b945ad5e7d/image/40a4181855d5f8c19fb041952676e76d.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Prince Canuma, an ML engineer and open-source developer focused on optimizing AI inference on Apple Silicon devices. Prince shares his journey to becoming one of the most prolific contributors to Apple’s MLX ecosystem, having published over 1,000 models and libraries that make open, multimodal AI accessible and performant on Apple devices. We explore his workflow for adapting new models in MLX, the trade-offs between the GPU and the Neural Engine, and how optimization methods like pruning and quantization enhance performance. We also cover his work on "Fusion," a weight-space method for combining model behaviors without retraining, and his popular packages—MLX-Audio, MLX-Embeddings, and MLX-VLM—which streamline the use of MLX across different modalities. Finally, Prince introduces Marvis, a real-time speech-to-speech voice agent, and shares his vision for the future of AI, emphasizing the move toward "media models" that can handle multiple modalities, and more.



The complete show notes for this episode can be found at https://twimlai.com/go/744.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Prince Canuma, an ML engineer and open-source developer focused on optimizing AI inference on Apple Silicon devices. Prince shares his journey to becoming one of the most prolific contributors to Apple’s MLX ecosystem, having published over 1,000 models and libraries that make open, multimodal AI accessible and performant on Apple devices. We explore his workflow for adapting new models in MLX, the trade-offs between the GPU and Neural Engine, and how optimization methods like pruning and quantization enhance performance. We also cover his work on "Fusion," a weight-space method for combining model behaviors without retraining, and his popular packages—MLX-Audio, MLX-Embeddings, and MLX-VLM—which streamline the use of MLX across different modalities. Finally, Prince introduces Marvis, a real-time speech-to-speech voice agent, and shares his vision for the future of AI, emphasizing the move towards "media models" that can handle multiple modalities, and more.</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/744"><u>https://twimlai.com/go/744</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>4220</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9be320a4-8297-11f0-a7c2-a3b945ad5e7d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1859645173.mp3?updated=1756231100"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Genie 3: A New Frontier for World Models with Jack Parker-Holder and Shlomi Fruchter - #743</title>
      <link>https://twimlai.com/podcast/twimlai/genie-3-a-new-frontier-for-world-models/</link>
      <description>Today, we're joined by Jack Parker-Holder and Shlomi Fruchter, researchers at Google DeepMind, to discuss the recent release of Genie 3, a model capable of generating “playable” virtual worlds. We dig into the evolution of the Genie project and review the current model’s scaled-up capabilities, including creating real-time, interactive, and high-resolution environments. Jack and Shlomi share their perspectives on what defines a world model, the model's architecture, and key technical challenges and breakthroughs, including Genie 3’s visual memory and ability to handle “promptable world events.” Jack, Shlomi, and Sam share their favorite Genie 3 demos, and discuss its potential as a dynamic training environment for embodied AI agents. Finally, we explore future directions for Genie research.



The complete show notes for this episode can be found at https://twimlai.com/go/743.</description>
      <pubDate>Tue, 19 Aug 2025 17:57:00 -0000</pubDate>
      <itunes:title>Genie 3: A New Frontier for World Models with Jack Parker-Holder and Shlomi Fruchter</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>743</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5c874d28-7d14-11f0-add0-63adb93cac66/image/8ce44476a4d61bd36800c392279b6e5a.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Jack Parker-Holder and Shlomi Fruchter, researchers at Google DeepMind, to discuss the recent release of Genie 3, a model capable of generating “playable” virtual worlds. We dig into the evolution of the Genie project and review the current model’s scaled-up capabilities, including creating real-time, interactive, and high-resolution environments. Jack and Shlomi share their perspectives on what defines a world model, the model's architecture, and key technical challenges and breakthroughs, including Genie 3’s visual memory and ability to handle “promptable world events.” Jack, Shlomi, and Sam share their favorite Genie 3 demos, and discuss its potential as a dynamic training environment for embodied AI agents. Finally, we explore future directions for Genie research.



The complete show notes for this episode can be found at https://twimlai.com/go/743.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Jack Parker-Holder and Shlomi Fruchter, researchers at Google DeepMind, to discuss the recent release of Genie 3, a model capable of generating “playable” virtual worlds. We dig into the evolution of the Genie project and review the current model’s scaled-up capabilities, including creating real-time, interactive, and high-resolution environments. Jack and Shlomi share their perspectives on what defines a world model, the model's architecture, and key technical challenges and breakthroughs, including Genie 3’s visual memory and ability to handle “promptable world events.” Jack, Shlomi, and Sam share their favorite Genie 3 demos, and discuss its potential as a dynamic training environment for embodied AI agents. Finally, we explore future directions for Genie research.</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/743"><u>https://twimlai.com/go/743</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3661</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5c874d28-7d14-11f0-add0-63adb93cac66]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4297409814.mp3?updated=1755626878"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Closing the Loop Between AI Training and Inference with Lin Qiao - #742</title>
      <link>https://twimlai.com/podcast/twimlai/closing-the-loop-between-ai-training-and-inference/</link>
      <description>In this episode, we're joined by Lin Qiao, CEO and co-founder of Fireworks AI. Drawing on key lessons from her time building PyTorch, Lin shares her perspective on the modern generative AI development lifecycle. She explains why aligning training and inference systems is essential for creating a seamless, fast-moving production pipeline, preventing the friction that often stalls deployment. We explore the strategic shift from treating models as commodities to viewing them as core product assets. Lin details how post-training methods, like reinforcement fine-tuning (RFT), allow teams to leverage their own proprietary data to continuously improve these assets. Lin also breaks down the complex challenge of what she calls "3D optimization"—balancing cost, latency, and quality—and emphasizes the role of clear evaluation criteria to guide this process, moving beyond unreliable methods like "vibe checking." Finally, we discuss the path toward the future of AI development: designing a closed-loop system for automated model improvement, a vision made more attainable by the exciting convergence of open and closed-source model capabilities.

The complete show notes for this episode can be found at https://twimlai.com/go/742.</description>
      <pubDate>Tue, 12 Aug 2025 19:00:00 -0000</pubDate>
      <itunes:title>Closing the Loop Between AI Training and Inference with Lin Qiao</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>742</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/e3df0136-77aa-11f0-9715-ff676f931d90/image/e8e41a200d3177eec2a749438faefa39.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In this episode, we're joined by Lin Qiao, CEO and co-founder of Fireworks AI. Drawing on key lessons from her time building PyTorch, Lin shares her perspective on the modern generative AI development lifecycle. She explains why aligning training and inference systems is essential for creating a seamless, fast-moving production pipeline, preventing the friction that often stalls deployment. We explore the strategic shift from treating models as commodities to viewing them as core product assets. Lin details how post-training methods, like reinforcement fine-tuning (RFT), allow teams to leverage their own proprietary data to continuously improve these assets. Lin also breaks down the complex challenge of what she calls "3D optimization"—balancing cost, latency, and quality—and emphasizes the role of clear evaluation criteria to guide this process, moving beyond unreliable methods like "vibe checking." Finally, we discuss the path toward the future of AI development: designing a closed-loop system for automated model improvement, a vision made more attainable by the exciting convergence of open and closed-source model capabilities.

The complete show notes for this episode can be found at https://twimlai.com/go/742.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we're joined by Lin Qiao, CEO and co-founder of Fireworks AI. Drawing on key lessons from her time building PyTorch, Lin shares her perspective on the modern generative AI development lifecycle. She explains why aligning training and inference systems is essential for creating a seamless, fast-moving production pipeline, preventing the friction that often stalls deployment. We explore the strategic shift from treating models as commodities to viewing them as core product assets. Lin details how post-training methods, like reinforcement fine-tuning (RFT), allow teams to leverage their own proprietary data to continuously improve these assets. Lin also breaks down the complex challenge of what she calls "3D optimization"—balancing cost, latency, and quality—and emphasizes the role of clear evaluation criteria to guide this process, moving beyond unreliable methods like "vibe checking." Finally, we discuss the path toward the future of AI development: designing a closed-loop system for automated model improvement, a vision made more attainable by the exciting convergence of open and closed-source model capabilities.</p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/742"><u>https://twimlai.com/go/742</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3671</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e3df0136-77aa-11f0-9715-ff676f931d90]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4252780923.mp3?updated=1755024730"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Context Engineering for Productive AI Agents with Filip Kozera - #741</title>
      <link>https://twimlai.com/podcast/twimlai/context-engineering-for-productive-ai-agents/</link>
      <description>In this episode, Filip Kozera, founder and CEO of Wordware, explains his approach to building agentic workflows where natural language serves as the new programming interface. Filip breaks down the architecture of these "background agents," explaining how they use a reflection loop and tool-calling to execute complex tasks. He discusses the current limitations of agent protocols like MCPs and how developers can extend them to handle the required context and authority. The conversation challenges the idea that more powerful models lead to more autonomous agents, arguing instead for "graceful recovery" systems that proactively bring humans into the loop when the agent "knows what it doesn't know." We also get into the "application layer" fight, exploring how SaaS platforms are creating data silos and what this means for the future of interoperable AI agents. Filip also shares his vision for the "word artisan"—the non-technical user who can now build and manage a fleet of AI agents, fundamentally changing the nature of knowledge work.

The complete show notes for this episode can be found at https://twimlai.com/go/741.</description>
      <pubDate>Tue, 29 Jul 2025 19:37:00 -0000</pubDate>
      <itunes:title>Context Engineering for Productive AI Agents with Filip Kozera</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>741</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/267d0706-6caf-11f0-beab-6355b28dea24/image/4975cfd666824f4260e3978e9199d2d4.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In this episode, Filip Kozera, founder and CEO of Wordware, explains his approach to building agentic workflows where natural language serves as the new programming interface. Filip breaks down the architecture of these "background agents," explaining how they use a reflection loop and tool-calling to execute complex tasks. He discusses the current limitations of agent protocols like MCPs and how developers can extend them to handle the required context and authority. The conversation challenges the idea that more powerful models lead to more autonomous agents, arguing instead for "graceful recovery" systems that proactively bring humans into the loop when the agent "knows what it doesn't know." We also get into the "application layer" fight, exploring how SaaS platforms are creating data silos and what this means for the future of interoperable AI agents. Filip also shares his vision for the "word artisan"—the non-technical user who can now build and manage a fleet of AI agents, fundamentally changing the nature of knowledge work.

The complete show notes for this episode can be found at https://twimlai.com/go/741.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, Filip Kozera, founder and CEO of Wordware, explains his approach to building agentic workflows where natural language serves as the new programming interface. Filip breaks down the architecture of these "background agents," explaining how they use a reflection loop and tool-calling to execute complex tasks. He discusses the current limitations of agent protocols like MCPs and how developers can extend them to handle the required context and authority. The conversation challenges the idea that more powerful models lead to more autonomous agents, arguing instead for "graceful recovery" systems that proactively bring humans into the loop when the agent "knows what it doesn't know." We also get into the "application layer" fight, exploring how SaaS platforms are creating data silos and what this means for the future of interoperable AI agents. Filip also shares his vision for the "word artisan"—the non-technical user who can now build and manage a fleet of AI agents, fundamentally changing the nature of knowledge work.</p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/741"><u>https://twimlai.com/go/741</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>2761</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[267d0706-6caf-11f0-beab-6355b28dea24]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9863568954.mp3?updated=1753817752"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Infrastructure Scaling and Compound AI Systems with Jared Quincy Davis - #740</title>
      <link>https://twimlai.com/podcast/twimlai/infrastructure-scaling-and-compound-ai-systems/</link>
      <description>In this episode, Jared Quincy Davis, founder and CEO at Foundry, introduces the concept of "compound AI systems," which allows users to create powerful, efficient applications by composing multiple, often diverse, AI models and services. We discuss how these "networks of networks" can push the Pareto frontier, delivering results that are simultaneously faster, more accurate, and even cheaper than single-model approaches. Using examples like "laconic decoding," Jared explains the practical techniques for building these systems and the underlying principles of inference-time scaling. The conversation also delves into the critical role of co-design, where the evolution of AI algorithms and the underlying cloud infrastructure are deeply intertwined, shaping the future of agentic AI and the compute landscape.

The complete show notes for this episode can be found at https://twimlai.com/go/740.</description>
      <pubDate>Tue, 22 Jul 2025 16:00:00 -0000</pubDate>
      <itunes:title>Infrastructure Scaling and Compound AI Systems with Jared Quincy Davis</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>740</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/b262691a-665a-11f0-8e69-472891e5cf1e/image/6f7a7c96784eb46ef291542496dbade3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In this episode, Jared Quincy Davis, founder and CEO at Foundry, introduces the concept of "compound AI systems," which allows users to create powerful, efficient applications by composing multiple, often diverse, AI models and services. We discuss how these "networks of networks" can push the Pareto frontier, delivering results that are simultaneously faster, more accurate, and even cheaper than single-model approaches. Using examples like "laconic decoding," Jared explains the practical techniques for building these systems and the underlying principles of inference-time scaling. The conversation also delves into the critical role of co-design, where the evolution of AI algorithms and the underlying cloud infrastructure are deeply intertwined, shaping the future of agentic AI and the compute landscape.

The complete show notes for this episode can be found at https://twimlai.com/go/740.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, Jared Quincy Davis, founder and CEO at Foundry, introduces the concept of "compound AI systems," which allows users to create powerful, efficient applications by composing multiple, often diverse, AI models and services. We discuss how these "networks of networks" can push the Pareto frontier, delivering results that are simultaneously faster, more accurate, and even cheaper than single-model approaches. Using examples like "laconic decoding," Jared explains the practical techniques for building these systems and the underlying principles of inference-time scaling. The conversation also delves into the critical role of co-design, where the evolution of AI algorithms and the underlying cloud infrastructure are deeply intertwined, shaping the future of agentic AI and the compute landscape.</p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/740"><u>https://twimlai.com/go/740</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>4382</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b262691a-665a-11f0-8e69-472891e5cf1e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2657285858.mp3?updated=1753151737"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building Voice AI Agents That Don’t Suck with Kwindla Kramer - #739</title>
      <link>https://twimlai.com/podcast/twimlai/building-voice-ai-agents-that-dont-suck/</link>
      <description>In this episode, Kwindla Kramer, co-founder and CEO of Daily and creator of the open source Pipecat framework, joins us to discuss the architecture and challenges of building real-time, production-ready conversational voice AI. Kwin breaks down the full stack for voice agents—from the models and APIs to the critical orchestration layer that manages the complexities of multi-turn conversations. We explore why many production systems favor a modular, multi-model approach over the end-to-end models demonstrated by large AI labs, and how this impacts everything from latency and cost to observability and evaluation. Kwin also digs into the core challenges of interruption handling, turn-taking, and creating truly natural conversational dynamics, and how to overcome them. We discuss use cases, thoughts on where the technology is headed, the move toward hybrid edge-cloud pipelines, and the exciting future of real-time video avatars, and much more.

The complete show notes for this episode can be found at https://twimlai.com/go/739.</description>
      <pubDate>Tue, 15 Jul 2025 21:04:00 -0000</pubDate>
      <itunes:title>Building Voice AI Agents That Don’t Suck with Kwindla Kramer</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>739</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/1a1eb49a-61b1-11f0-a767-ab14001687e1/image/b4fb43641d940374841b20d14607054b.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In this episode, Kwindla Kramer, co-founder and CEO of Daily and creator of the open source Pipecat framework, joins us to discuss the architecture and challenges of building real-time, production-ready conversational voice AI. Kwin breaks down the full stack for voice agents—from the models and APIs to the critical orchestration layer that manages the complexities of multi-turn conversations. We explore why many production systems favor a modular, multi-model approach over the end-to-end models demonstrated by large AI labs, and how this impacts everything from latency and cost to observability and evaluation. Kwin also digs into the core challenges of interruption handling, turn-taking, and creating truly natural conversational dynamics, and how to overcome them. We discuss use cases, thoughts on where the technology is headed, the move toward hybrid edge-cloud pipelines, and the exciting future of real-time video avatars, and much more.

The complete show notes for this episode can be found at https://twimlai.com/go/739.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, Kwindla Kramer, co-founder and CEO of Daily and creator of the open source Pipecat framework, joins us to discuss the architecture and challenges of building real-time, production-ready conversational voice AI. Kwin breaks down the full stack for voice agents—from the models and APIs to the critical orchestration layer that manages the complexities of multi-turn conversations. We explore why many production systems favor a modular, multi-model approach over the end-to-end models demonstrated by large AI labs, and how this impacts everything from latency and cost to observability and evaluation. Kwin also digs into the core challenges of interruption handling, turn-taking, and creating truly natural conversational dynamics, and how to overcome them. We discuss use cases, thoughts on where the technology is headed, the move toward hybrid edge-cloud pipelines, and the exciting future of real-time video avatars, and much more.</p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/739"><u>https://twimlai.com/go/739</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>4382</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1a1eb49a-61b1-11f0-a767-ab14001687e1]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9079687304.mp3?updated=1752614441"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Distilling Transformers and Diffusion Models for Robust Edge Use Cases with Fatih Porikli - #738</title>
      <link>https://twimlai.com/podcast/twimlai/distilling-transformers-and-diffusion-models-for-robust-edge-use-cases/</link>
      <description>Today, we're joined by Fatih Porikli, senior director of technology at Qualcomm AI Research, for an in-depth look at several of Qualcomm's accepted papers and demos featured at this year’s CVPR conference. We start with “DiMA: Distilling Multi-modal Large Language Models for Autonomous Driving,” an end-to-end autonomous driving system that incorporates distilling large language models for structured scene understanding and safe motion planning in critical "long-tail" scenarios. We explore how DiMA utilizes LLMs' world knowledge and efficient transformer-based models to significantly reduce collision rates and trajectory errors. We then discuss “SharpDepth: Sharpening Metric Depth Predictions Using Diffusion Distillation,” a diffusion-distilled approach that combines generative models with metric depth estimation to produce sharp, accurate monocular depth maps. Additionally, Fatih shares a look at Qualcomm’s on-device demos, including text-to-3D mesh generation, real-time image-to-video and video-to-video generation, and a multi-modal visual question-answering assistant.

The complete show notes for this episode can be found at https://twimlai.com/go/738.</description>
      <pubDate>Wed, 09 Jul 2025 15:53:00 -0000</pubDate>
      <itunes:title>Distilling Transformers and Diffusion Models for Robust Edge Use Cases with Fatih Porikli</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>738</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d70cbc8a-575e-11f0-9728-abd35ebb5cc0/image/1fddfe2039af5a89d3e97efe3dc2aa63.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Fatih Porikli, senior director of technology at Qualcomm AI Research, for an in-depth look at several of Qualcomm's accepted papers and demos featured at this year’s CVPR conference. We start with “DiMA: Distilling Multi-modal Large Language Models for Autonomous Driving,” an end-to-end autonomous driving system that incorporates distilling large language models for structured scene understanding and safe motion planning in critical "long-tail" scenarios. We explore how DiMA utilizes LLMs' world knowledge and efficient transformer-based models to significantly reduce collision rates and trajectory errors. We then discuss “SharpDepth: Sharpening Metric Depth Predictions Using Diffusion Distillation,” a diffusion-distilled approach that combines generative models with metric depth estimation to produce sharp, accurate monocular depth maps. Additionally, Fatih shares a look at Qualcomm’s on-device demos, including text-to-3D mesh generation, real-time image-to-video and video-to-video generation, and a multi-modal visual question-answering assistant.

The complete show notes for this episode can be found at https://twimlai.com/go/738.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Fatih Porikli, senior director of technology at Qualcomm AI Research, for an in-depth look at several of Qualcomm's accepted papers and demos featured at this year’s CVPR conference. We start with “DiMA: Distilling Multi-modal Large Language Models for Autonomous Driving,” an end-to-end autonomous driving system that incorporates distilling large language models for structured scene understanding and safe motion planning in critical "long-tail" scenarios. We explore how DiMA utilizes LLMs' world knowledge and efficient transformer-based models to significantly reduce collision rates and trajectory errors. We then discuss “SharpDepth: Sharpening Metric Depth Predictions Using Diffusion Distillation,” a diffusion-distilled approach that combines generative models with metric depth estimation to produce sharp, accurate monocular depth maps. Additionally, Fatih shares a look at Qualcomm’s on-device demos, including text-to-3D mesh generation, real-time image-to-video and video-to-video generation, and a multi-modal visual question-answering assistant.</p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/738"><u>https://twimlai.com/go/738</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3629</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d70cbc8a-575e-11f0-9728-abd35ebb5cc0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4056797871.mp3?updated=1752077099"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building the Internet of Agents with Vijoy Pandey - #737</title>
      <link>https://twimlai.com/podcast/twimlai/building-the-internet-of-agents/</link>
      <description>Today, we're joined by Vijoy Pandey, SVP and general manager at Outshift by Cisco, to discuss a foundational challenge for the enterprise: how do we make specialized agents from different vendors collaborate effectively? As companies like Salesforce, Workday, and Microsoft all develop their own agentic systems, integrating them creates a complex, probabilistic, and noisy environment, a stark contrast to the deterministic APIs of the past. Vijoy introduces Cisco's vision for an "Internet of Agents," a platform to manage this new reality, and its open-source implementation, AGNTCY. We explore the four phases of agent collaboration—discovery, composition, deployment, and evaluation—and dive deep into the communication stack, from syntactic protocols like A2A, ACP, and MCP to the deeper semantic challenges of creating a shared understanding between agents. Vijoy also unveils SLIM (Secure Low-Latency Interactive Messaging), a novel transport layer designed to make agent-to-agent communication quantum-safe, real-time, and efficient for multi-modal workloads.

The complete show notes for this episode can be found at https://twimlai.com/go/737.</description>
      <pubDate>Tue, 24 Jun 2025 15:15:00 -0000</pubDate>
      <itunes:title>Building the Internet of Agents with Vijoy Pandey</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>737</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/1b24bf3c-510e-11f0-979c-6f659e2e29e6/image/52b6c01aaeed49cdb6ffbb1d943e5417.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Vijoy Pandey, SVP and general manager at Outshift by Cisco, to discuss a foundational challenge for the enterprise: how do we make specialized agents from different vendors collaborate effectively? As companies like Salesforce, Workday, and Microsoft all develop their own agentic systems, integrating them creates a complex, probabilistic, and noisy environment, a stark contrast to the deterministic APIs of the past. Vijoy introduces Cisco's vision for an "Internet of Agents," a platform to manage this new reality, and its open-source implementation, AGNTCY. We explore the four phases of agent collaboration—discovery, composition, deployment, and evaluation—and dive deep into the communication stack, from syntactic protocols like A2A, ACP, and MCP to the deeper semantic challenges of creating a shared understanding between agents. Vijoy also unveils SLIM (Secure Low-Latency Interactive Messaging), a novel transport layer designed to make agent-to-agent communication quantum-safe, real-time, and efficient for multi-modal workloads.

The complete show notes for this episode can be found at https://twimlai.com/go/737.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Vijoy Pandey, SVP and general manager at Outshift by Cisco, to discuss a foundational challenge for the enterprise: how do we make specialized agents from different vendors collaborate effectively? As companies like Salesforce, Workday, and Microsoft all develop their own agentic systems, integrating them creates a complex, probabilistic, and noisy environment, a stark contrast to the deterministic APIs of the past. Vijoy introduces Cisco's vision for an "Internet of Agents," a platform to manage this new reality, and its open-source implementation, AGNTCY. We explore the four phases of agent collaboration—discovery, composition, deployment, and evaluation—and dive deep into the communication stack, from syntactic protocols like A2A, ACP, and MCP to the deeper semantic challenges of creating a shared understanding between agents. Vijoy also unveils SLIM (Secure Low-Latency Interactive Messaging), a novel transport layer designed to make agent-to-agent communication quantum-safe, real-time, and efficient for multi-modal workloads.</p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/737"><u>https://twimlai.com/go/737</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3373</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1b24bf3c-510e-11f0-979c-6f659e2e29e6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6554854566.mp3?updated=1750778902"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>LLMs for Equities Feature Forecasting at Two Sigma with Ben Wellington - #736</title>
      <link>https://twimlai.com/podcast/twimlai/llms-for-equities-feature-forecasting-at-two-sigma/</link>
      <description>Today, we're joined by Ben Wellington, deputy head of feature forecasting at Two Sigma. We dig into the team’s end-to-end approach to leveraging AI in equities feature forecasting, covering how they identify and create features, collect and quantify historical data, and build predictive models to forecast market behavior and asset prices for trading and investment. We explore the firm's platform-centric approach to managing an extensive portfolio of features and models, the impact of multimodal LLMs on accelerating the process of extracting novel features, the importance of strict data timestamping to prevent temporal leakage, and the way they consider build vs. buy decisions in a rapidly evolving landscape. Lastly, Ben also shares insights on leveraging open-source models and the future of agentic AI in quantitative finance.

The complete show notes for this episode can be found at https://twimlai.com/go/736.</description>
      <pubDate>Tue, 17 Jun 2025 19:33:00 -0000</pubDate>
      <itunes:title>LLMs for Equities Feature Forecasting at Two Sigma with Ben Wellington</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>736</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/e7a27954-4bb0-11f0-992a-4be6ee67394b/image/547bf1c44315ddd0ed76aba65fcd8d5f.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Ben Wellington, deputy head of feature forecasting at Two Sigma. We dig into the team’s end-to-end approach to leveraging AI in equities feature forecasting, covering how they identify and create features, collect and quantify historical data, and build predictive models to forecast market behavior and asset prices for trading and investment. We explore the firm's platform-centric approach to managing an extensive portfolio of features and models, the impact of multimodal LLMs on accelerating the process of extracting novel features, the importance of strict data timestamping to prevent temporal leakage, and the way they consider build vs. buy decisions in a rapidly evolving landscape. Lastly, Ben also shares insights on leveraging open-source models and the future of agentic AI in quantitative finance.

The complete show notes for this episode can be found at https://twimlai.com/go/736.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Ben Wellington, deputy head of feature forecasting at Two Sigma. We dig into the team’s end-to-end approach to leveraging AI in equities feature forecasting, covering how they identify and create features, collect and quantify historical data, and build predictive models to forecast market behavior and asset prices for trading and investment. We explore the firm's platform-centric approach to managing an extensive portfolio of features and models, the impact of multimodal LLMs on accelerating the process of extracting novel features, the importance of strict data timestamping to prevent temporal leakage, and the way they consider build vs. buy decisions in a rapidly evolving landscape. Lastly, Ben also shares insights on leveraging open-source models and the future of agentic AI in quantitative finance.</p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/736"><u>https://twimlai.com/go/736</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3571</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e7a27954-4bb0-11f0-992a-4be6ee67394b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1374470913.mp3?updated=1750189327"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Zero-Shot Auto-Labeling: The End of Annotation for Computer Vision with Jason Corso - #735</title>
      <link>https://twimlai.com/podcast/twimlai/zero-shot-auto-labeling-the-end-of-annotation-for-computer-vision/</link>
      <description>Today, we're joined by Jason Corso, co-founder of Voxel51 and professor at the University of Michigan, to explore automated labeling in computer vision. Jason introduces FiftyOne, an open-source platform for visualizing datasets, analyzing models, and improving data quality. We focus on Voxel51’s recent research report, “Zero-shot auto-labeling rivals human performance,” which demonstrates how zero-shot auto-labeling with foundation models can yield significant cost and time savings compared to traditional human annotation. Jason explains how auto-labels, despite being "noisier" at lower confidence thresholds, can lead to better downstream model performance. We also cover Voxel51's "verified auto-labeling" approach, which utilizes a "stoplight" QA workflow (green, yellow, red light) to minimize human review. Finally, we discuss the challenges of handling decision boundary uncertainty and out-of-domain classes, the differences between synthetic data generation in vision and language domains, and the potential of agentic labeling.

The complete show notes for this episode can be found at https://twimlai.com/go/735.</description>
      <pubDate>Tue, 10 Jun 2025 16:54:00 -0000</pubDate>
      <itunes:title>Zero-Shot Auto-Labeling: The End of Annotation for Computer Vision with Jason Corso</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>735</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/13f0a338-460b-11f0-8c5d-4fe017b9585e/image/ee6ae2aac7ba85afde73fd7b3e6d4e04.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Jason Corso, co-founder of Voxel51 and professor at the University of Michigan, to explore automated labeling in computer vision. Jason introduces FiftyOne, an open-source platform for visualizing datasets, analyzing models, and improving data quality. We focus on Voxel51’s recent research report, “Zero-shot auto-labeling rivals human performance,” which demonstrates how zero-shot auto-labeling with foundation models can yield significant cost and time savings compared to traditional human annotation. Jason explains how auto-labels, despite being "noisier" at lower confidence thresholds, can lead to better downstream model performance. We also cover Voxel51's "verified auto-labeling" approach, which utilizes a "stoplight" QA workflow (green, yellow, red light) to minimize human review. Finally, we discuss the challenges of handling decision boundary uncertainty and out-of-domain classes, the differences between synthetic data generation in vision and language domains, and the potential of agentic labeling.

The complete show notes for this episode can be found at https://twimlai.com/go/735.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Jason Corso, co-founder of Voxel51 and professor at the University of Michigan, to explore automated labeling in computer vision. Jason introduces FiftyOne, an open-source platform for visualizing datasets, analyzing models, and improving data quality. We focus on Voxel51’s recent research report, “Zero-shot auto-labeling rivals human performance,” which demonstrates how zero-shot auto-labeling with foundation models can yield significant cost and time savings compared to traditional human annotation. Jason explains how auto-labels, despite being "noisier" at lower confidence thresholds, can lead to better downstream model performance. We also cover Voxel51's "verified auto-labeling" approach, which utilizes a "stoplight" QA workflow (green, yellow, red light) to minimize human review. Finally, we discuss the challenges of handling decision boundary uncertainty and out-of-domain classes, the differences between synthetic data generation in vision and language domains, and the potential of agentic labeling.</p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/735"><u>https://twimlai.com/go/735</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3405</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[13f0a338-460b-11f0-8c5d-4fe017b9585e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5226479586.mp3?updated=1749575933"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Grokking, Generalization Collapse, and the Dynamics of Training Deep Neural Networks with Charles Martin - #734</title>
      <link>https://twimlai.com/podcast/twimlai/grokking-generalization-collapse-and-the-dynamics-of-training-deep-neural-networks/</link>
      <description>Today, we're joined by Charles Martin, founder of Calculation Consulting, to discuss WeightWatcher, an open-source tool for analyzing and improving Deep Neural Networks (DNNs) based on principles from theoretical physics. We explore the foundations of the Heavy-Tailed Self-Regularization (HTSR) theory that underpins it, which combines random matrix theory and renormalization group ideas to uncover deep insights about model training dynamics. Charles walks us through WeightWatcher’s ability to detect three distinct learning phases—underfitting, grokking, and generalization collapse—and how its signature “layer quality” metric reveals whether individual layers are underfit, overfit, or optimally tuned. Additionally, we dig into the complexities involved in fine-tuning models, the surprising correlation between model optimality and hallucination, the often-underestimated challenges of search relevance, and their implications for RAG. Finally, Charles shares his insights into real-world applications of generative AI and his lessons learned from working in the field.

The complete show notes for this episode can be found at https://twimlai.com/go/734.</description>
      <pubDate>Thu, 05 Jun 2025 00:10:00 -0000</pubDate>
      <itunes:title>Grokking, Generalization Collapse, and the Dynamics of Training Deep Neural Networks with Charles Martin</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>734</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c5a155f6-4196-11f0-a319-8b003675a9c3/image/9534aa695b7b3a33da1040b280f66bc6.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Charles Martin, founder of Calculation Consulting, to discuss WeightWatcher, an open-source tool for analyzing and improving Deep Neural Networks (DNNs) based on principles from theoretical physics. We explore the foundations of the Heavy-Tailed Self-Regularization (HTSR) theory that underpins it, which combines random matrix theory and renormalization group ideas to uncover deep insights about model training dynamics. Charles walks us through WeightWatcher’s ability to detect three distinct learning phases—underfitting, grokking, and generalization collapse—and how its signature “layer quality” metric reveals whether individual layers are underfit, overfit, or optimally tuned. Additionally, we dig into the complexities involved in fine-tuning models, the surprising correlation between model optimality and hallucination, the often-underestimated challenges of search relevance, and their implications for RAG. Finally, Charles shares his insights into real-world applications of generative AI and his lessons learned from working in the field.

The complete show notes for this episode can be found at https://twimlai.com/go/734.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Charles Martin, founder of Calculation Consulting, to discuss WeightWatcher, an open-source tool for analyzing and improving Deep Neural Networks (DNNs) based on principles from theoretical physics. We explore the foundations of the Heavy-Tailed Self-Regularization (HTSR) theory that underpins it, which combines random matrix theory and renormalization group ideas to uncover deep insights about model training dynamics. Charles walks us through WeightWatcher’s ability to detect three distinct learning phases—underfitting, grokking, and generalization collapse—and how its signature “layer quality” metric reveals whether individual layers are underfit, overfit, or optimally tuned. Additionally, we dig into the complexities involved in fine-tuning models, the surprising correlation between model optimality and hallucination, the often-underestimated challenges of search relevance, and their implications for RAG. Finally, Charles shares his insights into real-world applications of generative AI and his lessons learned from working in the field.</p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/734"><u>https://twimlai.com/go/734</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>5121</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c5a155f6-4196-11f0-a319-8b003675a9c3]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4861884526.mp3?updated=1749083459"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Google I/O 2025 Special Edition - #733</title>
      <link>https://twimlai.com/podcast/twimlai/google-i-o-2025-special-edition/</link>
      <description>Today, I’m excited to share a special crossover edition of the podcast recorded live from Google I/O 2025! In this episode, I join Shawn Wang, aka Swyx from the Latent Space Podcast, to interview Logan Kilpatrick and Shrestha Basu Mallick, PMs at Google DeepMind working on AI Studio and the Gemini API, along with Kwindla Kramer, CEO of Daily and creator of the Pipecat open source project. We cover all the highlights from the event, including enhancements to the Gemini models like thinking budgets and thought summaries, native audio output for expressive voice AI, and the new URL Context tool for research agents. The discussion also digs into the Gemini Live API, covering its architecture, the challenges of building real-time voice applications (such as latency and voice activity detection), and new features like proactive audio and asynchronous function calling. Finally, don’t miss our guests’ wish lists for next year’s I/O!

The complete show notes for this episode can be found at https://twimlai.com/go/733.</description>
      <pubDate>Wed, 28 May 2025 20:59:00 -0000</pubDate>
      <itunes:title>Google I/O 2025 Special Edition</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>733</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/39a3d21e-3bfa-11f0-9f87-23c9304c52e7/image/c6d8eff0ad0ab40cd25a7e4e85401a6b.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, I’m excited to share a special crossover edition of the podcast recorded live from Google I/O 2025! In this episode, I join Shawn Wang, aka Swyx from the Latent Space Podcast, to interview Logan Kilpatrick and Shrestha Basu Mallick, PMs at Google DeepMind working on AI Studio and the Gemini API, along with Kwindla Kramer, CEO of Daily and creator of the Pipecat open source project. We cover all the highlights from the event, including enhancements to the Gemini models like thinking budgets and thought summaries, native audio output for expressive voice AI, and the new URL Context tool for research agents. The discussion also digs into the Gemini Live API, covering its architecture, the challenges of building real-time voice applications (such as latency and voice activity detection), and new features like proactive audio and asynchronous function calling. Finally, don’t miss our guests’ wish lists for next year’s I/O!

The complete show notes for this episode can be found at https://twimlai.com/go/733.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, I’m excited to share a special crossover edition of the podcast recorded live from Google I/O 2025! In this episode, I join Shawn Wang, aka Swyx from the Latent Space Podcast, to interview Logan Kilpatrick and Shrestha Basu Mallick, PMs at Google DeepMind working on AI Studio and the Gemini API, along with Kwindla Kramer, CEO of Daily and creator of the Pipecat open source project. We cover all the highlights from the event, including enhancements to the Gemini models like thinking budgets and thought summaries, native audio output for expressive voice AI, and the new URL Context tool for research agents. The discussion also digs into the Gemini Live API, covering its architecture, the challenges of building real-time voice applications (such as latency and voice activity detection), and new features like proactive audio and asynchronous function calling. Finally, don’t miss our guests’ wish lists for next year’s I/O!</p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/733"><u>https://twimlai.com/go/733</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>1581</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[39a3d21e-3bfa-11f0-9f87-23c9304c52e7]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4033526197.mp3?updated=1748466377"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>RAG Risks: Why Retrieval-Augmented LLMs are Not Safer with Sebastian Gehrmann - #732</title>
      <link>https://twimlai.com/podcast/twimlai/rag-risks-why-retrieval-augmented-llms-are-not-safer/</link>
      <description>Today, we're joined by Sebastian Gehrmann, head of responsible AI in the Office of the CTO at Bloomberg, to discuss AI safety in retrieval-augmented generation (RAG) systems and generative AI in high-stakes domains like financial services. We explore how RAG, contrary to some expectations, can inadvertently degrade model safety. We cover examples of unsafe outputs that can emerge from these systems, different approaches to evaluating these safety risks, and the potential reasons behind this counterintuitive behavior. Shifting to the application of generative AI in financial services, Sebastian outlines a domain-specific safety taxonomy designed for the industry's unique needs. We also explore the critical role of governance and regulatory frameworks in addressing these concerns, the role of prompt engineering in bolstering safety, Bloomberg’s multi-layered mitigation strategies, and vital areas for further work in improving AI safety within specialized domains.

The complete show notes for this episode can be found at https://twimlai.com/go/732.</description>
      <pubDate>Wed, 21 May 2025 18:14:00 -0000</pubDate>
      <itunes:title>RAG Risks: Why Retrieval-Augmented LLMs are Not Safer with Sebastian Gehrmann</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>732</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c8baec9a-366d-11f0-b3cd-cfa03f7fbfbe/image/118dc6af136fc424b9eba61c67b584ed.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Sebastian Gehrmann, head of responsible AI in the Office of the CTO at Bloomberg, to discuss AI safety in retrieval-augmented generation (RAG) systems and generative AI in high-stakes domains like financial services. We explore how RAG, contrary to some expectations, can inadvertently degrade model safety. We cover examples of unsafe outputs that can emerge from these systems, different approaches to evaluating these safety risks, and the potential reasons behind this counterintuitive behavior. Shifting to the application of generative AI in financial services, Sebastian outlines a domain-specific safety taxonomy designed for the industry's unique needs. We also explore the critical role of governance and regulatory frameworks in addressing these concerns, the role of prompt engineering in bolstering safety, Bloomberg’s multi-layered mitigation strategies, and vital areas for further work in improving AI safety within specialized domains.

The complete show notes for this episode can be found at https://twimlai.com/go/732.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Sebastian Gehrmann, head of responsible AI in the Office of the CTO at Bloomberg, to discuss AI safety in retrieval-augmented generation (RAG) systems and generative AI in high-stakes domains like financial services. We explore how RAG, contrary to some expectations, can inadvertently degrade model safety. We cover examples of unsafe outputs that can emerge from these systems, different approaches to evaluating these safety risks, and the potential reasons behind this counterintuitive behavior. Shifting to the application of generative AI in financial services, Sebastian outlines a domain-specific safety taxonomy designed for the industry's unique needs. We also explore the critical role of governance and regulatory frameworks in addressing these concerns, the role of prompt engineering in bolstering safety, Bloomberg’s multi-layered mitigation strategies, and vital areas for further work in improving AI safety within specialized domains.</p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/732"><u>https://twimlai.com/go/732</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3429</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c8baec9a-366d-11f0-b3cd-cfa03f7fbfbe]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9960265469.mp3?updated=1747852071"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>From Prompts to Policies: How RL Builds Better AI Agents with Mahesh Sathiamoorthy - #731</title>
      <link>https://twimlai.com/podcast/twimlai/from-prompts-to-policies-how-rl-builds-better-ai-agents/</link>
      <description>Today, we're joined by Mahesh Sathiamoorthy, co-founder and CEO of Bespoke Labs, to discuss how reinforcement learning (RL) is reshaping the way we build custom agents on top of foundation models. Mahesh highlights the crucial role of data curation, evaluation, and error analysis in model performance, and explains why RL offers a more robust alternative to prompting, and how it can improve multi-step tool use capabilities. We also explore the limitations of supervised fine-tuning (SFT) for tool-augmented reasoning tasks, the reward-shaping strategies they’ve used, and Bespoke Labs’ open-source libraries like Curator. We also touch on the models MiniCheck for hallucination detection and MiniChart for chart-based QA.

The complete show notes for this episode can be found at https://twimlai.com/go/731.</description>
      <pubDate>Tue, 13 May 2025 22:10:00 -0000</pubDate>
      <itunes:title>From Prompts to Policies: How RL Builds Better AI Agents with Mahesh Sathiamoorthy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>731</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ce537fdc-3041-11f0-a38b-1f8a0aa52e0f/image/bf4f8c0f75699704cce967a3aff78d03.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Mahesh Sathiamoorthy, co-founder and CEO of Bespoke Labs, to discuss how reinforcement learning (RL) is reshaping the way we build custom agents on top of foundation models. Mahesh highlights the crucial role of data curation, evaluation, and error analysis in model performance, and explains why RL offers a more robust alternative to prompting, and how it can improve multi-step tool use capabilities. We also explore the limitations of supervised fine-tuning (SFT) for tool-augmented reasoning tasks, the reward-shaping strategies they’ve used, and Bespoke Labs’ open-source libraries like Curator. We also touch on the models MiniCheck for hallucination detection and MiniChart for chart-based QA.

The complete show notes for this episode can be found at https://twimlai.com/go/731.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Mahesh Sathiamoorthy, co-founder and CEO of Bespoke Labs, to discuss how reinforcement learning (RL) is reshaping the way we build custom agents on top of foundation models. Mahesh highlights the crucial role of data curation, evaluation, and error analysis in model performance, and explains why RL offers a more robust alternative to prompting, and how it can improve multi-step tool use capabilities. We also explore the limitations of supervised fine-tuning (SFT) for tool-augmented reasoning tasks, the reward-shaping strategies they’ve used, and Bespoke Labs’ open-source libraries like Curator. We also touch on the models MiniCheck for hallucination detection and MiniChart for chart-based QA.</p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/731"><u>https://twimlai.com/go/731</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3685</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ce537fdc-3041-11f0-a38b-1f8a0aa52e0f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2707985480.mp3?updated=1747175429"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>How OpenAI Builds AI Agents That Think and Act with Josh Tobin - #730</title>
      <link>https://twimlai.com/podcast/twimlai/how-openai-builds-ai-agents-that-think-and-act/</link>
      <description>Today, we're joined by Josh Tobin, member of technical staff at OpenAI, to discuss the company’s approach to building AI agents. We cover OpenAI's three agentic offerings—Deep Research for comprehensive web research, Operator for website navigation, and Codex CLI for local code execution. We explore OpenAI’s shift from simple LLM workflows to reasoning models specifically trained for multi-step tasks through reinforcement learning, and how that enables agents to more easily recover from failures while executing complex processes. Josh shares insights on the practical applications of these agents, including some unexpected use cases. We also discuss the future of human-AI collaboration in software development, such as with "vibe coding," the integration of tools through the Model Context Protocol (MCP), and the significance of context management in AI-enabled IDEs. Additionally, we highlight the challenges of ensuring trust and safety as AI agents become more powerful and autonomous.

The complete show notes for this episode can be found at https://twimlai.com/go/730.</description>
      <pubDate>Tue, 06 May 2025 22:50:00 -0000</pubDate>
      <itunes:title>How OpenAI Builds AI Agents That Think and Act with Josh Tobin</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>730</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3c6eb696-2abe-11f0-9e04-0768f0fe95f2/image/723035b6f3dc6285ca7f7a7507b9c393.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Josh Tobin, member of technical staff at OpenAI, to discuss the company’s approach to building AI agents. We cover OpenAI's three agentic offerings—Deep Research for comprehensive web research, Operator for website navigation, and Codex CLI for local code execution. We explore OpenAI’s shift from simple LLM workflows to reasoning models specifically trained for multi-step tasks through reinforcement learning, and how that enables agents to more easily recover from failures while executing complex processes. Josh shares insights on the practical applications of these agents, including some unexpected use cases. We also discuss the future of human-AI collaboration in software development, such as with "vibe coding," the integration of tools through the Model Context Protocol (MCP), and the significance of context management in AI-enabled IDEs. Additionally, we highlight the challenges of ensuring trust and safety as AI agents become more powerful and autonomous.



The complete show notes for this episode can be found at https://twimlai.com/go/730.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Josh Tobin, member of technical staff at OpenAI, to discuss the company’s approach to building AI agents. We cover OpenAI's three agentic offerings—Deep Research for comprehensive web research, Operator for website navigation, and Codex CLI for local code execution. We explore OpenAI’s shift from simple LLM workflows to reasoning models specifically trained for multi-step tasks through reinforcement learning, and how that enables agents to more easily recover from failures while executing complex processes. Josh shares insights on the practical applications of these agents, including some unexpected use cases. We also discuss the future of human-AI collaboration in software development, such as with "vibe coding," the integration of tools through the Model Context Protocol (MCP), and the significance of context management in AI-enabled IDEs. Additionally, we highlight the challenges of ensuring trust and safety as AI agents become more powerful and autonomous.</p>
<p><br></p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/730"><u>https://twimlai.com/go/730</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>4047</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[3c6eb696-2abe-11f0-9e04-0768f0fe95f2]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6458824254.mp3?updated=1746809590"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>CTIBench: Evaluating LLMs in Cyber Threat Intelligence with Nidhi Rastogi - #729</title>
      <link>https://twimlai.com/podcast/twimlai/ctibench-evaluating-llms-in-cyber-threat-intelligence/</link>
      <description>Today, we're joined by Nidhi Rastogi, assistant professor at Rochester Institute of Technology, to discuss Cyber Threat Intelligence (CTI), focusing on her recent project CTIBench—a benchmark for evaluating LLMs on real-world CTI tasks. Nidhi explains the evolution of AI in cybersecurity, from rule-based systems to LLMs that accelerate analysis by providing critical context for threat detection and defense. We dig into the advantages and challenges of using LLMs in CTI, how techniques like Retrieval-Augmented Generation (RAG) are essential for keeping LLMs up-to-date with emerging threats, and how CTIBench measures LLMs’ ability to perform the real-world tasks of a cybersecurity analyst. We unpack the process of building the benchmark, the tasks it covers, and key findings from benchmarking various LLMs. Finally, Nidhi shares the importance of benchmarks in exposing model limitations and blind spots, the challenges of large-scale benchmarking, and the future directions of her AI4Sec Research Lab, including developing reliable mitigation techniques, monitoring "concept drift" in threat detection models, improving explainability in cybersecurity, and more.

The complete show notes for this episode can be found at https://twimlai.com/go/729.</description>
      <pubDate>Wed, 30 Apr 2025 07:21:00 -0000</pubDate>
      <itunes:title>CTIBench: Evaluating LLMs in Cyber Threat Intelligence with Nidhi Rastogi</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>729</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/e3e35aac-2538-11f0-8261-9b987302e363/image/b3b4365c4ba4908f128b7fbc216f5642.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Nidhi Rastogi, assistant professor at Rochester Institute of Technology, to discuss Cyber Threat Intelligence (CTI), focusing on her recent project CTIBench—a benchmark for evaluating LLMs on real-world CTI tasks. Nidhi explains the evolution of AI in cybersecurity, from rule-based systems to LLMs that accelerate analysis by providing critical context for threat detection and defense. We dig into the advantages and challenges of using LLMs in CTI, how techniques like Retrieval-Augmented Generation (RAG) are essential for keeping LLMs up-to-date with emerging threats, and how CTIBench measures LLMs’ ability to perform the real-world tasks of a cybersecurity analyst. We unpack the process of building the benchmark, the tasks it covers, and key findings from benchmarking various LLMs. Finally, Nidhi shares the importance of benchmarks in exposing model limitations and blind spots, the challenges of large-scale benchmarking, and the future directions of her AI4Sec Research Lab, including developing reliable mitigation techniques, monitoring "concept drift" in threat detection models, improving explainability in cybersecurity, and more.

The complete show notes for this episode can be found at https://twimlai.com/go/729.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Nidhi Rastogi, assistant professor at Rochester Institute of Technology, to discuss Cyber Threat Intelligence (CTI), focusing on her recent project CTIBench—a benchmark for evaluating LLMs on real-world CTI tasks. Nidhi explains the evolution of AI in cybersecurity, from rule-based systems to LLMs that accelerate analysis by providing critical context for threat detection and defense. We dig into the advantages and challenges of using LLMs in CTI, how techniques like Retrieval-Augmented Generation (RAG) are essential for keeping LLMs up-to-date with emerging threats, and how CTIBench measures LLMs’ ability to perform the real-world tasks of a cybersecurity analyst. We unpack the process of building the benchmark, the tasks it covers, and key findings from benchmarking various LLMs. Finally, Nidhi shares the importance of benchmarks in exposing model limitations and blind spots, the challenges of large-scale benchmarking, and the future directions of her AI4Sec Research Lab, including developing reliable mitigation techniques, monitoring "concept drift" in threat detection models, improving explainability in cybersecurity, and more.</p>
<p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/729"><u>https://twimlai.com/go/729</u></a>.</p>]]>
      </content:encoded>
      <itunes:duration>3378</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e3e35aac-2538-11f0-8261-9b987302e363]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6726544546.mp3?updated=1746051537"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Generative Benchmarking with Kelly Hong - #728</title>
      <link>https://twimlai.com/podcast/twimlai/generative-benchmarking/</link>
      <description>In this episode, Kelly Hong, a researcher at Chroma, joins us to discuss "Generative Benchmarking," a novel approach to evaluating retrieval systems, like RAG applications, using synthetic data. Kelly explains how traditional benchmarks like MTEB fail to represent real-world query patterns and how embedding models that perform well on public benchmarks often underperform in production. The conversation explores the two-step process of Generative Benchmarking: filtering documents to focus on relevant content and generating queries that mimic actual user behavior. Kelly shares insights from applying this approach to Weights &amp; Biases' technical support bot, revealing how domain-specific evaluation provides more accurate assessments of embedding model performance. We also discuss the importance of aligning LLM judges with human preferences, the impact of chunking strategies on retrieval effectiveness, and how production queries differ from benchmark queries in ambiguity and style. Throughout the episode, Kelly emphasizes the need for systematic evaluation approaches that go beyond "vibe checks" to help developers build more effective RAG applications.

The complete show notes for this episode can be found at https://twimlai.com/go/728.</description>
      <pubDate>Wed, 23 Apr 2025 22:09:00 -0000</pubDate>
      <itunes:title>Generative Benchmarking with Kelly Hong</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>728</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/870e33c6-2054-11f0-bcd7-6be99e3e06db/image/4cc1d60b0ed2f375cb7a0cebaefb2c43.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In this episode, Kelly Hong, a researcher at Chroma, joins us to discuss "Generative Benchmarking," a novel approach to evaluating retrieval systems, like RAG applications, using synthetic data. Kelly explains how traditional benchmarks like MTEB fail to represent real-world query patterns and how embedding models that perform well on public benchmarks often underperform in production. The conversation explores the two-step process of Generative Benchmarking: filtering documents to focus on relevant content and generating queries that mimic actual user behavior. Kelly shares insights from applying this approach to Weights &amp; Biases' technical support bot, revealing how domain-specific evaluation provides more accurate assessments of embedding model performance. We also discuss the importance of aligning LLM judges with human preferences, the impact of chunking strategies on retrieval effectiveness, and how production queries differ from benchmark queries in ambiguity and style. Throughout the episode, Kelly emphasizes the need for systematic evaluation approaches that go beyond "vibe checks" to help developers build more effective RAG applications.

The complete show notes for this episode can be found at https://twimlai.com/go/728.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, Kelly Hong, a researcher at Chroma, joins us to discuss "Generative Benchmarking," a novel approach to evaluating retrieval systems, like RAG applications, using synthetic data. Kelly explains how traditional benchmarks like MTEB fail to represent real-world query patterns and how embedding models that perform well on public benchmarks often underperform in production. The conversation explores the two-step process of Generative Benchmarking: filtering documents to focus on relevant content and generating queries that mimic actual user behavior. Kelly shares insights from applying this approach to Weights &amp; Biases' technical support bot, revealing how domain-specific evaluation provides more accurate assessments of embedding model performance. We also discuss the importance of aligning LLM judges with human preferences, the impact of chunking strategies on retrieval effectiveness, and how production queries differ from benchmark queries in ambiguity and style. Throughout the episode, Kelly emphasizes the need for systematic evaluation approaches that go beyond "vibe checks" to help developers build more effective RAG applications.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/728">https://twimlai.com/go/728</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3257</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[870e33c6-2054-11f0-bcd7-6be99e3e06db]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1226891394.mp3?updated=1745421739"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Exploring the Biology of LLMs with Circuit Tracing with Emmanuel Ameisen - #727</title>
      <link>https://twimlai.com/podcast/twimlai/exploring-the-biology-of-llms-with-circuit-tracing/</link>
      <description>In this episode, Emmanuel Ameisen, a research engineer at Anthropic, returns to discuss two recent papers: "Circuit Tracing: Revealing Language Model Computational Graphs" and "On the Biology of a Large Language Model." Emmanuel explains how his team developed mechanistic interpretability methods to understand the internal workings of Claude by replacing dense neural network components with sparse, interpretable alternatives. The conversation explores several fascinating discoveries about large language models, including how they plan ahead when writing poetry (selecting the rhyming word "rabbit" before crafting the sentence leading to it), perform mathematical calculations using unique algorithms, and process concepts across multiple languages using shared neural representations. Emmanuel details how the team can intervene in model behavior by manipulating specific neural pathways, revealing how concepts are distributed throughout the network's MLPs and attention mechanisms. The discussion highlights both capabilities and limitations of LLMs, showing how hallucinations occur through separate recognition and recall circuits, and demonstrates why chain-of-thought explanations aren't always faithful representations of the model's actual reasoning. This research ultimately supports Anthropic's safety strategy by providing a deeper understanding of how these AI systems actually work.

The complete show notes for this episode can be found at https://twimlai.com/go/727.</description>
      <pubDate>Mon, 14 Apr 2025 19:40:00 -0000</pubDate>
      <itunes:title>Exploring the Biology of LLMs with Circuit Tracing with Emmanuel Ameisen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>727</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6e1b47e8-1968-11f0-981f-ab6b34bc3918/image/bfaceda9194d6dcc689b339c74850c2a.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In this episode, Emmanuel Ameisen, a research engineer at Anthropic, returns to discuss two recent papers: "Circuit Tracing: Revealing Language Model Computational Graphs" and "On the Biology of a Large Language Model." Emmanuel explains how his team developed mechanistic interpretability methods to understand the internal workings of Claude by replacing dense neural network components with sparse, interpretable alternatives. The conversation explores several fascinating discoveries about large language models, including how they plan ahead when writing poetry (selecting the rhyming word "rabbit" before crafting the sentence leading to it), perform mathematical calculations using unique algorithms, and process concepts across multiple languages using shared neural representations. Emmanuel details how the team can intervene in model behavior by manipulating specific neural pathways, revealing how concepts are distributed throughout the network's MLPs and attention mechanisms. The discussion highlights both capabilities and limitations of LLMs, showing how hallucinations occur through separate recognition and recall circuits, and demonstrates why chain-of-thought explanations aren't always faithful representations of the model's actual reasoning. This research ultimately supports Anthropic's safety strategy by providing a deeper understanding of how these AI systems actually work.

The complete show notes for this episode can be found at https://twimlai.com/go/727.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, Emmanuel Ameisen, a research engineer at Anthropic, returns to discuss two recent papers: "Circuit Tracing: Revealing Language Model Computational Graphs" and "On the Biology of a Large Language Model." Emmanuel explains how his team developed mechanistic interpretability methods to understand the internal workings of Claude by replacing dense neural network components with sparse, interpretable alternatives. The conversation explores several fascinating discoveries about large language models, including how they plan ahead when writing poetry (selecting the rhyming word "rabbit" before crafting the sentence leading to it), perform mathematical calculations using unique algorithms, and process concepts across multiple languages using shared neural representations. Emmanuel details how the team can intervene in model behavior by manipulating specific neural pathways, revealing how concepts are distributed throughout the network's MLPs and attention mechanisms. The discussion highlights both capabilities and limitations of LLMs, showing how hallucinations occur through separate recognition and recall circuits, and demonstrates why chain-of-thought explanations aren't always faithful representations of the model's actual reasoning. This research ultimately supports Anthropic's safety strategy by providing a deeper understanding of how these AI systems actually work.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/727">https://twimlai.com/go/727</a>.</p>]]>
      </content:encoded>
      <itunes:duration>5646</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6e1b47e8-1968-11f0-981f-ab6b34bc3918]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9142792837.mp3?updated=1744660777"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Teaching LLMs to Self-Reflect with Reinforcement Learning with Maohao Shen - #726</title>
      <link>https://twimlai.com/podcast/twimlai/teaching-llms-to-self-reflect-with-reinforcement-learning/</link>
      <description>Today, we're joined by Maohao Shen, PhD student at MIT, to discuss his paper, “Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search.” We dig into how Satori leverages reinforcement learning to improve language model reasoning—enabling model self-reflection, self-correction, and exploration of alternative solutions. We explore the Chain-of-Action-Thought (COAT) approach, which uses special tokens—continue, reflect, and explore—to guide the model through distinct reasoning actions, allowing it to navigate complex reasoning tasks without external supervision. We also break down Satori’s two-stage training process: format tuning, which teaches the model to understand and utilize the special action tokens, and reinforcement learning, which optimizes reasoning through trial-and-error self-improvement. We cover key techniques such as “restart and explore,” which allows the model to self-correct and generalize beyond its training domain. Finally, Maohao reviews Satori’s performance and how it compares to other models, the reward design, the benchmarks used, and the surprising observations made during the research.

The complete show notes for this episode can be found at https://twimlai.com/go/726.</description>
      <pubDate>Tue, 08 Apr 2025 07:38:00 -0000</pubDate>
      <itunes:title>Teaching LLMs to Self-Reflect with Reinforcement Learning with Maohao Shen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>726</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/13c0079c-13f6-11f0-b371-5f9286ba2701/image/58daee23e5572b8d5cda0aff4c16305d.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Maohao Shen, PhD student at MIT, to discuss his paper, “Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search.” We dig into how Satori leverages reinforcement learning to improve language model reasoning—enabling model self-reflection, self-correction, and exploration of alternative solutions. We explore the Chain-of-Action-Thought (COAT) approach, which uses special tokens—continue, reflect, and explore—to guide the model through distinct reasoning actions, allowing it to navigate complex reasoning tasks without external supervision. We also break down Satori’s two-stage training process: format tuning, which teaches the model to understand and utilize the special action tokens, and reinforcement learning, which optimizes reasoning through trial-and-error self-improvement. We cover key techniques such as “restart and explore,” which allows the model to self-correct and generalize beyond its training domain. Finally, Maohao reviews Satori’s performance and how it compares to other models, the reward design, the benchmarks used, and the surprising observations made during the research.

The complete show notes for this episode can be found at https://twimlai.com/go/726.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Maohao Shen, PhD student at MIT, to discuss his paper, “Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search.” We dig into how Satori leverages reinforcement learning to improve language model reasoning—enabling model self-reflection, self-correction, and exploration of alternative solutions. We explore the Chain-of-Action-Thought (COAT) approach, which uses special tokens—continue, reflect, and explore—to guide the model through distinct reasoning actions, allowing it to navigate complex reasoning tasks without external supervision. We also break down Satori’s two-stage training process: format tuning, which teaches the model to understand and utilize the special action tokens, and reinforcement learning, which optimizes reasoning through trial-and-error self-improvement. We cover key techniques such as “restart and explore,” which allows the model to self-correct and generalize beyond its training domain. Finally, Maohao reviews Satori’s performance and how it compares to other models, the reward design, the benchmarks used, and the surprising observations made during the research.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/726">https://twimlai.com/go/726</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3105</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[13c0079c-13f6-11f0-b371-5f9286ba2701]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2559598700.mp3?updated=1744061785"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Waymo's Foundation Model for Autonomous Driving with Drago Anguelov - #725</title>
      <link>https://twimlai.com/podcast/twimlai/waymos-foundation-model-for-autonomous-driving/</link>
      <description>Today, we're joined by Drago Anguelov, head of AI foundations at Waymo, for a deep dive into the role of foundation models in autonomous driving. Drago shares how Waymo is leveraging large-scale machine learning, including vision-language models and generative AI techniques, to improve perception, planning, and simulation for its self-driving vehicles. The conversation explores the evolution of Waymo’s research stack, their custom “Waymo Foundation Model,” and how they’re incorporating multimodal sensor data from lidar, radar, and cameras into advanced AI systems. Drago also discusses how Waymo ensures safety at scale with rigorous validation frameworks, predictive world models, and realistic simulation environments. Finally, we touch on the challenges of generalization across cities, freeway driving, end-to-end learning vs. modular architectures, and the future of AV testing through ML-powered simulation.

The complete show notes for this episode can be found at https://twimlai.com/go/725.</description>
      <pubDate>Mon, 31 Mar 2025 19:46:00 -0000</pubDate>
      <itunes:title>Waymo's Foundation Model for Autonomous Driving with Drago Anguelov</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>725</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c13d297a-0e67-11f0-ae82-bf5e34aafff0/image/0cf262f2d527918851e41eeeb6aeffa1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Drago Anguelov, head of AI foundations at Waymo, for a deep dive into the role of foundation models in autonomous driving. Drago shares how Waymo is leveraging large-scale machine learning, including vision-language models and generative AI techniques, to improve perception, planning, and simulation for its self-driving vehicles. The conversation explores the evolution of Waymo’s research stack, their custom “Waymo Foundation Model,” and how they’re incorporating multimodal sensor data from lidar, radar, and cameras into advanced AI systems. Drago also discusses how Waymo ensures safety at scale with rigorous validation frameworks, predictive world models, and realistic simulation environments. Finally, we touch on the challenges of generalization across cities, freeway driving, end-to-end learning vs. modular architectures, and the future of AV testing through ML-powered simulation.

The complete show notes for this episode can be found at https://twimlai.com/go/725.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Drago Anguelov, head of AI foundations at Waymo, for a deep dive into the role of foundation models in autonomous driving. Drago shares how Waymo is leveraging large-scale machine learning, including vision-language models and generative AI techniques, to improve perception, planning, and simulation for its self-driving vehicles. The conversation explores the evolution of Waymo’s research stack, their custom “Waymo Foundation Model,” and how they’re incorporating multimodal sensor data from lidar, radar, and cameras into advanced AI systems. Drago also discusses how Waymo ensures safety at scale with rigorous validation frameworks, predictive world models, and realistic simulation environments. Finally, we touch on the challenges of generalization across cities, freeway driving, end-to-end learning vs. modular architectures, and the future of AV testing through ML-powered simulation.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/725">https://twimlai.com/go/725</a>.</p>]]>
      </content:encoded>
      <itunes:duration>4147</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c13d297a-0e67-11f0-ae82-bf5e34aafff0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6369365749.mp3?updated=1743451085"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Dynamic Token Merging for Efficient Byte-level Language Models with Julie Kallini - #724</title>
      <link>https://twimlai.com/podcast/twimlai/dynamic-token-merging-for-efficient-byte-level-language-models/</link>
      <description>Today, we're joined by Julie Kallini, PhD student at Stanford University, to discuss her recent papers, “MrT5: Dynamic Token Merging for Efficient Byte-level Language Models” and “Mission: Impossible Language Models.” For the MrT5 paper, we explore the importance and failings of tokenization in large language models—including inefficient compression rates for under-resourced languages—and dig into byte-level modeling as an alternative. We discuss the architecture of MrT5, its ability to learn language-specific compression rates, and its performance and efficiency on multilingual benchmarks and character-level manipulation tasks. For the “Mission: Impossible Language Models” paper, we review the core idea behind the research, the definition and creation of impossible languages, the construction of impossible-language training datasets, and explore the bias of language model architectures towards natural language.

The complete show notes for this episode can be found at https://twimlai.com/go/724.</description>
      <pubDate>Mon, 24 Mar 2025 19:42:00 -0000</pubDate>
      <itunes:title>Dynamic Token Merging for Efficient Byte-level Language Models with Julie Kallini</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>724</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3bee63c8-08e6-11f0-9b9c-670926e7c0c0/image/402eff8e705c707aeddb1123616286b1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Julie Kallini, PhD student at Stanford University, to discuss her recent papers, “MrT5: Dynamic Token Merging for Efficient Byte-level Language Models” and “Mission: Impossible Language Models.” For the MrT5 paper, we explore the importance and failings of tokenization in large language models—including inefficient compression rates for under-resourced languages—and dig into byte-level modeling as an alternative. We discuss the architecture of MrT5, its ability to learn language-specific compression rates, and its performance and efficiency on multilingual benchmarks and character-level manipulation tasks. For the “Mission: Impossible Language Models” paper, we review the core idea behind the research, the definition and creation of impossible languages, the construction of impossible-language training datasets, and explore the bias of language model architectures towards natural language.

The complete show notes for this episode can be found at https://twimlai.com/go/724.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Julie Kallini, PhD student at Stanford University, to discuss her recent papers, “MrT5: Dynamic Token Merging for Efficient Byte-level Language Models” and “Mission: Impossible Language Models.” For the MrT5 paper, we explore the importance and failings of tokenization in large language models—including inefficient compression rates for under-resourced languages—and dig into byte-level modeling as an alternative. We discuss the architecture of MrT5, its ability to learn language-specific compression rates, its performance on multilingual benchmarks and character-level manipulation tasks, and its overall efficiency. For the “Mission: Impossible Language Models” paper, we review the core idea behind the research, the definition and creation of impossible languages, the construction of impossible language training datasets, and explore the bias of language model architectures towards natural language.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/724">https://twimlai.com/go/724</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3032</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[3bee63c8-08e6-11f0-9b9c-670926e7c0c0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6993632573.mp3?updated=1742845563"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scaling Up Test-Time Compute with Latent Reasoning with Jonas Geiping - #723</title>
      <link>https://twimlai.com/podcast/twimlai/scaling-up-test-time-compute-with-latent-reasoning/</link>
      <description>Today, we're joined by Jonas Geiping, research group leader at the Ellis Institute and the Max Planck Institute for Intelligent Systems, to discuss his recent paper, “Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach.” This paper proposes a novel language model architecture which uses recurrent depth to enable “thinking in latent space.” We dig into “internal reasoning” versus “verbalized reasoning”—analogous to non-verbalized and verbalized thinking in humans—and discuss how the model searches in latent space to predict the next token and dynamically allocates more compute based on token difficulty. We also explore how the recurrent depth architecture simplifies LLMs, the parallels to diffusion models, the model's performance on reasoning tasks, the challenges of comparing models with varying compute budgets, and architectural advantages such as zero-shot adaptive exits and natural speculative decoding.

The complete show notes for this episode can be found at https://twimlai.com/go/723.</description>
      <pubDate>Mon, 17 Mar 2025 15:37:00 -0000</pubDate>
      <itunes:title>Scaling Up Test-Time Compute with Latent Reasoning with Jonas Geiping</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>723</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/75faa452-0344-11f0-8e79-1fe3636bd481/image/d5ee05ea185ebd110fc59c22311e5fab.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Jonas Geiping, research group leader at the Ellis Institute and the Max Planck Institute for Intelligent Systems, to discuss his recent paper, “Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach.” This paper proposes a novel language model architecture which uses recurrent depth to enable “thinking in latent space.” We dig into “internal reasoning” versus “verbalized reasoning”—analogous to non-verbalized and verbalized thinking in humans—and discuss how the model searches in latent space to predict the next token and dynamically allocates more compute based on token difficulty. We also explore how the recurrent depth architecture simplifies LLMs, the parallels to diffusion models, the model's performance on reasoning tasks, the challenges of comparing models with varying compute budgets, and architectural advantages such as zero-shot adaptive exits and natural speculative decoding.

The complete show notes for this episode can be found at https://twimlai.com/go/723.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Jonas Geiping, research group leader at the Ellis Institute and the Max Planck Institute for Intelligent Systems, to discuss his recent paper, “Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach.” This paper proposes a novel language model architecture which uses recurrent depth to enable “thinking in latent space.” We dig into “internal reasoning” versus “verbalized reasoning”—analogous to non-verbalized and verbalized thinking in humans—and discuss how the model searches in latent space to predict the next token and dynamically allocates more compute based on token difficulty. We also explore how the recurrent depth architecture simplifies LLMs, the parallels to diffusion models, the model's performance on reasoning tasks, the challenges of comparing models with varying compute budgets, and architectural advantages such as zero-shot adaptive exits and natural speculative decoding.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/723">https://twimlai.com/go/723</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3518</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[75faa452-0344-11f0-8e79-1fe3636bd481]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5952508288.mp3?updated=1742226663"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Imagine while Reasoning in Space: Multimodal Visualization-of-Thought with Chengzu Li - #722</title>
      <link>https://twimlai.com/podcast/twimlai/imagine-while-reasoning-in-space-multimodal-visualization-of-thought/</link>
      <description>Today, we're joined by Chengzu Li, PhD student at the University of Cambridge, to discuss his recent paper, “Imagine while Reasoning in Space: Multimodal Visualization-of-Thought.” We explore the motivations behind MVoT, its connection to prior work like TopViewRS, and its relation to cognitive science principles such as dual coding theory. We dig into the MVoT framework along with its various task environments—maze, mini-behavior, and frozen lake. We explore token discrepancy loss, a technique designed to align language and visual embeddings, ensuring accurate and meaningful visual representations. Additionally, we cover the data collection and training process, reasoning over relative spatial relations between different entities, and dynamic spatial reasoning. Lastly, Chengzu shares insights from experiments with MVoT, focusing on the lessons learned and the potential for applying these models in real-world scenarios like robotics and architectural design.

The complete show notes for this episode can be found at https://twimlai.com/go/722.</description>
      <pubDate>Mon, 10 Mar 2025 17:44:00 -0000</pubDate>
      <itunes:title>Imagine while Reasoning in Space: Multimodal Visualization-of-Thought with Chengzu Li</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>722</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/396e69e2-fdd6-11ef-82ab-87ca2358621d/image/8df54bf3daaefe7a97e670094c3eb799.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Chengzu Li, PhD student at the University of Cambridge, to discuss his recent paper, “Imagine while Reasoning in Space: Multimodal Visualization-of-Thought.” We explore the motivations behind MVoT, its connection to prior work like TopViewRS, and its relation to cognitive science principles such as dual coding theory. We dig into the MVoT framework along with its various task environments—maze, mini-behavior, and frozen lake. We explore token discrepancy loss, a technique designed to align language and visual embeddings, ensuring accurate and meaningful visual representations. Additionally, we cover the data collection and training process, reasoning over relative spatial relations between different entities, and dynamic spatial reasoning. Lastly, Chengzu shares insights from experiments with MVoT, focusing on the lessons learned and the potential for applying these models in real-world scenarios like robotics and architectural design.

The complete show notes for this episode can be found at https://twimlai.com/go/722.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Chengzu Li, PhD student at the University of Cambridge, to discuss his recent paper, “Imagine while Reasoning in Space: Multimodal Visualization-of-Thought.” We explore the motivations behind MVoT, its connection to prior work like TopViewRS, and its relation to cognitive science principles such as dual coding theory. We dig into the MVoT framework along with its various task environments—maze, mini-behavior, and frozen lake. We explore token discrepancy loss, a technique designed to align language and visual embeddings, ensuring accurate and meaningful visual representations. Additionally, we cover the data collection and training process, reasoning over relative spatial relations between different entities, and dynamic spatial reasoning. Lastly, Chengzu shares insights from experiments with MVoT, focusing on the lessons learned and the potential for applying these models in real-world scenarios like robotics and architectural design.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/722">https://twimlai.com/go/722</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2531</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[396e69e2-fdd6-11ef-82ab-87ca2358621d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3172764469.mp3?updated=1741629167"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Inside s1: An o1-Style Reasoning Model That Cost Under $50 to Train with Niklas Muennighoff - #721</title>
      <link>https://twimlai.com/podcast/twimlai/inside-s1-an-o1-style-reasoning-model-that-cost-under-50-to-train/</link>
      <description>Today, we're joined by Niklas Muennighoff, a PhD student at Stanford University, to discuss his paper, “S1: Simple Test-Time Scaling.” We explore the motivations behind S1, as well as how it compares to OpenAI's O1 and DeepSeek's R1 models. We dig into the different approaches to test-time scaling, including parallel and sequential scaling, as well as S1’s data curation process, its training recipe, and its use of model distillation from Google Gemini and DeepSeek R1. We explore the novel "budget forcing" technique developed in the paper, which allows the model to think longer on harder problems and optimize test-time compute for better performance. Additionally, we cover the evaluation benchmarks used, the comparison between supervised fine-tuning and reinforcement learning, and similar projects like the Hugging Face Open R1 project. Finally, we discuss the open-sourcing of S1 and its future directions.

The complete show notes for this episode can be found at https://twimlai.com/go/721.</description>
      <pubDate>Mon, 03 Mar 2025 23:56:03 -0000</pubDate>
      <itunes:title>Inside s1: An o1-Style Reasoning Model That Cost Under $50 to Train with Niklas Muennighoff</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>721</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/38f2bdb6-f885-11ef-b7ce-8f85e73103b8/image/7218b1de422d4bab24685209bc980a41.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Niklas Muennighoff, a PhD student at Stanford University, to discuss his paper, “S1: Simple Test-Time Scaling.” We explore the motivations behind S1, as well as how it compares to OpenAI's O1 and DeepSeek's R1 models. We dig into the different approaches to test-time scaling, including parallel and sequential scaling, as well as S1’s data curation process, its training recipe, and its use of model distillation from Google Gemini and DeepSeek R1. We explore the novel "budget forcing" technique developed in the paper, which allows the model to think longer on harder problems and optimize test-time compute for better performance. Additionally, we cover the evaluation benchmarks used, the comparison between supervised fine-tuning and reinforcement learning, and similar projects like the Hugging Face Open R1 project. Finally, we discuss the open-sourcing of S1 and its future directions.

The complete show notes for this episode can be found at https://twimlai.com/go/721.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Niklas Muennighoff, a PhD student at Stanford University, to discuss his paper, “S1: Simple Test-Time Scaling.” We explore the motivations behind S1, as well as how it compares to OpenAI's O1 and DeepSeek's R1 models. We dig into the different approaches to test-time scaling, including parallel and sequential scaling, as well as S1’s data curation process, its training recipe, and its use of model distillation from Google Gemini and DeepSeek R1. We explore the novel "budget forcing" technique developed in the paper, which allows the model to think longer on harder problems and optimize test-time compute for better performance. Additionally, we cover the evaluation benchmarks used, the comparison between supervised fine-tuning and reinforcement learning, and similar projects like the Hugging Face Open R1 project. Finally, we discuss the open-sourcing of S1 and its future directions.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/721">https://twimlai.com/go/721</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2969</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[38f2bdb6-f885-11ef-b7ce-8f85e73103b8]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4321517135.mp3?updated=1741045497"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Accelerating AI Training and Inference with AWS Trainium2 with Ron Diamant - #720</title>
      <link>https://twimlai.com/podcast/twimlai/accelerating-ai-training-and-inference-with-aws-trainium2/</link>
      <description>Today, we're joined by Ron Diamant, chief architect for Trainium at Amazon Web Services, to discuss hardware acceleration for generative AI and the design and role of the recently released Trainium2 chip. We explore the architectural differences between Trainium and GPUs, highlighting its systolic array-based compute design, and how it balances performance across key dimensions like compute, memory bandwidth, memory capacity, and network bandwidth. We also discuss the Trainium tooling ecosystem including the Neuron SDK, Neuron Compiler, and Neuron Kernel Interface (NKI). We then dig into the various ways Trainium2 is offered, including Trn2 instances, UltraServers, and UltraClusters, and access through managed services like AWS Bedrock. Finally, we cover sparsity optimizations, customer adoption, performance benchmarks, support for Mixture of Experts (MoE) models, and what’s next for Trainium.

The complete show notes for this episode can be found at https://twimlai.com/go/720.</description>
      <pubDate>Mon, 24 Feb 2025 18:01:00 -0000</pubDate>
      <itunes:title>Accelerating AI Training and Inference with AWS Trainium2 with Ron Diamant</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>720</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/49fb6a42-f2d4-11ef-87ac-53e9229ffa78/image/d033362136d14c116f71393dce7c08a0.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Ron Diamant, chief architect for Trainium at Amazon Web Services, to discuss hardware acceleration for generative AI and the design and role of the recently released Trainium2 chip. We explore the architectural differences between Trainium and GPUs, highlighting its systolic array-based compute design, and how it balances performance across key dimensions like compute, memory bandwidth, memory capacity, and network bandwidth. We also discuss the Trainium tooling ecosystem including the Neuron SDK, Neuron Compiler, and Neuron Kernel Interface (NKI). We then dig into the various ways Trainium2 is offered, including Trn2 instances, UltraServers, and UltraClusters, and access through managed services like AWS Bedrock. Finally, we cover sparsity optimizations, customer adoption, performance benchmarks, support for Mixture of Experts (MoE) models, and what’s next for Trainium.

The complete show notes for this episode can be found at https://twimlai.com/go/720.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Ron Diamant, chief architect for Trainium at Amazon Web Services, to discuss hardware acceleration for generative AI and the design and role of the recently released Trainium2 chip. We explore the architectural differences between Trainium and GPUs, highlighting its systolic array-based compute design, and how it balances performance across key dimensions like compute, memory bandwidth, memory capacity, and network bandwidth. We also discuss the Trainium tooling ecosystem including the Neuron SDK, Neuron Compiler, and Neuron Kernel Interface (NKI). We then dig into the various ways Trainium2 is offered, including Trn2 instances, UltraServers, and UltraClusters, and access through managed services like AWS Bedrock. Finally, we cover sparsity optimizations, customer adoption, performance benchmarks, support for Mixture of Experts (MoE) models, and what’s next for Trainium.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/720">https://twimlai.com/go/720</a>.</p>]]>
      </content:encoded>
      <itunes:duration>4025</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[49fb6a42-f2d4-11ef-87ac-53e9229ffa78]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1031417331.mp3?updated=1740420711"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>π0: A Foundation Model for Robotics with Sergey Levine - #719</title>
      <link>https://twimlai.com/podcast/twimlai/%cf%800-a-foundation-model-for-robotics/</link>
      <description>Today, we're joined by Sergey Levine, associate professor at UC Berkeley and co-founder of Physical Intelligence, to discuss π0 (pi-zero), a general-purpose robotic foundation model. We dig into the model architecture, which pairs a vision language model (VLM) with a diffusion-based action expert, and the model training "recipe," emphasizing the roles of pre-training and post-training with a diverse mixture of real-world data to ensure robust and intelligent robot learning. We review the data collection approach, which uses human operators and teleoperation rigs, the potential of synthetic data and reinforcement learning in enhancing robotic capabilities, and much more. We also introduce the team’s new FAST tokenizer, which opens the door to a fully Transformer-based model and significant improvements in learning and generalization. Finally, we cover the open-sourcing of π0 and future directions for their research.

The complete show notes for this episode can be found at https://twimlai.com/go/719.</description>
      <pubDate>Tue, 18 Feb 2025 07:46:21 -0000</pubDate>
      <itunes:title>π0: A Foundation Model for Robotics with Sergey Levine</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>719</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/af625406-ed82-11ef-ba20-478ca757bf5d/image/6560059b02d879ad17e5d1b4c314acf6.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Sergey Levine, associate professor at UC Berkeley and co-founder of Physical Intelligence, to discuss π0 (pi-zero), a general-purpose robotic foundation model. We dig into the model architecture, which pairs a vision language model (VLM) with a diffusion-based action expert, and the model training "recipe," emphasizing the roles of pre-training and post-training with a diverse mixture of real-world data to ensure robust and intelligent robot learning. We review the data collection approach, which uses human operators and teleoperation rigs, the potential of synthetic data and reinforcement learning in enhancing robotic capabilities, and much more. We also introduce the team’s new FAST tokenizer, which opens the door to a fully Transformer-based model and significant improvements in learning and generalization. Finally, we cover the open-sourcing of π0 and future directions for their research.

The complete show notes for this episode can be found at https://twimlai.com/go/719.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Sergey Levine, associate professor at UC Berkeley and co-founder of Physical Intelligence, to discuss π0 (pi-zero), a general-purpose robotic foundation model. We dig into the model architecture, which pairs a vision language model (VLM) with a diffusion-based action expert, and the model training "recipe," emphasizing the roles of pre-training and post-training with a diverse mixture of real-world data to ensure robust and intelligent robot learning. We review the data collection approach, which uses human operators and teleoperation rigs, the potential of synthetic data and reinforcement learning in enhancing robotic capabilities, and much more. We also introduce the team’s new FAST tokenizer, which opens the door to a fully Transformer-based model and significant improvements in learning and generalization. Finally, we cover the open-sourcing of π0 and future directions for their research.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/719">https://twimlai.com/go/719</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3150</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[af625406-ed82-11ef-ba20-478ca757bf5d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2066349156.mp3?updated=1739834128"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Trends 2025: AI Agents and Multi-Agent Systems with Victor Dibia - #718</title>
      <link>https://twimlai.com/podcast/twimlai/ai-trends-2025-ai-agents-and-multi-agent-systems/</link>
      <description>Today, we’re joined by Victor Dibia, principal research software engineer at Microsoft Research, to explore the key trends and advancements in AI agents and multi-agent systems shaping 2025 and beyond. In this episode, we discuss the unique abilities that set AI agents apart from traditional software systems: reasoning, acting, communicating, and adapting. We also examine the rise of agentic foundation models, the emergence of interface agents like Claude with Computer Use and OpenAI Operator, the shift from simple task chains to complex workflows, and the growing range of enterprise use cases. Victor shares insights into emerging design patterns for autonomous multi-agent systems, including graph and message-driven architectures, the advantages of the “actor model” pattern as implemented in Microsoft’s AutoGen, and guidance on how users should approach the “build vs. buy” decision when working with AI agent frameworks. We also address the challenges of evaluating end-to-end agent performance, the complexities of benchmarking agentic systems, and the implications of our reliance on LLMs as judges. Finally, we look ahead to the future of AI agents in 2025 and beyond, discuss emerging HCI challenges, their potential for impact on the workforce, and how they are poised to reshape fields like software engineering.

The complete show notes for this episode can be found at https://twimlai.com/go/718.</description>
      <pubDate>Mon, 10 Feb 2025 18:12:00 -0000</pubDate>
      <itunes:title>AI Trends 2025: AI Agents and Multi-Agent Systems with Victor Dibia</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>718</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/1a6ae728-e7d8-11ef-901c-5bc62f6435fd/image/5a197803a85201275f40a8ba16184e25.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we’re joined by Victor Dibia, principal research software engineer at Microsoft Research, to explore the key trends and advancements in AI agents and multi-agent systems shaping 2025 and beyond. In this episode, we discuss the unique abilities that set AI agents apart from traditional software systems: reasoning, acting, communicating, and adapting. We also examine the rise of agentic foundation models, the emergence of interface agents like Claude with Computer Use and OpenAI Operator, the shift from simple task chains to complex workflows, and the growing range of enterprise use cases. Victor shares insights into emerging design patterns for autonomous multi-agent systems, including graph and message-driven architectures, the advantages of the “actor model” pattern as implemented in Microsoft’s AutoGen, and guidance on how users should approach the “build vs. buy” decision when working with AI agent frameworks. We also address the challenges of evaluating end-to-end agent performance, the complexities of benchmarking agentic systems, and the implications of our reliance on LLMs as judges. Finally, we look ahead to the future of AI agents in 2025 and beyond, discuss emerging HCI challenges, their potential for impact on the workforce, and how they are poised to reshape fields like software engineering.

The complete show notes for this episode can be found at https://twimlai.com/go/718.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we’re joined by Victor Dibia, principal research software engineer at Microsoft Research, to explore the key trends and advancements in AI agents and multi-agent systems shaping 2025 and beyond. In this episode, we discuss the unique abilities that set AI agents apart from traditional software systems: reasoning, acting, communicating, and adapting. We also examine the rise of agentic foundation models, the emergence of interface agents like Claude with Computer Use and OpenAI Operator, the shift from simple task chains to complex workflows, and the growing range of enterprise use cases. Victor shares insights into emerging design patterns for autonomous multi-agent systems, including graph and message-driven architectures, the advantages of the “actor model” pattern as implemented in Microsoft’s AutoGen, and guidance on how users should approach the “build vs. buy” decision when working with AI agent frameworks. We also address the challenges of evaluating end-to-end agent performance, the complexities of benchmarking agentic systems, and the implications of our reliance on LLMs as judges. Finally, we look ahead to the future of AI agents in 2025 and beyond, discuss emerging HCI challenges, their potential for impact on the workforce, and how they are poised to reshape fields like software engineering.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/718">https://twimlai.com/go/718</a>.</p>]]>
      </content:encoded>
      <itunes:duration>6299</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1a6ae728-e7d8-11ef-901c-5bc62f6435fd]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9635186515.mp3?updated=1739211936"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Speculative Decoding and Efficient LLM Inference with Chris Lott - #717</title>
      <link>https://twimlai.com/podcast/twimlai/speculative-decoding-and-efficient-llm-inference/</link>
      <description>Today, we're joined by Chris Lott, senior director of engineering at Qualcomm AI Research, to discuss accelerating large language model inference. We explore the challenges presented by LLM encoding and decoding (aka generation) and how these interact with various hardware constraints such as FLOPS, memory footprint, and memory bandwidth to limit key inference metrics such as time-to-first-token, tokens per second, and tokens per joule. We then dig into a variety of techniques that can be used to accelerate inference such as KV compression, quantization, pruning, speculative decoding, and leveraging small language models (SLMs). We also discuss future directions for enabling on-device agentic experiences such as parallel generation and software tools like Qualcomm AI Orchestrator.

The complete show notes for this episode can be found at https://twimlai.com/go/717.</description>
      <pubDate>Tue, 04 Feb 2025 07:23:33 -0000</pubDate>
      <itunes:title>Speculative Decoding and Efficient LLM Inference with Chris Lott</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>717</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/dbc81efc-e271-11ef-83d6-fba112900069/image/6c59af1beaeb8ab41fbe3b9eac877c88.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Chris Lott, senior director of engineering at Qualcomm AI Research to discuss accelerating large language model inference. We explore the challenges presented by LLM encoding and decoding (aka generation), and how these interact with various hardware constraints such as FLOPS, memory footprint, and memory bandwidth to limit key inference metrics such as time-to-first-token, tokens per second, and tokens per joule. We then dig into a variety of techniques that can be used to accelerate inference such as KV compression, quantization, pruning, speculative decoding, and leveraging small language models (SLMs). We also discuss future directions for enabling on-device agentic experiences such as parallel generation and software tools like Qualcomm AI Orchestrator.

The complete show notes for this episode can be found at https://twimlai.com/go/717.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Chris Lott, senior director of engineering at Qualcomm AI Research to discuss accelerating large language model inference. We explore the challenges presented by LLM encoding and decoding (aka generation), and how these interact with various hardware constraints such as FLOPS, memory footprint, and memory bandwidth to limit key inference metrics such as time-to-first-token, tokens per second, and tokens per joule. We then dig into a variety of techniques that can be used to accelerate inference such as KV compression, quantization, pruning, speculative decoding, and leveraging small language models (SLMs). We also discuss future directions for enabling on-device agentic experiences such as parallel generation and software tools like Qualcomm AI Orchestrator.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/717">https://twimlai.com/go/717</a>.</p>]]>
      </content:encoded>
      <itunes:duration>4590</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[dbc81efc-e271-11ef-83d6-fba112900069]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3928866342.mp3?updated=1738618160"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Ensuring Privacy for Any LLM with Patricia Thaine - #716</title>
      <link>https://twimlai.com/podcast/twimlai/ensuring-privacy-for-any-llm/</link>
      <description>Today, we're joined by Patricia Thaine, co-founder and CEO of Private AI to discuss techniques for ensuring privacy, data minimization, and compliance when using 3rd-party large language models (LLMs) and other AI services. We explore the risks of data leakage from LLMs and embeddings, the complexities of identifying and redacting personal information across various data flows, and the approach Private AI has taken to mitigate these risks. We also dig into the challenges of entity recognition in multimodal systems including OCR files, documents, images, and audio, and the importance of data quality and model accuracy. Additionally, Patricia shares insights on the limitations of data anonymization, the benefits of balancing real-world and synthetic data in model training and development, and the relationship between privacy and bias in AI. Finally, we touch on the evolving landscape of AI regulations like GDPR, CPRA, and the EU AI Act, and the future of privacy in artificial intelligence.

The complete show notes for this episode can be found at https://twimlai.com/go/716.</description>
      <pubDate>Tue, 28 Jan 2025 22:31:50 -0000</pubDate>
      <itunes:title>Ensuring Privacy for Any LLM with Patricia Thaine</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>716</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c3432674-ddbc-11ef-8779-6385e7f8227c/image/9f07b213b272351dd6be5fac4159a17b.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Patricia Thaine, co-founder and CEO of Private AI to discuss techniques for ensuring privacy, data minimization, and compliance when using 3rd-party large language models (LLMs) and other AI services. We explore the risks of data leakage from LLMs and embeddings, the complexities of identifying and redacting personal information across various data flows, and the approach Private AI has taken to mitigate these risks. We also dig into the challenges of entity recognition in multimodal systems including OCR files, documents, images, and audio, and the importance of data quality and model accuracy. Additionally, Patricia shares insights on the limitations of data anonymization, the benefits of balancing real-world and synthetic data in model training and development, and the relationship between privacy and bias in AI. Finally, we touch on the evolving landscape of AI regulations like GDPR, CPRA, and the EU AI Act, and the future of privacy in artificial intelligence.

The complete show notes for this episode can be found at https://twimlai.com/go/716.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Patricia Thaine, co-founder and CEO of Private AI to discuss techniques for ensuring privacy, data minimization, and compliance when using 3rd-party large language models (LLMs) and other AI services. We explore the risks of data leakage from LLMs and embeddings, the complexities of identifying and redacting personal information across various data flows, and the approach Private AI has taken to mitigate these risks. We also dig into the challenges of entity recognition in multimodal systems including OCR files, documents, images, and audio, and the importance of data quality and model accuracy. Additionally, Patricia shares insights on the limitations of data anonymization, the benefits of balancing real-world and synthetic data in model training and development, and the relationship between privacy and bias in AI. Finally, we touch on the evolving landscape of AI regulations like GDPR, CPRA, and the EU AI Act, and the future of privacy in artificial intelligence.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/716">https://twimlai.com/go/716</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3093</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c3432674-ddbc-11ef-8779-6385e7f8227c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9741894801.mp3?updated=1738101399"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Engineering Pitfalls with Chip Huyen - #715</title>
      <link>https://twimlai.com/podcast/twimlai/ai-engineering-pitfalls/</link>
      <description>Today, we're joined by Chip Huyen, independent researcher and writer to discuss her new book, “AI Engineering.” We dig into the definition of AI engineering, its key differences from traditional machine learning engineering, the common pitfalls encountered in engineering AI systems, and strategies to overcome them. We also explore how Chip defines AI agents, their current limitations and capabilities, and the critical role of effective planning and tool utilization in these systems. Additionally, Chip shares insights on the importance of evaluation in AI systems, highlighting the need for systematic processes, human oversight, and rigorous metrics and benchmarks. Finally, we touch on the impact of open-source models, the potential of synthetic data, and Chip’s predictions for the year ahead.

The complete show notes for this episode can be found at https://twimlai.com/go/715.</description>
      <pubDate>Tue, 21 Jan 2025 22:26:00 -0000</pubDate>
      <itunes:title>AI Engineering Pitfalls with Chip Huyen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>715</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/72ed2c20-d843-11ef-b622-cb0636c63cf3/image/f7f10be84ec6f3f25739baedfc44e122.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Chip Huyen, independent researcher and writer to discuss her new book, “AI Engineering.” We dig into the definition of AI engineering, its key differences from traditional machine learning engineering, the common pitfalls encountered in engineering AI systems, and strategies to overcome them. We also explore how Chip defines AI agents, their current limitations and capabilities, and the critical role of effective planning and tool utilization in these systems. Additionally, Chip shares insights on the importance of evaluation in AI systems, highlighting the need for systematic processes, human oversight, and rigorous metrics and benchmarks. Finally, we touch on the impact of open-source models, the potential of synthetic data, and Chip’s predictions for the year ahead.

The complete show notes for this episode can be found at https://twimlai.com/go/715.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Chip Huyen, independent researcher and writer to discuss her new book, “AI Engineering.” We dig into the definition of AI engineering, its key differences from traditional machine learning engineering, the common pitfalls encountered in engineering AI systems, and strategies to overcome them. We also explore how Chip defines AI agents, their current limitations and capabilities, and the critical role of effective planning and tool utilization in these systems. Additionally, Chip shares insights on the importance of evaluation in AI systems, highlighting the need for systematic processes, human oversight, and rigorous metrics and benchmarks. Finally, we touch on the impact of open-source models, the potential of synthetic data, and Chip’s predictions for the year ahead.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/715">https://twimlai.com/go/715</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3457</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[72ed2c20-d843-11ef-b622-cb0636c63cf3]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3302347327.mp3?updated=1737498763"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Evolving MLOps Platforms for Generative AI and Agents with Abhijit Bose - #714</title>
      <link>https://twimlai.com/podcast/twimlai/evolving-mlops-platforms-for-generative-ai-and-agents/</link>
      <description>Today, we're joined by Abhijit Bose, head of enterprise AI and ML platforms at Capital One to discuss the evolution of the company’s approach and insights on Generative AI and platform best practices. In this episode, we dig into the company’s platform-centric approach to AI, and how they’ve been evolving their existing MLOps and data platforms to support the new challenges and opportunities presented by generative AI workloads and AI agents. We explore their use of cloud-based infrastructure—in this case on AWS—to provide a foundation upon which they then layer open-source and proprietary services and tools. We cover their use of Llama 3 and open-weight models, their approach to fine-tuning, their observability tooling for Gen AI applications, their use of inference optimization techniques like quantization, and more. Finally, Abhijit shares the future of agentic workflows in the enterprise, the application of OpenAI o1-style reasoning in models, and the new roles and skillsets required in the evolving GenAI landscape.

The complete show notes for this episode can be found at https://twimlai.com/go/714.</description>
      <pubDate>Mon, 13 Jan 2025 22:25:00 -0000</pubDate>
      <itunes:title>Evolving MLOps Platforms for Generative AI and Agents with Abhijit Bose</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>714</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7d236e30-d1f8-11ef-bb2f-377df2276ee8/image/c1751e22d1566f4f669c146168ebb4bd.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Abhijit Bose, head of enterprise AI and ML platforms at Capital One to discuss the evolution of the company’s approach and insights on Generative AI and platform best practices. In this episode, we dig into the company’s platform-centric approach to AI, and how they’ve been evolving their existing MLOps and data platforms to support the new challenges and opportunities presented by generative AI workloads and AI agents. We explore their use of cloud-based infrastructure—in this case on AWS—to provide a foundation upon which they then layer open-source and proprietary services and tools. We cover their use of Llama 3 and open-weight models, their approach to fine-tuning, their observability tooling for Gen AI applications, their use of inference optimization techniques like quantization, and more. Finally, Abhijit shares the future of agentic workflows in the enterprise, the application of OpenAI o1-style reasoning in models, and the new roles and skillsets required in the evolving GenAI landscape.

The complete show notes for this episode can be found at https://twimlai.com/go/714.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Abhijit Bose, head of enterprise AI and ML platforms at Capital One to discuss the evolution of the company’s approach and insights on Generative AI and platform best practices. In this episode, we dig into the company’s platform-centric approach to AI, and how they’ve been evolving their existing MLOps and data platforms to support the new challenges and opportunities presented by generative AI workloads and AI agents. We explore their use of cloud-based infrastructure—in this case on AWS—to provide a foundation upon which they then layer open-source and proprietary services and tools. We cover their use of Llama 3 and open-weight models, their approach to fine-tuning, their observability tooling for Gen AI applications, their use of inference optimization techniques like quantization, and more. Finally, Abhijit shares the future of agentic workflows in the enterprise, the application of OpenAI o1-style reasoning in models, and the new roles and skillsets required in the evolving GenAI landscape.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/714">https://twimlai.com/go/714</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3488</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[7d236e30-d1f8-11ef-bb2f-377df2276ee8]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4141194502.mp3?updated=1736914659"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Why Agents Are Stupid &amp; What We Can Do About It with Dan Jeffries - #713</title>
      <link>https://twimlai.com/podcast/twimlai/why-agents-are-stupid-what-we-can-do-about-it/</link>
      <description>Today, we're joined by Dan Jeffries, founder and CEO of Kentauros AI to discuss the challenges currently faced by those developing advanced AI agents. We dig into how Dan defines agents and distinguishes them from other similar uses of LLMs, explore various use cases for them, and consider ways to create smarter agentic systems. Dan shares his “big brain, little brain, tool brain” approach to tackling real-world challenges in agents, the trade-offs in leveraging general-purpose vs. task-specific models, and his take on LLM reasoning. We also cover the way he thinks about model selection for agents, along with the need for new tools and platforms for deploying them. Finally, Dan emphasizes the importance of open source in advancing AI, shares the new products they’re working on, and explores the future directions in the agentic era.

The complete show notes for this episode can be found at https://twimlai.com/go/713.</description>
      <pubDate>Mon, 16 Dec 2024 20:47:07 -0000</pubDate>
      <itunes:title>Why Agents Are Stupid &amp; What We Can Do About It with Dan Jeffries</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>713</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/aa266980-bbed-11ef-9c84-8f9a4ef2523f/image/3d6a944e75f26353ce913dc3a73e1748.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Dan Jeffries, founder and CEO of Kentauros AI to discuss the challenges currently faced by those developing advanced AI agents. We dig into how Dan defines agents and distinguishes them from other similar uses of LLMs, explore various use cases for them, and consider ways to create smarter agentic systems. Dan shares his “big brain, little brain, tool brain” approach to tackling real-world challenges in agents, the trade-offs in leveraging general-purpose vs. task-specific models, and his take on LLM reasoning. We also cover the way he thinks about model selection for agents, along with the need for new tools and platforms for deploying them. Finally, Dan emphasizes the importance of open source in advancing AI, shares the new products they’re working on, and explores the future directions in the agentic era.

The complete show notes for this episode can be found at https://twimlai.com/go/713.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Dan Jeffries, founder and CEO of Kentauros AI to discuss the challenges currently faced by those developing advanced AI agents. We dig into how Dan defines agents and distinguishes them from other similar uses of LLMs, explore various use cases for them, and consider ways to create smarter agentic systems. Dan shares his “big brain, little brain, tool brain” approach to tackling real-world challenges in agents, the trade-offs in leveraging general-purpose vs. task-specific models, and his take on LLM reasoning. We also cover the way he thinks about model selection for agents, along with the need for new tools and platforms for deploying them. Finally, Dan emphasizes the importance of open source in advancing AI, shares the new products they’re working on, and explores the future directions in the agentic era.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/713">https://twimlai.com/go/713</a>.</p>]]>
      </content:encoded>
      <itunes:duration>4129</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[aa266980-bbed-11ef-9c84-8f9a4ef2523f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8293963679.mp3?updated=1734382445"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Automated Reasoning to Prevent LLM Hallucination with Byron Cook - #712</title>
      <link>https://twimlai.com/podcast/twimlai/automated-reasoning-to-prevent-llm-hallucination/</link>
      <description>Today, we're joined by Byron Cook, VP and distinguished scientist in the Automated Reasoning Group at AWS to dig into the underlying technology behind the newly announced Automated Reasoning Checks feature of Amazon Bedrock Guardrails. Automated Reasoning Checks uses mathematical proofs to help LLM users safeguard against hallucinations. We explore recent advancements in the field of automated reasoning, as well as some of the ways it is applied broadly and across AWS, where it is used to enhance security, cryptography, virtualization, and more. We discuss how the new feature helps users to generate, refine, validate, and formalize policies, and how those policies can be deployed alongside LLM applications to ensure the accuracy of generated text. Finally, Byron also shares the benchmarks they’ve applied, the use of techniques like ‘constrained coding’ and ‘backtracking,’ and the future co-evolution of automated reasoning and generative AI.

The complete show notes for this episode can be found at https://twimlai.com/go/712.</description>
      <pubDate>Mon, 09 Dec 2024 20:18:32 -0000</pubDate>
      <itunes:title>Automated Reasoning to Prevent LLM Hallucination with Byron Cook</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>712</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d21e3458-b662-11ef-ab6a-cbeec55dd22e/image/2b3bc137be84b11b8428519e75271073.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Byron Cook, VP and distinguished scientist in the Automated Reasoning Group at AWS to dig into the underlying technology behind the newly announced Automated Reasoning Checks feature of Amazon Bedrock Guardrails. Automated Reasoning Checks uses mathematical proofs to help LLM users safeguard against hallucinations. We explore recent advancements in the field of automated reasoning, as well as some of the ways it is applied broadly and across AWS, where it is used to enhance security, cryptography, virtualization, and more. We discuss how the new feature helps users to generate, refine, validate, and formalize policies, and how those policies can be deployed alongside LLM applications to ensure the accuracy of generated text. Finally, Byron also shares the benchmarks they’ve applied, the use of techniques like ‘constrained coding’ and ‘backtracking,’ and the future co-evolution of automated reasoning and generative AI.

The complete show notes for this episode can be found at https://twimlai.com/go/712.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Byron Cook, VP and distinguished scientist in the Automated Reasoning Group at AWS to dig into the underlying technology behind the newly announced Automated Reasoning Checks feature of Amazon Bedrock Guardrails. Automated Reasoning Checks uses mathematical proofs to help LLM users safeguard against hallucinations. We explore recent advancements in the field of automated reasoning, as well as some of the ways it is applied broadly and across AWS, where it is used to enhance security, cryptography, virtualization, and more. We discuss how the new feature helps users to generate, refine, validate, and formalize policies, and how those policies can be deployed alongside LLM applications to ensure the accuracy of generated text. Finally, Byron also shares the benchmarks they’ve applied, the use of techniques like ‘constrained coding’ and ‘backtracking,’ and the future co-evolution of automated reasoning and generative AI.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/712">https://twimlai.com/go/712</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3408</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d21e3458-b662-11ef-ab6a-cbeec55dd22e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7157710802.mp3?updated=1733775633"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI at the Edge: Qualcomm AI Research at NeurIPS 2024 with Arash Behboodi - #711</title>
      <link>https://twimlai.com/podcast/twimlai/ai-at-the-edge-qualcomm-ai-research-at-neurips-2024/</link>
      <description>Today, we're joined by Arash Behboodi, director of engineering at Qualcomm AI Research to discuss the papers and workshops Qualcomm will be presenting at this year’s NeurIPS conference. We dig into the challenges and opportunities presented by differentiable simulation in wireless systems, the sciences, and beyond. We also explore recent work that ties conformal prediction to information theory, yielding a novel approach to incorporating uncertainty quantification directly into machine learning models. Finally, we review several papers enabling the efficient use of LoRA (Low-Rank Adaptation) on mobile devices (Hollowed Net, ShiRA, FouRA). Arash also previews the demos Qualcomm will be hosting at NeurIPS, including new video editing diffusion and 3D content generation models running on-device, Qualcomm's AI Hub, and more!

The complete show notes for this episode can be found at https://twimlai.com/go/711.</description>
      <pubDate>Tue, 03 Dec 2024 18:13:00 -0000</pubDate>
      <itunes:title>AI at the Edge: Qualcomm AI Research at NeurIPS 2024 with Arash Behboodi</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>711</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ee78d2c4-b19f-11ef-ab17-5f637e959dd5/image/e7927a37e19a3337b4e7b4d9c4329cdb.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Arash Behboodi, director of engineering at Qualcomm AI Research to discuss the papers and workshops Qualcomm will be presenting at this year’s NeurIPS conference. We dig into the challenges and opportunities presented by differentiable simulation in wireless systems, the sciences, and beyond. We also explore recent work that ties conformal prediction to information theory, yielding a novel approach to incorporating uncertainty quantification directly into machine learning models. Finally, we review several papers enabling the efficient use of LoRA (Low-Rank Adaptation) on mobile devices (Hollowed Net, ShiRA, FouRA). Arash also previews the demos Qualcomm will be hosting at NeurIPS, including new video editing diffusion and 3D content generation models running on-device, Qualcomm's AI Hub, and more!

The complete show notes for this episode can be found at https://twimlai.com/go/711.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Arash Behboodi, director of engineering at Qualcomm AI Research to discuss the papers and workshops Qualcomm will be presenting at this year’s NeurIPS conference. We dig into the challenges and opportunities presented by differentiable simulation in wireless systems, the sciences, and beyond. We also explore recent work that ties conformal prediction to information theory, yielding a novel approach to incorporating uncertainty quantification directly into machine learning models. Finally, we review several papers enabling the efficient use of LoRA (Low-Rank Adaptation) on mobile devices (Hollowed Net, ShiRA, FouRA). Arash also previews the demos Qualcomm will be hosting at NeurIPS, including new video editing diffusion and 3D content generation models running on-device, Qualcomm's AI Hub, and more!</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/711">https://twimlai.com/go/711</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3287</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ee78d2c4-b19f-11ef-ab17-5f637e959dd5]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6480318768.mp3?updated=1733249837"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI for Network Management with Shirley Wu - #710</title>
      <link>https://twimlai.com/podcast/twimlai/ai-for-network-management/</link>
      <description>Today, we're joined by Shirley Wu, senior director of software engineering at Juniper Networks to discuss how machine learning and artificial intelligence are transforming network management. We explore various use cases where AI and ML are applied to enhance the quality, performance, and efficiency of networks across Juniper’s customers, including diagnosing cable degradation, proactive monitoring for coverage gaps, and real-time fault detection. We also dig into the complexities of integrating data science into networking, the trade-offs between traditional methods and ML-based solutions, the role of feature engineering and data in networking, the applicability of large language models, and Juniper’s approach to using smaller, specialized ML models to optimize speed, latency, and cost. Finally, Shirley shares some future directions for Juniper Mist such as proactive network testing and end-user self-service.

The complete show notes for this episode can be found at https://twimlai.com/go/710.</description>
      <pubDate>Tue, 19 Nov 2024 10:53:53 -0000</pubDate>
      <itunes:title>AI for Network Management with Shirley Wu</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>710</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/cae3bc6c-a604-11ef-9055-bba90df3e25c/image/bd6730f6e977527818eff0dd9764a832.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Shirley Wu, senior director of software engineering at Juniper Networks to discuss how machine learning and artificial intelligence are transforming network management. We explore various use cases where AI and ML are applied to enhance the quality, performance, and efficiency of networks across Juniper’s customers, including diagnosing cable degradation, proactive monitoring for coverage gaps, and real-time fault detection. We also dig into the complexities of integrating data science into networking, the trade-offs between traditional methods and ML-based solutions, the role of feature engineering and data in networking, the applicability of large language models, and Juniper’s approach to using smaller, specialized ML models to optimize speed, latency, and cost. Finally, Shirley shares some future directions for Juniper Mist such as proactive network testing and end-user self-service.

The complete show notes for this episode can be found at https://twimlai.com/go/710.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Shirley Wu, senior director of software engineering at Juniper Networks, to discuss how machine learning and artificial intelligence are transforming network management. We explore various use cases where AI and ML are applied to enhance the quality, performance, and efficiency of networks across Juniper’s customers, including diagnosing cable degradation, proactive monitoring for coverage gaps, and real-time fault detection. We also dig into the complexities of integrating data science into networking, the trade-offs between traditional methods and ML-based solutions, the role of feature engineering and data in networking, the applicability of large language models, and Juniper’s approach to using smaller, specialized ML models to optimize speed, latency, and cost. Finally, Shirley shares some future directions for Juniper Mist, such as proactive network testing and end-user self-service.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/710">https://twimlai.com/go/710</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3224</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cae3bc6c-a604-11ef-9055-bba90df3e25c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4238471231.mp3?updated=1731974193"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Why Your RAG System Is Broken, and How to Fix It with Jason Liu - #709</title>
      <link>https://twimlai.com/podcast/twimlai/why-your-rag-pipeline-is-broken-and-how-to-fix-it/</link>
      <description>Today, we're joined by Jason Liu, freelance AI consultant, advisor, and creator of the Instructor library, to discuss all things retrieval-augmented generation (RAG). We dig into the tactical and strategic challenges companies face with their RAG systems, the different signs Jason looks for to identify looming problems, the issues he most commonly encounters, and the steps he takes to diagnose these issues. We also cover the significance of building out robust test datasets, data-driven experimentation, evaluation tools, and metrics for different use cases. We also touch on fine-tuning strategies for RAG systems, the effectiveness of different chunking strategies, the use of collaboration tools like Braintrust, and how future models will change the game. Lastly, we cover Jason’s interest in teaching others how to capitalize on their own AI experience via his AI consulting course.

The complete show notes for this episode can be found at https://twimlai.com/go/709.</description>
      <pubDate>Mon, 11 Nov 2024 15:55:00 -0000</pubDate>
      <itunes:title>Why Your RAG System Is Broken, and How to Fix It with Jason Liu</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>709</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/fc282120-a062-11ef-9927-3fc53c6945a7/image/36ea193981f78d26af814bb4f3bd459c.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Jason Liu, freelance AI consultant, advisor, and creator of the Instructor library, to discuss all things retrieval-augmented generation (RAG). We dig into the tactical and strategic challenges companies face with their RAG systems, the different signs Jason looks for to identify looming problems, the issues he most commonly encounters, and the steps he takes to diagnose these issues. We also cover the significance of building out robust test datasets, data-driven experimentation, evaluation tools, and metrics for different use cases. We also touch on fine-tuning strategies for RAG systems, the effectiveness of different chunking strategies, the use of collaboration tools like Braintrust, and how future models will change the game. Lastly, we cover Jason’s interest in teaching others how to capitalize on their own AI experience via his AI consulting course.

The complete show notes for this episode can be found at https://twimlai.com/go/709.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Jason Liu, freelance AI consultant, advisor, and creator of the Instructor library, to discuss all things retrieval-augmented generation (RAG). We dig into the tactical and strategic challenges companies face with their RAG systems, the different signs Jason looks for to identify looming problems, the issues he most commonly encounters, and the steps he takes to diagnose these issues. We also cover the significance of building out robust test datasets, data-driven experimentation, evaluation tools, and metrics for different use cases. We also touch on fine-tuning strategies for RAG systems, the effectiveness of different chunking strategies, the use of collaboration tools like Braintrust, and how future models will change the game. Lastly, we cover Jason’s interest in teaching others how to capitalize on their own AI experience via his <a href="https://maven.com/indie-consulting/ai-consultant-accelerator?promoCode=TWIML">AI consulting course</a>.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/709">https://twimlai.com/go/709</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3483</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fc282120-a062-11ef-9927-3fc53c6945a7]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3653850871.mp3?updated=1731384027"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>An Agentic Mixture of Experts for DevOps with Sunil Mallya - #708</title>
      <link>https://twimlai.com/podcast/twimlai/an-agentic-mixture-of-experts-for-devops/</link>
      <description>Today, we're joined by Sunil Mallya, CTO and co-founder of Flip AI. We discuss Flip’s incident debugging system for DevOps, which was built using a custom mixture of experts (MoE) large language model (LLM) trained on a novel "CoMELT" observability dataset, which combines traditional MELT data—metrics, events, logs, and traces—with code to efficiently identify root failure causes in complex software systems. We discuss the challenges of integrating time-series data with LLMs and their multi-decoder architecture designed for this purpose. Sunil describes their system's agent-based design, focusing on clear roles and boundaries to ensure reliability. We examine their "chaos gym," a reinforcement learning environment used for testing and improving the system's robustness. Finally, we discuss the practical considerations of deploying such a system at scale in diverse environments and much more.

The complete show notes for this episode can be found at https://twimlai.com/go/708.</description>
      <pubDate>Mon, 04 Nov 2024 13:53:00 -0000</pubDate>
      <itunes:title>An Agentic Mixture of Experts for DevOps with Sunil Mallya</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>708</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/29b7cf32-9aeb-11ef-a4ec-9b42fce72343/image/a3c29fc54188bcf14c7172c483cb41d6.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Sunil Mallya, CTO and co-founder of Flip AI. We discuss Flip’s incident debugging system for DevOps, which was built using a custom mixture of experts (MoE) large language model (LLM) trained on a novel "CoMELT" observability dataset, which combines traditional MELT data—metrics, events, logs, and traces—with code to efficiently identify root failure causes in complex software systems. We discuss the challenges of integrating time-series data with LLMs and their multi-decoder architecture designed for this purpose. Sunil describes their system's agent-based design, focusing on clear roles and boundaries to ensure reliability. We examine their "chaos gym," a reinforcement learning environment used for testing and improving the system's robustness. Finally, we discuss the practical considerations of deploying such a system at scale in diverse environments and much more.

The complete show notes for this episode can be found at https://twimlai.com/go/708.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Sunil Mallya, CTO and co-founder of Flip AI. We discuss Flip’s incident debugging system for DevOps, which was built using a custom mixture of experts (MoE) large language model (LLM) trained on a novel "CoMELT" observability dataset, which combines traditional MELT data—metrics, events, logs, and traces—with code to efficiently identify root failure causes in complex software systems. We discuss the challenges of integrating time-series data with LLMs and their multi-decoder architecture designed for this purpose. Sunil describes their system's agent-based design, focusing on clear roles and boundaries to ensure reliability. We examine their "chaos gym," a reinforcement learning environment used for testing and improving the system's robustness. Finally, we discuss the practical considerations of deploying such a system at scale in diverse environments and much more.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/708">https://twimlai.com/go/708</a>.</p>]]>
      </content:encoded>
      <itunes:duration>4509</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[29b7cf32-9aeb-11ef-a4ec-9b42fce72343]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8491913296.mp3?updated=1730753189"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building AI Voice Agents with Scott Stephenson - #707</title>
      <link>https://twimlai.com/podcast/twimlai/building-ai-voice-agents/</link>
      <description>Today, we're joined by Scott Stephenson, co-founder and CEO of Deepgram, to discuss voice AI agents. We explore the importance of perception, understanding, and interaction and how these key components work together in building intelligent AI voice agents. We discuss the role of multimodal LLMs as well as speech-to-text and text-to-speech models in building AI voice agents, and dig into the benefits and limitations of text-based approaches to voice interactions. We also examine what’s required to deliver real-time voice interactions and the promise of closed-loop, continuously improving, federated learning agents. Finally, Scott shares practical applications of AI voice agents at Deepgram and provides an overview of their newly released agent toolkit.

The complete show notes for this episode can be found at https://twimlai.com/go/707.</description>
      <pubDate>Mon, 28 Oct 2024 16:36:00 -0000</pubDate>
      <itunes:title>Building AI Voice Agents with Scott Stephenson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>707</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ee4e0d7a-9569-11ef-943c-171f6bdb5df9/image/23aa5a82be7275f09485f688a07bcc85.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Scott Stephenson, co-founder and CEO of Deepgram, to discuss voice AI agents. We explore the importance of perception, understanding, and interaction and how these key components work together in building intelligent AI voice agents. We discuss the role of multimodal LLMs as well as speech-to-text and text-to-speech models in building AI voice agents, and dig into the benefits and limitations of text-based approaches to voice interactions. We also examine what’s required to deliver real-time voice interactions and the promise of closed-loop, continuously improving, federated learning agents. Finally, Scott shares practical applications of AI voice agents at Deepgram and provides an overview of their newly released agent toolkit.

The complete show notes for this episode can be found at https://twimlai.com/go/707.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Scott Stephenson, co-founder and CEO of Deepgram, to discuss voice AI agents. We explore the importance of perception, understanding, and interaction and how these key components work together in building intelligent AI voice agents. We discuss the role of multimodal LLMs as well as speech-to-text and text-to-speech models in building AI voice agents, and dig into the benefits and limitations of text-based approaches to voice interactions. We also examine what’s required to deliver real-time voice interactions and the promise of closed-loop, continuously improving, federated learning agents. Finally, Scott shares practical applications of AI voice agents at Deepgram and provides an overview of their newly released agent toolkit.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/707">https://twimlai.com/go/707</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3704</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ee4e0d7a-9569-11ef-943c-171f6bdb5df9]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6815731992.mp3?updated=1730147923"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Is Artificial Superintelligence Imminent? with Tim Rocktäschel - #706</title>
      <link>https://twimlai.com/podcast/twimlai/is-artificial-superintelligence-imminent/</link>
      <description>Today, we're joined by Tim Rocktäschel, senior staff research scientist at Google DeepMind, professor of Artificial Intelligence at University College London, and author of the recently published popular science book, “Artificial Intelligence: 10 Things You Should Know.” We dig into the attainability of artificial superintelligence and the path to achieving generalized superhuman capabilities across multiple domains. We discuss the importance of open-endedness in developing autonomous and self-improving systems, as well as the role of evolutionary approaches and algorithms. Additionally, we cover Tim’s recent research projects such as “Promptbreeder,” “Debating with More Persuasive LLMs Leads to More Truthful Answers,” and more.

The complete show notes for this episode can be found at https://twimlai.com/go/706.</description>
      <pubDate>Mon, 21 Oct 2024 21:25:14 -0000</pubDate>
      <itunes:title>Is Artificial Superintelligence Imminent? with Tim Rocktäschel</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>706</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4fae9adc-8feb-11ef-9d1b-cf47215ab0d2/image/eae31b6460af478d56e4172f21e4bc67.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Tim Rocktäschel, senior staff research scientist at Google DeepMind, professor of Artificial Intelligence at University College London, and author of the recently published popular science book, “Artificial Intelligence: 10 Things You Should Know.” We dig into the attainability of artificial superintelligence and the path to achieving generalized superhuman capabilities across multiple domains. We discuss the importance of open-endedness in developing autonomous and self-improving systems, as well as the role of evolutionary approaches and algorithms. Additionally, we cover Tim’s recent research projects such as “Promptbreeder,” “Debating with More Persuasive LLMs Leads to More Truthful Answers,” and more.

The complete show notes for this episode can be found at https://twimlai.com/go/706.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Tim Rocktäschel, senior staff research scientist at Google DeepMind, professor of Artificial Intelligence at University College London, and author of the recently published popular science book, “<a href="https://geni.us/ArtificialIntelligence">Artificial Intelligence: 10 Things You Should Know</a>.” We dig into the attainability of artificial superintelligence and the path to achieving generalized superhuman capabilities across multiple domains. We discuss the importance of open-endedness in developing autonomous and self-improving systems, as well as the role of evolutionary approaches and algorithms. Additionally, we cover Tim’s recent research projects such as “Promptbreeder,” “Debating with More Persuasive LLMs Leads to More Truthful Answers,” and more.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/706">https://twimlai.com/go/706</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3352</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4fae9adc-8feb-11ef-9d1b-cf47215ab0d2]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1506929068.mp3?updated=1729546190"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>ML Models for Safety-Critical Systems with Lucas García - #705</title>
      <link>https://twimlai.com/podcast/twimlai/ml-models-for-safety-critical-systems/</link>
      <description>Today, we're joined by Lucas García, principal product manager for deep learning at MathWorks, to discuss incorporating ML models into safety-critical systems. We begin by exploring the critical role of verification and validation (V&amp;V) in these applications. We review the popular V-model for engineering critical systems and then dig into the “W” adaptation that’s been proposed for incorporating ML models. Next, we discuss the complexities of applying deep learning neural networks in safety-critical applications using the aviation industry as an example, and talk through the importance of factors such as data quality, model stability, robustness, interpretability, and accuracy. We also explore formal verification methods, abstract transformer layers, transformer-based architectures, and the application of various software testing techniques. Lucas also introduces the field of constrained deep learning and convex neural networks, and its benefits and trade-offs.

The complete show notes for this episode can be found at https://twimlai.com/go/705.</description>
      <pubDate>Mon, 14 Oct 2024 19:29:00 -0000</pubDate>
      <itunes:title>ML Models for Safety-Critical Systems with Lucas García</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>705</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3a8c504c-8a5d-11ef-9b67-7fed48f2e92d/image/f06842a161f037680a4bc90138844c00.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Lucas García, principal product manager for deep learning at MathWorks, to discuss incorporating ML models into safety-critical systems. We begin by exploring the critical role of verification and validation (V&amp;V) in these applications. We review the popular V-model for engineering critical systems and then dig into the “W” adaptation that’s been proposed for incorporating ML models. Next, we discuss the complexities of applying deep learning neural networks in safety-critical applications using the aviation industry as an example, and talk through the importance of factors such as data quality, model stability, robustness, interpretability, and accuracy. We also explore formal verification methods, abstract transformer layers, transformer-based architectures, and the application of various software testing techniques. Lucas also introduces the field of constrained deep learning and convex neural networks, and its benefits and trade-offs.

The complete show notes for this episode can be found at https://twimlai.com/go/705.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Lucas García, principal product manager for deep learning at MathWorks, to discuss incorporating ML models into safety-critical systems. We begin by exploring the critical role of verification and validation (V&amp;V) in these applications. We review the popular V-model for engineering critical systems and then dig into the “W” adaptation that’s been proposed for incorporating ML models. Next, we discuss the complexities of applying deep learning neural networks in safety-critical applications using the aviation industry as an example, and talk through the importance of factors such as data quality, model stability, robustness, interpretability, and accuracy. We also explore formal verification methods, abstract transformer layers, transformer-based architectures, and the application of various software testing techniques. Lucas also introduces the field of constrained deep learning and convex neural networks, and its benefits and trade-offs.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/705">https://twimlai.com/go/705</a>.</p>]]>
      </content:encoded>
      <itunes:duration>4566</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[3a8c504c-8a5d-11ef-9b67-7fed48f2e92d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4250685356.mp3?updated=1728934919"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Agents: Substance or Snake Oil with Arvind Narayanan - #704</title>
      <link>https://twimlai.com/podcast/twimlai/ai-agents-substance-or-snake-oil/</link>
      <description>Today, we're joined by Arvind Narayanan, professor of Computer Science at Princeton University, to discuss his recent works, AI Agents That Matter and AI Snake Oil. In “AI Agents That Matter”, we explore the range of agentic behaviors, the challenges in benchmarking agents, and the ‘capability and reliability gap’, which creates risks when deploying AI agents in real-world applications. We also discuss the importance of verifiers as a technique for safeguarding agent behavior. We then dig into the AI Snake Oil book, which uncovers examples of problematic and overhyped claims in AI. Arvind shares various use cases of failed applications of AI, outlines a taxonomy of AI risks, and shares his insights on AI’s catastrophic risks. Additionally, we touch on different approaches to LLM-based reasoning, his views on tech policy and regulation, and his work on CORE-Bench, a benchmark designed to measure AI agents' accuracy in computational reproducibility tasks.

The complete show notes for this episode can be found at https://twimlai.com/go/704.</description>
      <pubDate>Mon, 07 Oct 2024 15:32:00 -0000</pubDate>
      <itunes:title>AI Agents: Substance or Snake Oil with Arvind Narayanan</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>704</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/a0e55b32-84da-11ef-8cb1-f78eb269474c/image/8ee229c8788f45de25562ea4c927de50.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Arvind Narayanan, professor of Computer Science at Princeton University, to discuss his recent works, AI Agents That Matter and AI Snake Oil. In “AI Agents That Matter”, we explore the range of agentic behaviors, the challenges in benchmarking agents, and the ‘capability and reliability gap’, which creates risks when deploying AI agents in real-world applications. We also discuss the importance of verifiers as a technique for safeguarding agent behavior. We then dig into the AI Snake Oil book, which uncovers examples of problematic and overhyped claims in AI. Arvind shares various use cases of failed applications of AI, outlines a taxonomy of AI risks, and shares his insights on AI’s catastrophic risks. Additionally, we touch on different approaches to LLM-based reasoning, his views on tech policy and regulation, and his work on CORE-Bench, a benchmark designed to measure AI agents' accuracy in computational reproducibility tasks.

The complete show notes for this episode can be found at https://twimlai.com/go/704.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Arvind Narayanan, professor of Computer Science at Princeton University, to discuss his recent works, <a href="https://arxiv.org/abs/2407.01502">AI Agents That Matter</a> and <a href="https://www.aisnakeoil.com/">AI Snake Oil</a>. In “AI Agents That Matter”, we explore the range of agentic behaviors, the challenges in benchmarking agents, and the ‘capability and reliability gap’, which creates risks when deploying AI agents in real-world applications. We also discuss the importance of verifiers as a technique for safeguarding agent behavior. We then dig into the AI Snake Oil book, which uncovers examples of problematic and overhyped claims in AI. Arvind shares various use cases of failed applications of AI, outlines a taxonomy of AI risks, and shares his insights on AI’s catastrophic risks. Additionally, we touch on different approaches to LLM-based reasoning, his views on tech policy and regulation, and his work on <a href="https://arxiv.org/abs/2409.11363">CORE-Bench</a>, a benchmark designed to measure AI agents' accuracy in computational reproducibility tasks.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/704">https://twimlai.com/go/704</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3262</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a0e55b32-84da-11ef-8cb1-f78eb269474c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2654619504.mp3?updated=1728326821"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Agents for Data Analysis with Shreya Shankar - #703</title>
      <link>https://twimlai.com/podcast/twimlai/ai-agents-for-data-analysis/</link>
      <description>Today, we're joined by Shreya Shankar, a PhD student at UC Berkeley, to discuss DocETL, a declarative system for building and optimizing LLM-powered data processing pipelines for large-scale and complex document analysis tasks. We explore how DocETL's optimizer architecture works, the intricacies of building agentic systems for data processing, the current landscape of benchmarks for data processing tasks, how these differ from reasoning-based benchmarks, and the need for robust evaluation methods for human-in-the-loop LLM workflows. Additionally, Shreya shares real-world applications of DocETL, the importance of effective validation prompts, and building robust and fault-tolerant agentic systems. Lastly, we cover the need for benchmarks tailored to LLM-powered data processing tasks and the future directions for DocETL.

The complete show notes for this episode can be found at https://twimlai.com/go/703.</description>
      <pubDate>Mon, 30 Sep 2024 13:09:00 -0000</pubDate>
      <itunes:title>AI Agents for Data Analysis with Shreya Shankar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>703</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/de00d4be-7f77-11ef-9313-37ce3680f955/image/96356ec7e975f3e8d356601b48b347c2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Shreya Shankar, a PhD student at UC Berkeley, to discuss DocETL, a declarative system for building and optimizing LLM-powered data processing pipelines for large-scale and complex document analysis tasks. We explore how DocETL's optimizer architecture works, the intricacies of building agentic systems for data processing, the current landscape of benchmarks for data processing tasks, how these differ from reasoning-based benchmarks, and the need for robust evaluation methods for human-in-the-loop LLM workflows. Additionally, Shreya shares real-world applications of DocETL, the importance of effective validation prompts, and building robust and fault-tolerant agentic systems. Lastly, we cover the need for benchmarks tailored to LLM-powered data processing tasks and the future directions for DocETL.

The complete show notes for this episode can be found at https://twimlai.com/go/703.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Shreya Shankar, a PhD student at UC Berkeley, to discuss <a href="https://www.docetl.com/">DocETL</a>, a declarative system for building and optimizing LLM-powered data processing pipelines for large-scale and complex document analysis tasks. We explore how DocETL's optimizer architecture works, the intricacies of building agentic systems for data processing, the current landscape of benchmarks for data processing tasks, how these differ from reasoning-based benchmarks, and the need for robust evaluation methods for human-in-the-loop LLM workflows. Additionally, Shreya shares real-world applications of DocETL, the importance of effective validation prompts, and building robust and fault-tolerant agentic systems. Lastly, we cover the need for benchmarks tailored to LLM-powered data processing tasks and the future directions for DocETL.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/703">https://twimlai.com/go/703</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2904</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[de00d4be-7f77-11ef-9313-37ce3680f955]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9363051879.mp3?updated=1727745514"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Stealing Part of a Production Language Model with Nicholas Carlini - #702</title>
      <link>https://twimlai.com/podcast/twimlai/stealing-part-of-a-production-language-model/</link>
      <description>Today, we're joined by Nicholas Carlini, research scientist at Google DeepMind, to discuss adversarial machine learning and model security, focusing on his 2024 ICML best paper winner, “Stealing part of a production language model.” We dig into this work, which demonstrated the ability to successfully steal the last layer of production language models including ChatGPT and PaLM-2. Nicholas shares the current landscape of AI security research in the age of LLMs, the implications of model stealing, ethical concerns surrounding model privacy, how the attack works, and the significance of the embedding layer in language models. We also discuss the remediation strategies implemented by OpenAI and Google, and the future directions in the field of AI security. Plus, we cover his other ICML 2024 best paper, “Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining,” which questions the use and promotion of differential privacy in conjunction with pre-trained models.

The complete show notes for this episode can be found at https://twimlai.com/go/702.</description>
      <pubDate>Mon, 23 Sep 2024 19:21:00 -0000</pubDate>
      <itunes:title>Stealing Part of a Production Language Model with Nicholas Carlini</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>702</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/41409b92-79e0-11ef-a827-27086b3859e9/image/95fc8f5a4d582fe6db1a6236f272b07e.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Nicholas Carlini, research scientist at Google DeepMind, to discuss adversarial machine learning and model security, focusing on his 2024 ICML best paper winner, “Stealing part of a production language model.” We dig into this work, which demonstrated the ability to successfully steal the last layer of production language models including ChatGPT and PaLM-2. Nicholas shares the current landscape of AI security research in the age of LLMs, the implications of model stealing, ethical concerns surrounding model privacy, how the attack works, and the significance of the embedding layer in language models. We also discuss the remediation strategies implemented by OpenAI and Google, and the future directions in the field of AI security. Plus, we cover his other ICML 2024 best paper, “Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining,” which questions the use and promotion of differential privacy in conjunction with pre-trained models.

The complete show notes for this episode can be found at https://twimlai.com/go/702.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Nicholas Carlini, research scientist at Google DeepMind, to discuss adversarial machine learning and model security, focusing on his 2024 ICML best paper winner, “<a href="https://arxiv.org/abs/2403.06634">Stealing part of a production language model</a>.” We dig into this work, which demonstrated the ability to successfully steal the last layer of production language models including ChatGPT and PaLM-2. Nicholas shares the current landscape of AI security research in the age of LLMs, the implications of model stealing, ethical concerns surrounding model privacy, how the attack works, and the significance of the embedding layer in language models. We also discuss the remediation strategies implemented by OpenAI and Google, and the future directions in the field of AI security. Plus, we cover his other ICML 2024 best paper, “<a href="https://arxiv.org/abs/2212.06470">Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining</a>,” which questions the use and promotion of differential privacy in conjunction with pre-trained models.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/702">https://twimlai.com/go/702</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3810</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[41409b92-79e0-11ef-a827-27086b3859e9]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7516431304.mp3?updated=1727119766"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Supercharging Developer Productivity with ChatGPT and Claude with Simon Willison - #701</title>
      <link>https://twimlai.com/podcast/twimlai/supercharging-developer-productivity-with-chatgpt-and-claude/</link>
      <description>Today, we're joined by Simon Willison, independent researcher and creator of Datasette, to discuss the many ways software developers and engineers can take advantage of large language models (LLMs) to boost their productivity. We dig into Simon’s own workflows and how he uses popular models like ChatGPT and Anthropic’s Claude to write and test hundreds of lines of code while out walking his dog. We review Simon’s favorite prompting and debugging techniques, his strategies for sidestepping the limitations of contemporary models, how he uses Claude’s Artifacts feature for rapid prototyping, his thoughts on the use and impact of vision models, the role he sees for open source models and local LLMs, and much more.

The complete show notes for this episode can be found at https://twimlai.com/go/701.</description>
      <pubDate>Mon, 16 Sep 2024 22:24:00 -0000</pubDate>
      <itunes:title>Supercharging Developer Productivity with ChatGPT and Claude with Simon Willison</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>701</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d552de32-7466-11ef-b401-7b4884349685/image/dfdc1be1f5753529775a71406d7672f0.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Simon Willison, independent researcher and creator of Datasette, to discuss the many ways software developers and engineers can take advantage of large language models (LLMs) to boost their productivity. We dig into Simon’s own workflows and how he uses popular models like ChatGPT and Anthropic’s Claude to write and test hundreds of lines of code while out walking his dog. We review Simon’s favorite prompting and debugging techniques, his strategies for sidestepping the limitations of contemporary models, how he uses Claude’s Artifacts feature for rapid prototyping, his thoughts on the use and impact of vision models, the role he sees for open source models and local LLMs, and much more.

The complete show notes for this episode can be found at https://twimlai.com/go/701.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Simon Willison, independent researcher and creator of Datasette, to discuss the many ways software developers and engineers can take advantage of large language models (LLMs) to boost their productivity. We dig into Simon’s own workflows and how he uses popular models like ChatGPT and Anthropic’s Claude to write and test hundreds of lines of code while out walking his dog. We review Simon’s favorite prompting and debugging techniques, his strategies for sidestepping the limitations of contemporary models, how he uses Claude’s Artifacts feature for rapid prototyping, his thoughts on the use and impact of vision models, the role he sees for open source models and local LLMs, and much more.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/701">https://twimlai.com/go/701</a>.</p>]]>
      </content:encoded>
      <itunes:duration>4455</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d552de32-7466-11ef-b401-7b4884349685]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2990950712.mp3?updated=1726528312"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Automated Design of Agentic Systems with Shengran Hu - #700</title>
      <link>https://twimlai.com/podcast/twimlai/automated-design-of-agentic-systems/</link>
      <description>Today, we're joined by Shengran Hu, a PhD student at the University of British Columbia, to discuss Automated Design of Agentic Systems (ADAS), an approach focused on automatically creating agentic system designs. We explore the spectrum of agentic behaviors, the motivation for learning all aspects of agentic system design, the key components of the ADAS approach, and how it uses LLMs to design novel agent architectures in code. We also cover the iterative process of ADAS, its potential to shed light on the behavior of foundation models, the higher-level meta-behaviors that emerge in agentic systems, and how ADAS uncovers novel design patterns through emergent behaviors, particularly in complex tasks like the ARC challenge. Finally, we touch on the practical applications of ADAS and its potential use in system optimization for real-world tasks.

The complete show notes for this episode can be found at https://twimlai.com/go/700.</description>
      <pubDate>Mon, 02 Sep 2024 20:30:00 -0000</pubDate>
      <itunes:title>Automated Design of Agentic Systems with Shengran Hu</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>700</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/008aea54-6a2b-11ef-b146-4b66c854276c/image/91b2425ec04b2e17535e35b5dce9f23b.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Shengran Hu, a PhD student at the University of British Columbia, to discuss Automated Design of Agentic Systems (ADAS), an approach focused on automatically creating agentic system designs. We explore the spectrum of agentic behaviors, the motivation for learning all aspects of agentic system design, the key components of the ADAS approach, and how it uses LLMs to design novel agent architectures in code. We also cover the iterative process of ADAS, its potential to shed light on the behavior of foundation models, the higher-level meta-behaviors that emerge in agentic systems, and how ADAS uncovers novel design patterns through emergent behaviors, particularly in complex tasks like the ARC challenge. Finally, we touch on the practical applications of ADAS and its potential use in system optimization for real-world tasks.

The complete show notes for this episode can be found at https://twimlai.com/go/700.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Shengran Hu, a PhD student at the University of British Columbia, to discuss <a href="https://arxiv.org/abs/2408.08435">Automated Design of Agentic Systems (ADAS)</a>, an approach focused on automatically creating agentic system designs. We explore the spectrum of agentic behaviors, the motivation for learning all aspects of agentic system design, the key components of the ADAS approach, and how it uses LLMs to design novel agent architectures in code. We also cover the iterative process of ADAS, its potential to shed light on the behavior of foundation models, the higher-level meta-behaviors that emerge in agentic systems, and how ADAS uncovers novel design patterns through emergent behaviors, particularly in complex tasks like the ARC challenge. Finally, we touch on the practical applications of ADAS and its potential use in system optimization for real-world tasks.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/700">https://twimlai.com/go/700</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3570</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[008aea54-6a2b-11ef-b146-4b66c854276c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6045931233.mp3?updated=1725394223"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The EU AI Act and Mitigating Bias in Automated Decisioning with Peter van der Putten - #699</title>
      <link>https://twimlai.com/podcast/twimlai/the-eu-ai-act-and-mitigating-bias-in-automated-decisioning/</link>
      <description>Today, we're joined by Peter van der Putten, director of the AI Lab at Pega and assistant professor of AI at Leiden University. We discuss the newly adopted European AI Act and the challenges of applying academic fairness metrics in real-world AI applications. We dig into the key ethical principles behind the Act, its broad definition of AI, and how it categorizes various AI risks. We also discuss the practical challenges of implementing fairness and bias metrics in real-world scenarios, and the importance of a risk-based approach in regulating AI systems. Finally, we cover how the EU AI Act might influence global practices, similar to the GDPR's effect on data privacy, and explore strategies for closing bias gaps in real-world automated decision-making.

The complete show notes for this episode can be found at https://twimlai.com/go/699.</description>
      <pubDate>Tue, 27 Aug 2024 00:22:50 -0000</pubDate>
      <itunes:title>The EU AI Act and Mitigating Bias in Automated Decisioning with Peter van der Putten</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>699</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6b0e1d62-63cb-11ef-8767-c7ba9752b5ed/image/7814f7740b2ce37b021ba763e3427a57.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Peter van der Putten, director of the AI Lab at Pega and assistant professor of AI at Leiden University. We discuss the newly adopted European AI Act and the challenges of applying academic fairness metrics in real-world AI applications. We dig into the key ethical principles behind the Act, its broad definition of AI, and how it categorizes various AI risks. We also discuss the practical challenges of implementing fairness and bias metrics in real-world scenarios, and the importance of a risk-based approach in regulating AI systems. Finally, we cover how the EU AI Act might influence global practices, similar to the GDPR's effect on data privacy, and explore strategies for closing bias gaps in real-world automated decision-making.

The complete show notes for this episode can be found at https://twimlai.com/go/699.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Peter van der Putten, director of the AI Lab at Pega and assistant professor of AI at Leiden University. We discuss the newly adopted European AI Act and the challenges of applying academic fairness metrics in real-world AI applications. We dig into the key ethical principles behind the Act, its broad definition of AI, and how it categorizes various AI risks. We also discuss the practical challenges of implementing fairness and bias metrics in real-world scenarios, and the importance of a risk-based approach in regulating AI systems. Finally, we cover how the EU AI Act might influence global practices, similar to the GDPR's effect on data privacy, and explore strategies for closing bias gaps in real-world automated decision-making.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/699">https://twimlai.com/go/699</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2734</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6b0e1d62-63cb-11ef-8767-c7ba9752b5ed]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9777212790.mp3?updated=1724695240"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Building Blocks of Agentic Systems with Harrison Chase - #698</title>
      <description>Today, we're joined by Harrison Chase, co-founder and CEO of LangChain, to discuss LLM frameworks, agentic systems, RAG, evaluation, and more. We dig into the elements of a modern LLM framework, including the most productive developer experiences and appropriate levels of abstraction. We dive into agents and agentic systems as well, covering the “spectrum of agenticness,” cognitive architectures, and real-world applications. We explore key challenges in deploying agentic systems, and the importance of agentic architectures as a means of communication in system design and operation. Additionally, we review evolving use cases for RAG, and the role of observability, testing, and evaluation tools in moving LLM applications from prototype to production. Lastly, Harrison shares his hot takes on prompting, multi-modal models, and more!

The complete show notes for this episode can be found at https://twimlai.com/go/698.</description>
      <pubDate>Mon, 19 Aug 2024 19:54:00 -0000</pubDate>
      <itunes:title>The Building Blocks of Agentic Systems with Harrison Chase</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>698</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/162e2ec8-5e61-11ef-bf7c-8fa3a5707ee5/image/12528602c629dcbce637cc94e3bfcb3f.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Harrison Chase, co-founder and CEO of LangChain, to discuss LLM frameworks, agentic systems, RAG, evaluation, and more. We dig into the elements of a modern LLM framework, including the most productive developer experiences and appropriate levels of abstraction. We dive into agents and agentic systems as well, covering the “spectrum of agenticness,” cognitive architectures, and real-world applications. We explore key challenges in deploying agentic systems, and the importance of agentic architectures as a means of communication in system design and operation. Additionally, we review evolving use cases for RAG, and the role of observability, testing, and evaluation tools in moving LLM applications from prototype to production. Lastly, Harrison shares his hot takes on prompting, multi-modal models, and more!

The complete show notes for this episode can be found at https://twimlai.com/go/698.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Harrison Chase, co-founder and CEO of LangChain, to discuss LLM frameworks, agentic systems, RAG, evaluation, and more. We dig into the elements of a modern LLM framework, including the most productive developer experiences and appropriate levels of abstraction. We dive into agents and agentic systems as well, covering the “spectrum of agenticness,” cognitive architectures, and real-world applications. We explore key challenges in deploying agentic systems, and the importance of agentic architectures as a means of communication in system design and operation. Additionally, we review evolving use cases for RAG, and the role of observability, testing, and evaluation tools in moving LLM applications from prototype to production. Lastly, Harrison shares his hot takes on prompting, multi-modal models, and more!</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/698">https://twimlai.com/go/698</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3557</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[162e2ec8-5e61-11ef-bf7c-8fa3a5707ee5]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7661308388.mp3?updated=1724098102"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Simplifying On-Device AI for Developers with Siddhika Nevrekar - #697</title>
      <link>https://twimlai.com/podcast/twimlai/simplifying-on-device-ai-for-developers/</link>
      <description>Today, we're joined by Siddhika Nevrekar, AI Hub head at Qualcomm Technologies, to discuss on-device AI and how to make it easier for developers to take advantage of device capabilities. We unpack the motivations for AI engineers to move model inference from the cloud to local devices, and explore the challenges associated with on-device AI. We dig into the role of hardware solutions, from powerful systems-on-chip (SoCs) to neural processors, the importance of collaboration between community runtimes like ONNX and TFLite and chip manufacturers, the unique challenges of IoT and autonomous vehicles, and the key metrics developers should focus on to ensure optimal on-device performance. Finally, Siddhika introduces Qualcomm's AI Hub, a platform developed to simplify the process of testing and optimizing AI models across different devices.

The complete show notes for this episode can be found at https://twimlai.com/go/697.</description>
      <pubDate>Mon, 12 Aug 2024 18:07:00 -0000</pubDate>
      <itunes:title>Simplifying On-Device AI for Developers with Siddhika Nevrekar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>697</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/a6197916-58d0-11ef-849e-c3a490092203/image/6171ebe5019261617ec8188a7db770de.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Siddhika Nevrekar, AI Hub head at Qualcomm Technologies, to discuss on-device AI and how to make it easier for developers to take advantage of device capabilities. We unpack the motivations for AI engineers to move model inference from the cloud to local devices, and explore the challenges associated with on-device AI. We dig into the role of hardware solutions, from powerful systems-on-chip (SoCs) to neural processors, the importance of collaboration between community runtimes like ONNX and TFLite and chip manufacturers, the unique challenges of IoT and autonomous vehicles, and the key metrics developers should focus on to ensure optimal on-device performance. Finally, Siddhika introduces Qualcomm's AI Hub, a platform developed to simplify the process of testing and optimizing AI models across different devices.

The complete show notes for this episode can be found at https://twimlai.com/go/697.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Siddhika Nevrekar, AI Hub head at Qualcomm Technologies, to discuss on-device AI and how to make it easier for developers to take advantage of device capabilities. We unpack the motivations for AI engineers to move model inference from the cloud to local devices, and explore the challenges associated with on-device AI. We dig into the role of hardware solutions, from powerful systems-on-chip (SoCs) to neural processors, the importance of collaboration between community runtimes like ONNX and TFLite and chip manufacturers, the unique challenges of IoT and autonomous vehicles, and the key metrics developers should focus on to ensure optimal on-device performance. Finally, Siddhika introduces Qualcomm's AI Hub, a platform developed to simplify the process of testing and optimizing AI models across different devices.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/697">https://twimlai.com/go/697</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2797</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a6197916-58d0-11ef-849e-c3a490092203]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7180381574.mp3?updated=1723486873"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Genie: Generative Interactive Environments with Ashley Edwards - #696</title>
      <link>https://twimlai.com/podcast/twimlai/genie-generative-interactive-environments/</link>
      <description>Today, we're joined by Ashley Edwards, a member of technical staff at Runway, to discuss Genie: Generative Interactive Environments, a system for creating ‘playable’ video environments for training deep reinforcement learning (RL) agents at scale in a completely unsupervised manner. We explore the motivations behind Genie, the challenges of data acquisition for RL, and Genie’s capability to learn world models from videos without explicit action data, enabling seamless interaction and frame prediction. Ashley walks us through Genie’s core components—the latent action model, video tokenizer, and dynamics model—and explains how these elements collaborate to predict future frames in video sequences. We discuss the model architecture, training strategies, benchmarks used, as well as the application of spatiotemporal transformers and the MaskGIT techniques used for efficient token prediction and representation. Finally, we touch on Genie’s practical implications, its comparison to other video generation models like “Sora,” and potential future directions in video generation and diffusion models.

The complete show notes for this episode can be found at https://twimlai.com/go/696.</description>
      <pubDate>Mon, 05 Aug 2024 17:14:00 -0000</pubDate>
      <itunes:title>Genie: Generative Interactive Environments with Ashley Edwards</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>696</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/2ca627d4-534c-11ef-b45e-bb3dbb8e5a71/image/498237d45bd955df3934b28e11b9fd71.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Ashley Edwards, a member of technical staff at Runway, to discuss Genie: Generative Interactive Environments, a system for creating ‘playable’ video environments for training deep reinforcement learning (RL) agents at scale in a completely unsupervised manner. We explore the motivations behind Genie, the challenges of data acquisition for RL, and Genie’s capability to learn world models from videos without explicit action data, enabling seamless interaction and frame prediction. Ashley walks us through Genie’s core components—the latent action model, video tokenizer, and dynamics model—and explains how these elements collaborate to predict future frames in video sequences. We discuss the model architecture, training strategies, benchmarks used, as well as the application of spatiotemporal transformers and the MaskGIT techniques used for efficient token prediction and representation. Finally, we touch on Genie’s practical implications, its comparison to other video generation models like “Sora,” and potential future directions in video generation and diffusion models.

The complete show notes for this episode can be found at https://twimlai.com/go/696.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Ashley Edwards, a member of technical staff at Runway, to discuss <a href="https://arxiv.org/abs/2402.15391">Genie: Generative Interactive Environments</a>, a system for creating ‘playable’ video environments for training deep reinforcement learning (RL) agents at scale in a completely unsupervised manner. We explore the motivations behind Genie, the challenges of data acquisition for RL, and Genie’s capability to learn world models from videos without explicit action data, enabling seamless interaction and frame prediction. Ashley walks us through Genie’s core components—the latent action model, video tokenizer, and dynamics model—and explains how these elements collaborate to predict future frames in video sequences. We discuss the model architecture, training strategies, benchmarks used, as well as the application of spatiotemporal transformers and the MaskGIT techniques used for efficient token prediction and representation. Finally, we touch on Genie’s practical implications, its comparison to other video generation models like “Sora,” and potential future directions in video generation and diffusion models.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/696">https://twimlai.com/go/696</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2811</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2ca627d4-534c-11ef-b45e-bb3dbb8e5a71]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9110516542.mp3?updated=1722879160"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Bridging the Sim2real Gap in Robotics with Marius Memmel - #695</title>
      <link>https://twimlai.com/podcast/twimlai/bridging-the-sim2real-gap-in-robotics/</link>
      <description>Today, we're joined by Marius Memmel, a PhD student at the University of Washington, to discuss his research on sim-to-real transfer approaches for developing autonomous robotic agents in unstructured environments. Our conversation focuses on his recent ASID and URDFormer papers. We explore the complexities presented by real-world settings like a cluttered kitchen, data acquisition challenges for training robust models, the importance of simulation, and the challenge of bridging the sim2real gap in robotics. Marius introduces ASID, a framework designed to enable robots to autonomously generate and refine simulation models to improve sim-to-real transfer. We discuss the role of Fisher information as a metric for trajectory sensitivity to physical parameters and the importance of exploration and exploitation phases in robot learning. Additionally, we cover URDFormer, a transformer-based model that generates URDF documents for scene and object reconstruction to create realistic simulation environments.

The complete show notes for this episode can be found at https://twimlai.com/go/695.</description>
      <pubDate>Tue, 30 Jul 2024 18:11:00 -0000</pubDate>
      <itunes:title>Bridging the Sim2real Gap in Robotics with Marius Memmel</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>695</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/127a64b4-4e98-11ef-9e7d-2fe7cb569a82/image/a54428e211e1e9f6c2b0a04de9a65ca0.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Marius Memmel, a PhD student at the University of Washington, to discuss his research on sim-to-real transfer approaches for developing autonomous robotic agents in unstructured environments. Our conversation focuses on his recent ASID and URDFormer papers. We explore the complexities presented by real-world settings like a cluttered kitchen, data acquisition challenges for training robust models, the importance of simulation, and the challenge of bridging the sim2real gap in robotics. Marius introduces ASID, a framework designed to enable robots to autonomously generate and refine simulation models to improve sim-to-real transfer. We discuss the role of Fisher information as a metric for trajectory sensitivity to physical parameters and the importance of exploration and exploitation phases in robot learning. Additionally, we cover URDFormer, a transformer-based model that generates URDF documents for scene and object reconstruction to create realistic simulation environments.

The complete show notes for this episode can be found at https://twimlai.com/go/695.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Marius Memmel, a PhD student at the University of Washington, to discuss his research on sim-to-real transfer approaches for developing autonomous robotic agents in unstructured environments. Our conversation focuses on his recent <a href="https://arxiv.org/abs/2404.12308">ASID</a> and <a href="https://arxiv.org/abs/2405.11656">URDFormer</a> papers. We explore the complexities presented by real-world settings like a cluttered kitchen, data acquisition challenges for training robust models, the importance of simulation, and the challenge of bridging the <em>sim2real</em> gap in robotics. Marius introduces ASID, a framework designed to enable robots to autonomously generate and refine simulation models to improve sim-to-real transfer. We discuss the role of Fisher information as a metric for trajectory sensitivity to physical parameters and the importance of exploration and exploitation phases in robot learning. Additionally, we cover URDFormer, a transformer-based model that generates URDF documents for scene and object reconstruction to create realistic simulation environments.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/695">https://twimlai.com/go/695</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3441</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[127a64b4-4e98-11ef-9e7d-2fe7cb569a82]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8822996431.mp3?updated=1722363990"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building Real-World LLM Products with Fine-Tuning and More with Hamel Husain - #694</title>
      <link>https://twimlai.com/podcast/twimlai/building-real-world-llm-products-with-fine-tuning-and-more/</link>
      <description>Today, we're joined by Hamel Husain, founder of Parlance Labs, to discuss the ins and outs of building real-world products using large language models (LLMs). We kick things off discussing novel applications of LLMs and how to think about modern AI user experiences. We then dig into the key challenge faced by LLM developers—how to iterate from a snazzy demo or proof-of-concept to a working LLM-based application. We discuss the pros, cons, and role of fine-tuning LLMs and dig into when to use this technique. We cover the fine-tuning process, common pitfalls in evaluation (such as relying too heavily on generic tools and missing the nuances of specific use cases), open-source LLM fine-tuning tools like Axolotl, the use of LoRA adapters, and more. Hamel also shares insights on model optimization and inference frameworks and how developers should approach these tools. Finally, we dig into how to use systematic evaluation techniques to guide the improvement of your LLM application, the importance of data generation and curation, and the parallels to traditional software engineering practices.

The complete show notes for this episode can be found at https://twimlai.com/go/694.</description>
      <pubDate>Tue, 23 Jul 2024 21:02:00 -0000</pubDate>
      <itunes:title>Building Real-World LLM Products with Fine-Tuning and More with Hamel Husain</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>694</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/22148926-4935-11ef-a53a-17eb44d86b5f/image/053da7b53e0ce958778d7e4002bffc98.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Hamel Husain, founder of Parlance Labs, to discuss the ins and outs of building real-world products using large language models (LLMs). We kick things off discussing novel applications of LLMs and how to think about modern AI user experiences. We then dig into the key challenge faced by LLM developers—how to iterate from a snazzy demo or proof-of-concept to a working LLM-based application. We discuss the pros, cons, and role of fine-tuning LLMs and dig into when to use this technique. We cover the fine-tuning process, common pitfalls in evaluation (such as relying too heavily on generic tools and missing the nuances of specific use cases), open-source LLM fine-tuning tools like Axolotl, the use of LoRA adapters, and more. Hamel also shares insights on model optimization and inference frameworks and how developers should approach these tools. Finally, we dig into how to use systematic evaluation techniques to guide the improvement of your LLM application, the importance of data generation and curation, and the parallels to traditional software engineering practices.

The complete show notes for this episode can be found at https://twimlai.com/go/694.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Hamel Husain, founder of Parlance Labs, to discuss the ins and outs of building real-world products using large language models (LLMs). We kick things off discussing novel applications of LLMs and how to think about modern AI user experiences. We then dig into the key challenge faced by LLM developers—how to iterate from a snazzy demo or proof-of-concept to a working LLM-based application. We discuss the pros, cons, and role of fine-tuning LLMs and dig into when to use this technique. We cover the fine-tuning process, common pitfalls in evaluation (such as relying too heavily on generic tools and missing the nuances of specific use cases), open-source LLM fine-tuning tools like Axolotl, the use of LoRA adapters, and more. Hamel also shares insights on model optimization and inference frameworks and how developers should approach these tools. Finally, we dig into how to use systematic evaluation techniques to guide the improvement of your LLM application, the importance of data generation and curation, and the parallels to traditional software engineering practices.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/694">https://twimlai.com/go/694</a>.</p>]]>
      </content:encoded>
      <itunes:duration>4805</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[22148926-4935-11ef-a53a-17eb44d86b5f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5252789067.mp3?updated=1721769728"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Mamba, Mamba-2 and Post-Transformer Architectures for Generative AI with Albert Gu - #693</title>
      <link>https://twimlai.com/podcast/twimlai/mamba-mamba-2-and-post-transformer-architectures-for-generative-ai/</link>
      <description>Today, we're joined by Albert Gu, assistant professor at Carnegie Mellon University, to discuss his research on post-transformer architectures for multi-modal foundation models, with a focus on state-space models in general and Albert’s recent Mamba and Mamba-2 papers in particular. We dig into the efficiency of the attention mechanism and its limitations in handling high-resolution perceptual modalities, and the strengths and weaknesses of transformer architectures relative to alternatives for various tasks. We also examine the role of tokenization and patching in transformer pipelines, emphasizing how abstraction and semantic relationships between tokens underpin the model's effectiveness, and explore how this relates to the debate over handcrafted pipelines versus end-to-end architectures in machine learning. Additionally, we touch on the evolving landscape of hybrid models that incorporate elements of attention and state, the significance of state update mechanisms in model adaptability and learning efficiency, and the contribution and adoption of state-space models like Mamba and Mamba-2 in academia and industry. Lastly, Albert shares his vision for advancing foundation models across diverse modalities and applications.

The complete show notes for this episode can be found at https://twimlai.com/go/693.</description>
      <pubDate>Wed, 17 Jul 2024 10:27:00 -0000</pubDate>
      <itunes:title>Mamba, Mamba-2 and Post-Transformer Architectures for Generative AI with Albert Gu</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>693</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/03d385c8-439c-11ef-875f-d7608101a7e6/image/7731e3d48caa0b7e00844f57fd9b6d63.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Albert Gu, assistant professor at Carnegie Mellon University, to discuss his research on post-transformer architectures for multi-modal foundation models, with a focus on state-space models in general and Albert’s recent Mamba and Mamba-2 papers in particular. We dig into the efficiency of the attention mechanism and its limitations in handling high-resolution perceptual modalities, and the strengths and weaknesses of transformer architectures relative to alternatives for various tasks. We also examine the role of tokenization and patching in transformer pipelines, emphasizing how abstraction and semantic relationships between tokens underpin the model's effectiveness, and explore how this relates to the debate over handcrafted pipelines versus end-to-end architectures in machine learning. Additionally, we touch on the evolving landscape of hybrid models that incorporate elements of attention and state, the significance of state update mechanisms in model adaptability and learning efficiency, and the contribution and adoption of state-space models like Mamba and Mamba-2 in academia and industry. Lastly, Albert shares his vision for advancing foundation models across diverse modalities and applications.

The complete show notes for this episode can be found at https://twimlai.com/go/693.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Albert Gu, assistant professor at Carnegie Mellon University, to discuss his research on post-transformer architectures for multi-modal foundation models, with a focus on state-space models in general and Albert’s recent <a href="https://arxiv.org/abs/2312.00752">Mamba</a> and <a href="https://arxiv.org/abs/2405.21060">Mamba-2</a> papers in particular. We dig into the efficiency of the attention mechanism and its limitations in handling high-resolution perceptual modalities, and the strengths and weaknesses of transformer architectures relative to alternatives for various tasks. We also examine the role of tokenization and patching in transformer pipelines, emphasizing how abstraction and semantic relationships between tokens underpin the model's effectiveness, and explore how this relates to the debate over handcrafted pipelines versus end-to-end architectures in machine learning. Additionally, we touch on the evolving landscape of hybrid models that incorporate elements of attention and state, the significance of state update mechanisms in model adaptability and learning efficiency, and the contribution and adoption of state-space models like Mamba and Mamba-2 in academia and industry. Lastly, Albert shares his vision for advancing foundation models across diverse modalities and applications.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/693">https://twimlai.com/go/693</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3474</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[03d385c8-439c-11ef-875f-d7608101a7e6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7483766778.mp3?updated=1721154194"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Decoding Animal Behavior to Train Robots with EgoPet with Amir Bar - #692</title>
      <link>https://twimlai.com/podcast/twimlai/decoding-animal-behavior-to-train-robots-with-egopet/</link>
      <description>Today, we're joined by Amir Bar, a PhD candidate at Tel Aviv University and UC Berkeley, to discuss his research on visual-based learning, including his recent paper, “EgoPet: Egomotion and Interaction Data from an Animal’s Perspective.” Amir shares his research projects focused on self-supervised object detection and analogy reasoning for general computer vision tasks. We also discuss the current limitations of caption-based datasets in model training, the ‘learning problem’ in robotics, and the gap between the capabilities of animals and AI systems. Amir introduces ‘EgoPet,’ a dataset and benchmark tasks that allow motion and interaction data from an animal's perspective to be incorporated into machine learning models for robotic planning and proprioception. We explore the dataset collection process, comparisons with existing datasets and benchmark tasks, the findings on the performance of models trained on EgoPet, and the potential of directly training robot policies that mimic animal behavior.

The complete show notes for this episode can be found at https://twimlai.com/go/692.</description>
      <pubDate>Tue, 09 Jul 2024 14:00:00 -0000</pubDate>
      <itunes:title>Decoding Animal Behavior to Train Robots with EgoPet with Amir Bar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>692</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/61173786-3d69-11ef-8758-e7ac9a0ec8cb/image/43840ee268033cc41eea85a05616fd2e.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Amir Bar, a PhD candidate at Tel Aviv University and UC Berkeley, to discuss his research on visual-based learning, including his recent paper, “EgoPet: Egomotion and Interaction Data from an Animal’s Perspective.” Amir shares his research projects focused on self-supervised object detection and analogy reasoning for general computer vision tasks. We also discuss the current limitations of caption-based datasets in model training, the ‘learning problem’ in robotics, and the gap between the capabilities of animals and AI systems. Amir introduces ‘EgoPet,’ a dataset and benchmark tasks that allow motion and interaction data from an animal's perspective to be incorporated into machine learning models for robotic planning and proprioception. We explore the dataset collection process, comparisons with existing datasets and benchmark tasks, the findings on the performance of models trained on EgoPet, and the potential of directly training robot policies that mimic animal behavior.

The complete show notes for this episode can be found at https://twimlai.com/go/692.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Amir Bar, a PhD candidate at Tel Aviv University and UC Berkeley, to discuss his research on visual-based learning, including his recent paper, “<a href="https://arxiv.org/pdf/2404.09991">EgoPet: Egomotion and Interaction Data from an Animal’s Perspective</a>.” Amir shares his research projects focused on self-supervised object detection and analogy reasoning for general computer vision tasks. We also discuss the current limitations of caption-based datasets in model training, the ‘learning problem’ in robotics, and the gap between the capabilities of animals and AI systems. Amir introduces ‘EgoPet,’ a dataset and benchmark tasks that allow motion and interaction data from an animal's perspective to be incorporated into machine learning models for robotic planning and proprioception. We explore the dataset collection process, comparisons with existing datasets and benchmark tasks, the findings on the performance of models trained on EgoPet, and the potential of directly training robot policies that mimic animal behavior.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/692">https://twimlai.com/go/692</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2596</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[61173786-3d69-11ef-8758-e7ac9a0ec8cb]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3784001197.mp3?updated=1720473918"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>How Microsoft Scales Testing and Safety for Generative AI with Sarah Bird - #691</title>
      <link>https://twimlai.com/podcast/twimlai/how-microsoft-scales-testing-and-safety-for-generative-ai/</link>
      <description>Today, we're joined by Sarah Bird, chief product officer of responsible AI at Microsoft. We discuss the testing and evaluation techniques Microsoft applies to ensure safe deployment and use of generative AI, large language models, and image generation. In our conversation, we explore the unique risks and challenges presented by generative AI, the balance between fairness and security concerns, the application of adaptive and layered defense strategies for rapid response to unforeseen AI behaviors, the importance of automated AI safety testing and evaluation alongside human judgment, and the implementation of red teaming and governance. Sarah also shares learnings from Microsoft's ‘Tay’ and ‘Bing Chat’ incidents along with her thoughts on the rapidly evolving GenAI landscape.

The complete show notes for this episode can be found at https://twimlai.com/go/691.</description>
      <pubDate>Mon, 01 Jul 2024 16:23:22 -0000</pubDate>
      <itunes:title>How Microsoft Scales Testing and Safety for Generative AI with Sarah Bird</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>691</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/95ab2718-37c0-11ef-af82-c70430fbb683/image/ac17bb40dd53e156bf2d282957eac69e.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Sarah Bird, chief product officer of responsible AI at Microsoft. We discuss the testing and evaluation techniques Microsoft applies to ensure safe deployment and use of generative AI, large language models, and image generation. In our conversation, we explore the unique risks and challenges presented by generative AI, the balance between fairness and security concerns, the application of adaptive and layered defense strategies for rapid response to unforeseen AI behaviors, the importance of automated AI safety testing and evaluation alongside human judgment, and the implementation of red teaming and governance. Sarah also shares learnings from Microsoft's ‘Tay’ and ‘Bing Chat’ incidents along with her thoughts on the rapidly evolving GenAI landscape.

The complete show notes for this episode can be found at https://twimlai.com/go/691.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Sarah Bird, chief product officer of responsible AI at Microsoft. We discuss the testing and evaluation techniques Microsoft applies to ensure safe deployment and use of generative AI, large language models, and image generation. In our conversation, we explore the unique risks and challenges presented by generative AI, the balance between fairness and security concerns, the application of adaptive and layered defense strategies for rapid response to unforeseen AI behaviors, the importance of automated AI safety testing and evaluation alongside human judgment, and the implementation of red teaming and governance. Sarah also shares learnings from Microsoft's ‘Tay’ and ‘Bing Chat’ incidents along with her thoughts on the rapidly evolving GenAI landscape.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/691">https://twimlai.com/go/691</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3432</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[95ab2718-37c0-11ef-af82-c70430fbb683]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3542757284.mp3?updated=1719851928"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Long Context Language Models and their Biological Applications with Eric Nguyen - #690</title>
      <link>https://twimlai.com/podcast/twimlai/long-context-language-models-and-their-biological-applications/</link>
      <description>Today, we're joined by Eric Nguyen, a PhD student at Stanford University. In our conversation, we explore his research on long context foundation models and their application to biology, particularly Hyena and its evolution into the Hyena DNA and Evo models. We discuss Hyena, a convolutional-based language model developed to tackle the challenges posed by long context lengths in language modeling. We dig into the limitations of transformers in dealing with longer sequences, the motivation for using convolutional models over transformers, the model's training and architecture, the role of the FFT in computational optimizations, and model explainability in long-sequence convolutions. We also discuss Hyena DNA, a genomic foundation model pre-trained with context lengths of up to 1 million tokens, designed to capture long-range dependencies in DNA sequences. Finally, Eric introduces Evo, a 7 billion parameter hybrid model integrating attention layers with Hyena DNA's convolutional framework. We cover generating and designing DNA with language models, hallucinations in DNA models, evaluation benchmarks, the trade-offs between state-of-the-art models, zero-shot versus few-shot performance, and the exciting potential in areas like CRISPR-Cas gene editing.

The complete show notes for this episode can be found at https://twimlai.com/go/690.</description>
      <pubDate>Tue, 25 Jun 2024 18:54:00 -0000</pubDate>
      <itunes:title>Long Context Language Models and their Biological Applications with Eric Nguyen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>690</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/79dd630e-3322-11ef-8dca-a709e63b1ef1/image/458d0b738da272860bac4ae20bfe85c2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Eric Nguyen, a PhD student at Stanford University. In our conversation, we explore his research on long context foundation models and their application to biology, particularly Hyena and its evolution into the Hyena DNA and Evo models. We discuss Hyena, a convolutional-based language model developed to tackle the challenges posed by long context lengths in language modeling. We dig into the limitations of transformers in dealing with longer sequences, the motivation for using convolutional models over transformers, the model's training and architecture, the role of the FFT in computational optimizations, and model explainability in long-sequence convolutions. We also discuss Hyena DNA, a genomic foundation model pre-trained with context lengths of up to 1 million tokens, designed to capture long-range dependencies in DNA sequences. Finally, Eric introduces Evo, a 7 billion parameter hybrid model integrating attention layers with Hyena DNA's convolutional framework. We cover generating and designing DNA with language models, hallucinations in DNA models, evaluation benchmarks, the trade-offs between state-of-the-art models, zero-shot versus few-shot performance, and the exciting potential in areas like CRISPR-Cas gene editing.

The complete show notes for this episode can be found at https://twimlai.com/go/690.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Eric Nguyen, a PhD student at Stanford University. In our conversation, we explore his research on long context foundation models and their application to biology, particularly <a href="https://hazyresearch.stanford.edu/blog/2023-03-07-hyena">Hyena</a> and its evolution into the <a href="https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna">Hyena DNA</a> and <a href="https://arcinstitute.org/news/blog/evo">Evo</a> models. We discuss Hyena, a convolutional-based language model developed to tackle the challenges posed by long context lengths in language modeling. We dig into the limitations of transformers in dealing with longer sequences, the motivation for using convolutional models over transformers, the model's training and architecture, the role of the FFT in computational optimizations, and model explainability in long-sequence convolutions. We also discuss Hyena DNA, a genomic foundation model pre-trained with context lengths of up to 1 million tokens, designed to capture long-range dependencies in DNA sequences. Finally, Eric introduces Evo, a 7 billion parameter hybrid model integrating attention layers with Hyena DNA's convolutional framework. We cover generating and designing DNA with language models, hallucinations in DNA models, evaluation benchmarks, the trade-offs between state-of-the-art models, zero-shot versus few-shot performance, and the exciting potential in areas like CRISPR-Cas gene editing.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/690">https://twimlai.com/go/690</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2741</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[79dd630e-3322-11ef-8dca-a709e63b1ef1]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4558723289.mp3?updated=1719342347"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Accelerating Sustainability with AI with Andres Ravinet - #689</title>
      <link>https://twimlai.com/podcast/twimlai/accelerating-sustainability-with-ai/</link>
      <description>Today, we're joined by Andres Ravinet, sustainability global black belt at Microsoft, to discuss the role of AI in sustainability. We explore real-world use cases where AI-driven solutions are leveraged to help tackle environmental and societal challenges, from early warning systems for extreme weather events to reducing food waste along the supply chain to conserving the Amazon rainforest. We cover the major threats that sustainability aims to address, the complexities in standardized sustainability compliance reporting, and the factors driving businesses to take a step toward sustainable practices. Lastly, Andres addresses the ways LLMs and generative AI can be applied towards the challenges of sustainability.

The complete show notes for this episode can be found at https://twimlai.com/go/689.</description>
      <pubDate>Tue, 18 Jun 2024 15:49:00 -0000</pubDate>
      <itunes:title>Accelerating Sustainability with AI with Andres Ravinet</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>689</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ca24c8ee-2ce6-11ef-9e64-e32db5b93fe2/image/d312c956ba9bbe44195975be8c39748f.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Andres Ravinet, sustainability global black belt at Microsoft, to discuss the role of AI in sustainability. We explore real-world use cases where AI-driven solutions are leveraged to help tackle environmental and societal challenges, from early warning systems for extreme weather events to reducing food waste along the supply chain to conserving the Amazon rainforest. We cover the major threats that sustainability aims to address, the complexities in standardized sustainability compliance reporting, and the factors driving businesses to take a step toward sustainable practices. Lastly, Andres addresses the ways LLMs and generative AI can be applied towards the challenges of sustainability.

The complete show notes for this episode can be found at https://twimlai.com/go/689.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Andres Ravinet, sustainability global black belt at Microsoft, to discuss the role of AI in sustainability. We explore real-world use cases where AI-driven solutions are leveraged to help tackle environmental and societal challenges, from early warning systems for extreme weather events to reducing food waste along the supply chain to conserving the Amazon rainforest. We cover the major threats that sustainability aims to address, the complexities in standardized sustainability compliance reporting, and the factors driving businesses to take a step toward sustainable practices. Lastly, Andres addresses the ways LLMs and generative AI can be applied towards the challenges of sustainability.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/689">https://twimlai.com/go/689</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2866</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ca24c8ee-2ce6-11ef-9e64-e32db5b93fe2]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5510514731.mp3?updated=1718655822"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Gen AI at the Edge: Qualcomm AI Research at CVPR 2024 with Fatih Porikli - #688</title>
      <link>https://twimlai.com/podcast/twimlai/gen-ai-at-the-edge-qualcomm-ai-research-at-cvpr-2024/</link>
      <description>Today we’re joined by Fatih Porikli, senior director of technology at Qualcomm AI Research. In our conversation, we cover several of the Qualcomm team’s 16 accepted main track and workshop papers at this year’s CVPR conference. The papers span a variety of generative AI and traditional computer vision topics, with an emphasis on increased training and inference efficiency for mobile and edge deployment. We explore efficient diffusion models for text-to-image generation, grounded reasoning in videos using language models, real-time on-device 360° image generation for video portrait relighting, a unique video-language model for situated interactions like fitness coaching, a visual reasoning model and benchmark for interpreting complex mathematical plots, and more! We also touch on several of the demos the team will be presenting at the conference, including multi-modal vision-language models (LLaVA) and parameter-efficient fine-tuning (LoRA) on mobile phones.

The complete show notes for this episode can be found at https://twimlai.com/go/688.</description>
      <pubDate>Mon, 10 Jun 2024 22:25:00 -0000</pubDate>
      <itunes:title>Gen AI at the Edge: Qualcomm AI Research at CVPR 2024 with Fatih Porikli</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>688</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7f964f46-2773-11ef-b6fc-372167d98b47/image/b329bb6d4758a15ee182aa30aba295f0.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Fatih Porikli, senior director of technology at Qualcomm AI Research. In our conversation, we cover several of the Qualcomm team’s 16 accepted main track and workshop papers at this year’s CVPR conference. The papers span a variety of generative AI and traditional computer vision topics, with an emphasis on increased training and inference efficiency for mobile and edge deployment. We explore efficient diffusion models for text-to-image generation, grounded reasoning in videos using language models, real-time on-device 360° image generation for video portrait relighting, a unique video-language model for situated interactions like fitness coaching, a visual reasoning model and benchmark for interpreting complex mathematical plots, and more! We also touch on several of the demos the team will be presenting at the conference, including multi-modal vision-language models (LLaVA) and parameter-efficient fine-tuning (LoRA) on mobile phones.

The complete show notes for this episode can be found at https://twimlai.com/go/688.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Fatih Porikli, senior director of technology at Qualcomm AI Research. In our conversation, we cover several of the Qualcomm team’s 16 accepted main track and workshop papers at this year’s CVPR conference. The papers span a variety of generative AI and traditional computer vision topics, with an emphasis on increased training and inference efficiency for mobile and edge deployment. We explore efficient diffusion models for text-to-image generation, grounded reasoning in videos using language models, real-time on-device 360° image generation for video portrait relighting, a unique video-language model for situated interactions like fitness coaching, a visual reasoning model and benchmark for interpreting complex mathematical plots, and more! We also touch on several of the demos the team will be presenting at the conference, including multi-modal vision-language models (LLaVA) and parameter-efficient fine-tuning (LoRA) on mobile phones.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/688">https://twimlai.com/go/688</a>.</p>]]>
      </content:encoded>
      <itunes:duration>4241</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[7f964f46-2773-11ef-b6fc-372167d98b47]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9153067232.mp3?updated=1718056549"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Energy Star Ratings for AI Models with Sasha Luccioni - #687 </title>
      <link>https://twimlai.com/podcast/twimlai/energy-star-ratings-for-ai-models/</link>
      <description>Today, we're joined by Sasha Luccioni, AI and Climate lead at Hugging Face, to discuss the environmental impact of AI models. We dig into her recent research into the relative energy consumption of general purpose pre-trained models vs. task-specific, non-generative models for common AI tasks. We discuss the implications of the significant difference in efficiency and power consumption between the two types of models. Finally, we explore the complexities of energy efficiency and performance benchmarking, and talk through Sasha’s recent initiative, Energy Star Ratings for AI Models, a rating system designed to help AI users select and deploy models based on their energy efficiency.

The complete show notes for this episode can be found at http://twimlai.com/go/687.</description>
      <pubDate>Mon, 03 Jun 2024 23:47:00 -0000</pubDate>
      <itunes:title>Energy Star Ratings for AI Models with Sasha Luccioni</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>687</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/f44e2c0c-21f9-11ef-ac1a-3b2162227cac/image/8f0681db28be89a2f9b1e9f0f6f57700.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Sasha Luccioni, AI and Climate lead at Hugging Face, to discuss the environmental impact of AI models. We dig into her recent research into the relative energy consumption of general purpose pre-trained models vs. task-specific, non-generative models for common AI tasks. We discuss the implications of the significant difference in efficiency and power consumption between the two types of models. Finally, we explore the complexities of energy efficiency and performance benchmarking, and talk through Sasha’s recent initiative, Energy Star Ratings for AI Models, a rating system designed to help AI users select and deploy models based on their energy efficiency.

The complete show notes for this episode can be found at http://twimlai.com/go/687.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Sasha Luccioni, AI and Climate lead at Hugging Face, to discuss the environmental impact of AI models. We dig into her recent research into the relative energy consumption of general purpose pre-trained models vs. task-specific, non-generative models for common AI tasks. We discuss the implications of the significant difference in efficiency and power consumption between the two types of models. Finally, we explore the complexities of energy efficiency and performance benchmarking, and talk through Sasha’s recent initiative, <a href="https://huggingface.co/blog/sasha/energy-star-ai-proposal">Energy Star Ratings for AI Models</a>, a rating system designed to help AI users select and deploy models based on their energy efficiency.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="http://twimlai.com/go/687">http://twimlai.com/go/687</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2906</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f44e2c0c-21f9-11ef-ac1a-3b2162227cac]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5181927287.mp3?updated=1717565807"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Language Understanding and LLMs with Christopher Manning - #686</title>
      <link>https://twimlai.com/podcast/twimlai/language-understanding-and-llms/</link>
      <description>Today, we're joined by Christopher Manning, the Thomas M. Siebel Professor in Machine Learning at Stanford University and a recent recipient of the 2024 IEEE John von Neumann Medal. In our conversation with Chris, we discuss his contributions to foundational research areas in NLP, including word embeddings and attention. We explore his perspectives on the intersection of linguistics and large language models, their ability to learn human language structures, and their potential to teach us about human language acquisition. We also dig into the concept of “intelligence” in language models, as well as the reasoning capabilities of LLMs. Finally, Chris shares his current research interests, alternative architectures he anticipates emerging beyond the LLM, and opportunities ahead in AI research.

The complete show notes for this episode can be found at https://twimlai.com/go/686.</description>
      <pubDate>Mon, 27 May 2024 18:53:00 -0000</pubDate>
      <itunes:title>Language Understanding and LLMs with Christopher Manning</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>686</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7083fe8e-1c53-11ef-b216-bb722023938b/image/2169acfce39da16384d677d85c1c98bc.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we're joined by Christopher Manning, the Thomas M. Siebel Professor in Machine Learning at Stanford University and a recent recipient of the 2024 IEEE John von Neumann Medal. In our conversation with Chris, we discuss his contributions to foundational research areas in NLP, including word embeddings and attention. We explore his perspectives on the intersection of linguistics and large language models, their ability to learn human language structures, and their potential to teach us about human language acquisition. We also dig into the concept of “intelligence” in language models, as well as the reasoning capabilities of LLMs. Finally, Chris shares his current research interests, alternative architectures he anticipates emerging beyond the LLM, and opportunities ahead in AI research.

The complete show notes for this episode can be found at https://twimlai.com/go/686.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we're joined by Christopher Manning, the Thomas M. Siebel Professor in Machine Learning at Stanford University and a recent recipient of the 2024 IEEE John von Neumann Medal. In our conversation with Chris, we discuss his contributions to foundational research areas in NLP, including word embeddings and attention. We explore his perspectives on the intersection of linguistics and large language models, their ability to learn human language structures, and their potential to teach us about human language acquisition. We also dig into the concept of “intelligence” in language models, as well as the reasoning capabilities of LLMs. Finally, Chris shares his current research interests, alternative architectures he anticipates emerging beyond the LLM, and opportunities ahead in AI research.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/686">https://twimlai.com/go/686</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3370</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[7083fe8e-1c53-11ef-b216-bb722023938b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7587334756.mp3?updated=1716953241"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Chronos: Learning the Language of Time Series with Abdul Fatir Ansari - #685</title>
      <link>https://twimlai.com/podcast/twimlai/chronos-learning-the-language-of-time-series/</link>
      <description>Today we're joined by Abdul Fatir Ansari, a machine learning scientist at AWS AI Labs in Berlin, to discuss his paper, "Chronos: Learning the Language of Time Series." Fatir explains the challenges of leveraging pre-trained language models for time series forecasting. We explore the advantages of Chronos over statistical models, as well as its promising results in zero-shot forecasting benchmarks. Finally, we address critiques of Chronos, the ongoing research to improve synthetic data quality, and the potential for integrating Chronos into production systems.

The complete show notes for this episode can be found at twimlai.com/go/685.</description>
      <pubDate>Mon, 20 May 2024 17:21:00 -0000</pubDate>
      <itunes:title>Chronos: Learning the Language of Time Series with Abdul Fatir Ansari</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>685</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/19218ecc-16cc-11ef-a05c-b3ea5ac3adcc/image/c5f92c34dc8f37ac960ad5caf0dd9236.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we're joined by Abdul Fatir Ansari, a machine learning scientist at AWS AI Labs in Berlin, to discuss his paper, "Chronos: Learning the Language of Time Series." Fatir explains the challenges of leveraging pre-trained language models for time series forecasting. We explore the advantages of Chronos over statistical models, as well as its promising results in zero-shot forecasting benchmarks. Finally, we address critiques of Chronos, the ongoing research to improve synthetic data quality, and the potential for integrating Chronos into production systems.

The complete show notes for this episode can be found at twimlai.com/go/685.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we're joined by Abdul Fatir Ansari, a machine learning scientist at AWS AI Labs in Berlin, to discuss his paper, "<a href="https://arxiv.org/abs/2403.07815">Chronos: Learning the Language of Time Series</a>." Fatir explains the challenges of leveraging pre-trained language models for time series forecasting. We explore the advantages of Chronos over statistical models, as well as its promising results in zero-shot forecasting benchmarks. Finally, we address critiques of Chronos, the ongoing research to improve synthetic data quality, and the potential for integrating Chronos into production systems.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/685">twimlai.com/go/685</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2585</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[19218ecc-16cc-11ef-a05c-b3ea5ac3adcc]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6204603914.mp3?updated=1716947469"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Powering AI with the World's Largest Computer Chip with Joel Hestness - #684</title>
      <link>https://twimlai.com/podcast/twimlai/powering-ai-with-the-worlds-largest-computer-chip/</link>
      <description>Today we're joined by Joel Hestness, principal research scientist and lead of the core machine learning team at Cerebras. We discuss Cerebras’ custom silicon for machine learning, Wafer Scale Engine 3, and how the latest version of the company’s single-chip platform for ML has evolved to support large language models. Joel shares how WSE3 differs from other AI hardware solutions, such as GPUs, TPUs, and AWS’ Inferentia, and talks through the homogeneous design of the WSE chip and its memory architecture. We discuss software support for the platform, including support by open-source ML frameworks like PyTorch, and support for different types of transformer-based models. Finally, Joel shares some of the research his team is pursuing to take advantage of the hardware's unique characteristics, including weight-sparse training, optimizers that leverage higher-order statistics, and more.

The complete show notes for this episode can be found at twimlai.com/go/684.</description>
      <pubDate>Mon, 13 May 2024 19:58:23 -0000</pubDate>
      <itunes:title>Powering AI with the World's Largest Computer Chip with Joel Hestness</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>684</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/2fd98ae4-1162-11ef-adba-335ccb0b0eae/image/97c584bb7a1e196ab5627621edf913b3.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we're joined by Joel Hestness, principal research scientist and lead of the core machine learning team at Cerebras. We discuss Cerebras’ custom silicon for machine learning, Wafer Scale Engine 3, and how the latest version of the company’s single-chip platform for ML has evolved to support large language models. Joel shares how WSE3 differs from other AI hardware solutions, such as GPUs, TPUs, and AWS’ Inferentia, and talks through the homogeneous design of the WSE chip and its memory architecture. We discuss software support for the platform, including support by open-source ML frameworks like PyTorch, and support for different types of transformer-based models. Finally, Joel shares some of the research his team is pursuing to take advantage of the hardware's unique characteristics, including weight-sparse training, optimizers that leverage higher-order statistics, and more.

The complete show notes for this episode can be found at twimlai.com/go/684.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we're joined by Joel Hestness, principal research scientist and lead of the core machine learning team at Cerebras. We discuss Cerebras’ custom silicon for machine learning, Wafer Scale Engine 3, and how the latest version of the company’s single-chip platform for ML has evolved to support large language models. Joel shares how WSE3 differs from other AI hardware solutions, such as GPUs, TPUs, and AWS’ Inferentia, and talks through the homogeneous design of the WSE chip and its memory architecture. We discuss software support for the platform, including support by open-source ML frameworks like PyTorch, and support for different types of transformer-based models. Finally, Joel shares some of the research his team is pursuing to take advantage of the hardware's unique characteristics, including weight-sparse training, optimizers that leverage higher-order statistics, and more.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="http://twimlai.com/go/684">twimlai.com/go/684</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3306</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2fd98ae4-1162-11ef-adba-335ccb0b0eae]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6928791405.mp3?updated=1715630189"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI for Power &amp; Energy with Laurent Boinot - #683</title>
      <link>https://twimlai.com/podcast/twimlai/ai-for-power-energy/</link>
      <description>Today we're joined by Laurent Boinot, power and utilities lead for the Americas at Microsoft, to discuss the intersection of AI and energy infrastructure. We discuss the many challenges faced by current power systems in North America and the role AI is beginning to play in driving efficiencies in areas like demand forecasting and grid optimization. Laurent shares a variety of examples along the way, including some of the ways utility companies are using AI to ensure secure systems, interact with customers, navigate internal knowledge bases, and design electrical transmission systems. We also discuss the future of nuclear power, and why electric vehicles might play a critical role in American energy management.

The complete show notes for this episode can be found at twimlai.com/go/683.</description>
      <pubDate>Tue, 07 May 2024 02:39:14 -0000</pubDate>
      <itunes:title>AI for Power &amp; Energy with Laurent Boinot</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>683</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5a8bd062-0bcc-11ef-8496-cf59a26926f0/image/b0fc1b9587757c6fe0c345f6f41107ee.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we're joined by Laurent Boinot, power and utilities lead for the Americas at Microsoft, to discuss the intersection of AI and energy infrastructure. We discuss the many challenges faced by current power systems in North America and the role AI is beginning to play in driving efficiencies in areas like demand forecasting and grid optimization. Laurent shares a variety of examples along the way, including some of the ways utility companies are using AI to ensure secure systems, interact with customers, navigate internal knowledge bases, and design electrical transmission systems. We also discuss the future of nuclear power, and why electric vehicles might play a critical role in American energy management.

The complete show notes for this episode can be found at twimlai.com/go/683.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we're joined by Laurent Boinot, power and utilities lead for the Americas at Microsoft, to discuss the intersection of AI and energy infrastructure. We discuss the many challenges faced by current power systems in North America and the role AI is beginning to play in driving efficiencies in areas like demand forecasting and grid optimization. Laurent shares a variety of examples along the way, including some of the ways utility companies are using AI to ensure secure systems, interact with customers, navigate internal knowledge bases, and design electrical transmission systems. We also discuss the future of nuclear power, and why electric vehicles might play a critical role in American energy management.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="http://twimlai.com/go/683">twimlai.com/go/683</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2981</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5a8bd062-0bcc-11ef-8496-cf59a26926f0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9631523506.mp3?updated=1715049563"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Controlling Fusion Reactor Instability with Deep Reinforcement Learning with Aza Jalalvand - #682</title>
      <link>https://twimlai.com/podcast/twimlai/controlling-fusion-reactor-instability-with-deep-reinforcement-learning/</link>
      <description>Today we're joined by Azarakhsh (Aza) Jalalvand, a research scholar at Princeton University, to discuss his work using deep reinforcement learning to control plasma instabilities in nuclear fusion reactors. Aza explains how his team developed a model to detect and avoid a fatal plasma instability called ‘tearing mode’. Aza walks us through the process of collecting and pre-processing the complex diagnostic data from fusion experiments, training the models, and deploying the controller algorithm on the DIII-D fusion research reactor. He shares insights from developing the controller and discusses the future challenges and opportunities for AI in enabling stable and efficient fusion energy production.

The complete show notes for this episode can be found at twimlai.com/go/682.</description>
      <pubDate>Mon, 29 Apr 2024 20:22:00 -0000</pubDate>
      <itunes:title>Controlling Fusion Reactor Instability with Deep Reinforcement Learning with Aza Jalalvand</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>682</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/32f9c516-0665-11ef-8476-3ff537c44b1b/image/dc2ef051bf9c2ecbc2a0067a0594d2a8.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we're joined by Azarakhsh (Aza) Jalalvand, a research scholar at Princeton University, to discuss his work using deep reinforcement learning to control plasma instabilities in nuclear fusion reactors. Aza explains how his team developed a model to detect and avoid a fatal plasma instability called ‘tearing mode’. Aza walks us through the process of collecting and pre-processing the complex diagnostic data from fusion experiments, training the models, and deploying the controller algorithm on the DIII-D fusion research reactor. He shares insights from developing the controller and discusses the future challenges and opportunities for AI in enabling stable and efficient fusion energy production.

The complete show notes for this episode can be found at twimlai.com/go/682.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we're joined by Azarakhsh (Aza) Jalalvand, a research scholar at Princeton University, to discuss his work using deep reinforcement learning to control plasma instabilities in nuclear fusion reactors. Aza explains how his team developed a model to detect and avoid a fatal plasma instability called ‘tearing mode’. Aza walks us through the process of collecting and pre-processing the complex diagnostic data from fusion experiments, training the models, and deploying the controller algorithm on the DIII-D fusion research reactor. He shares insights from developing the controller and discusses the future challenges and opportunities for AI in enabling stable and efficient fusion energy production.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="http://twimlai.com/go/682">twimlai.com/go/682</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2529</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[32f9c516-0665-11ef-8476-3ff537c44b1b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3018125522.mp3?updated=1714422019"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>GraphRAG: Knowledge Graphs for AI Applications with Kirk Marple - #681</title>
      <link>https://twimlai.com/podcast/twimlai/graphrag-knowledge-graphs-for-ai-applications/</link>
      <description>Today we're joined by Kirk Marple, CEO and founder of Graphlit, to explore the emerging paradigm of "GraphRAG," or Graph Retrieval Augmented Generation. In our conversation, Kirk digs into the GraphRAG architecture and how Graphlit uses it to offer a multi-stage workflow for ingesting, processing, retrieving, and generating content using LLMs (like GPT-4) and other Generative AI tech. He shares how the system performs entity extraction to build a knowledge graph and how graph, vector, and object storage are integrated in the system. We dive into how the system uses “prompt compilation” to improve the results it gets from Large Language Models during generation. We conclude by discussing several use cases the approach supports, as well as future agent-based applications it enables.

The complete show notes for this episode can be found at twimlai.com/go/681.</description>
      <pubDate>Mon, 22 Apr 2024 18:58:00 -0000</pubDate>
      <itunes:title>GraphRAG: Knowledge Graphs for AI Applications with Kirk Marple</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>681</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/a60a397a-00d7-11ef-b8b8-93c07e331d9b/image/fdf61880cd84234d57f9028ca2a10355.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we're joined by Kirk Marple, CEO and founder of Graphlit, to explore the emerging paradigm of "GraphRAG," or Graph Retrieval Augmented Generation. In our conversation, Kirk digs into the GraphRAG architecture and how Graphlit uses it to offer a multi-stage workflow for ingesting, processing, retrieving, and generating content using LLMs (like GPT-4) and other Generative AI tech. He shares how the system performs entity extraction to build a knowledge graph and how graph, vector, and object storage are integrated in the system. We dive into how the system uses “prompt compilation” to improve the results it gets from Large Language Models during generation. We conclude by discussing several use cases the approach supports, as well as future agent-based applications it enables.

The complete show notes for this episode can be found at twimlai.com/go/681.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we're joined by Kirk Marple, CEO and founder of Graphlit, to explore the emerging paradigm of "GraphRAG," or Graph Retrieval Augmented Generation. In our conversation, Kirk digs into the GraphRAG architecture and how Graphlit uses it to offer a multi-stage workflow for ingesting, processing, retrieving, and generating content using LLMs (like GPT-4) and other Generative AI tech. He shares how the system performs entity extraction to build a knowledge graph and how graph, vector, and object storage are integrated in the system. We dive into how the system uses “prompt compilation” to improve the results it gets from Large Language Models during generation. We conclude by discussing several use cases the approach supports, as well as future agent-based applications it enables.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="http://twimlai.com/go/681">twimlai.com/go/681</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2828</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a60a397a-00d7-11ef-b8b8-93c07e331d9b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3405997576.mp3?updated=1713836204"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Teaching Large Language Models to Reason with Reinforcement Learning with Alex Havrilla - #680</title>
      <link>https://twimlai.com/podcast/twimlai/teaching-large-language-models-to-reason-with-reinforcement-learning/</link>
      <description>Today we're joined by Alex Havrilla, a PhD student at Georgia Tech, to discuss "Teaching Large Language Models to Reason with Reinforcement Learning." Alex discusses the role of creativity and exploration in problem solving and explores the opportunities presented by applying reinforcement learning algorithms to the challenge of improving reasoning in large language models. Alex also shares his research on the effect of noise on language model training, highlighting the robustness of LLM architecture. Finally, we delve into the future of RL, and the potential of combining language models with traditional methods to achieve more robust AI reasoning.

The complete show notes for this episode can be found at twimlai.com/go/680.</description>
      <pubDate>Tue, 16 Apr 2024 22:58:00 -0000</pubDate>
      <itunes:title>Teaching Large Language Models to Reason with Reinforcement Learning with Alex Havrilla</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>680</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/b36e32d8-fc27-11ee-bfdf-57f6ac6bb460/image/eb2d7fc18e070e5c1eb609d04ac0ec3e.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we're joined by Alex Havrilla, a PhD student at Georgia Tech, to discuss "Teaching Large Language Models to Reason with Reinforcement Learning." Alex discusses the role of creativity and exploration in problem solving and explores the opportunities presented by applying reinforcement learning algorithms to the challenge of improving reasoning in large language models. Alex also shares his research on the effect of noise on language model training, highlighting the robustness of LLM architecture. Finally, we delve into the future of RL, and the potential of combining language models with traditional methods to achieve more robust AI reasoning.

The complete show notes for this episode can be found at twimlai.com/go/680.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we're joined by Alex Havrilla, a PhD student at Georgia Tech, to discuss "Teaching Large Language Models to Reason with Reinforcement Learning." Alex discusses the role of creativity and exploration in problem solving and explores the opportunities presented by applying reinforcement learning algorithms to the challenge of improving reasoning in large language models. Alex also shares his research on the effect of noise on language model training, highlighting the robustness of LLM architecture. Finally, we delve into the future of RL, and the potential of combining language models with traditional methods to achieve more robust AI reasoning.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="http://twimlai.com/go/680">twimlai.com/go/680</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2784</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b36e32d8-fc27-11ee-bfdf-57f6ac6bb460]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2131262000.mp3?updated=1713836624"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Localizing and Editing Knowledge in LLMs with Peter Hase - #679</title>
      <link>https://twimlai.com/podcast/twimlai/localizing-and-editing-knowledge-in-llms/</link>
      <description>Today we're joined by Peter Hase, a fifth-year PhD student at the University of North Carolina NLP lab. We discuss "scalable oversight", and the importance of developing a deeper understanding of how large neural networks make decisions. We learn how matrices are probed by interpretability researchers, and explore the two schools of thought regarding how LLMs store knowledge. Finally, we discuss the importance of deleting sensitive information from model weights, and how "easy-to-hard generalization" could increase the risk of releasing open-source foundation models.

The complete show notes for this episode can be found at twimlai.com/go/679.</description>
      <pubDate>Mon, 08 Apr 2024 21:03:00 -0000</pubDate>
      <itunes:title>Localizing and Editing Knowledge in LLMs with Peter Hase</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>679</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/780e2eac-f5e5-11ee-b38e-8fa6069fe5ed/image/c93adff4ca155632fc1a3ead4cd0a8e2.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we're joined by Peter Hase, a fifth-year PhD student at the University of North Carolina NLP lab. We discuss "scalable oversight", and the importance of developing a deeper understanding of how large neural networks make decisions. We learn how matrices are probed by interpretability researchers, and explore the two schools of thought regarding how LLMs store knowledge. Finally, we discuss the importance of deleting sensitive information from model weights, and how "easy-to-hard generalization" could increase the risk of releasing open-source foundation models.

The complete show notes for this episode can be found at twimlai.com/go/679.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we're joined by Peter Hase, a fifth-year PhD student at the University of North Carolina NLP lab. We discuss "scalable oversight", and the importance of developing a deeper understanding of how large neural networks make decisions. We learn how matrices are probed by interpretability researchers, and explore the two schools of thought regarding how LLMs store knowledge. Finally, we discuss the importance of deleting sensitive information from model weights, and how "easy-to-hard generalization" could increase the risk of releasing open-source foundation models.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="http://twimlai.com/go/679">twimlai.com/go/679</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2986</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[780e2eac-f5e5-11ee-b38e-8fa6069fe5ed]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6128348451.mp3?updated=1712607942"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Coercing LLMs to Do and Reveal (Almost) Anything with Jonas Geiping - #678</title>
      <link>https://twimlai.com/podcast/twimlai/coercing-llms-to-do-and-reveal-almost-anything/</link>
      <description>Today we're joined by Jonas Geiping, a research group leader at the ELLIS Institute, to explore his paper: "Coercing LLMs to Do and Reveal (Almost) Anything". Jonas explains how neural networks can be exploited, highlighting the risk of deploying LLM agents that interact with the real world. We discuss the role of open models in enabling security research, the challenges of optimizing over certain constraints, and the ongoing difficulties in achieving robustness in neural networks. Finally, we delve into the future of AI security, and the need for a better approach to mitigate the risks posed by optimized adversarial attacks.

The complete show notes for this episode can be found at twimlai.com/go/678.</description>
      <pubDate>Mon, 01 Apr 2024 19:15:00 -0000</pubDate>
      <itunes:title>Coercing LLMs to Do and Reveal (Almost) Anything with Jonas Geiping</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>678</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/1a6bcb64-f05a-11ee-ae98-07e6d7c6cc02/image/4c8a4eb691cdb9742c56ba8299089b6a.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we're joined by Jonas Geiping, a research group leader at the ELLIS Institute, to explore his paper: "Coercing LLMs to Do and Reveal (Almost) Anything". Jonas explains how neural networks can be exploited, highlighting the risk of deploying LLM agents that interact with the real world. We discuss the role of open models in enabling security research, the challenges of optimizing over certain constraints, and the ongoing difficulties in achieving robustness in neural networks. Finally, we delve into the future of AI security, and the need for a better approach to mitigate the risks posed by optimized adversarial attacks.

The complete show notes for this episode can be found at twimlai.com/go/678.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we're joined by Jonas Geiping, a research group leader at the ELLIS Institute, to explore his paper: "Coercing LLMs to Do and Reveal (Almost) Anything". Jonas explains how neural networks can be exploited, highlighting the risk of deploying LLM agents that interact with the real world. We discuss the role of open models in enabling security research, the challenges of optimizing over certain constraints, and the ongoing difficulties in achieving robustness in neural networks. Finally, we delve into the future of AI security, and the need for a better approach to mitigate the risks posed by optimized adversarial attacks.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="http://twimlai.com/go/678">twimlai.com/go/678</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2907</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1a6bcb64-f05a-11ee-ae98-07e6d7c6cc02]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1896604137.mp3?updated=1711998329"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>V-JEPA, AI Reasoning from a Non-Generative Architecture with Mido Assran - #677</title>
      <link>https://twimlai.com/podcast/twimlai/v-jepa-ai-reasoning-from-a-non-generative-architecture/</link>
      <description>Today we’re joined by Mido Assran, a research scientist at Meta’s Fundamental AI Research (FAIR). In this conversation, we discuss V-JEPA, a new model being billed as “the next step in Yann LeCun's vision” for true artificial reasoning. V-JEPA, the video version of Meta’s Joint Embedding Predictive Architecture, aims to bridge the gap between human and machine intelligence by training models to learn abstract concepts in a more efficient predictive manner than generative models. V-JEPA uses a novel self-supervised training approach that allows it to learn from unlabeled video data without being distracted by pixel-level detail. Mido walks us through the process of developing the architecture and explains why it has the potential to revolutionize AI.

The complete show notes for this episode can be found at twimlai.com/go/677.</description>
      <pubDate>Mon, 25 Mar 2024 16:00:00 -0000</pubDate>
      <itunes:title>V-JEPA, AI Reasoning from a Non-Generative Architecture with Mido Assran</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>677</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c44498fe-eabb-11ee-a41e-778a2261e343/image/00c81a7f30289e4a19a5ade2094750c5.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Mido Assran, a research scientist at Meta’s Fundamental AI Research (FAIR). In this conversation, we discuss V-JEPA, a new model being billed as “the next step in Yann LeCun's vision” for true artificial reasoning. V-JEPA, the video version of Meta’s Joint Embedding Predictive Architecture, aims to bridge the gap between human and machine intelligence by training models to learn abstract concepts in a more efficient predictive manner than generative models. V-JEPA uses a novel self-supervised training approach that allows it to learn from unlabeled video data without being distracted by pixel-level detail. Mido walks us through the process of developing the architecture and explains why it has the potential to revolutionize AI.

The complete show notes for this episode can be found at twimlai.com/go/677.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Mido Assran, a research scientist at Meta’s Fundamental AI Research (FAIR). In this conversation, we discuss V-JEPA, a new model being billed as “the next step in Yann LeCun's vision” for true artificial reasoning. V-JEPA, the video version of Meta’s Joint Embedding Predictive Architecture, aims to bridge the gap between human and machine intelligence by training models to learn abstract concepts in a more efficient predictive manner than generative models. V-JEPA uses a novel self-supervised training approach that allows it to learn from unlabeled video data without being distracted by pixel-level detail. Mido walks us through the process of developing the architecture and explains why it has the potential to revolutionize AI.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="http://twimlai.com/go/677">twimlai.com/go/677</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2867</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c44498fe-eabb-11ee-a41e-778a2261e343]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9999482985.mp3?updated=1711380866"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Video as a Universal Interface for AI Reasoning with Sherry Yang - #676</title>
      <link>https://twimlai.com/podcast/twimlai/video-as-a-universal-interface-for-ai-reasoning/</link>
      <description>Today we’re joined by Sherry Yang, senior research scientist at Google DeepMind and a PhD student at UC Berkeley. In this interview, we discuss her new paper, "Video as the New Language for Real-World Decision Making,” which explores how generative video models can play a role similar to language models as a way to solve tasks in the real world. Sherry draws the analogy between natural language as a unified representation of information and text prediction as a common task interface and demonstrates how video as a medium and generative video as a task exhibit similar properties. This formulation enables video generation models to play a variety of real-world roles as planners, agents, compute engines, and environment simulators. Finally, we explore UniSim, an interactive demo of Sherry's work and a preview of her vision for interacting with AI-generated environments.

The complete show notes for this episode can be found at twimlai.com/go/676.</description>
      <pubDate>Mon, 18 Mar 2024 17:09:00 -0000</pubDate>
      <itunes:title>Video as a Universal Interface for AI Reasoning with Sherry Yang</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>676</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/63cdcb8a-e548-11ee-802e-2f8fd5ac7461/image/301c7f939a2daf5e426c90098c16ed4d.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Sherry Yang, senior research scientist at Google DeepMind and a PhD student at UC Berkeley. In this interview, we discuss her new paper, "Video as the New Language for Real-World Decision Making,” which explores how generative video models can play a role similar to language models as a way to solve tasks in the real world. Sherry draws the analogy between natural language as a unified representation of information and text prediction as a common task interface and demonstrates how video as a medium and generative video as a task exhibit similar properties. This formulation enables video generation models to play a variety of real-world roles as planners, agents, compute engines, and environment simulators. Finally, we explore UniSim, an interactive demo of Sherry's work and a preview of her vision for interacting with AI-generated environments.

The complete show notes for this episode can be found at twimlai.com/go/676.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Sherry Yang, senior research scientist at Google DeepMind and a PhD student at UC Berkeley. In this interview, we discuss her new paper, "Video as the New Language for Real-World Decision Making,” which explores how generative video models can play a role similar to language models as a way to solve tasks in the real world. Sherry draws the analogy between natural language as a unified representation of information and text prediction as a common task interface and demonstrates how video as a medium and generative video as a task exhibit similar properties. This formulation enables video generation models to play a variety of real-world roles as planners, agents, compute engines, and environment simulators. Finally, we explore UniSim, an interactive demo of Sherry's work and a preview of her vision for interacting with AI-generated environments.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="http://twimlai.com/go/676">twimlai.com/go/676</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2974</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[63cdcb8a-e548-11ee-802e-2f8fd5ac7461]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1270874992.mp3?updated=1710918505"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Assessing the Risks of Open AI Models with Sayash Kapoor - #675</title>
      <link>https://twimlai.com/podcast/twimlai/assessing-the-risks-of-open-ai-models/</link>
      <description>Today we’re joined by Sayash Kapoor, a Ph.D. student in the Department of Computer Science at Princeton University. Sayash walks us through his paper: "On the Societal Impact of Open Foundation Models.” We dig into the controversy around AI safety, the risks and benefits of releasing open model weights, and how we can establish common ground for assessing the threats posed by AI. We discuss the application of the framework presented in the paper to specific risks, such as the biosecurity risk of open LLMs, as well as the growing problem of "Non Consensual Intimate Imagery" using open diffusion models.

The complete show notes for this episode can be found at twimlai.com/go/675.</description>
      <pubDate>Mon, 11 Mar 2024 18:09:00 -0000</pubDate>
      <itunes:title>Assessing the Risks of Open AI Models with Sayash Kapoor</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>675</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ce887312-dfd0-11ee-8cbd-d7a1f661569b/image/e9352b6d43249369f57ca45cf3d88165.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Sayash Kapoor, a Ph.D. student in the Department of Computer Science at Princeton University. Sayash walks us through his paper: "On the Societal Impact of Open Foundation Models.” We dig into the controversy around AI safety, the risks and benefits of releasing open model weights, and how we can establish common ground for assessing the threats posed by AI. We discuss the application of the framework presented in the paper to specific risks, such as the biosecurity risk of open LLMs, as well as the growing problem of "Non Consensual Intimate Imagery" using open diffusion models.

The complete show notes for this episode can be found at twimlai.com/go/675.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Sayash Kapoor, a Ph.D. student in the Department of Computer Science at Princeton University. Sayash walks us through his paper: "On the Societal Impact of Open Foundation Models.” We dig into the controversy around AI safety, the risks and benefits of releasing open model weights, and how we can establish common ground for assessing the threats posed by AI. We discuss the application of the framework presented in the paper to specific risks, such as the biosecurity risk of open LLMs, as well as the growing problem of "Non Consensual Intimate Imagery" using open diffusion models.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="http://twimlai.com/go/675">twimlai.com/go/675</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2426</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ce887312-dfd0-11ee-8cbd-d7a1f661569b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8468113294.mp3?updated=1710180142"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>OLMo: Everything You Need to Train an Open Source LLM with Akshita Bhagia - #674</title>
      <link>https://twimlai.com/podcast/twimlai/olmo-everything-you-need-to-train-an-open-source-llm/</link>
      <description>Today we’re joined by Akshita Bhagia, a senior research engineer at the Allen Institute for AI. Akshita joins us to discuss OLMo, a new open source language model with 7 billion and 1 billion variants, but with a key difference compared to similar models offered by Meta, Mistral, and others. Namely, the fact that AI2 has also published the dataset and key tools used to train the model. In our chat with Akshita, we dig into the OLMo models and the various projects falling under the OLMo umbrella, including Dolma, an open three-trillion-token corpus for language model pretraining, and Paloma, a benchmark and tooling for evaluating language model performance across a variety of domains.

The complete show notes for this episode can be found at twimlai.com/go/674.</description>
      <pubDate>Mon, 04 Mar 2024 20:10:00 -0000</pubDate>
      <itunes:title>OLMo: Everything You Need to Train an Open Source LLM with Akshita Bhagia</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>674</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/fe9fa270-da56-11ee-ad1d-df46df0f9a30/image/cb594b6b5c6b99b336a17cf76af4994e.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Akshita Bhagia, a senior research engineer at the Allen Institute for AI. Akshita joins us to discuss OLMo, a new open source language model with 7 billion and 1 billion variants, but with a key difference compared to similar models offered by Meta, Mistral, and others. Namely, the fact that AI2 has also published the dataset and key tools used to train the model. In our chat with Akshita, we dig into the OLMo models and the various projects falling under the OLMo umbrella, including Dolma, an open three-trillion-token corpus for language model pretraining, and Paloma, a benchmark and tooling for evaluating language model performance across a variety of domains.

The complete show notes for this episode can be found at twimlai.com/go/674.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Akshita Bhagia, a senior research engineer at the Allen Institute for AI. Akshita joins us to discuss OLMo, a new open source language model with 7 billion and 1 billion variants, but with a key difference compared to similar models offered by Meta, Mistral, and others. Namely, the fact that AI2 has also published the dataset and key tools used to train the model. In our chat with Akshita, we dig into the OLMo models and the various projects falling under the OLMo umbrella, including Dolma, an open three-trillion-token corpus for language model pretraining, and Paloma, a benchmark and tooling for evaluating language model performance across a variety of domains.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="http://twimlai.com/go/674">twimlai.com/go/674</a>.</p>]]>
      </content:encoded>
      <itunes:duration>1932</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fe9fa270-da56-11ee-ad1d-df46df0f9a30]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6116602529.mp3?updated=1709582779"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Training Data Locality and Chain-of-Thought Reasoning in LLMs with Ben Prystawski - #673</title>
      <link>https://twimlai.com/podcast/twimlai/training-data-locality-and-chain-of-thought-reasoning-in-llms/</link>
      <description>Today we’re joined by Ben Prystawski, a PhD student in the Department of Psychology at Stanford University working at the intersection of cognitive science and machine learning. Our conversation centers on Ben’s recent paper, “Why think step by step? Reasoning emerges from the locality of experience,” which he recently presented at NeurIPS 2023. In this conversation, we start out exploring basic questions about LLM reasoning, including whether it exists, how we can define it, and how techniques like chain-of-thought reasoning appear to strengthen it. We then dig into the details of Ben’s paper, which aims to understand why thinking step-by-step is effective and demonstrates that local structure is the key property of LLM training data that enables it.

The complete show notes for this episode can be found at twimlai.com/go/673.</description>
      <pubDate>Mon, 26 Feb 2024 19:17:00 -0000</pubDate>
      <itunes:title>Training Data Locality and Chain-of-Thought Reasoning in LLMs with Ben Prystawski</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>673</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ff82aa00-d4c2-11ee-b34a-1f8b85a82301/image/9a90d0be8df5624826a835b948c48cbb.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Ben Prystawski, a PhD student in the Department of Psychology at Stanford University working at the intersection of cognitive science and machine learning. Our conversation centers on Ben’s recent paper, “Why think step by step? Reasoning emerges from the locality of experience,” which he recently presented at NeurIPS 2023. In this conversation, we start out exploring basic questions about LLM reasoning, including whether it exists, how we can define it, and how techniques like chain-of-thought reasoning appear to strengthen it. We then dig into the details of Ben’s paper, which aims to understand why thinking step-by-step is effective and demonstrates that local structure is the key property of LLM training data that enables it.

The complete show notes for this episode can be found at twimlai.com/go/673.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Ben Prystawski, a PhD student in the Department of Psychology at Stanford University working at the intersection of cognitive science and machine learning. Our conversation centers on Ben’s recent paper, “Why think step by step? Reasoning emerges from the locality of experience,” which he recently presented at NeurIPS 2023. In this conversation, we start out exploring basic questions about LLM reasoning, including whether it exists, how we can define it, and how techniques like chain-of-thought reasoning appear to strengthen it. We then dig into the details of Ben’s paper, which aims to understand why thinking step-by-step is effective and demonstrates that local structure is the key property of LLM training data that enables it.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="http://twimlai.com/go/673">twimlai.com/go/673</a>.</p>]]>
      </content:encoded>
      <itunes:duration>1503</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ff82aa00-d4c2-11ee-b34a-1f8b85a82301]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1845606139.mp3?updated=1708975551"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Reasoning Over Complex Documents with DocLLM with Armineh Nourbakhsh - #672</title>
      <link>https://twimlai.com/podcast/twimlai/reasoning-over-complex-documents-with-docllm/</link>
      <description>Today we're joined by Armineh Nourbakhsh of JP Morgan AI Research to discuss the development and capabilities of DocLLM, a layout-aware large language model for multimodal document understanding. Armineh provides a historical overview of the challenges of document AI and an introduction to the DocLLM model. Armineh explains how this model, distinct from both traditional LLMs and document AI models, incorporates both textual semantics and spatial layout in processing enterprise documents like reports and complex contracts. We dig into her team’s approach to training DocLLM, their choice of a generative model as opposed to an encoder-based approach, the datasets they used to build the model, their approach to incorporating layout information, and the various ways they evaluated the model’s performance.

The complete show notes for this episode can be found at twimlai.com/go/672.</description>
      <pubDate>Mon, 19 Feb 2024 19:07:00 -0000</pubDate>
      <itunes:title>Reasoning Over Complex Documents with DocLLM with Armineh Nourbakhsh</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>672</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/70a34026-cf48-11ee-9db5-1b8931c14a1d/image/9ecc15.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we're joined by Armineh Nourbakhsh of JP Morgan AI Research to discuss the development and capabilities of DocLLM, a layout-aware large language model for multimodal document understanding. Armineh provides a historical overview of the challenges of document AI and an introduction to the DocLLM model. Armineh explains how this model, distinct from both traditional LLMs and document AI models, incorporates both textual semantics and spatial layout in processing enterprise documents like reports and complex contracts. We dig into her team’s approach to training DocLLM, their choice of a generative model as opposed to an encoder-based approach, the datasets they used to build the model, their approach to incorporating layout information, and the various ways they evaluated the model’s performance.

The complete show notes for this episode can be found at twimlai.com/go/672.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we're joined by Armineh Nourbakhsh of JP Morgan AI Research to discuss the development and capabilities of DocLLM, a layout-aware large language model for multimodal document understanding. Armineh provides a historical overview of the challenges of document AI and an introduction to the DocLLM model. Armineh explains how this model, distinct from both traditional LLMs and document AI models, incorporates both textual semantics and spatial layout in processing enterprise documents like reports and complex contracts. We dig into her team’s approach to training DocLLM, their choice of a generative model as opposed to an encoder-based approach, the datasets they used to build the model, their approach to incorporating layout information, and the various ways they evaluated the model’s performance.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/672">twimlai.com/go/672</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2738</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[70a34026-cf48-11ee-9db5-1b8931c14a1d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8614358492.mp3?updated=1708370325"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Are Emergent Behaviors in LLMs an Illusion? with Sanmi Koyejo - #671</title>
      <link>https://twimlai.com/podcast/twimlai/are-emergent-behaviors-in-llms-an-illusion-2/</link>
      <description>Today we’re joined by Sanmi Koyejo, assistant professor at Stanford University, to continue our NeurIPS 2023 series. In our conversation, Sanmi discusses his two recent award-winning papers. First, we dive into his paper, “Are Emergent Abilities of Large Language Models a Mirage?”. We discuss the different ways LLMs are evaluated and the excitement surrounding their “emergent abilities,” such as the ability to perform arithmetic. Sanmi describes how evaluating model performance using nonlinear metrics can lead to the illusion that the model is rapidly gaining new capabilities, whereas linear metrics show smooth improvement as expected, casting doubt on the significance of emergence. We continue on to his next paper, “DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models,” discussing the methodology it describes for evaluating concerns such as the toxicity, privacy, fairness, and robustness of LLMs.

The complete show notes for this episode can be found at twimlai.com/go/671.</description>
      <pubDate>Mon, 12 Feb 2024 18:40:00 -0000</pubDate>
      <itunes:title>Are Emergent Behaviors in LLMs an Illusion? with Sanmi Koyejo</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>671</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/597dabe0-ca05-11ee-be6f-3b2fe540d05d/image/22b841.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Sanmi Koyejo, assistant professor at Stanford University, to continue our NeurIPS 2023 series. In our conversation, Sanmi discusses his two recent award-winning papers. First, we dive into his paper, “Are Emergent Abilities of Large Language Models a Mirage?”. We discuss the different ways LLMs are evaluated and the excitement surrounding their “emergent abilities,” such as the ability to perform arithmetic. Sanmi describes how evaluating model performance using nonlinear metrics can lead to the illusion that the model is rapidly gaining new capabilities, whereas linear metrics show smooth improvement as expected, casting doubt on the significance of emergence. We continue on to his next paper, “DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models,” discussing the methodology it describes for evaluating concerns such as the toxicity, privacy, fairness, and robustness of LLMs.

The complete show notes for this episode can be found at twimlai.com/go/671.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Sanmi Koyejo, assistant professor at Stanford University, to continue our NeurIPS 2023 series. In our conversation, Sanmi discusses his two recent award-winning papers. First, we dive into his paper, “Are Emergent Abilities of Large Language Models a Mirage?”. We discuss the different ways LLMs are evaluated and the excitement surrounding their “emergent abilities,” such as the ability to perform arithmetic. Sanmi describes how evaluating model performance using nonlinear metrics can lead to the illusion that the model is rapidly gaining new capabilities, whereas linear metrics show smooth improvement as expected, casting doubt on the significance of emergence. We continue on to his next paper, “DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models,” discussing the methodology it describes for evaluating concerns such as the toxicity, privacy, fairness, and robustness of LLMs.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/671">twimlai.com/go/671</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3940</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[597dabe0-ca05-11ee-be6f-3b2fe540d05d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9383356698.mp3?updated=1707838967"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Trends 2024: Reinforcement Learning in the Age of LLMs with Kamyar Azizzadenesheli - #670</title>
      <link>https://twimlai.com/podcast/twimlai/ai-trends-2024-reinforcement-learning-in-the-age-of-llms/</link>
      <description>Today we’re joined by Kamyar Azizzadenesheli, a staff researcher at Nvidia, to continue our AI Trends 2024 series. In our conversation, Kamyar updates us on the latest developments in reinforcement learning (RL), and how the RL community is taking advantage of the abstract reasoning abilities of large language models (LLMs). Kamyar shares his insights on how LLMs are pushing RL performance forward in a variety of applications, such as ALOHA, a robot that can learn to fold clothes, and Voyager, an RL agent that uses GPT-4 to outperform prior systems at playing Minecraft. We also explore the progress being made in assessing and addressing the risks of RL-based decision-making in domains such as finance, healthcare, and agriculture. Finally, we discuss the future of deep reinforcement learning, Kamyar’s top predictions for the field, and how greater compute capabilities will be critical in achieving general intelligence.

The complete show notes for this episode can be found at twimlai.com/go/670.</description>
      <pubDate>Mon, 05 Feb 2024 19:14:00 -0000</pubDate>
      <itunes:title>AI Trends 2024: Reinforcement Learning in the Age of LLMs with Kamyar Azizzadenesheli</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>670</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/479c107a-c45a-11ee-8ffa-5f78fd40b0af/image/bde43b.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Kamyar Azizzadenesheli, a staff researcher at Nvidia, to continue our AI Trends 2024 series. In our conversation, Kamyar updates us on the latest developments in reinforcement learning (RL), and how the RL community is taking advantage of the abstract reasoning abilities of large language models (LLMs). Kamyar shares his insights on how LLMs are pushing RL performance forward in a variety of applications, such as ALOHA, a robot that can learn to fold clothes, and Voyager, an RL agent that uses GPT-4 to outperform prior systems at playing Minecraft. We also explore the progress being made in assessing and addressing the risks of RL-based decision-making in domains such as finance, healthcare, and agriculture. Finally, we discuss the future of deep reinforcement learning, Kamyar’s top predictions for the field, and how greater compute capabilities will be critical in achieving general intelligence.

The complete show notes for this episode can be found at twimlai.com/go/670.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Kamyar Azizzadenesheli, a staff researcher at Nvidia, to continue our AI Trends 2024 series. In our conversation, Kamyar updates us on the latest developments in reinforcement learning (RL), and how the RL community is taking advantage of the abstract reasoning abilities of large language models (LLMs). Kamyar shares his insights on how LLMs are pushing RL performance forward in a variety of applications, such as ALOHA, a robot that can learn to fold clothes, and Voyager, an RL agent that uses GPT-4 to outperform prior systems at playing Minecraft. We also explore the progress being made in assessing and addressing the risks of RL-based decision-making in domains such as finance, healthcare, and agriculture. Finally, we discuss the future of deep reinforcement learning, Kamyar’s top predictions for the field, and how greater compute capabilities will be critical in achieving general intelligence.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/670">twimlai.com/go/670</a>.</p>]]>
      </content:encoded>
      <itunes:duration>4225</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[479c107a-c45a-11ee-8ffa-5f78fd40b0af]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7496446704.mp3?updated=1707160554"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building and Deploying Real-World RAG Applications with Ram Sriharsha - #669</title>
      <link>https://twimlai.com/podcast/twimlai/building-and-deploying-real-world-rag-applications/</link>
      <description>Today we’re joined by Ram Sriharsha, VP of engineering at Pinecone. In our conversation, we dive into the topic of vector databases and retrieval augmented generation (RAG). We explore the trade-offs between relying solely on LLMs for retrieval tasks versus combining retrieval in vector databases and LLMs, the advantages and complexities of RAG with vector databases, the key considerations for building and deploying real-world RAG-based applications, and an in-depth look at Pinecone's new serverless offering. Currently in public preview, Pinecone Serverless is a vector database that enables on-demand data loading, flexible scaling, and cost-effective query processing. Ram discusses how the serverless paradigm impacts the vector database’s core architecture, key features, and other considerations. Lastly, Ram shares his perspective on the future of vector databases in helping enterprises deliver RAG systems.

The complete show notes for this episode can be found at twimlai.com/go/669.</description>
      <pubDate>Mon, 29 Jan 2024 19:19:00 -0000</pubDate>
      <itunes:title>Building and Deploying Real-World RAG Applications with Ram Sriharsha</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>669</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/da7fc99e-bec0-11ee-88da-e7a32b2c147e/image/df7b97.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Ram Sriharsha, VP of engineering at Pinecone. In our conversation, we dive into the topic of vector databases and retrieval augmented generation (RAG). We explore the trade-offs between relying solely on LLMs for retrieval tasks versus combining retrieval in vector databases and LLMs, the advantages and complexities of RAG with vector databases, the key considerations for building and deploying real-world RAG-based applications, and an in-depth look at Pinecone's new serverless offering. Currently in public preview, Pinecone Serverless is a vector database that enables on-demand data loading, flexible scaling, and cost-effective query processing. Ram discusses how the serverless paradigm impacts the vector database’s core architecture, key features, and other considerations. Lastly, Ram shares his perspective on the future of vector databases in helping enterprises deliver RAG systems.

The complete show notes for this episode can be found at twimlai.com/go/669.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Ram Sriharsha, VP of engineering at Pinecone. In our conversation, we dive into the topic of vector databases and retrieval augmented generation (RAG). We explore the trade-offs between relying solely on LLMs for retrieval tasks versus combining retrieval in vector databases and LLMs, the advantages and complexities of RAG with vector databases, the key considerations for building and deploying real-world RAG-based applications, and an in-depth look at Pinecone's new serverless offering. Currently in public preview, Pinecone Serverless is a vector database that enables on-demand data loading, flexible scaling, and cost-effective query processing. Ram discusses how the serverless paradigm impacts the vector database’s core architecture, key features, and other considerations. Lastly, Ram shares his perspective on the future of vector databases in helping enterprises deliver RAG systems.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/669">twimlai.com/go/669</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2129</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[da7fc99e-bec0-11ee-88da-e7a32b2c147e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5047897251.mp3?updated=1706556580"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Nightshade: Data Poisoning to Fight Generative AI with Ben Zhao - #668</title>
      <link>https://twimlai.com/podcast/twimlai/nightshade-data-poisoning-to-fight-generative-ai/</link>
      <description>Today we’re joined by Ben Zhao, a Neubauer professor of computer science at the University of Chicago. In our conversation, we explore his research at the intersection of security and generative AI. We focus on Ben’s recent Fawkes, Glaze, and Nightshade projects, which use “poisoning” approaches to provide users with security and protection against AI encroachments. The first tool we discuss, Fawkes, imperceptibly “cloaks” images in such a way that models perceive them as highly distorted, effectively shielding individuals from facial recognition models. We then dig into Glaze, a tool that employs machine learning algorithms to compute subtle alterations that are indiscernible to human eyes but adept at tricking the models into perceiving a significant shift in art style, giving artists a unique defense against style mimicry. Lastly, we cover Nightshade, a strategic defense tool for artists akin to a 'poison pill,' which allows artists to apply imperceptible changes to their images that effectively “break” generative AI models trained on them.

The complete show notes for this episode can be found at twimlai.com/go/668.</description>
      <pubDate>Mon, 22 Jan 2024 18:06:00 -0000</pubDate>
      <itunes:title>Nightshade: Data Poisoning to Fight Generative AI with Ben Zhao</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>668</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6f400200-b94a-11ee-adb1-d396ea25b51e/image/dea0e1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Ben Zhao, a Neubauer professor of computer science at the University of Chicago. In our conversation, we explore his research at the intersection of security and generative AI. We focus on Ben’s recent Fawkes, Glaze, and Nightshade projects, which use “poisoning” approaches to provide users with security and protection against AI encroachments. The first tool we discuss, Fawkes, imperceptibly “cloaks” images in such a way that models perceive them as highly distorted, effectively shielding individuals from facial recognition models. We then dig into Glaze, a tool that employs machine learning algorithms to compute subtle alterations that are indiscernible to human eyes but adept at tricking the models into perceiving a significant shift in art style, giving artists a unique defense against style mimicry. Lastly, we cover Nightshade, a strategic defense tool for artists akin to a 'poison pill,' which allows artists to apply imperceptible changes to their images that effectively “break” generative AI models trained on them.

The complete show notes for this episode can be found at twimlai.com/go/668.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Ben Zhao, a Neubauer professor of computer science at the University of Chicago. In our conversation, we explore his research at the intersection of security and generative AI. We focus on Ben’s recent Fawkes, Glaze, and Nightshade projects, which use “poisoning” approaches to provide users with security and protection against AI encroachments. The first tool we discuss, Fawkes, imperceptibly “cloaks” images in such a way that models perceive them as highly distorted, effectively shielding individuals from facial recognition models. We then dig into Glaze, a tool that employs machine learning algorithms to compute subtle alterations that are indiscernible to human eyes but adept at tricking the models into perceiving a significant shift in art style, giving artists a unique defense against style mimicry. Lastly, we cover Nightshade, a strategic defense tool for artists akin to a 'poison pill,' which allows artists to apply imperceptible changes to their images that effectively “break” generative AI models trained on them.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/668">twimlai.com/go/668</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2385</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6f400200-b94a-11ee-adb1-d396ea25b51e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5848483126.mp3?updated=1705944285"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Learning Transformer Programs with Dan Friedman - #667</title>
      <link>https://twimlai.com/podcast/twimlai/learning-transformer-programs/</link>
      <description>Today, we continue our NeurIPS series with Dan Friedman, a PhD student in the Princeton NLP group. In our conversation, we explore his research on mechanistic interpretability for transformer models, specifically his paper, Learning Transformer Programs. The LTP paper proposes modifications to the transformer architecture that allow transformer models to be easily converted into human-readable programs, making them inherently interpretable. We compare the approach proposed by this research with prior approaches to understanding these models, along with the shortcomings of those earlier methods. We also dig into the approach’s functional and scaling limitations.

The complete show notes for this episode can be found at twimlai.com/go/667.</description>
      <pubDate>Mon, 15 Jan 2024 19:28:59 -0000</pubDate>
      <itunes:title>Learning Transformer Programs with Dan Friedman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>667</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/1bdfbbba-b3d6-11ee-a62e-23b473a44dc3/image/0b2efb.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, we continue our NeurIPS series with Dan Friedman, a PhD student in the Princeton NLP group. In our conversation, we explore his research on mechanistic interpretability for transformer models, specifically his paper, Learning Transformer Programs. The LTP paper proposes modifications to the transformer architecture that allow transformer models to be easily converted into human-readable programs, making them inherently interpretable. We compare the approach proposed by this research with prior approaches to understanding these models, along with the shortcomings of those earlier methods. We also dig into the approach’s functional and scaling limitations.

The complete show notes for this episode can be found at twimlai.com/go/667.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, we continue our NeurIPS series with Dan Friedman, a PhD student in the Princeton NLP group. In our conversation, we explore his research on mechanistic interpretability for transformer models, specifically his paper, Learning Transformer Programs. The LTP paper proposes modifications to the transformer architecture that allow transformer models to be easily converted into human-readable programs, making them inherently interpretable. We compare the approach proposed by this research with prior approaches to understanding these models, along with the shortcomings of those earlier methods. We also dig into the approach’s functional and scaling limitations.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/667">twimlai.com/go/667</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2328</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1bdfbbba-b3d6-11ee-a62e-23b473a44dc3]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9132550809.mp3?updated=1705344567"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Trends 2024: Machine Learning &amp; Deep Learning with Thomas Dietterich - #666</title>
      <link>https://twimlai.com/podcast/twimlai/ai-trends-2024-machine-learning-deep-learning/</link>
      <description>Today we continue our AI Trends 2024 series with a conversation with Thomas Dietterich, distinguished professor emeritus at Oregon State University. As you might expect, Large Language Models figured prominently in our conversation, and we covered a vast array of papers and use cases exploring current research into topics such as monolithic vs. modular architectures, hallucinations, the application of uncertainty quantification (UQ), and using RAG as a sort of memory module for LLMs. Lastly, don’t miss Tom’s predictions on what he foresees happening this year as well as his words of encouragement for those new to the field.

The complete show notes for this episode can be found at twimlai.com/go/666.</description>
      <pubDate>Mon, 08 Jan 2024 16:50:03 -0000</pubDate>
      <itunes:title>AI Trends 2024: Machine Learning &amp; Deep Learning with Thomas Dietterich</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>666</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/44eb19bc-ae42-11ee-bad3-f3226e194437/image/011083.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we continue our AI Trends 2024 series with a conversation with Thomas Dietterich, distinguished professor emeritus at Oregon State University. As you might expect, Large Language Models figured prominently in our conversation, and we covered a vast array of papers and use cases exploring current research into topics such as monolithic vs. modular architectures, hallucinations, the application of uncertainty quantification (UQ), and using RAG as a sort of memory module for LLMs. Lastly, don’t miss Tom’s predictions on what he foresees happening this year as well as his words of encouragement for those new to the field.

The complete show notes for this episode can be found at twimlai.com/go/666.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our AI Trends 2024 series with a conversation with Thomas Dietterich, distinguished professor emeritus at Oregon State University. As you might expect, Large Language Models figured prominently in our conversation, and we covered a vast array of papers and use cases exploring current research into topics such as monolithic vs. modular architectures, hallucinations, the application of uncertainty quantification (UQ), and using RAG as a sort of memory module for LLMs. Lastly, don’t miss Tom’s predictions on what he foresees happening this year as well as his words of encouragement for those new to the field.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/666">twimlai.com/go/666</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3918</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[44eb19bc-ae42-11ee-bad3-f3226e194437]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6485051406.mp3?updated=1704731315"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Trends 2024: Computer Vision with Naila Murray - #665</title>
      <link>https://twimlai.com/podcast/twimlai/ai-trends-2024-computer-vision/</link>
      <description>Today we kick off our AI Trends 2024 series with a conversation with Naila Murray, director of AI research at Meta. In our conversation with Naila, we dig into the latest trends and developments in the realm of computer vision. We explore advancements in the areas of controllable generation, visual programming, 3D Gaussian splatting, and multimodal models, specifically vision plus LLMs. We discuss tools and open source projects, including Segment Anything–a tool for versatile zero-shot image segmentation using simple text prompts, clicks, and bounding boxes; ControlNet–which adds conditional control to stable diffusion models; and DINOv2–a visual encoding model enabling object recognition, segmentation, and depth estimation, even in data-scarce scenarios. Finally, Naila shares her view on the most exciting opportunities in the field, as well as her predictions for the coming years.

The complete show notes for this episode can be found at twimlai.com/go/665.</description>
      <pubDate>Tue, 02 Jan 2024 21:07:00 -0000</pubDate>
      <itunes:title>AI Trends 2024: Computer Vision with Naila Murray</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>665</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ffb25d7a-a98b-11ee-bf82-ff5ff5e8268a/image/ac28e8.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we kick off our AI Trends 2024 series with a conversation with Naila Murray, director of AI research at Meta. In our conversation with Naila, we dig into the latest trends and developments in the realm of computer vision. We explore advancements in the areas of controllable generation, visual programming, 3D Gaussian splatting, and multimodal models, specifically vision plus LLMs. We discuss tools and open source projects, including Segment Anything–a tool for versatile zero-shot image segmentation using simple text prompts, clicks, and bounding boxes; ControlNet–which adds conditional control to stable diffusion models; and DINOv2–a visual encoding model enabling object recognition, segmentation, and depth estimation, even in data-scarce scenarios. Finally, Naila shares her view on the most exciting opportunities in the field, as well as her predictions for the coming years.

The complete show notes for this episode can be found at twimlai.com/go/665.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we kick off our AI Trends 2024 series with a conversation with Naila Murray, director of AI research at Meta. In our conversation with Naila, we dig into the latest trends and developments in the realm of computer vision. We explore advancements in the areas of controllable generation, visual programming, 3D Gaussian splatting, and multimodal models, specifically vision plus LLMs. We discuss tools and open source projects, including Segment Anything–a tool for versatile zero-shot image segmentation using simple text prompts, clicks, and bounding boxes; ControlNet–which adds conditional control to stable diffusion models; and DINOv2–a visual encoding model enabling object recognition, segmentation, and depth estimation, even in data-scarce scenarios. Finally, Naila shares her view on the most exciting opportunities in the field, as well as her predictions for the coming years.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/665">twimlai.com/go/665</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3121</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ffb25d7a-a98b-11ee-bf82-ff5ff5e8268a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4141143494.mp3?updated=1704213226"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Are Vector DBs the Future Data Platform for AI? with Ed Anuff - #664</title>
      <link>https://twimlai.com/podcast/twimlai/are-vector-dbs-the-future-data-platform-for-ai/</link>
      <description>Today we’re joined by Ed Anuff, chief product officer at DataStax. In our conversation, we discuss Ed’s insights on RAG, vector databases, embedding models, and more. We dig into the underpinnings of modern vector databases (like HNSW and DiskANN) that allow them to efficiently handle massive and unstructured data sets, and discuss how they help users serve up relevant results for RAG, AI assistants, and other use cases. We also discuss embedding models and their role in vector comparisons and database retrieval as well as the potential for GPU usage to enhance vector database performance.

The complete show notes for this episode can be found at twimlai.com/go/664.</description>
      <pubDate>Thu, 28 Dec 2023 20:23:00 -0000</pubDate>
      <itunes:title>Are Vector DBs the Future Data Platform for AI? with Ed Anuff</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>664</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/a13e9df6-a5aa-11ee-97f7-db91d4125aa4/image/fc7055.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Ed Anuff, chief product officer at DataStax. In our conversation, we discuss Ed’s insights on RAG, vector databases, embedding models, and more. We dig into the underpinnings of modern vector databases (like HNSW and DiskANN) that allow them to efficiently handle massive and unstructured data sets, and discuss how they help users serve up relevant results for RAG, AI assistants, and other use cases. We also discuss embedding models and their role in vector comparisons and database retrieval as well as the potential for GPU usage to enhance vector database performance.

The complete show notes for this episode can be found at twimlai.com/go/664.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Ed Anuff, chief product officer at DataStax. In our conversation, we discuss Ed’s insights on RAG, vector databases, embedding models, and more. We dig into the underpinnings of modern vector databases (like HNSW and DiskANN) that allow them to efficiently handle massive and unstructured data sets, and discuss how they help users serve up relevant results for RAG, AI assistants, and other use cases. We also discuss embedding models and their role in vector comparisons and database retrieval as well as the potential for GPU usage to enhance vector database performance.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/664">twimlai.com/go/664</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2893</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a13e9df6-a5aa-11ee-97f7-db91d4125aa4]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7800032744.mp3?updated=1703786577"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Quantizing Transformers by Helping Attention Heads Do Nothing with Markus Nagel - #663</title>
      <link>https://twimlai.com/podcast/twimlai/quantizing-transformers-by-helping-attention-heads-do-nothing/</link>
      <description>Today we’re joined by Markus Nagel, research scientist at Qualcomm AI Research, who helps us kick off our coverage of NeurIPS 2023. In our conversation with Markus, we cover his accepted papers at the conference, along with other work presented by Qualcomm AI Research scientists. Markus’ first paper, Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing, focuses on tackling the activation quantization issues introduced by the attention mechanism and how to solve them. We also discuss Pruning vs Quantization: Which is Better?, which compares the effectiveness of these two methods in achieving model weight compression. Additional papers discussed cover topics like using scalarization in multitask and multidomain learning to improve training and inference, using diffusion models for a sequence of state models and actions, applying geometric algebra with equivariance to transformers, and applying deductive verification to chain-of-thought reasoning performed by LLMs.

The complete show notes for this episode can be found at twimlai.com/go/663.</description>
      <pubDate>Tue, 26 Dec 2023 20:07:54 -0000</pubDate>
      <itunes:title>Quantizing Transformers by Helping Attention Heads Do Nothing with Markus Nagel</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>663</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d0de6e16-a419-11ee-82fc-bb527f7496ce/image/b9f2af.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Markus Nagel, research scientist at Qualcomm AI Research, who helps us kick off our coverage of NeurIPS 2023. In our conversation with Markus, we cover his accepted papers at the conference, along with other work presented by Qualcomm AI Research scientists. Markus’ first paper, Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing, focuses on tackling the activation quantization issues introduced by the attention mechanism and how to solve them. We also discuss Pruning vs Quantization: Which is Better?, which compares the effectiveness of these two methods in achieving model weight compression. Additional papers discussed cover topics like using scalarization in multitask and multidomain learning to improve training and inference, using diffusion models for a sequence of state models and actions, applying geometric algebra with equivariance to transformers, and applying deductive verification to chain-of-thought reasoning performed by LLMs.

The complete show notes for this episode can be found at twimlai.com/go/663.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Markus Nagel, research scientist at Qualcomm AI Research, who helps us kick off our coverage of NeurIPS 2023. In our conversation with Markus, we cover his accepted papers at the conference, along with other work presented by Qualcomm AI Research scientists. Markus’ first paper, Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing, focuses on tackling the activation quantization issues introduced by the attention mechanism and how to solve them. We also discuss Pruning vs Quantization: Which is Better?, which compares the effectiveness of these two methods in achieving model weight compression. Additional papers discussed cover topics like using scalarization in multitask and multidomain learning to improve training and inference, using diffusion models for a sequence of state models and actions, applying geometric algebra with equivariance to transformers, and applying deductive verification to chain-of-thought reasoning performed by LLMs.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/663">twimlai.com/go/663</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2809</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d0de6e16-a419-11ee-82fc-bb527f7496ce]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8515758475.mp3?updated=1703614428"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Responsible AI in the Generative Era with Michael Kearns - #662</title>
      <link>https://twimlai.com/podcast/twimlai/responsible-ai-in-the-generative-era/</link>
      <description>Today we’re joined by Michael Kearns, professor in the Department of Computer and Information Science at the University of Pennsylvania and an Amazon Scholar. In our conversation with Michael, we discuss the new challenges to responsible AI brought about by the generative AI era. We explore Michael’s learnings and insights from the intersection of his real-world experience at AWS and his work in academia. We cover a diverse range of topics under this banner, including service card metrics, privacy, hallucinations, RLHF, and LLM evaluation benchmarks. We also touch on Clean Rooms ML, a secure environment that balances access to private datasets through differential privacy techniques, offering a new approach to secure data handling in machine learning.

The complete show notes for this episode can be found at twimlai.com/go/662.</description>
      <pubDate>Fri, 22 Dec 2023 01:37:00 -0000</pubDate>
      <itunes:title>Responsible AI in the Generative Era with Michael Kearns</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>662</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c19b684a-a05f-11ee-bfbc-8fb79d03af82/image/9a787c.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Michael Kearns, professor in the Department of Computer and Information Science at the University of Pennsylvania and an Amazon Scholar. In our conversation with Michael, we discuss the new challenges to responsible AI brought about by the generative AI era. We explore Michael’s learnings and insights from the intersection of his real-world experience at AWS and his work in academia. We cover a diverse range of topics under this banner, including service card metrics, privacy, hallucinations, RLHF, and LLM evaluation benchmarks. We also touch on Clean Rooms ML, a secure environment that balances access to private datasets through differential privacy techniques, offering a new approach to secure data handling in machine learning.

The complete show notes for this episode can be found at twimlai.com/go/662.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Michael Kearns, professor in the Department of Computer and Information Science at the University of Pennsylvania and an Amazon Scholar. In our conversation with Michael, we discuss the new challenges to responsible AI brought about by the generative AI era. We explore Michael’s learnings and insights from the intersection of his real-world experience at AWS and his work in academia. We cover a diverse range of topics under this banner, including service card metrics, privacy, hallucinations, RLHF, and LLM evaluation benchmarks. We also touch on Clean Rooms ML, a secure environment that balances access to private datasets through differential privacy techniques, offering a new approach to secure data handling in machine learning.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/662">twimlai.com/go/662</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2164</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c19b684a-a05f-11ee-bfbc-8fb79d03af82]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9941029015.mp3?updated=1703204663"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Edutainment for AI and AWS PartyRock with Mike Miller - #661</title>
      <link>https://twimlai.com/podcast/twimlai/edutainment-for-ai-and-aws-partyrock/</link>
      <description>Today we’re joined by Mike Miller, director of product at AWS, responsible for the company’s “edutainment” products. In our conversation with Mike, we explore AWS PartyRock, a no-code generative AI app builder that allows users to easily create fun and shareable AI applications by selecting a model, chaining prompts together, and linking different text, image, and chatbot widgets. Additionally, we discuss some of the previous tools Mike’s team has delivered at the intersection of developer education and entertainment, including DeepLens, a computer vision hardware device; DeepRacer, a programmable vehicle that uses reinforcement learning to navigate a track; and lastly, DeepComposer, a generative AI model that transforms musical inputs and creates accompanying compositions.

The complete show notes for this episode can be found at twimlai.com/go/661.</description>
      <pubDate>Mon, 18 Dec 2023 16:46:00 -0000</pubDate>
      <itunes:title>Edutainment for AI and AWS PartyRock with Mike Miller</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>661</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/aa71afe8-9dbd-11ee-a32a-d758d9dd4c76/image/c25445.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Mike Miller, director of product at AWS, responsible for the company’s “edutainment” products. In our conversation with Mike, we explore AWS PartyRock, a no-code generative AI app builder that allows users to easily create fun and shareable AI applications by selecting a model, chaining prompts together, and linking different text, image, and chatbot widgets. Additionally, we discuss some of the previous tools Mike’s team has delivered at the intersection of developer education and entertainment, including DeepLens, a computer vision hardware device; DeepRacer, a programmable vehicle that uses reinforcement learning to navigate a track; and lastly, DeepComposer, a generative AI model that transforms musical inputs and creates accompanying compositions.

The complete show notes for this episode can be found at twimlai.com/go/661.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Mike Miller, director of product at AWS, responsible for the company’s “edutainment” products. In our conversation with Mike, we explore AWS PartyRock, a no-code generative AI app builder that allows users to easily create fun and shareable AI applications by selecting a model, chaining prompts together, and linking different text, image, and chatbot widgets. Additionally, we discuss some of the previous tools Mike’s team has delivered at the intersection of developer education and entertainment, including DeepLens, a computer vision hardware device; DeepRacer, a programmable vehicle that uses reinforcement learning to navigate a track; and lastly, DeepComposer, a generative AI model that transforms musical inputs and creates accompanying compositions.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/661">twimlai.com/go/661</a>.</p>]]>
      </content:encoded>
      <itunes:duration>1786</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[aa71afe8-9dbd-11ee-a32a-d758d9dd4c76]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6922555491.mp3?updated=1703041712"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Data, Systems and ML for Visual Understanding with Cody Coleman - #660</title>
      <link>https://twimlai.com/podcast/twimlai/data-systems-and-ml-for-visual-understanding/</link>
      <description>Today we’re joined by Cody Coleman, co-founder and CEO of Coactive AI. In our conversation with Cody, we discuss how Coactive has leveraged modern data, systems, and machine learning techniques to deliver its multimodal asset platform and visual search tools. Cody shares his expertise in the area of data-centric AI, and we dig into techniques like active learning and core set selection, and how they can drive greater efficiency throughout the machine learning lifecycle. We explore the various ways Coactive uses multimodal embeddings to enable their core visual search experience, and we cover the infrastructure optimizations they’ve implemented in order to scale their systems. We conclude with Cody’s advice for entrepreneurs and engineers building companies around generative AI technologies.

The complete show notes for this episode can be found at twimlai.com/go/660.</description>
      <pubDate>Thu, 14 Dec 2023 22:25:00 -0000</pubDate>
      <itunes:title>Data, Systems and ML for Visual Understanding with Cody Coleman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>660</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/8abc0030-9ac1-11ee-8cbb-6b7829060994/image/5b5e01.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Cody Coleman, co-founder and CEO of Coactive AI. In our conversation with Cody, we discuss how Coactive has leveraged modern data, systems, and machine learning techniques to deliver its multimodal asset platform and visual search tools. Cody shares his expertise in the area of data-centric AI, and we dig into techniques like active learning and core set selection, and how they can drive greater efficiency throughout the machine learning lifecycle. We explore the various ways Coactive uses multimodal embeddings to enable their core visual search experience, and we cover the infrastructure optimizations they’ve implemented in order to scale their systems. We conclude with Cody’s advice for entrepreneurs and engineers building companies around generative AI technologies.

The complete show notes for this episode can be found at twimlai.com/go/660.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Cody Coleman, co-founder and CEO of Coactive AI. In our conversation with Cody, we discuss how Coactive has leveraged modern data, systems, and machine learning techniques to deliver its multimodal asset platform and visual search tools. Cody shares his expertise in the area of data-centric AI, and we dig into techniques like active learning and core set selection, and how they can drive greater efficiency throughout the machine learning lifecycle. We explore the various ways Coactive uses multimodal embeddings to enable their core visual search experience, and we cover the infrastructure optimizations they’ve implemented in order to scale their systems. We conclude with Cody’s advice for entrepreneurs and engineers building companies around generative AI technologies.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/660">twimlai.com/go/660</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2307</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[8abc0030-9ac1-11ee-8cbb-6b7829060994]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1255105858.mp3?updated=1702592679"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Patterns and Middleware for LLM Applications with Kyle Roche - #659</title>
      <link>https://twimlai.com/podcast/twimlai/patterns-and-middleware-for-llm-applications/</link>
      <description>Today we’re joined by Kyle Roche, founder and CEO of Griptape, to discuss patterns and middleware for LLM applications. We dive into the emerging patterns for developing LLM applications, such as off-prompt data, which allows data retrieval without compromising the chain of thought within language models, and pipelines, sequences of tasks given to LLMs that can involve a different model for each task or step. We also explore Griptape, an open-source, Python-based middleware stack that aims to securely connect LLM applications to an organization’s internal and external data systems. We discuss the abstractions it offers, including drivers, memory management, rule sets, DAG-based workflows, and a prompt stack. Additionally, we touch on common customer concerns such as privacy, retraining, and sovereignty issues, and several use cases that leverage role-based retrieval methods to optimize human augmentation tasks.

The complete show notes for this episode can be found at twimlai.com/go/659.</description>
      <pubDate>Mon, 11 Dec 2023 23:15:23 -0000</pubDate>
      <itunes:title>Patterns and Middleware for LLM Applications with Kyle Roche</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>659</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6eb639fa-9855-11ee-9c6e-9fe469d915da/image/00157c.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Kyle Roche, founder and CEO of Griptape, to discuss patterns and middleware for LLM applications. We dive into the emerging patterns for developing LLM applications, such as off-prompt data, which allows data retrieval without compromising the chain of thought within language models, and pipelines, sequences of tasks given to LLMs that can involve a different model for each task or step. We also explore Griptape, an open-source, Python-based middleware stack that aims to securely connect LLM applications to an organization’s internal and external data systems. We discuss the abstractions it offers, including drivers, memory management, rule sets, DAG-based workflows, and a prompt stack. Additionally, we touch on common customer concerns such as privacy, retraining, and sovereignty issues, and several use cases that leverage role-based retrieval methods to optimize human augmentation tasks.

The complete show notes for this episode can be found at twimlai.com/go/659.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Kyle Roche, founder and CEO of Griptape, to discuss patterns and middleware for LLM applications. We dive into the emerging patterns for developing LLM applications, such as off-prompt data, which allows data retrieval without compromising the chain of thought within language models, and pipelines, sequences of tasks given to LLMs that can involve a different model for each task or step. We also explore Griptape, an open-source, Python-based middleware stack that aims to securely connect LLM applications to an organization’s internal and external data systems. We discuss the abstractions it offers, including drivers, memory management, rule sets, DAG-based workflows, and a prompt stack. Additionally, we touch on common customer concerns such as privacy, retraining, and sovereignty issues, and several use cases that leverage role-based retrieval methods to optimize human augmentation tasks.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/659">twimlai.com/go/659</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2158</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6eb639fa-9855-11ee-9c6e-9fe469d915da]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3085803991.mp3?updated=1702334494"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Access and Inclusivity as a Technical Challenge with Prem Natarajan - #658</title>
      <link>https://twimlai.com/podcast/twimlai/ai-access-and-inclusivity-as-a-technical-challenge/</link>
      <description>Today we’re joined by Prem Natarajan, chief scientist and head of enterprise AI at Capital One. In our conversation, we discuss AI access and inclusivity as technical challenges and explore some of Prem and his team’s multidisciplinary approaches to tackling these complexities. We dive into the issues of bias, dealing with class imbalances, and the integration of various research initiatives to achieve additive results. Prem also shares his team’s work on foundation models for financial data curation, highlighting the importance of data quality and the use of federated learning, and emphasizing the impact these factors have on the model performance and reliability in critical applications like fraud detection. Lastly, Prem shares his overall approach to tackling AI research in the context of a banking enterprise, including prioritizing mission-inspired research aiming to deliver tangible benefits to customers and the broader community, investing in diverse talent and the best infrastructure, and forging strategic partnerships with a variety of academic labs.

The complete show notes for this episode can be found at twimlai.com/go/658.</description>
      <pubDate>Mon, 04 Dec 2023 20:08:41 -0000</pubDate>
      <itunes:title>AI Access and Inclusivity as a Technical Challenge with Prem Natarajan</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>658</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/fc649446-92dc-11ee-b1da-0b9f4f24eacb/image/447af0.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Prem Natarajan, chief scientist and head of enterprise AI at Capital One. In our conversation, we discuss AI access and inclusivity as technical challenges and explore some of Prem and his team’s multidisciplinary approaches to tackling these complexities. We dive into the issues of bias, dealing with class imbalances, and the integration of various research initiatives to achieve additive results. Prem also shares his team’s work on foundation models for financial data curation, highlighting the importance of data quality and the use of federated learning, and emphasizing the impact these factors have on the model performance and reliability in critical applications like fraud detection. Lastly, Prem shares his overall approach to tackling AI research in the context of a banking enterprise, including prioritizing mission-inspired research aiming to deliver tangible benefits to customers and the broader community, investing in diverse talent and the best infrastructure, and forging strategic partnerships with a variety of academic labs.

The complete show notes for this episode can be found at twimlai.com/go/658.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Prem Natarajan, chief scientist and head of enterprise AI at Capital One. In our conversation, we discuss AI access and inclusivity as technical challenges and explore some of Prem and his team’s multidisciplinary approaches to tackling these complexities. We dive into the issues of bias, dealing with class imbalances, and the integration of various research initiatives to achieve additive results. Prem also shares his team’s work on foundation models for financial data curation, highlighting the importance of data quality and the use of federated learning, and emphasizing the impact these factors have on the model performance and reliability in critical applications like fraud detection. Lastly, Prem shares his overall approach to tackling AI research in the context of a banking enterprise, including prioritizing mission-inspired research aiming to deliver tangible benefits to customers and the broader community, investing in diverse talent and the best infrastructure, and forging strategic partnerships with a variety of academic labs.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/658">twimlai.com/go/658</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2506</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fc649446-92dc-11ee-b1da-0b9f4f24eacb]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2258208342.mp3?updated=1701719133"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building LLM-Based Applications with Azure OpenAI with Jay Emery - #657</title>
      <link>https://twimlai.com/podcast/twimlai/building-llm-based-applications-with-azure-openai/</link>
      <description>Today we’re joined by Jay Emery, director of technical sales &amp; architecture at Microsoft Azure. In our conversation with Jay, we discuss the challenges organizations face when building LLM-based applications, and we explore some of the techniques they are using to overcome them. We dive into the concerns around security, data privacy, cost management, and performance, as well as the effectiveness of prompting versus fine-tuning in achieving desired results, and when each approach should be applied. We cover methods such as prompt tuning, prompt chaining, prompt variance, fine-tuning, and RAG to enhance LLM output, along with ways to speed up inference performance such as choosing the right model, parallelization, and provisioned throughput units (PTUs). Jay also shares several intriguing use cases describing how businesses use tools like Azure Machine Learning prompt flow and Azure ML AI Studio to tailor LLMs to their unique needs and processes.

The complete show notes for this episode can be found at twimlai.com/go/657.</description>
      <pubDate>Tue, 28 Nov 2023 21:24:07 -0000</pubDate>
      <itunes:title>Building LLM-Based Applications with Azure OpenAI with Jay Emery</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>657</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/0a345ffa-8e2f-11ee-bb22-4b695f4987e0/image/3989be.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Jay Emery, director of technical sales &amp; architecture at Microsoft Azure. In our conversation with Jay, we discuss the challenges faced by organizations when building LLM-based applications, and we explore some of the techniques they are using to overcome them. We dive into the concerns around security, data privacy, cost management, and performance, as well as the effectiveness of prompting versus fine-tuning in achieving the desired results, and when each approach should be applied. We cover methods such as prompt tuning and prompt chaining, prompt variance, fine-tuning, and RAG to enhance LLM output, along with ways to speed up inference performance such as choosing the right model, parallelization, and provisioned throughput units (PTUs). In addition, Jay shares several intriguing use cases describing how businesses use tools like Azure Machine Learning prompt flow and Azure ML AI Studio to tailor LLMs to their unique needs and processes.

The complete show notes for this episode can be found at twimlai.com/go/657.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Jay Emery, director of technical sales &amp; architecture at Microsoft Azure. In our conversation with Jay, we discuss the challenges faced by organizations when building LLM-based applications, and we explore some of the techniques they are using to overcome them. We dive into the concerns around security, data privacy, cost management, and performance, as well as the effectiveness of prompting versus fine-tuning in achieving the desired results, and when each approach should be applied. We cover methods such as prompt tuning and prompt chaining, prompt variance, fine-tuning, and RAG to enhance LLM output, along with ways to speed up inference performance such as choosing the right model, parallelization, and provisioned throughput units (PTUs). In addition, Jay shares several intriguing use cases describing how businesses use tools like Azure Machine Learning prompt flow and Azure ML AI Studio to tailor LLMs to their unique needs and processes.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/657">twimlai.com/go/657</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2603</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0a345ffa-8e2f-11ee-bb22-4b695f4987e0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6677515809.mp3?updated=1701204619"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Visual Generative AI Ecosystem Challenges with Richard Zhang - #656</title>
      <link>https://twimlai.com/podcast/twimlai/visual-generative-ai-ecosystem-challenges/</link>
      <description>Today we’re joined by Richard Zhang, senior research scientist at Adobe Research. In our conversation with Richard, we explore the research challenges that arise when viewing visual generative AI from an ecosystem perspective, considering the disparate needs of creators, consumers, and contributors. We start with his work on perceptual metrics and the LPIPS paper, which allow us to better align human perception and computer vision, and which remain in use in contemporary generative AI applications such as Stable Diffusion, GANs, and latent diffusion. We look at his work creating detection tools for fake visual content, highlighting the importance of generalization of these detection methods to new, unseen models. Lastly, we dig into his work on data attribution and concept ablation, which aim to address the challenging open problem of allowing artists and others to manage their contributions to generative AI training data sets.

The complete show notes for this episode can be found at twimlai.com/go/656.</description>
      <pubDate>Mon, 20 Nov 2023 17:27:00 -0000</pubDate>
      <itunes:title>Visual Generative AI Ecosystem Challenges with Richard Zhang</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>656</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5911989a-87c7-11ee-aa1b-0f9cbc2bf4eb/image/9d4198.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Richard Zhang, senior research scientist at Adobe Research. In our conversation with Richard, we explore the research challenges that arise when viewing visual generative AI from an ecosystem perspective, considering the disparate needs of creators, consumers, and contributors. We start with his work on perceptual metrics and the LPIPS paper, which allow us to better align human perception and computer vision, and which remain in use in contemporary generative AI applications such as Stable Diffusion, GANs, and latent diffusion. We look at his work creating detection tools for fake visual content, highlighting the importance of generalization of these detection methods to new, unseen models. Lastly, we dig into his work on data attribution and concept ablation, which aim to address the challenging open problem of allowing artists and others to manage their contributions to generative AI training data sets.

The complete show notes for this episode can be found at twimlai.com/go/656.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Richard Zhang, senior research scientist at Adobe Research. In our conversation with Richard, we explore the research challenges that arise when viewing visual generative AI from an ecosystem perspective, considering the disparate needs of creators, consumers, and contributors. We start with his work on perceptual metrics and the LPIPS paper, which allow us to better align human perception and computer vision, and which remain in use in contemporary generative AI applications such as Stable Diffusion, GANs, and latent diffusion. We look at his work creating detection tools for fake visual content, highlighting the importance of generalization of these detection methods to new, unseen models. Lastly, we dig into his work on data attribution and concept ablation, which aim to address the challenging open problem of allowing artists and others to manage their contributions to generative AI training data sets.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/656">twimlai.com/go/656</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2440</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5911989a-87c7-11ee-aa1b-0f9cbc2bf4eb]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6292733087.mp3?updated=1700600746"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deploying Edge and Embedded AI Systems with Heather Gorr - #655</title>
      <link>https://twimlai.com/podcast/twimlai/deploying-edge-and-embedded-ai-systems/</link>
      <description>Today we’re joined by Heather Gorr, principal MATLAB product marketing manager at MathWorks. In our conversation with Heather, we discuss the deployment of AI models to hardware devices and embedded AI systems. We explore factors to consider during data preparation, model development, and ultimately deployment, to ensure a successful project. Factors such as device constraints and latency requirements which dictate the amount and frequency of data flowing onto the device are discussed, as are modeling needs such as explainability, robustness and quantization; the use of simulation throughout the modeling process; the need to apply robust verification and validation methodologies to ensure safety and reliability; and the need to adapt and apply MLOps techniques for speed and consistency. Heather also shares noteworthy anecdotes about embedded AI deployments in industries including automotive and oil &amp; gas.

The complete show notes for this episode can be found at twimlai.com/go/655.</description>
      <pubDate>Mon, 13 Nov 2023 18:56:00 -0000</pubDate>
      <itunes:title>Deploying Edge and Embedded AI Systems with Heather Gorr</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>655</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/f479e01e-824d-11ee-ba99-e71412bbf4a8/image/47f7af.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Heather Gorr, principal MATLAB product marketing manager at MathWorks. In our conversation with Heather, we discuss the deployment of AI models to hardware devices and embedded AI systems. We explore factors to consider during data preparation, model development, and ultimately deployment, to ensure a successful project. Factors such as device constraints and latency requirements which dictate the amount and frequency of data flowing onto the device are discussed, as are modeling needs such as explainability, robustness and quantization; the use of simulation throughout the modeling process; the need to apply robust verification and validation methodologies to ensure safety and reliability; and the need to adapt and apply MLOps techniques for speed and consistency. Heather also shares noteworthy anecdotes about embedded AI deployments in industries including automotive and oil &amp; gas.

The complete show notes for this episode can be found at twimlai.com/go/655.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Heather Gorr, principal MATLAB product marketing manager at MathWorks. In our conversation with Heather, we discuss the deployment of AI models to hardware devices and embedded AI systems. We explore factors to consider during data preparation, model development, and ultimately deployment, to ensure a successful project. Factors such as device constraints and latency requirements which dictate the amount and frequency of data flowing onto the device are discussed, as are modeling needs such as explainability, robustness and quantization; the use of simulation throughout the modeling process; the need to apply robust verification and validation methodologies to ensure safety and reliability; and the need to adapt and apply MLOps techniques for speed and consistency. Heather also shares noteworthy anecdotes about embedded AI deployments in industries including automotive and oil &amp; gas.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/655">twimlai.com/go/655</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2316</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f479e01e-824d-11ee-ba99-e71412bbf4a8]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1711734166.mp3?updated=1699977362"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Sentience, Agency and Catastrophic Risk with Yoshua Bengio - #654</title>
      <link>https://twimlai.com/podcast/twimlai/ai-sentience-agency-and-catastrophic-risk/</link>
      <description>Today we’re joined by Yoshua Bengio, professor at Université de Montréal. In our conversation with Yoshua, we discuss AI safety and the potentially catastrophic risks of its misuse. Yoshua highlights various risks and the dangers of AI being used to manipulate people, spread disinformation, cause harm, and further concentrate power in society. We dive deep into the risks associated with achieving human-level competence in enough areas with AI, and tackle the challenges of defining and understanding concepts like agency and sentience. Additionally, our conversation touches on solutions to AI safety, such as the need for robust safety guardrails, investments in national security protections and countermeasures, bans on systems with uncertain safety, and the development of governance-driven AI systems.

The complete show notes for this episode can be found at twimlai.com/go/654.</description>
      <pubDate>Mon, 06 Nov 2023 20:50:59 -0000</pubDate>
      <itunes:title>AI Sentience, Agency and Catastrophic Risk with Yoshua Bengio</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>654</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/20f661ba-7cdd-11ee-969e-bb6a498d86f0/image/7b5707.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Yoshua Bengio, professor at Université de Montréal. In our conversation with Yoshua, we discuss AI safety and the potentially catastrophic risks of its misuse. Yoshua highlights various risks and the dangers of AI being used to manipulate people, spread disinformation, cause harm, and further concentrate power in society. We dive deep into the risks associated with achieving human-level competence in enough areas with AI, and tackle the challenges of defining and understanding concepts like agency and sentience. Additionally, our conversation touches on solutions to AI safety, such as the need for robust safety guardrails, investments in national security protections and countermeasures, bans on systems with uncertain safety, and the development of governance-driven AI systems.

The complete show notes for this episode can be found at twimlai.com/go/654.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Yoshua Bengio, professor at Université de Montréal. In our conversation with Yoshua, we discuss AI safety and the potentially catastrophic risks of its misuse. Yoshua highlights various risks and the dangers of AI being used to manipulate people, spread disinformation, cause harm, and further concentrate power in society. We dive deep into the risks associated with achieving human-level competence in enough areas with AI, and tackle the challenges of defining and understanding concepts like agency and sentience. Additionally, our conversation touches on solutions to AI safety, such as the need for robust safety guardrails, investments in national security protections and countermeasures, bans on systems with uncertain safety, and the development of governance-driven AI systems.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/654">twimlai.com/go/654</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2880</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[20f661ba-7cdd-11ee-969e-bb6a498d86f0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5856512799.mp3?updated=1699300269"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Delivering AI Systems in Highly Regulated Environments with Miriam Friedel - #653</title>
      <link>https://twimlai.com/podcast/twimlai/delivering-ai-systems-in-highly-regulated-environments/</link>
      <description>Today we’re joined by Miriam Friedel, senior director of ML engineering at Capital One. In our conversation with Miriam, we discuss some of the challenges faced when delivering machine learning tools and systems in highly regulated enterprise environments, and some of the practices her teams have adopted to help them operate with greater speed and agility. We also explore how to create a culture of collaboration, the value of standardized tooling and processes, leveraging open-source, and incentivizing model reuse. Miriam also shares her thoughts on building a ‘unicorn’ team, and what this means for the team she’s built at Capital One, as well as her take on build vs. buy decisions for MLOps, and the future of MLOps and enterprise AI more broadly. Throughout, Miriam shares examples of these ideas at work in some of the tools their team has built, such as Rubicon, an open source experiment management tool, and Kubeflow pipeline components that enable Capital One data scientists to efficiently leverage and scale models. 

The complete show notes for this episode can be found at twimlai.com/go/653.</description>
      <pubDate>Mon, 30 Oct 2023 18:27:45 -0000</pubDate>
      <itunes:title>Delivering AI Systems in Highly Regulated Environments with Miriam Friedel</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>653</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/14df513c-774e-11ee-bdbc-bf4a65187c4a/image/96d3ab.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Miriam Friedel, senior director of ML engineering at Capital One. In our conversation with Miriam, we discuss some of the challenges faced when delivering machine learning tools and systems in highly regulated enterprise environments, and some of the practices her teams have adopted to help them operate with greater speed and agility. We also explore how to create a culture of collaboration, the value of standardized tooling and processes, leveraging open-source, and incentivizing model reuse. Miriam also shares her thoughts on building a ‘unicorn’ team, and what this means for the team she’s built at Capital One, as well as her take on build vs. buy decisions for MLOps, and the future of MLOps and enterprise AI more broadly. Throughout, Miriam shares examples of these ideas at work in some of the tools their team has built, such as Rubicon, an open source experiment management tool, and Kubeflow pipeline components that enable Capital One data scientists to efficiently leverage and scale models. 

The complete show notes for this episode can be found at twimlai.com/go/653.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Miriam Friedel, senior director of ML engineering at Capital One. In our conversation with Miriam, we discuss some of the challenges faced when delivering machine learning tools and systems in highly regulated enterprise environments, and some of the practices her teams have adopted to help them operate with greater speed and agility. We also explore how to create a culture of collaboration, the value of standardized tooling and processes, leveraging open-source, and incentivizing model reuse. Miriam also shares her thoughts on building a ‘unicorn’ team, and what this means for the team she’s built at Capital One, as well as her take on build vs. buy decisions for MLOps, and the future of MLOps and enterprise AI more broadly. Throughout, Miriam shares examples of these ideas at work in some of the tools their team has built, such as Rubicon, an open source experiment management tool, and Kubeflow pipeline components that enable Capital One data scientists to efficiently leverage and scale models.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/653">twimlai.com/go/653</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2645</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[14df513c-774e-11ee-bdbc-bf4a65187c4a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1495243177.mp3?updated=1698690252"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Mental Models for Advanced ChatGPT Prompting with Riley Goodside - #652</title>
      <link>https://twimlai.com/podcast/twimlai/mental-models-for-advanced-chatgpt-prompting/</link>
      <description>Today we’re joined by Riley Goodside, staff prompt engineer at Scale AI. In our conversation with Riley, we explore LLM capabilities and limitations, prompt engineering, and the mental models required to apply advanced prompting techniques. We dive deep into understanding LLM behavior, discussing the mechanism of autoregressive inference, comparing k-shot and zero-shot prompting, and dissecting the impact of RLHF. We also discuss the idea that prompting is a scaffolding structure that leverages the model context to achieve the desired model behavior and response, rather than focusing solely on writing ability.

The complete show notes for this episode can be found at twimlai.com/go/652.</description>
      <pubDate>Mon, 23 Oct 2023 19:44:00 -0000</pubDate>
      <itunes:title>Mental Models for Advanced ChatGPT Prompting with Riley Goodside</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>652</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/9c5b9aca-71c3-11ee-8277-33e19c3b0b79/image/faa9b2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Riley Goodside, staff prompt engineer at Scale AI. In our conversation with Riley, we explore LLM capabilities and limitations, prompt engineering, and the mental models required to apply advanced prompting techniques. We dive deep into understanding LLM behavior, discussing the mechanism of autoregressive inference, comparing k-shot and zero-shot prompting, and dissecting the impact of RLHF. We also discuss the idea that prompting is a scaffolding structure that leverages the model context to achieve the desired model behavior and response, rather than focusing solely on writing ability.

The complete show notes for this episode can be found at twimlai.com/go/652.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Riley Goodside, staff prompt engineer at Scale AI. In our conversation with Riley, we explore LLM capabilities and limitations, prompt engineering, and the mental models required to apply advanced prompting techniques. We dive deep into understanding LLM behavior, discussing the mechanism of autoregressive inference, comparing k-shot and zero-shot prompting, and dissecting the impact of RLHF. We also discuss the idea that prompting is a scaffolding structure that leverages the model context to achieve the desired model behavior and response, rather than focusing solely on writing ability.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/652">twimlai.com/go/652</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2398</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9c5b9aca-71c3-11ee-8277-33e19c3b0b79]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4797963163.mp3?updated=1698091963"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Multilingual LLMs and the Values Divide in AI with Sara Hooker - #651</title>
      <link>https://twimlai.com/podcast/twimlai/multilingual-llms-and-the-values-divide-in-ai/</link>
      <description>Today we’re joined by Sara Hooker, director at Cohere and head of Cohere For AI, Cohere’s research lab. In our conversation with Sara, we explore some of the challenges with multilingual models like poor data quality and tokenization, and how they rely on data augmentation and preference training to address these bottlenecks. We also discuss the disadvantages and the motivating factors behind the Mixture of Experts technique, and the importance of a common language between ML researchers and hardware architects to address the pain points in frameworks and create better cohesion between these distinct communities. Sara also highlights the impact and the emotional connection that language models have created in society, the benefits and the current safety concerns of universal models, and the significance of having grounded conversations to characterize and mitigate the risk in the development of AI models. Along the way, we also dive deep into Cohere and Cohere For AI, along with their Aya project, an open science project that aims to build a state-of-the-art multilingual generative language model, as well as some of their recent research papers.

The complete show notes for this episode can be found at twimlai.com/go/651.</description>
      <pubDate>Mon, 16 Oct 2023 19:51:30 -0000</pubDate>
      <itunes:title>Multilingual LLMs and the Values Divide in AI with Sara Hooker</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>651</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/0d5c066a-6c56-11ee-97e9-5394e9a0c8b4/image/fe58bd.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Sara Hooker, director at Cohere and head of Cohere For AI, Cohere’s research lab. In our conversation with Sara, we explore some of the challenges with multilingual models like poor data quality and tokenization, and how they rely on data augmentation and preference training to address these bottlenecks. We also discuss the disadvantages and the motivating factors behind the Mixture of Experts technique, and the importance of a common language between ML researchers and hardware architects to address the pain points in frameworks and create better cohesion between these distinct communities. Sara also highlights the impact and the emotional connection that language models have created in society, the benefits and the current safety concerns of universal models, and the significance of having grounded conversations to characterize and mitigate the risk in the development of AI models. Along the way, we also dive deep into Cohere and Cohere For AI, along with their Aya project, an open science project that aims to build a state-of-the-art multilingual generative language model, as well as some of their recent research papers.

The complete show notes for this episode can be found at twimlai.com/go/651.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Sara Hooker, director at Cohere and head of Cohere For AI, Cohere’s research lab. In our conversation with Sara, we explore some of the challenges with multilingual models like poor data quality and tokenization, and how they rely on data augmentation and preference training to address these bottlenecks. We also discuss the disadvantages and the motivating factors behind the Mixture of Experts technique, and the importance of a common language between ML researchers and hardware architects to address the pain points in frameworks and create better cohesion between these distinct communities. Sara also highlights the impact and the emotional connection that language models have created in society, the benefits and the current safety concerns of universal models, and the significance of having grounded conversations to characterize and mitigate the risk in the development of AI models. Along the way, we also dive deep into Cohere and Cohere For AI, along with their Aya project, an open science project that aims to build a state-of-the-art multilingual generative language model, as well as some of their recent research papers.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/651">twimlai.com/go/651</a>.</p>]]>
      </content:encoded>
      <itunes:duration>4719</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0d5c066a-6c56-11ee-97e9-5394e9a0c8b4]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2669421148.mp3?updated=1697483035"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scaling Multi-Modal Generative AI with Luke Zettlemoyer - #650</title>
      <link>https://twimlai.com/podcast/twimlai/scaling-multi-modal-generative-ai/</link>
      <description>Today we’re joined by Luke Zettlemoyer, professor at University of Washington and a research manager at Meta. In our conversation with Luke, we cover multimodal generative AI, the effect of data on models, and the significance of open source and open science. We explore the grounding problem, the need for visual grounding and embodiment in text-based models, the advantages of discretization tokenization in image generation, and his paper Scaling Laws for Generative Mixed-Modal Language Models, which focuses on simultaneously training LLMs on various modalities. Additionally, we cover his papers on Self-Alignment with Instruction Backtranslation, and LIMA: Less Is More for Alignment.

The complete show notes for this episode can be found at twimlai.com/go/650.</description>
      <pubDate>Mon, 09 Oct 2023 18:54:09 -0000</pubDate>
      <itunes:title>Scaling Multi-Modal Generative AI with Luke Zettlemoyer</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>650</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/b9aac532-66d4-11ee-9286-abb818e2c795/image/606a3b.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Luke Zettlemoyer, professor at University of Washington and a research manager at Meta. In our conversation with Luke, we cover multimodal generative AI, the effect of data on models, and the significance of open source and open science. We explore the grounding problem, the need for visual grounding and embodiment in text-based models, the advantages of discretization tokenization in image generation, and his paper Scaling Laws for Generative Mixed-Modal Language Models, which focuses on simultaneously training LLMs on various modalities. Additionally, we cover his papers on Self-Alignment with Instruction Backtranslation, and LIMA: Less Is More for Alignment.

The complete show notes for this episode can be found at twimlai.com/go/650.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Luke Zettlemoyer, professor at the University of Washington and a research manager at Meta. In our conversation with Luke, we cover multimodal generative AI, the effect of data on models, and the significance of open source and open science. We explore the grounding problem, the need for visual grounding and embodiment in text-based models, the advantages of discrete tokenization in image generation, and his paper Scaling Laws for Generative Mixed-Modal Language Models, which focuses on simultaneously training LLMs on various modalities. Additionally, we cover his papers Self-Alignment with Instruction Backtranslation and LIMA: Less Is More for Alignment.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/650">twimlai.com/go/650</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2324</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b9aac532-66d4-11ee-9286-abb818e2c795]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3862469127.mp3?updated=1696877735"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Pushing Back on AI Hype with Alex Hanna - #649</title>
      <link>https://twimlai.com/podcast/twimlai/pushing-back-on-ai-hype/</link>
      <description>Today we’re joined by Alex Hanna, the Director of Research at the Distributed AI Research Institute (DAIR). In our conversation with Alex, we discuss the topic of AI hype and the importance of tackling the issues and impacts it has on society. Alex highlights how the hype cycle started, concerning use cases, incentives driving people towards the rapid commercialization of AI tools, and the need for robust evaluation tools and frameworks to assess and mitigate the risks of these technologies. We also talk about DAIR and how they’ve crafted their research agenda. We discuss current research projects like DAIR Fellow Asmelash Teka Hadgu’s research supporting machine translation and speech recognition tools for the low-resource Amharic and Tigrinya languages of Ethiopia and Eritrea, in partnership with his startup Lesan.AI. We also explore the “Do Data Sets Have Politics” paper, which focuses on coding various variables and conducting a qualitative analysis of computer vision data sets to uncover the inherent politics present in data sets and the challenges in data set creation.

The complete show notes for this episode can be found at twimlai.com/go/649.</description>
      <pubDate>Mon, 02 Oct 2023 20:37:00 -0000</pubDate>
      <itunes:title>Pushing Back on AI Hype with Alex Hanna</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>649</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/0d9131fe-6158-11ee-80f5-83d1aad696f0/image/5515e9.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Alex Hanna, the Director of Research at the Distributed AI Research Institute (DAIR). In our conversation with Alex, we discuss the topic of AI hype and the importance of tackling the issues and impacts it has on society. Alex highlights how the hype cycle started, concerning use cases, incentives driving people towards the rapid commercialization of AI tools, and the need for robust evaluation tools and frameworks to assess and mitigate the risks of these technologies. We also talk about DAIR and how they’ve crafted their research agenda. We discuss current research projects like DAIR Fellow Asmelash Teka Hadgu’s research supporting machine translation and speech recognition tools for the low-resource Amharic and Tigrinya languages of Ethiopia and Eritrea, in partnership with his startup Lesan.AI. We also explore the “Do Data Sets Have Politics” paper, which focuses on coding various variables and conducting a qualitative analysis of computer vision data sets to uncover the inherent politics present in data sets and the challenges in data set creation.

The complete show notes for this episode can be found at twimlai.com/go/649.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Alex Hanna, the Director of Research at the Distributed AI Research Institute (DAIR). In our conversation with Alex, we discuss the topic of AI hype and the importance of tackling the issues and impacts it has on society. Alex highlights how the hype cycle started, concerning use cases, incentives driving people towards the rapid commercialization of AI tools, and the need for robust evaluation tools and frameworks to assess and mitigate the risks of these technologies. We also talk about DAIR and how they’ve crafted their research agenda. We discuss current research projects like DAIR Fellow Asmelash Teka Hadgu’s research supporting machine translation and speech recognition tools for the low-resource Amharic and Tigrinya languages of Ethiopia and Eritrea, in partnership with his startup Lesan.AI. We also explore the “Do Data Sets Have Politics” paper, which focuses on coding various variables and conducting a qualitative analysis of computer vision data sets to uncover the inherent politics present in data sets and the challenges in data set creation.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/649">twimlai.com/go/649</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2966</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0d9131fe-6158-11ee-80f5-83d1aad696f0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2669084010.mp3?updated=1696274432"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Personalization for Text-to-Image Generative AI with Nataniel Ruiz - #648</title>
      <link>https://twimlai.com/podcast/twimlai/personalization-for-text-to-image-generative-ai/</link>
      <description>Today we’re joined by Nataniel Ruiz, a research scientist at Google. In our conversation with Nataniel, we discuss his recent work around personalization for text-to-image AI models. Specifically, we dig into DreamBooth, an algorithm that enables “subject-driven generation,” that is, the creation of personalized generative models using a small set of user-provided images of a subject. The personalized models can then be used to generate the subject in various contexts using a text prompt. Nataniel gives us a deep dive into the fine-tuning approach used in DreamBooth, the potential reasons behind the algorithm’s effectiveness, the challenges of fine-tuning diffusion models in this way, such as language drift, and how the prior preservation loss technique avoids this setback, as well as the evaluation challenges and metrics used in DreamBooth. We also touch on his other recent papers, including SuTI, StyleDrop, HyperDreamBooth, and Platypus.

The complete show notes for this episode can be found at twimlai.com/go/648.</description>
      <pubDate>Mon, 25 Sep 2023 16:24:00 -0000</pubDate>
      <itunes:title>Personalization for Text-to-Image Generative AI with Nataniel Ruiz</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>648</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/a6abe2bc-5bb8-11ee-8112-43f506bb342b/image/5768e6.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Nataniel Ruiz, a research scientist at Google. In our conversation with Nataniel, we discuss his recent work around personalization for text-to-image AI models. Specifically, we dig into DreamBooth, an algorithm that enables “subject-driven generation,” that is, the creation of personalized generative models using a small set of user-provided images of a subject. The personalized models can then be used to generate the subject in various contexts using a text prompt. Nataniel gives us a deep dive into the fine-tuning approach used in DreamBooth, the potential reasons behind the algorithm’s effectiveness, the challenges of fine-tuning diffusion models in this way, such as language drift, and how the prior preservation loss technique avoids this setback, as well as the evaluation challenges and metrics used in DreamBooth. We also touch on his other recent papers, including SuTI, StyleDrop, HyperDreamBooth, and Platypus.

The complete show notes for this episode can be found at twimlai.com/go/648.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Nataniel Ruiz, a research scientist at Google. In our conversation with Nataniel, we discuss his recent work around personalization for text-to-image AI models. Specifically, we dig into DreamBooth, an algorithm that enables “subject-driven generation,” that is, the creation of personalized generative models using a small set of user-provided images of a subject. The personalized models can then be used to generate the subject in various contexts using a text prompt. Nataniel gives us a deep dive into the fine-tuning approach used in DreamBooth, the potential reasons behind the algorithm’s effectiveness, the challenges of fine-tuning diffusion models in this way, such as language drift, and how the prior preservation loss technique avoids this setback, as well as the evaluation challenges and metrics used in DreamBooth. We also touch on his other recent papers, including SuTI, StyleDrop, HyperDreamBooth, and Platypus.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/648">twimlai.com/go/648</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2662</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a6abe2bc-5bb8-11ee-8112-43f506bb342b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9429073065.mp3?updated=1695659369"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Ensuring LLM Safety for Production Applications with Shreya Rajpal - #647</title>
      <link>https://twimlai.com/podcast/twimlai/ensuring-llm-safety-for-production-applications/</link>
      <description>Today we’re joined by Shreya Rajpal, founder and CEO of Guardrails AI. In our conversation with Shreya, we discuss ensuring the safety and reliability of language models for production applications. We explore the risks and challenges associated with these models, including different types of hallucinations and other LLM failure modes. We also talk about the susceptibility of the popular retrieval augmented generation (RAG) technique to closed-domain hallucination, and how this challenge can be addressed. We also cover the need for robust evaluation metrics and tooling for building with large language models. Lastly, we explore Guardrails, an open-source project that provides a catalog of validators that run on top of language models to enforce correctness and reliability efficiently.

The complete show notes for this episode can be found at twimlai.com/go/647.</description>
      <pubDate>Mon, 18 Sep 2023 18:17:11 -0000</pubDate>
      <itunes:title>Ensuring LLM Safety for Production Applications with Shreya Rajpal</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>647</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/8e382422-563e-11ee-9669-d73fcdd12156/image/f17f6a.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Shreya Rajpal, founder and CEO of Guardrails AI. In our conversation with Shreya, we discuss ensuring the safety and reliability of language models for production applications. We explore the risks and challenges associated with these models, including different types of hallucinations and other LLM failure modes. We also talk about the susceptibility of the popular retrieval augmented generation (RAG) technique to closed-domain hallucination, and how this challenge can be addressed. We also cover the need for robust evaluation metrics and tooling for building with large language models. Lastly, we explore Guardrails, an open-source project that provides a catalog of validators that run on top of language models to enforce correctness and reliability efficiently.

The complete show notes for this episode can be found at twimlai.com/go/647.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Shreya Rajpal, founder and CEO of Guardrails AI. In our conversation with Shreya, we discuss ensuring the safety and reliability of language models for production applications. We explore the risks and challenges associated with these models, including different types of hallucinations and other LLM failure modes. We also talk about the susceptibility of the popular retrieval augmented generation (RAG) technique to closed-domain hallucination, and how this challenge can be addressed. We also cover the need for robust evaluation metrics and tooling for building with large language models. Lastly, we explore Guardrails, an open-source project that provides a catalog of validators that run on top of language models to enforce correctness and reliability efficiently.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/647">twimlai.com/go/647</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2452</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[8e382422-563e-11ee-9669-d73fcdd12156]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8492343705.mp3?updated=1695059716"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>What’s Next in LLM Reasoning? with Roland Memisevic - #646 </title>
      <link>https://twimlai.com/podcast/twimlai/whats-next-in-llm-reasoning/</link>
      <description>Today we’re joined by Roland Memisevic, a senior director at Qualcomm AI Research. In our conversation with Roland, we discuss the significance of language in humanlike AI systems and the advantages and limitations of autoregressive models like Transformers in building them. We cover the current and future role of recurrence in LLM reasoning and the significance of improving grounding in AI—including the potential of developing a sense of self in agents. Along the way, we discuss Fitness Ally, a fitness coach trained on a visually grounded large language model, which has served as a platform for Roland’s research into neural reasoning, as well as recent research that explores topics like visual grounding for large language models and state-augmented architectures for AI agents.

The complete show notes for this episode can be found at twimlai.com/go/646.</description>
      <pubDate>Mon, 11 Sep 2023 18:38:00 -0000</pubDate>
      <itunes:title>What’s Next in LLM Reasoning? with Roland Memisevic</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>646</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/65dedab4-50c8-11ee-8bd4-134e82c5904e/image/29de7b.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Roland Memisevic, a senior director at Qualcomm AI Research. In our conversation with Roland, we discuss the significance of language in humanlike AI systems and the advantages and limitations of autoregressive models like Transformers in building them. We cover the current and future role of recurrence in LLM reasoning and the significance of improving grounding in AI—including the potential of developing a sense of self in agents. Along the way, we discuss Fitness Ally, a fitness coach trained on a visually grounded large language model, which has served as a platform for Roland’s research into neural reasoning, as well as recent research that explores topics like visual grounding for large language models and state-augmented architectures for AI agents.

The complete show notes for this episode can be found at twimlai.com/go/646.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Roland Memisevic, a senior director at Qualcomm AI Research. In our conversation with Roland, we discuss the significance of language in humanlike AI systems and the advantages and limitations of autoregressive models like Transformers in building them. We cover the current and future role of recurrence in LLM reasoning and the significance of improving grounding in AI—including the potential of developing a sense of self in agents. Along the way, we discuss Fitness Ally, a fitness coach trained on a visually grounded large language model, which has served as a platform for Roland’s research into neural reasoning, as well as recent research that explores topics like visual grounding for large language models and state-augmented architectures for AI agents.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/646">twimlai.com/go/646</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3540</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[65dedab4-50c8-11ee-8bd4-134e82c5904e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8628716738.mp3?updated=1695064477"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Is ChatGPT Getting Worse? with James Zou - #645</title>
      <link>https://twimlai.com/podcast/twimlai/is-chatgpt-getting-worse/</link>
      <description>Today we’re joined by James Zou, an assistant professor at Stanford University. In our conversation with James, we explore the differences in ChatGPT’s behavior over the last few months. We discuss the issues that can arise from inconsistencies in generative AI models, how he tested ChatGPT’s performance in various tasks, drawing comparisons between March 2023 and June 2023 for both GPT-3.5 and GPT-4 versions, and the possible reasons behind the declining performance of these models. James also shares his thoughts on how surgical AI editing akin to CRISPR could potentially revolutionize LLM and AI systems, and how adding monitoring tools can help in tracking behavioral changes in these models. Finally, we discuss James' recent paper on pathology image analysis using Twitter data, in which he explores the challenges of obtaining large medical datasets and data collection, as well as the model’s architecture, training, and evaluation process.

The complete show notes for this episode can be found at twimlai.com/go/645.</description>
      <pubDate>Mon, 04 Sep 2023 16:00:00 -0000</pubDate>
      <itunes:title>Is ChatGPT Getting Worse? with James Zou</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>645</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4be1eb1c-4914-11ee-af46-3fa07a3549f6/image/4b7dac.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by James Zou, an assistant professor at Stanford University. In our conversation with James, we explore the differences in ChatGPT’s behavior over the last few months. We discuss the issues that can arise from inconsistencies in generative AI models, how he tested ChatGPT’s performance in various tasks, drawing comparisons between March 2023 and June 2023 for both GPT-3.5 and GPT-4 versions, and the possible reasons behind the declining performance of these models. James also shares his thoughts on how surgical AI editing akin to CRISPR could potentially revolutionize LLM and AI systems, and how adding monitoring tools can help in tracking behavioral changes in these models. Finally, we discuss James' recent paper on pathology image analysis using Twitter data, in which he explores the challenges of obtaining large medical datasets and data collection, as well as the model’s architecture, training, and evaluation process.

The complete show notes for this episode can be found at twimlai.com/go/645.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by James Zou, an assistant professor at Stanford University. In our conversation with James, we explore the differences in ChatGPT’s behavior over the last few months. We discuss the issues that can arise from inconsistencies in generative AI models, how he tested ChatGPT’s performance in various tasks, drawing comparisons between March 2023 and June 2023 for both GPT-3.5 and GPT-4 versions, and the possible reasons behind the declining performance of these models. James also shares his thoughts on how surgical AI editing akin to CRISPR could potentially revolutionize LLM and AI systems, and how adding monitoring tools can help in tracking behavioral changes in these models. Finally, we discuss James' recent paper on pathology image analysis using Twitter data, in which he explores the challenges of obtaining large medical datasets and data collection, as well as the model’s architecture, training, and evaluation process.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/645">twimlai.com/go/645</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2537</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4be1eb1c-4914-11ee-af46-3fa07a3549f6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8011459966.mp3?updated=1693938099"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644</title>
      <link>https://twimlai.com/podcast/twimlai/why-deep-networks-and-brains-learn-similar-features/</link>
      <description>Today we’re joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the concept of universality between neural representations and deep neural networks, and how these principles of efficiency provide an ability to find consistent features across networks and tasks. We also discuss her recent paper on Bispectral Neural Networks, which focuses on the Fourier transform and its relation to group theory, the use of the bispectrum to achieve invariance in deep neural networks, how geometric deep learning extends the concept of CNNs to other domains, and the similarities in the fundamental structure of artificial and biological neural networks, where applying similar constraints leads to the convergence of their solutions.

The complete show notes for this episode can be found at twimlai.com/go/644.</description>
      <pubDate>Mon, 28 Aug 2023 18:13:14 -0000</pubDate>
      <itunes:title>Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>644</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/031b39f0-436b-11ee-a9c0-fbd0cdd6d344/image/45df83.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the concept of universality between neural representations and deep neural networks, and how these principles of efficiency provide an ability to find consistent features across networks and tasks. We also discuss her recent paper on Bispectral Neural Networks, which focuses on the Fourier transform and its relation to group theory, the use of the bispectrum to achieve invariance in deep neural networks, how geometric deep learning extends the concept of CNNs to other domains, and the similarities in the fundamental structure of artificial and biological neural networks, where applying similar constraints leads to the convergence of their solutions.

The complete show notes for this episode can be found at twimlai.com/go/644.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the concept of universality between neural representations and deep neural networks, and how these principles of efficiency provide an ability to find consistent features across networks and tasks. We also discuss her recent paper on Bispectral Neural Networks, which focuses on the Fourier transform and its relation to group theory, the use of the bispectrum to achieve invariance in deep neural networks, how geometric deep learning extends the concept of CNNs to other domains, and the similarities in the fundamental structure of artificial and biological neural networks, where applying similar constraints leads to the convergence of their solutions.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/644">twimlai.com/go/644</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2715</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[031b39f0-436b-11ee-a9c0-fbd0cdd6d344]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4765432477.mp3?updated=1693246375"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Inverse Reinforcement Learning Without RL with Gokul Swamy - #643</title>
      <link>https://twimlai.com/podcast/twimlai/inverse-reinforcement-learning-without-rl/</link>
      <description>Today we’re joined by Gokul Swamy, a Ph.D. student at the Robotics Institute at Carnegie Mellon University. In the final conversation of our ICML 2023 series, we sat down with Gokul to discuss his accepted papers at the event, leading off with “Inverse Reinforcement Learning without Reinforcement Learning.” In this paper, Gokul explores the challenges and benefits of inverse reinforcement learning, and the potential and advantages it holds for various applications. Next up, we explore the “Complementing a Policy with a Different Observation Space” paper, which applies causal inference techniques to accurately estimate sampling balance and make decisions based on limited observed features. Finally, we touch on “Learning Shared Safety Constraints from Multi-task Demonstrations,” which centers on learning safety constraints from demonstrations using the inverse reinforcement learning approach.

The complete show notes for this episode can be found at twimlai.com/go/643.</description>
      <pubDate>Mon, 21 Aug 2023 17:59:05 -0000</pubDate>
      <itunes:title>Inverse Reinforcement Learning Without RL with Gokul Swamy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>643</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/16a2d432-3f52-11ee-a6e6-e7b527e8ba03/image/d66dcb.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Gokul Swamy, a Ph.D. student at the Robotics Institute at Carnegie Mellon University. In the final conversation of our ICML 2023 series, we sat down with Gokul to discuss his accepted papers at the event, leading off with “Inverse Reinforcement Learning without Reinforcement Learning.” In this paper, Gokul explores the challenges and benefits of inverse reinforcement learning, and the potential and advantages it holds for various applications. Next up, we explore the “Complementing a Policy with a Different Observation Space” paper, which applies causal inference techniques to accurately estimate sampling balance and make decisions based on limited observed features. Finally, we touch on “Learning Shared Safety Constraints from Multi-task Demonstrations,” which centers on learning safety constraints from demonstrations using the inverse reinforcement learning approach.

The complete show notes for this episode can be found at twimlai.com/go/643.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Gokul Swamy, a Ph.D. student at the Robotics Institute at Carnegie Mellon University. In the final conversation of our ICML 2023 series, we sat down with Gokul to discuss his accepted papers at the event, leading off with “Inverse Reinforcement Learning without Reinforcement Learning.” In this paper, Gokul explores the challenges and benefits of inverse reinforcement learning, and the potential and advantages it holds for various applications. Next up, we explore the “Complementing a Policy with a Different Observation Space” paper, which applies causal inference techniques to accurately estimate sampling balance and make decisions based on limited observed features. Finally, we touch on “Learning Shared Safety Constraints from Multi-task Demonstrations,” which centers on learning safety constraints from demonstrations using the inverse reinforcement learning approach.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/643">twimlai.com/go/643</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2035</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[16a2d432-3f52-11ee-a6e6-e7b527e8ba03]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3531803742.mp3?updated=1692641394"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Explainable AI for Biology and Medicine with Su-In Lee - #642</title>
      <link>https://twimlai.com/podcast/twimlai/explainable-ai-for-biology-and-medicine-2/</link>
      <description>Today we’re joined by Su-In Lee, a professor at the Paul G. Allen School of Computer Science and Engineering at the University of Washington. In our conversation, Su-In details her talk from the ICML 2023 Workshop on Computational Biology, which focuses on developing explainable AI techniques for the computational biology and clinical medicine fields. Su-In discusses the importance of explainable AI contributing to feature collaboration, the robustness of different explainability approaches, and the need for interdisciplinary collaboration between the computer science, biology, and medical fields. We also explore her recent paper on the use of drug combination therapy, challenges with handling biomedical data, and how her team aims to make meaningful contributions to the healthcare industry by aiding in cause identification and treatments for cancer and Alzheimer's disease.

The complete show notes for this episode can be found at twimlai.com/go/642.</description>
      <pubDate>Mon, 14 Aug 2023 17:36:00 -0000</pubDate>
      <itunes:title>Explainable AI for Biology and Medicine with Su-In Lee</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>642</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/55401f80-378f-11ee-b037-9b9ed253b112/image/664867.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Su-In Lee, a professor at the Paul G. Allen School of Computer Science and Engineering at the University of Washington. In our conversation, Su-In details her talk from the ICML 2023 Workshop on Computational Biology, which focuses on developing explainable AI techniques for the computational biology and clinical medicine fields. Su-In discusses the importance of explainable AI contributing to feature collaboration, the robustness of different explainability approaches, and the need for interdisciplinary collaboration between the computer science, biology, and medical fields. We also explore her recent paper on the use of drug combination therapy, challenges with handling biomedical data, and how her team aims to make meaningful contributions to the healthcare industry by aiding in cause identification and treatments for cancer and Alzheimer's disease.

The complete show notes for this episode can be found at twimlai.com/go/642.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Su-In Lee, a professor at the Paul G. Allen School of Computer Science and Engineering at the University of Washington. In our conversation, Su-In details her talk from the ICML 2023 Workshop on Computational Biology, which focuses on developing explainable AI techniques for the computational biology and clinical medicine fields. Su-In discusses the importance of explainable AI contributing to feature collaboration, the robustness of different explainability approaches, and the need for interdisciplinary collaboration between the computer science, biology, and medical fields. We also explore her recent paper on the use of drug combination therapy, challenges with handling biomedical data, and how her team aims to make meaningful contributions to the healthcare industry by aiding in cause identification and treatments for cancer and Alzheimer's disease.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/642">twimlai.com/go/642</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2294</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[55401f80-378f-11ee-b037-9b9ed253b112]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4264085375.mp3?updated=1692034960"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Transformers On Large-Scale Graphs with Bayan Bruss - #641</title>
      <link>https://twimlai.com/podcast/twimlai/transformers-on-large-scale-graphs/</link>
      <description>Today we’re joined by Bayan Bruss, Vice President of Applied ML Research at Capital One. In our conversation with Bayan, we covered a pair of papers his team presented at this year’s ICML conference. We begin with the paper Interpretable Subspaces in Image Representations, where Bayan gives us a deep dive into the interpretability framework, embedding dimensions, contrastive approaches, and how their model can accelerate image representation in deep learning. We also explore GOAT: A Global Transformer on Large-scale Graphs, a scalable global graph transformer. We talk through the computational challenges, homophilic and heterophilic principles, model sparsity, and how their research proposes methodologies to get around the computational barrier when scaling to large-scale graph models.

The complete show notes for this episode can be found at twimlai.com/go/641.</description>
      <pubDate>Mon, 07 Aug 2023 16:15:00 -0000</pubDate>
      <itunes:title>Transformers On Large-Scale Graphs with Bayan Bruss</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>641</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/f98ff378-326a-11ee-a021-37a41cc7f33f/image/dfb640.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Bayan Bruss, Vice President of Applied ML Research at Capital One. In our conversation with Bayan, we covered a pair of papers his team presented at this year’s ICML conference. We begin with the paper Interpretable Subspaces in Image Representations, where Bayan gives us a deep dive into the interpretability framework, embedding dimensions, contrastive approaches, and how their model can accelerate image representation in deep learning. We also explore GOAT: A Global Transformer on Large-scale Graphs, a scalable global graph transformer. We talk through the computational challenges, homophilic and heterophilic principles, model sparsity, and how their research proposes methodologies to get around the computational barrier when scaling to large-scale graph models.

The complete show notes for this episode can be found at twimlai.com/go/641.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Bayan Bruss, Vice President of Applied ML Research at Capital One. In our conversation with Bayan, we covered a pair of papers his team presented at this year’s ICML conference. We begin with the paper Interpretable Subspaces in Image Representations, where Bayan gives us a deep dive into the interpretability framework, embedding dimensions, contrastive approaches, and how their model can accelerate image representation in deep learning. We also explore GOAT: A Global Transformer on Large-scale Graphs, a scalable global graph transformer. We talk through the computational challenges, homophilic and heterophilic principles, model sparsity, and how their research proposes methodologies to get around the computational barrier when scaling to large-scale graph models.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/641">twimlai.com/go/641</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2316</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f98ff378-326a-11ee-a021-37a41cc7f33f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2074874022.mp3?updated=1691426537"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Enterprise LLM Landscape with Atul Deo - #640</title>
      <link>https://twimlai.com/podcast/twimlai/the-enterprise-llm-landscape/</link>
      <description>Today we’re joined by Atul Deo, General Manager of Amazon Bedrock. In our conversation with Atul, we discuss the process of training large language models in the enterprise, including the pain points of creating and training machine learning models, and the power of pre-trained models. We explore different approaches to how companies can leverage large language models, dealing with hallucinations, and the transformative process of retrieval-augmented generation (RAG). Finally, Atul gives us an inside look at Bedrock, a fully managed service that simplifies the deployment of generative AI-based apps at scale.

The complete show notes for this episode can be found at twimlai.com/go/640.</description>
      <pubDate>Mon, 31 Jul 2023 16:00:00 -0000</pubDate>
      <itunes:title>The Enterprise LLM Landscape with Atul Deo</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>640</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/885eee8e-2d65-11ee-a181-a7fbec69bd56/image/7ad530.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Atul Deo, General Manager of Amazon Bedrock. In our conversation with Atul, we discuss the process of training large language models in the enterprise, including the pain points of creating and training machine learning models, and the power of pre-trained models. We explore different approaches to how companies can leverage large language models, dealing with hallucinations, and the transformative process of retrieval-augmented generation (RAG). Finally, Atul gives us an inside look at Bedrock, a fully managed service that simplifies the deployment of generative AI-based apps at scale.

The complete show notes for this episode can be found at twimlai.com/go/640.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Atul Deo, General Manager of Amazon Bedrock. In our conversation with Atul, we discuss the process of training large language models in the enterprise, including the pain points of creating and training machine learning models, and the power of pre-trained models. We explore different approaches to how companies can leverage large language models, dealing with hallucinations, and the transformative process of retrieval-augmented generation (RAG). Finally, Atul gives us an inside look at Bedrock, a fully managed service that simplifies the deployment of generative AI-based apps at scale.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/640">twimlai.com/go/640</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2228</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[885eee8e-2d65-11ee-a181-a7fbec69bd56]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5755736749.mp3?updated=1690583051"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>BloombergGPT - an LLM for Finance with David Rosenberg - #639</title>
      <link>https://twimlai.com/podcast/twimlai/bloomberggpt-an-llm-for-finance/</link>
      <description>Today we’re joined by David Rosenberg, head of the machine learning strategy team in the Office of the CTO at Bloomberg. In our conversation with David, we discuss the creation of BloombergGPT, a custom-built LLM focused on financial applications. We explore the model’s architecture, validation process, benchmarks, and its distinction from other language models. David also discusses the evaluation process, performance comparisons, progress, and the future directions of the model. Finally, we discuss the ethical considerations that come with building these types of models, and how they've approached dealing with these issues.

The complete show notes for this episode can be found at twimlai.com/go/639.</description>
      <pubDate>Mon, 24 Jul 2023 17:36:22 -0000</pubDate>
      <itunes:title>BloombergGPT - an LLM for Finance with David Rosenberg</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>639</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/57fefc38-282a-11ee-a15b-bb46dba09abc/image/538f78.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by David Rosenberg, head of the machine learning strategy team in the Office of the CTO at Bloomberg. In our conversation with David, we discuss the creation of BloombergGPT, a custom-built LLM focused on financial applications. We explore the model’s architecture, validation process, benchmarks, and its distinction from other language models. David also discusses the evaluation process, performance comparisons, progress, and the future directions of the model. Finally, we discuss the ethical considerations that come with building these types of models, and how they've approached dealing with these issues.

The complete show notes for this episode can be found at twimlai.com/go/639.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by David Rosenberg, head of the machine learning strategy team in the Office of the CTO at Bloomberg. In our conversation with David, we discuss the creation of BloombergGPT, a custom-built LLM focused on financial applications. We explore the model’s architecture, validation process, benchmarks, and its distinction from other language models. David also discusses the evaluation process, performance comparisons, progress, and the future directions of the model. Finally, we discuss the ethical considerations that come with building these types of models, and how they've approached dealing with these issues.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/639">twimlai.com/go/639</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2212</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[57fefc38-282a-11ee-a15b-bb46dba09abc]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4234088173.mp3?updated=1690217245"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Are LLMs Good at Causal Reasoning? with Robert Osazuwa Ness - #638</title>
      <link>https://twimlai.com/podcast/twimlai/are-llms-good-at-causal-reasoning/</link>
      <description>Today we’re joined by Robert Osazuwa Ness, a senior researcher at Microsoft Research, a professor at Northeastern University, and the founder of Altdeep.ai. In our conversation with Robert, we explore whether large language models, specifically GPT-3, 3.5, and 4, are good at causal reasoning. We discuss the benchmarks used to evaluate these models and the limitations they have in answering specific causal reasoning questions, while Robert highlights the need for access to weights, training data, and architecture to correctly answer these questions. The episode also covers the challenge of generalization in causal relationships and the importance of incorporating inductive biases, explores the models' ability to generalize beyond the provided benchmarks, and considers the importance of causal factors in decision-making processes.

The complete show notes for this episode can be found at twimlai.com/go/638.</description>
      <pubDate>Mon, 17 Jul 2023 17:24:57 -0000</pubDate>
      <itunes:title>Are LLMs Good at Causal Reasoning? with Robert Osazuwa Ness</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>638</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/b8f61d80-2269-11ee-b0cc-23b8916b6f90/image/d7e266.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Robert Osazuwa Ness, a senior researcher at Microsoft Research, a professor at Northeastern University, and the founder of Altdeep.ai. In our conversation with Robert, we explore whether large language models, specifically GPT-3, 3.5, and 4, are good at causal reasoning. We discuss the benchmarks used to evaluate these models and the limitations they have in answering specific causal reasoning questions, while Robert highlights the need for access to weights, training data, and architecture to correctly answer these questions. The episode also covers the challenge of generalization in causal relationships and the importance of incorporating inductive biases, explores the models' ability to generalize beyond the provided benchmarks, and considers the importance of causal factors in decision-making processes.

The complete show notes for this episode can be found at twimlai.com/go/638.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Robert Osazuwa Ness, a senior researcher at Microsoft Research, a professor at Northeastern University, and the founder of Altdeep.ai. In our conversation with Robert, we explore whether large language models, specifically GPT-3, 3.5, and 4, are good at causal reasoning. We discuss the benchmarks used to evaluate these models and the limitations they have in answering specific causal reasoning questions, while Robert highlights the need for access to weights, training data, and architecture to correctly answer these questions. The episode also covers the challenge of generalization in causal relationships and the importance of incorporating inductive biases, explores the models' ability to generalize beyond the provided benchmarks, and considers the importance of causal factors in decision-making processes.</p><p>The complete show notes for this episode can be found at twimlai.com/go/638.</p>]]>
      </content:encoded>
      <itunes:duration>2901</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b8f61d80-2269-11ee-b0cc-23b8916b6f90]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5479815893.mp3?updated=1689615122"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Privacy vs Fairness in Computer Vision with Alice Xiang - #637</title>
      <link>https://twimlai.com/podcast/twimlai/privacy-vs-fairness-in-computer-vision/</link>
      <description>Today we’re joined by Alice Xiang, Lead Research Scientist at Sony AI, and Global Head of AI Ethics at Sony Group Corporation. In our conversation with Alice, we discuss the ongoing debate between privacy and fairness in computer vision, diving into the impact of data privacy laws on the AI space while highlighting concerns about unauthorized use and lack of transparency in data usage. We explore the potential harm of inaccurate AI model outputs and the need for legal protection against biased AI products, and Alice suggests various solutions to address these challenges, such as working through third parties for data collection and establishing closer relationships with communities. Finally, we talk through the history of unethical data collection practices in CV, the emergence of generative AI technologies that exacerbate the problem, and the importance of operationalizing ethical data collection and practice, including appropriate consent, representation, diversity, and compensation. We also touch on the need for interdisciplinary collaboration in AI ethics and the growing interest in AI regulation, including the EU AI Act and regulatory activities in the US.

The complete show notes for this episode can be found at twimlai.com/go/637.</description>
      <pubDate>Mon, 10 Jul 2023 17:22:32 -0000</pubDate>
      <itunes:title>Privacy vs Fairness in Computer Vision with Alice Xiang</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>637</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d4f21632-1f40-11ee-901b-97f3f0dcd753/image/63ffec.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Alice Xiang, Lead Research Scientist at Sony AI, and Global Head of AI Ethics at Sony Group Corporation. In our conversation with Alice, we discuss the ongoing debate between privacy and fairness in computer vision, diving into the impact of data privacy laws on the AI space while highlighting concerns about unauthorized use and lack of transparency in data usage. We explore the potential harm of inaccurate AI model outputs and the need for legal protection against biased AI products, and Alice suggests various solutions to address these challenges, such as working through third parties for data collection and establishing closer relationships with communities. Finally, we talk through the history of unethical data collection practices in CV, the emergence of generative AI technologies that exacerbate the problem, and the importance of operationalizing ethical data collection and practice, including appropriate consent, representation, diversity, and compensation. We also touch on the need for interdisciplinary collaboration in AI ethics and the growing interest in AI regulation, including the EU AI Act and regulatory activities in the US.

The complete show notes for this episode can be found at twimlai.com/go/637.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Alice Xiang, Lead Research Scientist at Sony AI, and Global Head of AI Ethics at Sony Group Corporation. In our conversation with Alice, we discuss the ongoing debate between privacy and fairness in computer vision, diving into the impact of data privacy laws on the AI space while highlighting concerns about unauthorized use and lack of transparency in data usage. We explore the potential harm of inaccurate AI model outputs and the need for legal protection against biased AI products, and Alice suggests various solutions to address these challenges, such as working through third parties for data collection and establishing closer relationships with communities. Finally, we talk through the history of unethical data collection practices in CV, the emergence of generative AI technologies that exacerbate the problem, and the importance of operationalizing ethical data collection and practice, including appropriate consent, representation, diversity, and compensation. We also touch on the need for interdisciplinary collaboration in AI ethics and the growing interest in AI regulation, including the EU AI Act and regulatory activities in the US.</p><p><br></p><p>The complete show notes for this episode can be found at twimlai.com/go/637.</p>]]>
      </content:encoded>
      <itunes:duration>2261</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d4f21632-1f40-11ee-901b-97f3f0dcd753]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4883886561.mp3?updated=1689007680"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Unifying Vision and Language Models with Mohit Bansal - #636</title>
      <link>https://twimlai.com/podcast/twimlai/unifying-vision-and-language-models</link>
      <description>Today we're joined by Mohit Bansal, Parker Professor and Director of the MURGe-Lab at UNC Chapel Hill. In our conversation with Mohit, we explore the concept of unification in AI models, highlighting the advantages of shared knowledge and efficiency. He addresses the challenges of evaluation in generative AI, including biases and spurious correlations. Mohit introduces groundbreaking models such as UDOP and VL-T5, which achieved state-of-the-art results in various vision and language tasks while using fewer parameters. Finally, we discuss the importance of data efficiency, evaluating bias in models, and the future of multimodal models and explainability.

The complete show notes for this episode can be found at twimlai.com/go/636.</description>
      <pubDate>Mon, 03 Jul 2023 18:06:00 -0000</pubDate>
      <itunes:title>Unifying Vision and Language Models with Mohit Bansal</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>636</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/83e0fc14-19cc-11ee-a963-a3887e95959a/image/70190d.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we're joined by Mohit Bansal, Parker Professor and Director of the MURGe-Lab at UNC Chapel Hill. In our conversation with Mohit, we explore the concept of unification in AI models, highlighting the advantages of shared knowledge and efficiency. He addresses the challenges of evaluation in generative AI, including biases and spurious correlations. Mohit introduces groundbreaking models such as UDOP and VL-T5, which achieved state-of-the-art results in various vision and language tasks while using fewer parameters. Finally, we discuss the importance of data efficiency, evaluating bias in models, and the future of multimodal models and explainability.

The complete show notes for this episode can be found at twimlai.com/go/636.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we're joined by Mohit Bansal, Parker Professor and Director of the MURGe-Lab at UNC Chapel Hill. In our conversation with Mohit, we explore the concept of unification in AI models, highlighting the advantages of shared knowledge and efficiency. He addresses the challenges of evaluation in generative AI, including biases and spurious correlations. Mohit introduces groundbreaking models such as UDOP and VL-T5, which achieved state-of-the-art results in various vision and language tasks while using fewer parameters. Finally, we discuss the importance of data efficiency, evaluating bias in models, and the future of multimodal models and explainability.</p><p><br></p><p>The complete show notes for this episode can be found at twimlai.com/go/636.</p>]]>
      </content:encoded>
      <itunes:duration>2888</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[83e0fc14-19cc-11ee-a963-a3887e95959a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9407594028.mp3?updated=1688409213"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Data Augmentation and Optimized Architectures for Computer Vision with Fatih Porikli - #635</title>
      <link>https://twimlai.com/podcast/twimlai/data-augmentation-and-optimized-architectures-for-computer-vision/</link>
      <description>Today we kick off our coverage of the 2023 CVPR conference joined by Fatih Porikli, a Senior Director of Technology at Qualcomm. In our conversation with Fatih, we covered quite a bit of ground, touching on a total of 12 papers/demos, focusing on topics like data augmentation and optimized architectures for computer vision. We explore advances in optical flow estimation networks, cross-model and stage knowledge distillation for efficient 3D object detection, and zero-shot learning via language models for fine-grained labeling. We also discuss generative AI advancements and computer vision optimization for running large models on edge devices. Finally, we discuss objective functions, architecture design choices for neural networks, and efficiency and accuracy improvements in AI models via the techniques introduced in the papers.</description>
      <pubDate>Mon, 26 Jun 2023 18:06:00 -0000</pubDate>
      <itunes:title>Data Augmentation and Optimized Architectures for Computer Vision with Fatih Porikli</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>635</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/cd40c150-1445-11ee-aac7-3361ecb50957/image/a29b5e.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we kick off our coverage of the 2023 CVPR conference joined by Fatih Porikli, a Senior Director of Technology at Qualcomm. In our conversation with Fatih, we covered quite a bit of ground, touching on a total of 12 papers/demos, focusing on topics like data augmentation and optimized architectures for computer vision. We explore advances in optical flow estimation networks, cross-model and stage knowledge distillation for efficient 3D object detection, and zero-shot learning via language models for fine-grained labeling. We also discuss generative AI advancements and computer vision optimization for running large models on edge devices. Finally, we discuss objective functions, architecture design choices for neural networks, and efficiency and accuracy improvements in AI models via the techniques introduced in the papers.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we kick off our coverage of the 2023 CVPR conference joined by Fatih Porikli, a Senior Director of Technology at Qualcomm. In our conversation with Fatih, we covered quite a bit of ground, touching on a total of 12 papers/demos, focusing on topics like data augmentation and optimized architectures for computer vision. We explore advances in optical flow estimation networks, cross-model and stage knowledge distillation for efficient 3D object detection, and zero-shot learning via language models for fine-grained labeling. We also discuss generative AI advancements and computer vision optimization for running large models on edge devices. Finally, we discuss objective functions, architecture design choices for neural networks, and efficiency and accuracy improvements in AI models via the techniques introduced in the papers.</p>]]>
      </content:encoded>
      <itunes:duration>3151</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cd40c150-1445-11ee-aac7-3361ecb50957]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4204176868.mp3?updated=1687800352"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Mojo: A Supercharged Python for AI with Chris Lattner - #634</title>
      <link>https://twimlai.com/podcast/twimlai/mojo-a-supercharged-python-for-ai/</link>
      <description>Today we’re joined by Chris Lattner, Co-Founder and CEO of Modular. In our conversation with Chris, we discuss Mojo, a new programming language for AI developers. Mojo is unique in this space and simplifies things by making the entire stack accessible and understandable to people who are not compiler engineers. It also gives Python programmers the ability to write high-performance code capable of running on accelerators, making it accessible to more people and researchers. We discuss the relationship between the Modular Engine and Mojo, the challenge of packaging Python, particularly when incorporating C code, and how Mojo aims to solve these problems to make the AI stack more dependable.


The complete show notes for this episode can be found at twimlai.com/go/634</description>
      <pubDate>Mon, 19 Jun 2023 17:31:22 -0000</pubDate>
      <itunes:title>Mojo: A Supercharged Python for AI with Chris Lattner - #634</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>634</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/9cbe4c08-0ebb-11ee-9bd8-136284bd0aa0/image/904092.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Chris Lattner, Co-Founder and CEO of Modular. In our conversation with Chris, we discuss Mojo, a new programming language for AI developers. Mojo is unique in this space and simplifies things by making the entire stack accessible and understandable to people who are not compiler engineers. It also gives Python programmers the ability to write high-performance code capable of running on accelerators, making it accessible to more people and researchers. We discuss the relationship between the Modular Engine and Mojo, the challenge of packaging Python, particularly when incorporating C code, and how Mojo aims to solve these problems to make the AI stack more dependable.


The complete show notes for this episode can be found at twimlai.com/go/634</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Chris Lattner, Co-Founder and CEO of Modular. In our conversation with Chris, we discuss Mojo, a new programming language for AI developers. Mojo is unique in this space and simplifies things by making the entire stack accessible and understandable to people who are not compiler engineers. It also gives Python programmers the ability to write high-performance code capable of running on accelerators, making it accessible to more people and researchers. We discuss the relationship between the Modular Engine and Mojo, the challenge of packaging Python, particularly when incorporating C code, and how Mojo aims to solve these problems to make the AI stack more dependable.</p><p><br></p><p><br></p><p>The complete show notes for this episode can be found at twimlai.com/go/634</p>]]>
      </content:encoded>
      <itunes:duration>3442</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9cbe4c08-0ebb-11ee-9bd8-136284bd0aa0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2486721769.mp3?updated=1687193251"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Stable Diffusion and LLMs at the Edge with Jilei Hou - #633</title>
      <link>https://twimlai.com/podcast/twimlai/stable-diffusion-and-llms-at-the-edge</link>
      <description>Today we’re joined by Jilei Hou, a VP of Engineering at Qualcomm Technologies. In our conversation with Jilei, we focus on the emergence of generative AI, and how Qualcomm has worked towards providing these models for use on edge devices. We explore how the distribution of models on devices can help amortize large models' costs while improving reliability and performance, and the challenges of running machine learning workloads on devices, including model size and inference latency. Finally, we explore how these emerging technologies fit into the existing AI Model Efficiency Toolkit (AIMET) framework.
The complete show notes for this episode can be found at twimlai.com/go/633</description>
      <pubDate>Mon, 12 Jun 2023 18:24:11 -0000</pubDate>
      <itunes:title>Stable Diffusion and LLMs at the Edge with Jilei Hou</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>633</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/8f513566-0949-11ee-94e9-dfd491f5b00c/image/1cc82d.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Jilei Hou, a VP of Engineering at Qualcomm Technologies. In our conversation with Jilei, we focus on the emergence of generative AI, and how Qualcomm has worked towards providing these models for use on edge devices. We explore how the distribution of models on devices can help amortize large models' costs while improving reliability and performance, and the challenges of running machine learning workloads on devices, including model size and inference latency. Finally, we explore how these emerging technologies fit into the existing AI Model Efficiency Toolkit (AIMET) framework.
The complete show notes for this episode can be found at twimlai.com/go/633</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Jilei Hou, a VP of Engineering at Qualcomm Technologies. In our conversation with Jilei, we focus on the emergence of generative AI, and how Qualcomm has worked towards providing these models for use on edge devices. We explore how the distribution of models on devices can help amortize large models' costs while improving reliability and performance, and the challenges of running machine learning workloads on devices, including model size and inference latency. Finally, we explore how these emerging technologies fit into the existing AI Model Efficiency Toolkit (AIMET) framework. </p><p>The complete show notes for this episode can be found at twimlai.com/go/633</p>]]>
      </content:encoded>
      <itunes:duration>2409</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[8f513566-0949-11ee-94e9-dfd491f5b00c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6925422585.mp3?updated=1686592503"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Modeling Human Behavior with Generative Agents with Joon Sung Park - #632</title>
      <link>https://twimlai.com/podcast/twimlai/modeling-human-behavior-with-generative-agents/</link>
      <description>Today we’re joined by Joon Sung Park, a PhD Student at Stanford University. Joon shares his passion for creating AI systems that can solve human problems and his work on the recent paper Generative Agents: Interactive Simulacra of Human Behavior, which showcases generative agents that exhibit believable human behavior. We discuss using empirical methods to study these systems and the conflicting papers on whether AI models have a worldview and common sense. Joon talks about the importance of context and environment in creating believable agent behavior and shares his team's work on scaling emerging community behaviors. He also dives into the importance of a long-term memory module in agents and the use of knowledge graphs in retrieving associative information. The goal, Joon explains, is to create something that people can enjoy and empower people, solving existing problems and challenges in the traditional HCI and AI field.</description>
      <pubDate>Mon, 05 Jun 2023 17:17:34 -0000</pubDate>
      <itunes:title>Modeling Human Behavior with Generative Agents with Joon Sung Park</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>632</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/77fc48a6-fb20-11ed-9afe-e72392b7749c/image/73fa3a.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Joon Sung Park, a PhD Student at Stanford University. Joon shares his passion for creating AI systems that can solve human problems and his work on the recent paper Generative Agents: Interactive Simulacra of Human Behavior, which showcases generative agents that exhibit believable human behavior. We discuss using empirical methods to study these systems and the conflicting papers on whether AI models have a worldview and common sense. Joon talks about the importance of context and environment in creating believable agent behavior and shares his team's work on scaling emerging community behaviors. He also dives into the importance of a long-term memory module in agents and the use of knowledge graphs in retrieving associative information. The goal, Joon explains, is to create something that people can enjoy and empower people, solving existing problems and challenges in the traditional HCI and AI field.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Joon Sung Park, a PhD Student at Stanford University. Joon shares his passion for creating AI systems that can solve human problems and his work on the recent paper <a href="https://arxiv.org/abs/2304.03442"><em>Generative Agents: Interactive Simulacra of Human Behavior</em></a>, which showcases generative agents that exhibit believable human behavior. We discuss using empirical methods to study these systems and the conflicting papers on whether AI models have a worldview and common sense. Joon talks about the importance of context and environment in creating believable agent behavior and shares his team's work on scaling emerging community behaviors. He also dives into the importance of a long-term memory module in agents and the use of knowledge graphs in retrieving associative information. The goal, Joon explains, is to create something that people can enjoy and empower people, solving existing problems and challenges in the traditional HCI and AI field.</p>]]>
      </content:encoded>
      <itunes:duration>2798</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[77fc48a6-fb20-11ed-9afe-e72392b7749c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2527292991.mp3?updated=1685035539"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Towards Improved Transfer Learning with Hugo Larochelle - #631</title>
      <link>https://twimlai.com/podcast/twimlai/towards-improved-transfer-learning/</link>
      <description>Today we’re joined by Hugo Larochelle, a research scientist at Google DeepMind. In our conversation with Hugo, we discuss his work on transfer learning, understanding the capabilities of deep learning models, and creating the Transactions on Machine Learning Research journal. We explore the use of large language models in NLP, prompting, and zero-shot learning. Hugo also shares insights from his research on neural knowledge mobilization for code completion and discusses the adaptive prompts used in their system. 

The complete show notes for this episode can be found at twimlai.com/go/631.</description>
      <pubDate>Mon, 29 May 2023 16:00:00 -0000</pubDate>
      <itunes:title>Towards Improved Transfer Learning with Hugo Larochelle</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>631</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Hugo Larochelle, a research scientist at Google DeepMind. In our conversation with Hugo, we discuss his work on transfer learning, understanding the capabilities of deep learning models, and creating the Transactions on Machine Learning Research journal. We explore the use of large language models in NLP, prompting, and zero-shot learning. Hugo also shares insights from his research on neural knowledge mobilization for code completion and discusses the adaptive prompts used in their system. 

The complete show notes for this episode can be found at twimlai.com/go/631.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Hugo Larochelle, a research scientist at Google DeepMind. In our conversation with Hugo, we discuss his work on transfer learning, understanding the capabilities of deep learning models, and creating the Transactions on Machine Learning Research journal. We explore the use of large language models in NLP, prompting, and zero-shot learning. Hugo also shares insights from his research on neural knowledge mobilization for code completion and discusses the adaptive prompts used in their system. </p><p><br></p><p>The complete show notes for this episode can be found at twimlai.com/go/631.</p>]]>
      </content:encoded>
      <itunes:duration>2332</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2c8e7682-fb16-11ed-82c7-f3c9812c0b94]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8279020353.mp3?updated=1685031117"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Language Modeling With State Space Models with Dan Fu - #630</title>
      <link>https://twimlai.com/podcast/twimlai/language-modeling-with-state-space-models/</link>
      <description>Today we’re joined by Dan Fu, a PhD student at Stanford University. In our conversation with Dan, we discuss the limitations of state space models in language modeling and the search for alternative building blocks that can help increase context length without being computationally infeasible. Dan walks us through the H3 architecture and Flash Attention technique, which can reduce the memory footprint of a model and make it feasible to fine-tune. We also explore his work on improving language models using synthetic languages, the issue of long sequence length affecting both training and inference in models, and the hope for finding something sub-quadratic that can perform language processing more effectively than the brute force approach of attention.
The complete show notes for this episode can be found at https://twimlai.com/go/630</description>
      <pubDate>Mon, 22 May 2023 18:10:36 -0000</pubDate>
      <itunes:title>Language Modeling With State Space Models with Dan Fu</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>630</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/b92dac7c-f8ad-11ed-b724-5bc7bb5391c7/image/d7aa86.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Dan Fu, a PhD student at Stanford University. In our conversation with Dan, we discuss the limitations of state space models in language modeling and the search for alternative building blocks that can help increase context length without being computationally infeasible. Dan walks us through the H3 architecture and Flash Attention technique, which can reduce the memory footprint of a model and make it feasible to fine-tune. We also explore his work on improving language models using synthetic languages, the issue of long sequence length affecting both training and inference in models, and the hope for finding something sub-quadratic that can perform language processing more effectively than the brute force approach of attention.
The complete show notes for this episode can be found at https://twimlai.com/go/630</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Dan Fu, a PhD student at Stanford University. In our conversation with Dan, we discuss the limitations of state space models in language modeling and the search for alternative building blocks that can help increase context length without being computationally infeasible. Dan walks us through the H3 architecture and Flash Attention technique, which can reduce the memory footprint of a model and make it feasible to fine-tune. We also explore his work on improving language models using synthetic languages, the issue of long sequence length affecting both training and inference in models, and the hope for finding something sub-quadratic that can perform language processing more effectively than the brute force approach of attention.</p><p>The complete show notes for this episode can be found at https://twimlai.com/go/630</p>]]>
      </content:encoded>
      <itunes:duration>1695</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b92dac7c-f8ad-11ed-b724-5bc7bb5391c7]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9167975697.mp3?updated=1684766354"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building Maps and Spatial Awareness in Blind AI Agents with Dhruv Batra - #629</title>
      <link>https://twimlai.com/podcast/twimlai/learning-maps-and-spatial-awareness-in-blind-ai-agents/</link>
      <description>Today we continue our coverage of ICLR 2023 joined by Dhruv Batra, an associate professor at Georgia Tech and research director of the Fundamental AI Research (FAIR) team at Meta. In our conversation, we discuss Dhruv’s work on the paper Emergence of Maps in the Memories of Blind Navigation Agents, which won an Outstanding Paper Award at the event. We explore navigation with multilayer LSTMs and the question of whether embodiment is necessary for intelligence. We delve into the Embodiment Hypothesis and the progress being made in language models and caution on the responsible use of these models. We also discuss the history of AI and the importance of using the right data sets in training. The conversation explores the different meanings of "maps" across the AI and cognitive science fields, Dhruv’s experience in navigating mapless systems, and the early discovery stages of memory representation and neural mechanisms.
The complete show notes for this episode can be found at https://twimlai.com/go/629</description>
      <pubDate>Mon, 15 May 2023 18:03:41 -0000</pubDate>
      <itunes:title>Building Maps and Spatial Awareness in Blind AI Agents with Dhruv Batra</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>629</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/9f70525c-f34a-11ed-aac8-bb043d92c3c6/image/d8c6aa.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we continue our coverage of ICLR 2023 joined by Dhruv Batra, an associate professor at Georgia Tech and research director of the Fundamental AI Research (FAIR) team at Meta. In our conversation, we discuss Dhruv’s work on the paper Emergence of Maps in the Memories of Blind Navigation Agents, which won an Outstanding Paper Award at the event. We explore navigation with multilayer LSTMs and the question of whether embodiment is necessary for intelligence. We delve into the Embodiment Hypothesis and the progress being made in language models and caution on the responsible use of these models. We also discuss the history of AI and the importance of using the right data sets in training. The conversation explores the different meanings of "maps" across the AI and cognitive science fields, Dhruv’s experience in navigating mapless systems, and the early discovery stages of memory representation and neural mechanisms.
The complete show notes for this episode can be found at https://twimlai.com/go/629</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our coverage of ICLR 2023 joined by Dhruv Batra, an associate professor at Georgia Tech and research director of the Fundamental AI Research (FAIR) team at Meta. In our conversation, we discuss Dhruv’s work on the paper <em>Emergence of Maps in the Memories of Blind Navigation Agents, </em>which won an Outstanding Paper Award at the event. We explore navigation with multilayer LSTMs and the question of whether embodiment is necessary for intelligence. We delve into the Embodiment Hypothesis and the progress being made in language models and caution on the responsible use of these models. We also discuss the history of AI and the importance of using the right data sets in training. The conversation explores the different meanings of "maps" across the AI and cognitive science fields, Dhruv’s experience in navigating mapless systems, and the early discovery stages of memory representation and neural mechanisms.</p><p>The complete show notes for this episode can be found at https://twimlai.com/go/629</p>]]>
      </content:encoded>
      <itunes:duration>2604</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9f70525c-f34a-11ed-aac8-bb043d92c3c6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2694586028.mp3?updated=1684174034"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Agents and Data Integration with GPT and LLaMa with Jerry Liu - #628</title>
      <link>https://twimlai.com/podcast/twimlai/ai-agents-and-data-integration-with-gpt-and-llama/</link>
      <description>Today we’re joined by Jerry Liu, co-founder and CEO of LlamaIndex. In our conversation with Jerry, we explore the creation of LlamaIndex, a centralized interface to connect your external data with the latest large language models. We discuss the challenges of adding private data to language models and how LlamaIndex connects the two for better decision-making. We discuss the role of agents in automation, the evolution of the agent abstraction space, and the difficulties of optimizing queries over large amounts of complex data. We also discuss a range of topics from combining summarization and semantic search, to automating reasoning, to improving language model results by exploiting relationships between nodes in data. 
The complete show notes for this episode can be found at twimlai.com/go/628.</description>
      <pubDate>Mon, 08 May 2023 18:04:29 -0000</pubDate>
      <itunes:title>AI Agents and Data Integration with GPT and LLaMa with Jerry Liu</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>628</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/42a858ba-edca-11ed-8a41-9ffd90f2be31/image/cf215c.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Jerry Liu, co-founder and CEO of LlamaIndex. In our conversation with Jerry, we explore the creation of LlamaIndex, a centralized interface to connect your external data with the latest large language models. We discuss the challenges of adding private data to language models and how LlamaIndex connects the two for better decision-making. We discuss the role of agents in automation, the evolution of the agent abstraction space, and the difficulties of optimizing queries over large amounts of complex data. We also discuss a range of topics from combining summarization and semantic search, to automating reasoning, to improving language model results by exploiting relationships between nodes in data. 
The complete show notes for this episode can be found at twimlai.com/go/628.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Jerry Liu, co-founder and CEO of LlamaIndex. In our conversation with Jerry, we explore the creation of LlamaIndex, a centralized interface to connect your external data with the latest large language models. We discuss the challenges of adding private data to language models and how LlamaIndex connects the two for better decision-making. We discuss the role of agents in automation, the evolution of the agent abstraction space, and the difficulties of optimizing queries over large amounts of complex data. We also discuss a range of topics from combining summarization and semantic search, to automating reasoning, to improving language model results by exploiting relationships between nodes in data. </p><p>The complete show notes for this episode can be found at twimlai.com/go/628.</p>]]>
      </content:encoded>
      <itunes:duration>2486</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[42a858ba-edca-11ed-8a41-9ffd90f2be31]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7752871709.mp3?updated=1683569397"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Hyperparameter Optimization through Neural Network Partitioning with Christos Louizos - #627</title>
      <link>https://twimlai.com/podcast/twimlai/hyperparameter-optimization-through-neural-network-partitioning/</link>
      <description>Today we kick off our coverage of the 2023 ICLR conference joined by Christos Louizos, an ML researcher at Qualcomm Technologies. In our conversation with Christos, we explore his paper Hyperparameter Optimization through Neural Network Partitioning and a few of his colleagues' works from the conference. We discuss methods for speeding up attention mechanisms in transformers, scheduling operations for computation graphs, estimating channels in indoor environments, and adapting to distribution shifts at test time with neural network modules. We also talk through the benefits and limitations of federated learning, exploring sparse models, optimizing communication between servers and devices, and much more. 
The complete show notes for this episode can be found at https://twimlai.com/go/627.</description>
      <pubDate>Mon, 01 May 2023 19:34:00 -0000</pubDate>
      <itunes:title>Hyperparameter Optimization through Neural Network Partitioning with Christos Louizos</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>627</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/37a1b5a6-e528-11ed-ad04-c325ebe25424/image/eaba86.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we kick off our coverage of the 2023 ICLR conference joined by Christos Louizos, an ML researcher at Qualcomm Technologies. In our conversation with Christos, we explore his paper Hyperparameter Optimization through Neural Network Partitioning and a few of his colleagues' works from the conference. We discuss methods for speeding up attention mechanisms in transformers, scheduling operations for computation graphs, estimating channels in indoor environments, and adapting to distribution shifts at test time with neural network modules. We also talk through the benefits and limitations of federated learning, exploring sparse models, optimizing communication between servers and devices, and much more. 
The complete show notes for this episode can be found at https://twimlai.com/go/627.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we kick off our coverage of the 2023 ICLR conference joined by Christos Louizos, an ML researcher at Qualcomm Technologies. In our conversation with Christos, we explore his paper <a href="https://openreview.net/pdf?id=nAgdXgfmqj">Hyperparameter Optimization through Neural Network Partitioning</a> and a few of his colleagues' works from the conference. We discuss methods for speeding up attention mechanisms in transformers, scheduling operations for computation graphs, estimating channels in indoor environments, and adapting to distribution shifts at test time with neural network modules. We also talk through the benefits and limitations of federated learning, exploring sparse models, optimizing communication between servers and devices, and much more. </p><p>The complete show notes for this episode can be found at https://twimlai.com/go/627.</p>]]>
      </content:encoded>
      <itunes:duration>1991</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[37a1b5a6-e528-11ed-ad04-c325ebe25424]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5763417634.mp3?updated=1682969802"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Are LLMs Overhyped or Underappreciated? with Marti Hearst - #626</title>
      <link>https://twimlai.com/podcast/twimlai/are-llms-overhyped-or-under-appreciated/</link>
      <description>Today we’re joined by Marti Hearst, Professor at UC Berkeley. In our conversation with Marti, we explore the intricacies of AI language models and their usefulness in improving efficiency but also their potential for spreading misinformation. Marti expresses skepticism about whether these models truly have cognition compared to the nuance of the human brain. We discuss the intersection of language and visualization and the need for specialized research to ensure safety and appropriateness for specific uses. We also delve into the latest tools and algorithms such as Copilot and ChatGPT, which enhance programming and help in identifying comparisons, respectively. Finally, we discuss Marti’s long research history in search and her breakthrough in developing a standard interaction that allows for finding items on websites and library catalogs.
The complete show notes for this episode can be found at https://twimlai.com/go/626.</description>
      <pubDate>Mon, 24 Apr 2023 20:08:00 -0000</pubDate>
      <itunes:title>Are LLMs Overhyped or Underappreciated? with Marti Hearst</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>626</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/f1d2f3d8-e2ba-11ed-b543-d370a31fa296/image/3f40da.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Marti Hearst, Professor at UC Berkeley. In our conversation with Marti, we explore the intricacies of AI language models and their usefulness in improving efficiency but also their potential for spreading misinformation. Marti expresses skepticism about whether these models truly have cognition compared to the nuance of the human brain. We discuss the intersection of language and visualization and the need for specialized research to ensure safety and appropriateness for specific uses. We also delve into the latest tools and algorithms such as Copilot and ChatGPT, which enhance programming and help in identifying comparisons, respectively. Finally, we discuss Marti’s long research history in search and her breakthrough in developing a standard interaction that allows for finding items on websites and library catalogs.
The complete show notes for this episode can be found at https://twimlai.com/go/626.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Marti Hearst, Professor at UC Berkeley. In our conversation with Marti, we explore the intricacies of AI language models and their usefulness in improving efficiency but also their potential for spreading misinformation. Marti expresses skepticism about whether these models truly have cognition compared to the nuance of the human brain. We discuss the intersection of language and visualization and the need for specialized research to ensure safety and appropriateness for specific uses. We also delve into the latest tools and algorithms such as Copilot and ChatGPT, which enhance programming and help in identifying comparisons, respectively. Finally, we discuss Marti’s long research history in search and her breakthrough in developing a standard interaction that allows for finding items on websites and library catalogs.</p><p>The complete show notes for this episode can be found at https://twimlai.com/go/626.</p>]]>
      </content:encoded>
      <itunes:duration>2276</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f1d2f3d8-e2ba-11ed-b543-d370a31fa296]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5101605789.mp3?updated=1682369086"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Are Large Language Models a Path to AGI? with Ben Goertzel - #625</title>
      <link>https://twimlai.com/podcast/twimlai/are-large-language-models-a-path-to-agi/</link>
      <description>Today we’re joined by Ben Goertzel, CEO of SingularityNET. In our conversation with Ben, we explore all things AGI, including the potential scenarios that could arise with the advent of AGI and his preference for a decentralized rollout comparable to the internet or Linux. Ben shares his research in bridging neural nets, symbolic logic engines, and evolutionary programming engines to develop a common mathematical framework for AI paradigms. We also discuss the limitations of Large Language Models and the potential of hybridizing LLMs with other AGI approaches. Additionally, we chat about their work using LLMs for music generation and the limitations of formalizing creativity. Finally, Ben discusses his team's work with the OpenCog Hyperon framework and Simuli to achieve AGI, and the potential implications of their research in the future.

The complete show notes for this episode can be found at https://twimlai.com/go/625</description>
      <pubDate>Mon, 17 Apr 2023 17:50:44 -0000</pubDate>
      <itunes:title>Are Large Language Models a Path to AGI? with Ben Goertzel</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>625</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/e97b2df2-dd3f-11ed-b18c-4be82067ab18/image/e797dc.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Ben Goertzel, CEO of SingularityNET. In our conversation with Ben, we explore all things AGI, including the potential scenarios that could arise with the advent of AGI and his preference for a decentralized rollout comparable to the internet or Linux. Ben shares his research in bridging neural nets, symbolic logic engines, and evolutionary programming engines to develop a common mathematical framework for AI paradigms. We also discuss the limitations of Large Language Models and the potential of hybridizing LLMs with other AGI approaches. Additionally, we chat about their work using LLMs for music generation and the limitations of formalizing creativity. Finally, Ben discusses his team's work with the OpenCog Hyperon framework and Simuli to achieve AGI, and the potential implications of their research in the future.

The complete show notes for this episode can be found at https://twimlai.com/go/625</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Ben Goertzel, CEO of SingularityNET. In our conversation with Ben, we explore all things AGI, including the potential scenarios that could arise with the advent of AGI and his preference for a decentralized rollout comparable to the internet or Linux. Ben shares his research in bridging neural nets, symbolic logic engines, and evolutionary programming engines to develop a common mathematical framework for AI paradigms. We also discuss the limitations of Large Language Models and the potential of hybridizing LLMs with other AGI approaches. Additionally, we chat about their work using LLMs for music generation and the limitations of formalizing creativity. Finally, Ben discusses his team's work with the OpenCog Hyperon framework and Simuli to achieve AGI, and the potential implications of their research in the future.</p><p>The complete show notes for this episode can be found at https://twimlai.com/go/625</p>]]>
      </content:encoded>
      <itunes:duration>3575</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e97b2df2-dd3f-11ed-b18c-4be82067ab18]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3896122141.mp3?updated=1681750509"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Open Source Generative AI at Hugging Face with Jeff Boudier - #624</title>
      <link>https://twimlai.com/podcast/twimlai/open-source-generative-ai-at-hugging-face-2/</link>
      <description>Today we’re joined by Jeff Boudier, head of product at Hugging Face &#129303;. In our conversation with Jeff, we explore the current landscape of open-source machine learning tools and models, the recent shift towards consumer-focused releases, and the importance of making ML tools accessible. We also discuss the growth of the Hugging Face Hub, which currently hosts over 150k models, and how formalizing their collaboration with AWS will help drive the adoption of open-source models in the enterprise.  
The complete show notes for this episode can be found at twimlai.com/go/624</description>
      <pubDate>Tue, 11 Apr 2023 17:28:40 -0000</pubDate>
      <itunes:title>Open Source Generative AI at Hugging Face with Jeff Boudier</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>624</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/cb147d04-d7d7-11ed-bb60-1b1936bc2474/image/9d91d0.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Jeff Boudier, head of product at Hugging Face &#129303;. In our conversation with Jeff, we explore the current landscape of open-source machine learning tools and models, the recent shift towards consumer-focused releases, and the importance of making ML tools accessible. We also discuss the growth of the Hugging Face Hub, which currently hosts over 150k models, and how formalizing their collaboration with AWS will help drive the adoption of open-source models in the enterprise.  
The complete show notes for this episode can be found at twimlai.com/go/624</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Jeff Boudier, head of product at Hugging Face 🤗. In our conversation with Jeff, we explore the current landscape of open-source machine learning tools and models, the recent shift towards consumer-focused releases, and the importance of making ML tools accessible. We also discuss the growth of the Hugging Face Hub, which currently hosts over 150k models, and how formalizing their collaboration with AWS will help drive the adoption of open-source models in the enterprise.  </p><p>The complete show notes for this episode can be found at twimlai.com/go/624</p>]]>
      </content:encoded>
      <itunes:duration>2031</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cb147d04-d7d7-11ed-bb60-1b1936bc2474]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9712166154.mp3?updated=1681156034"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Generative AI at the Edge with Vinesh Sukumar - #623</title>
      <link>https://twimlai.com/podcast/twimlai/generative-ai-at-the-edge/</link>
      <description>Today we’re joined by Vinesh Sukumar, a senior director and head of AI/ML product management at Qualcomm Technologies. In our conversation with Vinesh, we explore how mobile and automotive devices have different requirements for AI models and how their AI stack helps developers create complex models on both platforms. We also discuss the growing interest in text-based input and the shift towards transformers, generative content, and recommendation engines. Additionally, we explore the challenges and opportunities for ML Ops investments on the edge, including the use of synthetic data and evolving models based on user data. Finally, we delve into the latest advancements in large language models, including Prometheus-style models and GPT-4.
The complete show notes for this episode can be found at twimlai.com/go/623.</description>
      <pubDate>Mon, 03 Apr 2023 18:44:16 -0000</pubDate>
      <itunes:title>Generative AI at the Edge with Vinesh Sukumar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>623</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c443eba0-d24a-11ed-bc83-779f9cda3c30/image/ef41a4.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Vinesh Sukumar, a senior director and head of AI/ML product management at Qualcomm Technologies. In our conversation with Vinesh, we explore how mobile and automotive devices have different requirements for AI models and how their AI stack helps developers create complex models on both platforms. We also discuss the growing interest in text-based input and the shift towards transformers, generative content, and recommendation engines. Additionally, we explore the challenges and opportunities for ML Ops investments on the edge, including the use of synthetic data and evolving models based on user data. Finally, we delve into the latest advancements in large language models, including Prometheus-style models and GPT-4.
The complete show notes for this episode can be found at twimlai.com/go/623.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Vinesh Sukumar, a senior director and head of AI/ML product management at Qualcomm Technologies. In our conversation with Vinesh, we explore how mobile and automotive devices have different requirements for AI models and how their AI stack helps developers create complex models on both platforms. We also discuss the growing interest in text-based input and the shift towards transformers, generative content, and recommendation engines. Additionally, we explore the challenges and opportunities for ML Ops investments on the edge, including the use of synthetic data and evolving models based on user data. Finally, we delve into the latest advancements in large language models, including Prometheus-style models and GPT-4.</p><p>The complete show notes for this episode can be found at twimlai.com/go/623.</p>]]>
      </content:encoded>
      <itunes:duration>2346</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c443eba0-d24a-11ed-bc83-779f9cda3c30]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3295427970.mp3?updated=1680545708"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Runway Gen-2: Generative AI for Video Creation with Anastasis Germanidis - #622</title>
      <link>https://twimlai.com/podcast/twimlai/runway-gen-2-generative-ai-for-video-creation/</link>
      <description>Today we’re joined by Anastasis Germanidis, Co-Founder and CTO of RunwayML. Amongst all the product and model releases over the past few months, Runway threw its hat into the ring with Gen-1, a model that can take still images or video and transform them into completely stylized videos. They followed that up just a few weeks later with the release of Gen-2, a multimodal model that can produce a video from text prompts. We had the pleasure of chatting with Anastasis about both models, exploring the challenges of generating video, the importance of alignment in model deployment, the potential use of RLHF, the deployment of models as APIs, and much more!
The complete show notes for this episode can be found at twimlai.com/go/622.</description>
      <pubDate>Mon, 27 Mar 2023 22:41:18 -0000</pubDate>
      <itunes:title>Runway Gen-2: Generative AI for Video Creation with Anastasis Germanidis</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>622</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/9ee78650-ccf0-11ed-81f1-ef07fe65f025/image/2c82c8.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Anastasis Germanidis, Co-Founder and CTO of RunwayML. Amongst all the product and model releases over the past few months, Runway threw its hat into the ring with Gen-1, a model that can take still images or video and transform them into completely stylized videos. They followed that up just a few weeks later with the release of Gen-2, a multimodal model that can produce a video from text prompts. We had the pleasure of chatting with Anastasis about both models, exploring the challenges of generating video, the importance of alignment in model deployment, the potential use of RLHF, the deployment of models as APIs, and much more!
The complete show notes for this episode can be found at twimlai.com/go/622.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Anastasis Germanidis, Co-Founder and CTO of RunwayML. Amongst all the product and model releases over the past few months, Runway threw its hat into the ring with Gen-1, a model that can take still images or video and transform them into completely stylized videos. They followed that up just a few weeks later with the release of Gen-2, a multimodal model that can produce a video from text prompts. We had the pleasure of chatting with Anastasis about both models, exploring the challenges of generating video, the importance of alignment in model deployment, the potential use of RLHF, the deployment of models as APIs, and much more!</p><p>The complete show notes for this episode can be found at twimlai.com/go/622.</p>]]>
      </content:encoded>
      <itunes:duration>2961</itunes:duration>
      <guid isPermaLink="false"><![CDATA[9ee78650-ccf0-11ed-81f1-ef07fe65f025]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7930027272.mp3?updated=1719841192"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:explicit>no</itunes:explicit><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Watermarking Large Language Models to Fight Plagiarism with Tom Goldstein - #621</title>
      <link>https://twimlai.com/podcast/twimlai/watermarking-large-language-models-to-fight-plagiarism/</link>
      <description>Today we’re joined by Tom Goldstein, an associate professor at the University of Maryland. Tom’s research sits at the intersection of ML and optimization and has previously been featured in the New Yorker for his work on invisibility cloaks, clothing that can evade object detection. In our conversation, we focus on his more recent research on watermarking LLM output. We explore the motivations behind adding these watermarks, how they work, and different ways a watermark could be deployed, as well as political and economic incentive structures around the adoption of watermarking and future directions for that line of work. We also discuss Tom’s research into data leakage, particularly in stable diffusion models, work that is analogous to recent guest Nicholas Carlini’s research into LLM data extraction. </description>
      <pubDate>Mon, 20 Mar 2023 20:04:48 -0000</pubDate>
      <itunes:title>Watermarking Large Language Models to Fight Plagiarism with Tom Goldstein</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>621</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/a3ad8f0e-c755-11ed-87ec-17129fa6ca35/image/213354.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Tom Goldstein, an associate professor at the University of Maryland. Tom’s research sits at the intersection of ML and optimization and has previously been featured in the New Yorker for his work on invisibility cloaks, clothing that can evade object detection. In our conversation, we focus on his more recent research on watermarking LLM output. We explore the motivations behind adding these watermarks, how they work, and different ways a watermark could be deployed, as well as political and economic incentive structures around the adoption of watermarking and future directions for that line of work. We also discuss Tom’s research into data leakage, particularly in stable diffusion models, work that is analogous to recent guest Nicholas Carlini’s research into LLM data extraction. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Tom Goldstein, an associate professor at the University of Maryland. Tom’s research sits at the intersection of ML and optimization and has previously been featured in the New Yorker for his work on invisibility cloaks, clothing that can evade object detection. In our conversation, we focus on his more recent research on watermarking LLM output. We explore the motivations behind adding these watermarks, how they work, and different ways a watermark could be deployed, as well as political and economic incentive structures around the adoption of watermarking and future directions for that line of work. We also discuss Tom’s research into data leakage, particularly in stable diffusion models, work that is analogous to recent guest Nicholas Carlini’s research into LLM data extraction. </p>]]>
      </content:encoded>
      <itunes:duration>3087</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a3ad8f0e-c755-11ed-87ec-17129fa6ca35]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6246048702.mp3?updated=1679343089"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Does ChatGPT “Think”? A Cognitive Neuroscience Perspective with Anna Ivanova - #620</title>
      <link>https://twimlai.com/podcast/twimlai/does-chatgpt-think-a-cognitive-neuroscience-perspective/</link>
      <description>Today we’re joined by Anna Ivanova, a postdoctoral researcher at MIT Quest for Intelligence. In our conversation with Anna, we discuss her recent paper Dissociating language and thought in large language models: a cognitive perspective. In the paper, Anna reviews the capabilities of LLMs by considering their performance on two different aspects of language use: 'formal linguistic competence', which includes knowledge of rules and patterns of a given language, and 'functional linguistic competence', a host of cognitive abilities required for language understanding and use in the real world. We explore parallels between linguistic competence and AGI, the need to identify new benchmarks for these models, whether an end-to-end trained LLM can address various aspects of functional competence, and much more! 
The complete show notes for this episode can be found at twimlai.com/go/620.</description>
      <pubDate>Mon, 13 Mar 2023 19:04:36 -0000</pubDate>
      <itunes:title>Does ChatGPT “Think”? A Cognitive Neuroscience Perspective with Anna Ivanova</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>620</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5a0a95d8-c1cc-11ed-8212-fbdfaff267a0/image/7b2bac.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Anna Ivanova, a postdoctoral researcher at MIT Quest for Intelligence. In our conversation with Anna, we discuss her recent paper Dissociating language and thought in large language models: a cognitive perspective. In the paper, Anna reviews the capabilities of LLMs by considering their performance on two different aspects of language use: 'formal linguistic competence', which includes knowledge of rules and patterns of a given language, and 'functional linguistic competence', a host of cognitive abilities required for language understanding and use in the real world. We explore parallels between linguistic competence and AGI, the need to identify new benchmarks for these models, whether an end-to-end trained LLM can address various aspects of functional competence, and much more! 
The complete show notes for this episode can be found at twimlai.com/go/620.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Anna Ivanova, a postdoctoral researcher at MIT Quest for Intelligence. In our conversation with Anna, we discuss her recent paper <a href="https://arxiv.org/abs/2301.06627">Dissociating language and thought in large language models: a cognitive perspective</a>. In the paper, Anna reviews the capabilities of LLMs by considering their performance on two different aspects of language use: 'formal linguistic competence', which includes knowledge of rules and patterns of a given language, and 'functional linguistic competence', a host of cognitive abilities required for language understanding and use in the real world. We explore parallels between linguistic competence and AGI, the need to identify new benchmarks for these models, whether an end-to-end trained LLM can address various aspects of functional competence, and much more! </p><p>The complete show notes for this episode can be found at twimlai.com/go/620.</p>]]>
      </content:encoded>
      <itunes:duration>2705</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5a0a95d8-c1cc-11ed-8212-fbdfaff267a0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6946758983.mp3?updated=1678732194"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Robotic Dexterity and Collaboration with Monroe Kennedy III - #619</title>
      <link>https://twimlai.com/podcast/twimlai/robotic-dexterity-and-collaboration/</link>
      <description>Today we’re joined by Monroe Kennedy III, an assistant professor at Stanford, director of the Assistive Robotics and Manipulation Lab, and a national director of Black in Robotics. In our conversation with Monroe, we spend some time exploring the robotics landscape, getting Monroe’s thoughts on the current challenges in the field, as well as his opinion on choreographed demonstrations like the dancing Boston Robotics machines. We also dig into his work around two distinct threads: Robotic Dexterity (what does it take to make robots capable of performing useful manipulation tasks with and for humans?) and Collaborative Robotics (how do we go beyond advanced autonomy in robots towards making effective robotic teammates capable of working with human counterparts?). Finally, we discuss DenseTact, an optical-tactile sensor capable of visualizing the deformed surface of a soft fingertip and using that image in a neural network to perform calibrated shape reconstruction and 6-axis wrench estimation.
The complete show notes for this episode can be found at twimlai.com/go/619.</description>
      <pubDate>Mon, 06 Mar 2023 19:07:37 -0000</pubDate>
      <itunes:title>Robotic Dexterity and Collaboration with Monroe Kennedy III</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>619</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6d552bec-bc51-11ed-ace6-63563d2995e9/image/90ae04.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Monroe Kennedy III, an assistant professor at Stanford, director of the Assistive Robotics and Manipulation Lab, and a national director of Black in Robotics. In our conversation with Monroe, we spend some time exploring the robotics landscape, getting Monroe’s thoughts on the current challenges in the field, as well as his opinion on choreographed demonstrations like the dancing Boston Robotics machines. We also dig into his work around two distinct threads: Robotic Dexterity (what does it take to make robots capable of performing useful manipulation tasks with and for humans?) and Collaborative Robotics (how do we go beyond advanced autonomy in robots towards making effective robotic teammates capable of working with human counterparts?). Finally, we discuss DenseTact, an optical-tactile sensor capable of visualizing the deformed surface of a soft fingertip and using that image in a neural network to perform calibrated shape reconstruction and 6-axis wrench estimation.
The complete show notes for this episode can be found at twimlai.com/go/619.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Monroe Kennedy III, an assistant professor at Stanford, director of the Assistive Robotics and Manipulation Lab, and a national director of Black in Robotics. In our conversation with Monroe, we spend some time exploring the robotics landscape, getting Monroe’s thoughts on the current challenges in the field, as well as his opinion on choreographed demonstrations like the dancing Boston Robotics machines. We also dig into his work around two distinct threads: Robotic Dexterity (what does it take to make robots capable of performing useful manipulation tasks with and for humans?) and Collaborative Robotics (how do we go beyond advanced autonomy in robots towards making effective robotic teammates capable of working with human counterparts?). Finally, we discuss DenseTact, an optical-tactile sensor capable of visualizing the deformed surface of a soft fingertip and using that image in a neural network to perform calibrated shape reconstruction and 6-axis wrench estimation.</p><p>The complete show notes for this episode can be found at twimlai.com/go/619.</p>]]>
      </content:encoded>
      <itunes:duration>3169</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6d552bec-bc51-11ed-ace6-63563d2995e9]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3901987359.mp3?updated=1678129643"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Privacy and Security for Stable Diffusion and LLMs with Nicholas Carlini - #618</title>
      <link>https://twimlai.com/podcast/twimlai/privacy-and-security-for-stable-diffusion-and-llms/</link>
      <description>Today we’re joined by Nicholas Carlini, a research scientist at Google Brain. Nicholas works at the intersection of machine learning and computer security, and his recent paper “Extracting Training Data from LLMs” has generated quite a buzz within the ML community. In our conversation, we discuss the current state of adversarial machine learning research, the dynamic of dealing with privacy issues in black box vs accessible models, what privacy attacks in vision models like diffusion models look like, and the scale of “memorization” within these models. We also explore Nicholas’ work on data poisoning, which looks to understand what happens if a bad actor can take control of a small fraction of the data that an ML model is trained on.
The complete show notes for this episode can be found at twimlai.com/go/618.</description>
      <pubDate>Mon, 27 Feb 2023 18:26:50 -0000</pubDate>
      <itunes:title>Privacy and Security for Stable Diffusion and LLMs with Nicholas Carlini</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>618</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/eaac5bb4-b6c1-11ed-99f5-37a13417ed67/image/5300f4.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Nicholas Carlini, a research scientist at Google Brain. Nicholas works at the intersection of machine learning and computer security, and his recent paper “Extracting Training Data from LLMs” has generated quite a buzz within the ML community. In our conversation, we discuss the current state of adversarial machine learning research, the dynamic of dealing with privacy issues in black box vs accessible models, what privacy attacks in vision models like diffusion models look like, and the scale of “memorization” within these models. We also explore Nicholas’ work on data poisoning, which looks to understand what happens if a bad actor can take control of a small fraction of the data that an ML model is trained on.
The complete show notes for this episode can be found at twimlai.com/go/618.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Nicholas Carlini, a research scientist at Google Brain. Nicholas works at the intersection of machine learning and computer security, and his recent paper “Extracting Training Data from LLMs” has generated quite a buzz within the ML community. In our conversation, we discuss the current state of adversarial machine learning research, the dynamic of dealing with privacy issues in black box vs accessible models, what privacy attacks in vision models like diffusion models look like, and the scale of “memorization” within these models. We also explore Nicholas’ work on data poisoning, which looks to understand what happens if a bad actor can take control of a small fraction of the data that an ML model is trained on.</p><p>The complete show notes for this episode can be found at twimlai.com/go/618.</p>]]>
      </content:encoded>
      <itunes:duration>2591</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[eaac5bb4-b6c1-11ed-99f5-37a13417ed67]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1458183436.mp3?updated=1677518250"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Understanding AI’s Impact on Social Disparities with Vinodkumar Prabhakaran - #617</title>
      <link>https://twimlai.com/podcast/twimlai/understanding-ais-impact-on-social-disparities/</link>
      <description>Today we’re joined by Vinodkumar Prabhakaran, a Senior Research Scientist at Google Research. In our conversation with Vinod, we discuss his two main areas of research: using ML, specifically NLP, to explore social disparities, and examining how those same social disparities are captured and propagated within machine learning tools. We explore a few specific projects, the first using NLP to analyze interactions between police officers and community members, determining factors like level of respect or politeness and how they play out across a spectrum of community members. We also discuss his work on understanding how bias creeps into the pipeline of building ML models, whether it comes from the data or the person building the model. Finally, for those working with human annotators, Vinod shares his thoughts on how to incorporate principles of fairness to help build more robust models.

The complete show notes for this episode can be found at https://twimlai.com/go/617.</description>
      <pubDate>Mon, 20 Feb 2023 20:12:00 -0000</pubDate>
      <itunes:title>Understanding AI’s Impact on Social Disparities with Vinodkumar Prabhakaran</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:season>1</itunes:season>
      <itunes:episode>617</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5c68531c-b158-11ed-a504-036326a570c1/image/193b1a.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Vinodkumar Prabhakaran, a Senior Research Scientist at Google Research. In our conversation with Vinod, we discuss his two main areas of research: using ML, specifically NLP, to explore social disparities, and examining how those same social disparities are captured and propagated within machine learning tools. We explore a few specific projects, the first using NLP to analyze interactions between police officers and community members, determining factors like level of respect or politeness and how they play out across a spectrum of community members. We also discuss his work on understanding how bias creeps into the pipeline of building ML models, whether it comes from the data or the person building the model. Finally, for those working with human annotators, Vinod shares his thoughts on how to incorporate principles of fairness to help build more robust models.

The complete show notes for this episode can be found at https://twimlai.com/go/617.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Vinodkumar Prabhakaran, a Senior Research Scientist at Google Research. In our conversation with Vinod, we discuss his two main areas of research: using ML, specifically NLP, to explore social disparities, and examining how those same social disparities are captured and propagated within machine learning tools. We explore a few specific projects, the first using NLP to analyze interactions between police officers and community members, determining factors like level of respect or politeness and how they play out across a spectrum of community members. We also discuss his work on understanding how bias creeps into the pipeline of building ML models, whether it comes from the data or the person building the model. Finally, for those working with human annotators, Vinod shares his thoughts on how to incorporate principles of fairness to help build more robust models.</p><p>The complete show notes for this episode can be found at https://twimlai.com/go/617.</p>]]>
      </content:encoded>
      <itunes:duration>1874</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5c68531c-b158-11ed-a504-036326a570c1]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5105361031.mp3?updated=1676926562"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Trends 2023: Causality and the Impact on Large Language Models with Robert Osazuwa Ness - #616</title>
      <link>https://twimlai.com/podcast/twimlai/ai-trends-2023-causality-and-the-impact-on-large-language-models/</link>
      <description>Today we’re joined by Robert Osazuwa Ness, a senior researcher at Microsoft Research, to break down the latest trends in the world of causal modeling. In our conversation with Robert, we explore advances in areas like causal discovery, causal representation learning, and causal judgements. We also discuss the impact causality could have on large language models, especially in some of the recent use cases we’ve seen like Bing Search and ChatGPT. Finally, we discuss the benchmarks for causal modeling, the top causality use cases, and the most exciting opportunities in the field.  

The complete show notes for this episode can be found at twimlai.com/go/616.</description>
      <pubDate>Tue, 14 Feb 2023 12:00:00 -0000</pubDate>
      <itunes:title>AI Trends 2023: Causality and the Impact on Large Language Models with Robert Osazuwa Ness</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>616</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/79305c56-ac3f-11ed-9e52-1b10509cd8de/image/87c0dd.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Robert Osazuwa Ness, a senior researcher at Microsoft Research, to break down the latest trends in the world of causal modeling. In our conversation with Robert, we explore advances in areas like causal discovery, causal representation learning, and causal judgements. We also discuss the impact causality could have on large language models, especially in some of the recent use cases we’ve seen like Bing Search and ChatGPT. Finally, we discuss the benchmarks for causal modeling, the top causality use cases, and the most exciting opportunities in the field.  

The complete show notes for this episode can be found at twimlai.com/go/616.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Robert Osazuwa Ness, a senior researcher at Microsoft Research, to break down the latest trends in the world of causal modeling. In our conversation with Robert, we explore advances in areas like causal discovery, causal representation learning, and causal judgements. We also discuss the impact causality could have on large language models, especially in some of the recent use cases we’ve seen like Bing Search and ChatGPT. Finally, we discuss the benchmarks for causal modeling, the top causality use cases, and the most exciting opportunities in the field.</p><p>The complete show notes for this episode can be found at twimlai.com/go/616.</p>]]>
      </content:encoded>
      <itunes:duration>4920</itunes:duration>
      <guid isPermaLink="false"><![CDATA[79305c56-ac3f-11ed-9e52-1b10509cd8de]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4478555625.mp3?updated=1676362713"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:explicit>no</itunes:explicit><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Data-Centric Zero-Shot Learning for Precision Agriculture with Dimitris Zermas - #615</title>
      <link>https://twimlai.com/podcast/twimlai/data-centric-zero-shot-learning-for-precision-agriculture/</link>
      <description>Today we’re joined by Dimitris Zermas, a principal scientist at agriscience company Sentera. Dimitris’ work at Sentera is focused on developing tools for precision agriculture using machine learning, including hardware like cameras and sensors, as well as ML models for analyzing the vast amount of data they acquire. We explore some specific use cases for machine learning, including plant counting, the challenges of working with classical computer vision techniques, database management, and data annotation. We also discuss their use of approaches like zero-shot learning and how they’ve taken advantage of a data-centric mindset when building a better, more cost-efficient product.</description>
      <pubDate>Mon, 06 Feb 2023 19:11:56 -0000</pubDate>
      <itunes:title>Data-Centric Zero-Shot Learning for Precision Agriculture with Dimitris Zermas</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>615</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4c999f7a-a647-11ed-84c2-83d6fd7fcf3b/image/78dc66.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Dimitris Zermas, a principal scientist at agriscience company Sentera. Dimitris’ work at Sentera is focused on developing tools for precision agriculture using machine learning, including hardware like cameras and sensors, as well as ML models for analyzing the vast amount of data they acquire. We explore some specific use cases for machine learning, including plant counting, the challenges of working with classical computer vision techniques, database management, and data annotation. We also discuss their use of approaches like zero-shot learning and how they’ve taken advantage of a data-centric mindset when building a better, more cost-efficient product.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Dimitris Zermas, a principal scientist at agriscience company Sentera. Dimitris’ work at Sentera is focused on developing tools for precision agriculture using machine learning, including hardware like cameras and sensors, as well as ML models for analyzing the vast amount of data they acquire. We explore some specific use cases for machine learning, including plant counting, the challenges of working with classical computer vision techniques, database management, and data annotation. We also discuss their use of approaches like zero-shot learning and how they’ve taken advantage of a data-centric mindset when building a better, more cost-efficient product.</p>]]>
      </content:encoded>
      <itunes:duration>1953</itunes:duration>
      <guid isPermaLink="false"><![CDATA[4c999f7a-a647-11ed-84c2-83d6fd7fcf3b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7212631212.mp3?updated=1675706368"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:explicit>no</itunes:explicit><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>How LLMs and Generative AI are Revolutionizing AI for Science with Anima Anandkumar - #614 </title>
      <link>https://twimlai.com/podcast/twimlai/how-llms-and-generative-ai-are-revolutionizing-ai-for-science/</link>
      <description>Today we’re joined by Anima Anandkumar, Bren Professor of Computing and Mathematical Sciences at Caltech and Senior Director of AI Research at NVIDIA. In our conversation, we take a broad look at the emerging field of AI for Science, focusing on both practical applications and longer-term research areas. We discuss the latest developments in the area of protein folding, and how much it has evolved since we first discussed it on the podcast in 2018, the impact of generative models and Stable Diffusion on the space, and the application of neural operators. We also explore the ways in which prediction models like weather models could be improved, how foundation models are helping to drive innovation, and finally, we dig into MineDojo, a new framework built on the popular Minecraft game for embodied agent research, which won a 2022 Outstanding Paper Award at NeurIPS.
The complete show notes for this episode can be found at twimlai.com/go/614</description>
      <pubDate>Mon, 30 Jan 2023 19:02:26 -0000</pubDate>
      <itunes:title>How LLMs and Generative AI are Revolutionizing AI for Science with Anima Anandkumar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>614</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/9bc828b0-a0ce-11ed-9427-ef74dbfdba82/image/9ea4c2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Anima Anandkumar, Bren Professor of Computing and Mathematical Sciences at Caltech and Senior Director of AI Research at NVIDIA. In our conversation, we take a broad look at the emerging field of AI for Science, focusing on both practical applications and longer-term research areas. We discuss the latest developments in the area of protein folding, and how much it has evolved since we first discussed it on the podcast in 2018, the impact of generative models and Stable Diffusion on the space, and the application of neural operators. We also explore the ways in which prediction models like weather models could be improved, how foundation models are helping to drive innovation, and finally, we dig into MineDojo, a new framework built on the popular Minecraft game for embodied agent research, which won a 2022 Outstanding Paper Award at NeurIPS.
The complete show notes for this episode can be found at twimlai.com/go/614</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Anima Anandkumar, Bren Professor of Computing and Mathematical Sciences at Caltech and Senior Director of AI Research at NVIDIA. In our conversation, we take a broad look at the emerging field of AI for Science, focusing on both practical applications and longer-term research areas. We discuss the latest developments in the area of protein folding, and how much it has evolved since we first discussed it on the podcast in 2018, the impact of generative models and Stable Diffusion on the space, and the application of neural operators. We also explore the ways in which prediction models like weather models could be improved, how foundation models are helping to drive innovation, and finally, we dig into MineDojo, a new framework built on the popular Minecraft game for embodied agent research, which won a 2022 Outstanding Paper Award at NeurIPS.</p><p>The complete show notes for this episode can be found at twimlai.com/go/614</p>]]>
      </content:encoded>
      <itunes:duration>3704</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9bc828b0-a0ce-11ed-9427-ef74dbfdba82]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5251465738.mp3?updated=1675105620"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Trends 2023: Natural Language Proc - ChatGPT, GPT-4 and Cutting Edge Research with Sameer Singh - #613</title>
      <link>https://twimlai.com/podcast/twimlai/ai-trends-2023-natural-language-proc-chatgpt-gpt-4-and-cutting-edge-research/</link>
      <description>Today we continue our AI Trends 2023 series, joined by Sameer Singh, an associate professor in the department of computer science at UC Irvine and fellow at the Allen Institute for Artificial Intelligence (AI2). In our conversation with Sameer, we focus on the latest and greatest advancements and developments in the field of NLP, starting out with one that took the internet by storm just a few short weeks ago: ChatGPT. We explore top themes like decomposed reasoning, causal modeling in NLP, and the need for “clean” data. We also discuss projects like Hugging Face’s BLOOM, the debacle that was the Galactica demo, the impending intersection of LLMs and search, use cases like Copilot, and of course, we get Sameer’s predictions for what will happen this year in the field.
The complete show notes for this episode can be found at twimlai.com/go/613.</description>
      <pubDate>Mon, 23 Jan 2023 18:52:00 -0000</pubDate>
      <itunes:title>AI Trends 2023: Natural Language Proc - ChatGPT, GPT-4 and Cutting Edge Research with Sameer Singh</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>613</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c411159e-9b4b-11ed-a9f3-df9befa5a620/image/8cbd01.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we continue our AI Trends 2023 series, joined by Sameer Singh, an associate professor in the department of computer science at UC Irvine and fellow at the Allen Institute for Artificial Intelligence (AI2). In our conversation with Sameer, we focus on the latest and greatest advancements and developments in the field of NLP, starting out with one that took the internet by storm just a few short weeks ago: ChatGPT. We explore top themes like decomposed reasoning, causal modeling in NLP, and the need for “clean” data. We also discuss projects like Hugging Face’s BLOOM, the debacle that was the Galactica demo, the impending intersection of LLMs and search, use cases like Copilot, and of course, we get Sameer’s predictions for what will happen this year in the field.
The complete show notes for this episode can be found at twimlai.com/go/613.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our AI Trends 2023 series, joined by Sameer Singh, an associate professor in the department of computer science at UC Irvine and fellow at the Allen Institute for Artificial Intelligence (AI2). In our conversation with Sameer, we focus on the latest and greatest advancements and developments in the field of NLP, starting out with one that took the internet by storm just a few short weeks ago: ChatGPT. We explore top themes like decomposed reasoning, causal modeling in NLP, and the need for “clean” data. We also discuss projects like Hugging Face’s BLOOM, the debacle that was the Galactica demo, the impending intersection of LLMs and search, use cases like Copilot, and of course, we get Sameer’s predictions for what will happen this year in the field.</p><p>The complete show notes for this episode can be found at twimlai.com/go/613.</p>]]>
      </content:encoded>
      <itunes:duration>6345</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c411159e-9b4b-11ed-a9f3-df9befa5a620]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8614692045.mp3?updated=1674500282"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Trends 2023: Reinforcement Learning - RLHF, Robotic Pre-Training, and Offline RL with Sergey Levine - #612</title>
      <link>https://twimlai.com/go/612</link>
      <description>Today we’re taking a deep dive into the latest and greatest in the world of Reinforcement Learning with our friend Sergey Levine, an associate professor at UC Berkeley. In our conversation with Sergey, we explore some game-changing developments in the field, including the release of ChatGPT and the onset of RLHF. We also explore more broadly the intersection of RL and language models, as well as advancements in offline RL and pre-training for robotics models, inverse RL, Q-learning, and a host of papers along the way. Finally, you don’t want to miss Sergey’s predictions for the top developments of 2023!
The complete show notes for this episode can be found at twimlai.com/go/612</description>
      <pubDate>Mon, 16 Jan 2023 17:49:21 -0000</pubDate>
      <itunes:title>AI Trends 2023: Reinforcement Learning - RLHF, Robotic Pre-Training, and Offline RL with Sergey Levine</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>612</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d8f612b6-95c4-11ed-a87f-43f2babccc15/image/4e10fb.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re taking a deep dive into the latest and greatest in the world of Reinforcement Learning with our friend Sergey Levine, an associate professor at UC Berkeley. In our conversation with Sergey, we explore some game-changing developments in the field, including the release of ChatGPT and the onset of RLHF. We also explore more broadly the intersection of RL and language models, as well as advancements in offline RL and pre-training for robotics models, inverse RL, Q-learning, and a host of papers along the way. Finally, you don’t want to miss Sergey’s predictions for the top developments of 2023!
The complete show notes for this episode can be found at twimlai.com/go/612</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re taking a deep dive into the latest and greatest in the world of Reinforcement Learning with our friend Sergey Levine, an associate professor at UC Berkeley. In our conversation with Sergey, we explore some game-changing developments in the field, including the release of ChatGPT and the onset of RLHF. We also explore more broadly the intersection of RL and language models, as well as advancements in offline RL and pre-training for robotics models, inverse RL, Q-learning, and a host of papers along the way. Finally, you don’t want to miss Sergey’s predictions for the top developments of 2023!</p><p>The complete show notes for this episode can be found at twimlai.com/go/612</p>]]>
      </content:encoded>
      <itunes:duration>3580</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d8f612b6-95c4-11ed-a87f-43f2babccc15]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4574115607.mp3?updated=1673891120"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Supporting Food Security in Africa Using ML with Catherine Nakalembe - #611</title>
      <link>https://twimlai.com/podcast/twimlai/supporting-food-security-in-africa-using-ml/</link>
      <description>Today we conclude our coverage of NeurIPS 2022, joined by Catherine Nakalembe, an associate research professor at the University of Maryland, and Africa Program Director under NASA Harvest. In our conversation with Catherine, we take a deep dive into her talk from the ML in the Physical Sciences workshop, Supporting Food Security in Africa using Machine Learning and Earth Observations. We discuss the broad challenges associated with food insecurity, as well as Catherine’s role and the priorities of Harvest Africa, a program focused on advancing innovative satellite-driven methods to produce automated within-season crop type and crop-specific condition products that support agricultural assessments. We explore some of the technical challenges of her work, including the limited, but growing, access to remote sensing and earth observation datasets and how the availability of that data has changed in recent years, the lack of benchmarks for the tasks she’s working on, examples of how they’ve applied techniques like multi-task learning and task-informed meta-learning, and much more.
The complete show notes for this episode can be found at twimlai.com/go/611.</description>
      <pubDate>Mon, 09 Jan 2023 20:17:00 -0000</pubDate>
      <itunes:title>Supporting Food Security in Africa Using ML with Catherine Nakalembe</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>611</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/76dce686-9056-11ed-b93f-cbca02783e69/image/a92fd1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we conclude our coverage of NeurIPS 2022, joined by Catherine Nakalembe, an associate research professor at the University of Maryland, and Africa Program Director under NASA Harvest. In our conversation with Catherine, we take a deep dive into her talk from the ML in the Physical Sciences workshop, Supporting Food Security in Africa using Machine Learning and Earth Observations. We discuss the broad challenges associated with food insecurity, as well as Catherine’s role and the priorities of Harvest Africa, a program focused on advancing innovative satellite-driven methods to produce automated within-season crop type and crop-specific condition products that support agricultural assessments. We explore some of the technical challenges of her work, including the limited, but growing, access to remote sensing and earth observation datasets and how the availability of that data has changed in recent years, the lack of benchmarks for the tasks she’s working on, examples of how they’ve applied techniques like multi-task learning and task-informed meta-learning, and much more.
The complete show notes for this episode can be found at twimlai.com/go/611.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we conclude our coverage of NeurIPS 2022, joined by Catherine Nakalembe, an associate research professor at the University of Maryland, and Africa Program Director under NASA Harvest. In our conversation with Catherine, we take a deep dive into her talk from the ML in the Physical Sciences workshop, Supporting Food Security in Africa using Machine Learning and Earth Observations. We discuss the broad challenges associated with food insecurity, as well as Catherine’s role and the priorities of Harvest Africa, a program focused on advancing innovative satellite-driven methods to produce automated within-season crop type and crop-specific condition products that support agricultural assessments. We explore some of the technical challenges of her work, including the limited, but growing, access to remote sensing and earth observation datasets and how the availability of that data has changed in recent years, the lack of benchmarks for the tasks she’s working on, examples of how they’ve applied techniques like multi-task learning and task-informed meta-learning, and much more.</p><p>The complete show notes for this episode can be found at twimlai.com/go/611.</p>]]>
      </content:encoded>
      <itunes:duration>3969</itunes:duration>
      <guid isPermaLink="false"><![CDATA[76dce686-9056-11ed-b93f-cbca02783e69]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5075227108.mp3?updated=1673295883"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:explicit>no</itunes:explicit><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Service Cards and ML Governance with Michael Kearns - #610</title>
      <link>https://twimlai.com/podcast/twimlai/service-cards-and-ml-governance/</link>
      <description>Today we conclude our AWS re:Invent 2022 series, joined by Michael Kearns, a professor in the department of computer and information science at UPenn, as well as an Amazon Scholar. In our conversation, we briefly explore Michael’s broader research interests in responsible AI and ML governance and his role at Amazon. We then discuss the announcement of service cards, Amazon’s take on “model cards” at a holistic, system level as opposed to an individual model level. We walk through the information represented on the cards, as well as explore the decision-making process around specific information being omitted from the cards. We also get Michael’s take on the years-old debate of algorithmic bias vs dataset bias, what some of the current issues are around this topic, and what research he has seen (and hopes to see) addressing issues of “fairness” in large language models.
The complete show notes for this episode can be found at twimlai.com/go/610.</description>
      <pubDate>Mon, 02 Jan 2023 17:05:00 -0000</pubDate>
      <itunes:title>Service Cards and ML Governance with Michael Kearns</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>610</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3b1871c8-8abc-11ed-9e50-677d7c62bc1d/image/8f4bfb.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we conclude our AWS re:Invent 2022 series joined by Michael Kearns, a professor in the department of computer and information science at UPenn, as well as an Amazon Scholar. In our conversation, we briefly explore Michael’s broader research interests in responsible AI and ML governance, and his role at Amazon. We then discuss the announcement of service cards, Amazon’s take on “model cards” applied at a holistic, system level as opposed to the level of an individual model. We walk through the information represented on the cards, as well as the decision-making process around which information is omitted from them. We also get Michael’s take on the years-old debate of algorithmic bias vs. dataset bias, some of the current issues around this topic, and the research he has seen (and hopes to see) addressing issues of “fairness” in large language models.
The complete show notes for this episode can be found at twimlai.com/go/610.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we conclude our AWS re:Invent 2022 series joined by Michael Kearns, a professor in the department of computer and information science at UPenn, as well as an Amazon Scholar. In our conversation, we briefly explore Michael’s broader research interests in responsible AI and ML governance, and his role at Amazon. We then discuss the announcement of service cards, Amazon’s take on “model cards” applied at a holistic, system level as opposed to the level of an individual model. We walk through the information represented on the cards, as well as the decision-making process around which information is omitted from them. We also get Michael’s take on the years-old debate of algorithmic bias vs. dataset bias, some of the current issues around this topic, and the research he has seen (and hopes to see) addressing issues of “fairness” in large language models.</p><p>The complete show notes for this episode can be found at twimlai.com/go/610.</p>]]>
      </content:encoded>
      <itunes:duration>2348</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[3b1871c8-8abc-11ed-9e50-677d7c62bc1d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4416407284.mp3?updated=1672679454"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Reinforcement Learning for Personalization at Spotify with Tony Jebara - #609</title>
      <link>https://twimlai.com/podcast/twimlai/reinforcement-learning-lifetime-value-at-spotify/</link>
      <description>Today we continue our NeurIPS 2022 series joined by Tony Jebara, VP of engineering and head of machine learning at Spotify. In our conversation with Tony, we discuss his role at Spotify, how the company’s use of machine learning has evolved over the last few years, and the business value that machine learning, and recommendations specifically, holds for the company.
We dig into his talk on the intersection of reinforcement learning and lifetime value (LTV) at Spotify, which explores the application of Offline RL for user experience personalization. We discuss the various papers presented in the talk, and how they all map toward determining and increasing a user’s LTV. 
The complete show notes for this episode can be found at twimlai.com/go/609.</description>
      <pubDate>Thu, 29 Dec 2022 18:46:00 -0000</pubDate>
      <itunes:title>Reinforcement Learning for Personalization at Spotify with Tony Jebara</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>609</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/9c418dd8-87a4-11ed-8ddf-f73f5b4174cc/image/d72d81.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we continue our NeurIPS 2022 series joined by Tony Jebara, VP of engineering and head of machine learning at Spotify. In our conversation with Tony, we discuss his role at Spotify, how the company’s use of machine learning has evolved over the last few years, and the business value that machine learning, and recommendations specifically, holds for the company.
We dig into his talk on the intersection of reinforcement learning and lifetime value (LTV) at Spotify, which explores the application of Offline RL for user experience personalization. We discuss the various papers presented in the talk, and how they all map toward determining and increasing a user’s LTV. 
The complete show notes for this episode can be found at twimlai.com/go/609.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our NeurIPS 2022 series joined by Tony Jebara, VP of engineering and head of machine learning at Spotify. In our conversation with Tony, we discuss his role at Spotify, how the company’s use of machine learning has evolved over the last few years, and the business value that machine learning, and recommendations specifically, holds for the company.</p><p>We dig into his talk on the intersection of reinforcement learning and lifetime value (LTV) at Spotify, which explores the application of offline RL for user experience personalization. We discuss the various papers presented in the talk, and how they all map toward determining and increasing a user’s LTV.</p><p>The complete show notes for this episode can be found at twimlai.com/go/609.</p>]]>
      </content:encoded>
      <itunes:duration>2487</itunes:duration>
      <guid isPermaLink="false"><![CDATA[9c418dd8-87a4-11ed-8ddf-f73f5b4174cc]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3651994906.mp3?updated=1672350192"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:explicit>no</itunes:explicit><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Will ChatGPT take my job? - #608</title>
      <link>https://twimlai.com/go/608</link>
      <description>More than any system before it, ChatGPT has tapped into our enduring fascination with artificial intelligence, raising in a more concrete and present way important questions and fears about what AI is capable of and how it will impact us as humans. One of the concerns most frequently voiced, whether sincerely or cloaked in jest, is how ChatGPT, or systems like it, will impact our livelihoods. In other words, “will ChatGPT put me out of a job???” In this episode of the podcast, I seek to answer this very question by conducting an interview in which ChatGPT asks all the questions. (The questions are answered by a second ChatGPT, as in my own recent interview with it, Exploring Large Language Models with ChatGPT.) In addition to the straight dialogue, I include my own commentary along the way and conclude with a discussion of the results of the experiment, that is, whether I think ChatGPT will be taking my job as your host anytime soon. Ultimately, though, I hope you’ll be the judge of that and share your thoughts on how ChatGPT did at my job via a comment below or on social media.</description>
      <pubDate>Mon, 26 Dec 2022 22:31:44 -0000</pubDate>
      <itunes:title>Will ChatGPT take my job?</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>608</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/2ce92c6c-8569-11ed-8617-a38c5199edcc/image/7deda4.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>More than any system before it, ChatGPT has tapped into our enduring fascination with artificial intelligence, raising in a more concrete and present way important questions and fears about what AI is capable of and how it will impact us as humans. One of the concerns most frequently voiced, whether sincerely or cloaked in jest, is how ChatGPT, or systems like it, will impact our livelihoods. In other words, “will ChatGPT put me out of a job???” In this episode of the podcast, I seek to answer this very question by conducting an interview in which ChatGPT asks all the questions. (The questions are answered by a second ChatGPT, as in my own recent interview with it, Exploring Large Language Models with ChatGPT.) In addition to the straight dialogue, I include my own commentary along the way and conclude with a discussion of the results of the experiment, that is, whether I think ChatGPT will be taking my job as your host anytime soon. Ultimately, though, I hope you’ll be the judge of that and share your thoughts on how ChatGPT did at my job via a comment below or on social media.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>More than any system before it, ChatGPT has tapped into our enduring fascination with artificial intelligence, raising in a more <em>concrete</em> and <em>present</em> way important questions and fears about what AI is capable of and how it will impact us as humans. One of the concerns most frequently voiced, whether sincerely or cloaked in jest, is how ChatGPT, or systems like it, will impact our livelihoods. In other words, “will ChatGPT put me out of a job???” In this episode of the podcast, I seek to answer this very question by conducting an interview in which ChatGPT asks all the questions. (The questions are answered by a second ChatGPT, as in my own recent interview with it, <a href="https://twimlai.com/podcast/twimlai/exploring-large-language-models/"><em>Exploring Large Language Models with ChatGPT</em></a>.) In addition to the straight dialogue, I include my own commentary along the way and conclude with a discussion of the results of the experiment, that is, whether I think ChatGPT will be taking my job as your host anytime soon. Ultimately, though, I hope you’ll be the judge of that and share your thoughts on how ChatGPT did at my job via a comment below or on social media.</p>]]>
      </content:encoded>
      <itunes:duration>2248</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2ce92c6c-8569-11ed-8617-a38c5199edcc]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2711691846.mp3?updated=1672094095"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Geospatial Machine Learning at AWS with Kumar Chellapilla - #607</title>
      <link>https://twimlai.com/go/607</link>
      <description>Today we continue our re:Invent 2022 series joined by Kumar Chellapilla, a general manager of ML and AI Services at AWS. We had the opportunity to speak with Kumar after AWS announced the recent addition of geospatial data to the SageMaker Platform. In our conversation, we explore Kumar’s role as the GM for a diverse array of SageMaker services, what has changed in the geospatial data landscape over the last 10 years, and why Amazon decided now was the right time to invest in geospatial data. We discuss the challenges of accessing and working with this data and the pain points they’re trying to solve. Finally, Kumar walks us through a few customer use cases, describes how this addition will make users more effective than they currently are, and shares his thoughts on the future of this space over the next 2-5 years, including the potential intersection of geospatial data and Stable Diffusion/generative models.
The complete show notes for this episode can be found at twimlai.com/go/607</description>
      <pubDate>Thu, 22 Dec 2022 17:55:00 -0000</pubDate>
      <itunes:title>Geospatial Machine Learning at AWS with Kumar Chellapilla</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>607</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/783972c6-8221-11ed-9ead-eb3641508ce5/image/178352.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we continue our re:Invent 2022 series joined by Kumar Chellapilla, a general manager of ML and AI Services at AWS. We had the opportunity to speak with Kumar after AWS announced the recent addition of geospatial data to the SageMaker Platform. In our conversation, we explore Kumar’s role as the GM for a diverse array of SageMaker services, what has changed in the geospatial data landscape over the last 10 years, and why Amazon decided now was the right time to invest in geospatial data. We discuss the challenges of accessing and working with this data and the pain points they’re trying to solve. Finally, Kumar walks us through a few customer use cases, describes how this addition will make users more effective than they currently are, and shares his thoughts on the future of this space over the next 2-5 years, including the potential intersection of geospatial data and Stable Diffusion/generative models.
The complete show notes for this episode can be found at twimlai.com/go/607</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our re:Invent 2022 series joined by Kumar Chellapilla, a general manager of ML and AI Services at AWS. We had the opportunity to speak with Kumar after AWS announced the recent addition of geospatial data to the SageMaker Platform. In our conversation, we explore Kumar’s role as the GM for a diverse array of SageMaker services, what has changed in the geospatial data landscape over the last 10 years, and why Amazon decided now was the right time to invest in geospatial data. We discuss the challenges of accessing and working with this data and the pain points they’re trying to solve. Finally, Kumar walks us through a few customer use cases, describes how this addition will make users more effective than they currently are, and shares his thoughts on the future of this space over the next 2-5 years, including the potential intersection of geospatial data and Stable Diffusion/generative models.</p><p>The complete show notes for this episode can be found at twimlai.com/go/607</p>]]>
      </content:encoded>
      <itunes:duration>2206</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[783972c6-8221-11ed-9ead-eb3641508ce5]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4649651173.mp3?updated=1671731878"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Real-Time ML Workflows at Capital One with Disha Singla - #606</title>
      <link>https://twimlai.com/podcast/twimlai/real-time-ml-workflows-at-capital-one/</link>
      <description>Today we’re joined by Disha Singla, a senior director of machine learning engineering at Capital One. In our conversation with Disha, we explore her role as the leader of the Data Insights team at Capital One, where they’ve been tasked with creating reusable libraries, components, and workflows to make ML usable broadly across the company, as well as a platform to make it all accessible and to drive meaningful insights. We discuss the construction of her team, as well as the types of interactions and requests they receive from their customers (data scientists), productionized use cases from the platform, and their efforts to transition from batch to real-time deployment. Disha also shares her thoughts on the ROI of machine learning and getting buy-in from executives, how she sees machine learning evolving at the company over the next 10 years, and much more!
The complete show notes for this episode can be found at twimlai.com/go/606</description>
      <pubDate>Mon, 19 Dec 2022 19:37:06 -0000</pubDate>
      <itunes:title>Real-Time ML Workflows at Capital One with Disha Singla</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>606</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/f428ab5c-7fd0-11ed-af30-e319330d7368/image/8f4370.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Disha Singla, a senior director of machine learning engineering at Capital One. In our conversation with Disha, we explore her role as the leader of the Data Insights team at Capital One, where they’ve been tasked with creating reusable libraries, components, and workflows to make ML usable broadly across the company, as well as a platform to make it all accessible and to drive meaningful insights. We discuss the construction of her team, as well as the types of interactions and requests they receive from their customers (data scientists), productionized use cases from the platform, and their efforts to transition from batch to real-time deployment. Disha also shares her thoughts on the ROI of machine learning and getting buy-in from executives, how she sees machine learning evolving at the company over the next 10 years, and much more!
The complete show notes for this episode can be found at twimlai.com/go/606</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Disha Singla, a senior director of machine learning engineering at Capital One. In our conversation with Disha, we explore her role as the leader of the Data Insights team at Capital One, where they’ve been tasked with creating reusable libraries, components, and workflows to make ML usable broadly across the company, as well as a platform to make it all accessible and to drive meaningful insights. We discuss the construction of her team, as well as the types of interactions and requests they receive from their customers (data scientists), productionized use cases from the platform, and their efforts to transition from batch to real-time deployment. Disha also shares her thoughts on the ROI of machine learning and getting buy-in from executives, how she sees machine learning evolving at the company over the next 10 years, and much more!</p><p>The complete show notes for this episode can be found at twimlai.com/go/606</p>]]>
      </content:encoded>
      <itunes:duration>2616</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f428ab5c-7fd0-11ed-af30-e319330d7368]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5019407142.mp3?updated=1671477394"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Weakly Supervised Causal Representation Learning with Johann Brehmer - #605</title>
      <link>https://twimlai.com/go/605</link>
      <description>Today we’re excited to kick off our coverage of the 2022 NeurIPS conference with Johann Brehmer, a research scientist at Qualcomm AI Research in Amsterdam. We begin our conversation discussing some of the broader problems that causality will help us solve, before turning our focus to Johann’s paper Weakly supervised causal representation learning, which seeks to prove that high-level causal representations are identifiable in weakly supervised settings. We also discuss a few other papers that the team at Qualcomm presented, including neural topological ordering for computation graphs, as well as some of the demos they showcased, which we’ll link to on the show notes page. 
The complete show notes for this episode can be found at twimlai.com/go/605.</description>
      <pubDate>Thu, 15 Dec 2022 18:57:13 -0000</pubDate>
      <itunes:title>Weakly Supervised Causal Representation Learning with Johann Brehmer</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>605</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6185d33a-7ca9-11ed-9bff-df9b83be9642/image/85e695.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re excited to kick off our coverage of the 2022 NeurIPS conference with Johann Brehmer, a research scientist at Qualcomm AI Research in Amsterdam. We begin our conversation discussing some of the broader problems that causality will help us solve, before turning our focus to Johann’s paper Weakly supervised causal representation learning, which seeks to prove that high-level causal representations are identifiable in weakly supervised settings. We also discuss a few other papers that the team at Qualcomm presented, including neural topological ordering for computation graphs, as well as some of the demos they showcased, which we’ll link to on the show notes page. 
The complete show notes for this episode can be found at twimlai.com/go/605.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re excited to kick off our coverage of the 2022 NeurIPS conference with Johann Brehmer, a research scientist at Qualcomm AI Research in Amsterdam. We begin our conversation discussing some of the broader problems that causality will help us solve, before turning our focus to Johann’s paper <em>Weakly supervised causal representation learning</em>, which seeks to prove that high-level causal representations are identifiable in weakly supervised settings. We also discuss a few other papers that the team at Qualcomm presented, including <em>neural topological ordering for computation graphs</em>, as well as some of the demos they showcased, which we’ll link to on the show notes page.</p><p>The complete show notes for this episode can be found at twimlai.com/go/605.</p>]]>
      </content:encoded>
      <itunes:duration>2804</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6185d33a-7ca9-11ed-9bff-df9b83be9642]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6256253212.mp3?updated=1671130544"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Stable Diffusion &amp; Generative AI with Emad Mostaque - #604</title>
      <link>https://twimlai.com/go/604</link>
      <description>Today we’re excited to kick off our 2022 AWS re:Invent series joined by Emad Mostaque, Founder and CEO of Stability.ai. Stability.ai is a very popular name in the generative AI space at the moment, having taken the internet by storm with the release of its Stable Diffusion model just a few months ago. In our conversation with Emad, we discuss the story behind Stability's inception, the model's speed and scale, and the connection between Stable Diffusion and programming. We explore some of the spaces that Emad anticipates being disrupted by this technology, his thoughts on the open-source vs. API debate, how they’re dealing with issues of user safety and artist attribution, and of course, what infrastructure they’re using to stand the model up.
The complete show notes for this episode can be found at https://twimlai.com/go/604.</description>
      <pubDate>Mon, 12 Dec 2022 21:12:27 -0000</pubDate>
      <itunes:title>Stable Diffusion &amp; Generative AI with Emad Mostaque</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>604</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4ddea916-7a5f-11ed-b9de-43f2d1422a7a/image/aab9c4.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re excited to kick off our 2022 AWS re:Invent series joined by Emad Mostaque, Founder and CEO of Stability.ai. Stability.ai is a very popular name in the generative AI space at the moment, having taken the internet by storm with the release of its Stable Diffusion model just a few months ago. In our conversation with Emad, we discuss the story behind Stability's inception, the model's speed and scale, and the connection between Stable Diffusion and programming. We explore some of the spaces that Emad anticipates being disrupted by this technology, his thoughts on the open-source vs. API debate, how they’re dealing with issues of user safety and artist attribution, and of course, what infrastructure they’re using to stand the model up.
The complete show notes for this episode can be found at https://twimlai.com/go/604.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re excited to kick off our 2022 AWS re:Invent series joined by Emad Mostaque, Founder and CEO of Stability.ai. Stability.ai is a very popular name in the generative AI space at the moment, having taken the internet by storm with the release of its Stable Diffusion model just a few months ago. In our conversation with Emad, we discuss the story behind Stability's inception, the model's speed and scale, and the connection between Stable Diffusion and programming. We explore some of the spaces that Emad anticipates being disrupted by this technology, his thoughts on the open-source vs. API debate, how they’re dealing with issues of user safety and artist attribution, and of course, what infrastructure they’re using to stand the model up.</p><p>The complete show notes for this episode can be found at https://twimlai.com/go/604.</p>]]>
      </content:encoded>
      <itunes:duration>2571</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4ddea916-7a5f-11ed-b9de-43f2d1422a7a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6770658893.mp3?updated=1670879179"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Exploring Large Language Models with ChatGPT - #603</title>
      <link>https://twimlai.com/go/603</link>
      <description>Today we're joined by ChatGPT, the latest and coolest large language model developed by OpenAI. In our conversation with ChatGPT, we discuss the background and capabilities of large language models, the potential applications of these models, and some of the technical challenges and open questions in the field. We also explore the role of supervised learning in creating ChatGPT, and the use of PPO in training the model. Finally, we discuss the risks of misuse of large language models, and the best resources for learning more about these models and their applications. Join us for a fascinating conversation with ChatGPT, and learn more about the exciting world of large language models.
The complete show notes for this episode can be found at https://twimlai.com/go/603</description>
      <pubDate>Thu, 08 Dec 2022 16:28:00 -0000</pubDate>
      <itunes:title>Exploring Large Language Models with ChatGPT</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>603</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/70f69bea-76ac-11ed-ae30-8fd7607708f2/image/bfd533.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we're joined by ChatGPT, the latest and coolest large language model developed by OpenAI. In our conversation with ChatGPT, we discuss the background and capabilities of large language models, the potential applications of these models, and some of the technical challenges and open questions in the field. We also explore the role of supervised learning in creating ChatGPT, and the use of PPO in training the model. Finally, we discuss the risks of misuse of large language models, and the best resources for learning more about these models and their applications. Join us for a fascinating conversation with ChatGPT, and learn more about the exciting world of large language models.
The complete show notes for this episode can be found at https://twimlai.com/go/603</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we're joined by ChatGPT, the latest and coolest large language model developed by OpenAI. In our conversation with ChatGPT, we discuss the background and capabilities of large language models, the potential applications of these models, and some of the technical challenges and open questions in the field. We also explore the role of supervised learning in creating ChatGPT, and the use of PPO in training the model. Finally, we discuss the risks of misuse of large language models, and the best resources for learning more about these models and their applications. Join us for a fascinating conversation with ChatGPT, and learn more about the exciting world of large language models.</p><p>The complete show notes for this episode can be found at https://twimlai.com/go/603</p>]]>
      </content:encoded>
      <itunes:duration>2190</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[70f69bea-76ac-11ed-ae30-8fd7607708f2]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8308128115.mp3?updated=1670517462"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Accelerating Intelligence with AI-Generating Algorithms with Jeff Clune - #602</title>
      <link>https://twimlai.com/go/602</link>
      <description>Are AI-generating algorithms the path to artificial general intelligence (AGI)?

Today we’re joined by Jeff Clune, an associate professor of computer science at the University of British Columbia, and faculty member at the Vector Institute. In our conversation with Jeff, we discuss the AI field’s broad, ambitious goal of artificial general intelligence, where we are on the path to achieving it, and his opinion on what we should be doing to get there, specifically by focusing on AI-generating algorithms (AI-GAs). With the goal of creating open-ended algorithms that can learn forever, Jeff shares his three pillars of an AI-GA: meta-learning architectures, meta-learning algorithms, and auto-generating learning environments. Finally, we discuss the inherent safety issues with these learning algorithms, Jeff’s thoughts on how to combat them, and what the not-so-distant future holds for this area of research.

The complete show notes for this episode can be found at twimlai.com/go/602.</description>
      <pubDate>Mon, 05 Dec 2022 19:16:00 -0000</pubDate>
      <itunes:title>Accelerating Intelligence with AI-Generating Algorithms with Jeff Clune</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>602</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/1a505286-74d1-11ed-9541-9f0536d9b69d/image/99da81.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Are AI-generating algorithms the path to artificial general intelligence (AGI)?

Today we’re joined by Jeff Clune, an associate professor of computer science at the University of British Columbia and a faculty member at the Vector Institute. In our conversation with Jeff, we discuss the broad, ambitious goal of the AI field, artificial general intelligence, where we are on the path to achieving it, and his opinion on what we should be doing to get there, specifically focusing on AI-generating algorithms. With the goal of creating open-ended algorithms that can learn forever, Jeff shares his three pillars of an AI-GA: meta-learning architectures, meta-learning algorithms, and auto-generating learning environments. Finally, we discuss the inherent safety issues with these learning algorithms, Jeff’s thoughts on how to combat them, and what the not-so-distant future holds for this area of research.

The complete show notes for this episode can be found at twimlai.com/go/602.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Are AI-generating algorithms the path to artificial general intelligence (AGI)? </p><p><br></p><p>Today we’re joined by Jeff Clune, an associate professor of computer science at the University of British Columbia and a faculty member at the Vector Institute. In our conversation with Jeff, we discuss the broad, ambitious goal of the AI field, artificial general intelligence, where we are on the path to achieving it, and his opinion on what we should be doing to get there, specifically focusing on AI-generating algorithms. With the goal of creating open-ended algorithms that can learn forever, Jeff shares his three pillars of an AI-GA: meta-learning architectures, meta-learning algorithms, and auto-generating learning environments. Finally, we discuss the inherent safety issues with these learning algorithms, Jeff’s thoughts on how to combat them, and what the not-so-distant future holds for this area of research. </p><p><br></p><p>The complete show notes for this episode can be found at twimlai.com/go/602.</p>]]>
      </content:encoded>
      <itunes:duration>3401</itunes:duration>
      <guid isPermaLink="false"><![CDATA[1a505286-74d1-11ed-9541-9f0536d9b69d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1486446010.mp3?updated=1670267996"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:explicit>no</itunes:explicit><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Programmatic Labeling and Data Scaling for Autonomous Commercial Aviation with Cedric Cocaud - #601</title>
      <link>https://twimlai.com/go/601</link>
      <description>Today we’re joined by Cedric Cocaud, the chief engineer of the Wayfinder Group at Acubed, the innovation center for aircraft manufacturer Airbus. In our conversation with Cedric, we explore some of the technical challenges of innovation in the aircraft space, including autonomy. Cedric’s work on Project Vahana, Acubed’s foray into air taxis, attempted to leverage work in the self-driving car industry to develop fully autonomous planes. We discuss some of the algorithms being developed for this work, the data collection process, and Cedric’s thoughts on using synthetic data for these tasks. We also discuss the challenges of labeling the data, including programmatic and automated labeling, and much more.</description>
      <pubDate>Mon, 28 Nov 2022 19:34:00 -0000</pubDate>
      <itunes:title>Programmatic Labeling and Data Scaling for Autonomous Commercial Aviation with Cedric Cocaud</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>601</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/b131132c-6f52-11ed-b860-371b3abf2a28/image/c5ab85.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Cedric Cocaud, the chief engineer of the Wayfinder Group at Acubed, the innovation center for aircraft manufacturer Airbus. In our conversation with Cedric, we explore some of the technical challenges of innovation in the aircraft space, including autonomy. Cedric’s work on Project Vahana, Acubed’s foray into air taxis, attempted to leverage work in the self-driving car industry to develop fully autonomous planes. We discuss some of the algorithms being developed for this work, the data collection process, and Cedric’s thoughts on using synthetic data for these tasks. We also discuss the challenges of labeling the data, including programmatic and automated labeling, and much more.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Cedric Cocaud, the chief engineer of the Wayfinder Group at Acubed, the innovation center for aircraft manufacturer Airbus. In our conversation with Cedric, we explore some of the technical challenges of innovation in the aircraft space, including autonomy. Cedric’s work on Project Vahana, Acubed’s foray into air taxis, attempted to leverage work in the self-driving car industry to develop fully autonomous planes. We discuss some of the algorithms being developed for this work, the data collection process, and Cedric’s thoughts on using synthetic data for these tasks. We also discuss the challenges of labeling the data, including programmatic and automated labeling, and much more.</p>]]>
      </content:encoded>
      <itunes:duration>3280</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b131132c-6f52-11ed-b860-371b3abf2a28]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1769464053.mp3?updated=1669837775"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Engineering Production NLP Systems at T-Mobile with Heather Nolis - #600</title>
      <link>https://twimlai.com/go/600</link>
      <description>Today we’re joined by Heather Nolis, a principal machine learning engineer at T-Mobile. In our conversation with Heather, we explore her machine learning journey at T-Mobile, including their initial proof-of-concept project, whose goal was to put their first real-time deep learning model into production. We discuss the use case, which aimed to build a customer intent model that would pull relevant information about a customer during conversations with customer support, a process that has now become widely known as blank assist. We also discuss the decision to use supervised learning to solve this problem and the challenges they faced when developing a taxonomy. Finally, we explore the tradeoff between small models and uber-large models, the hardware being used to stand up their infrastructure, and how Heather thinks about the age-old question of build vs. buy.</description>
      <pubDate>Mon, 21 Nov 2022 19:49:40 -0000</pubDate>
      <itunes:title>Engineering Production NLP Systems at T-Mobile with Heather Nolis</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>600</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6383f07c-69d5-11ed-9284-63321b2b4185/image/f47323.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Heather Nolis, a principal machine learning engineer at T-Mobile. In our conversation with Heather, we explore her machine learning journey at T-Mobile, including their initial proof-of-concept project, whose goal was to put their first real-time deep learning model into production. We discuss the use case, which aimed to build a customer intent model that would pull relevant information about a customer during conversations with customer support, a process that has now become widely known as blank assist. We also discuss the decision to use supervised learning to solve this problem and the challenges they faced when developing a taxonomy. Finally, we explore the tradeoff between small models and uber-large models, the hardware being used to stand up their infrastructure, and how Heather thinks about the age-old question of build vs. buy.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Heather Nolis, a principal machine learning engineer at T-Mobile. In our conversation with Heather, we explore her machine learning journey at T-Mobile, including their initial proof-of-concept project, whose goal was to put their first real-time deep learning model into production. We discuss the use case, which aimed to build a customer intent model that would pull relevant information about a customer during conversations with customer support, a process that has now become widely known as blank assist. We also discuss the decision to use supervised learning to solve this problem and the challenges they faced when developing a taxonomy. Finally, we explore the tradeoff between small models and uber-large models, the hardware being used to stand up their infrastructure, and how Heather thinks about the age-old question of build vs. buy.</p>]]>
      </content:encoded>
      <itunes:duration>2633</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6383f07c-69d5-11ed-9284-63321b2b4185]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6930154670.mp3?updated=1669060374"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Sim2Real and Optimus, the Humanoid Robot with Ken Goldberg - #599</title>
      <link>https://twimlai.com/go/599</link>
      <description>Today we’re joined by return guest Ken Goldberg, a professor at UC Berkeley and the chief scientist at Ambi Robotics. It’s been a few years since our initial conversation with Ken, so we spent a bit of time talking through the progress robotics has made in the intervening years. We discuss Ken’s recent work, including the paper Autonomously Untangling Long Cables, which won Best Systems Paper at the RSS conference earlier this year, covering the complexity of the problem, why it is classified as a systems challenge, and the advancements in hardware that made solving it possible. We also explore Ken’s thoughts on the push towards simulation by research entities and large tech companies, and the potential for causal modeling to find its way into robotics. Finally, we discuss the recent showcase of Optimus, Tesla and Elon Musk’s “humanoid” robot, and how far we are from it being a viable piece of technology.

The complete show notes for this episode can be found at twimlai.com/go/599.</description>
      <pubDate>Mon, 14 Nov 2022 19:11:32 -0000</pubDate>
      <itunes:title>Sim2Real and Optimus, the Humanoid Robot with Ken Goldberg</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>599</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/52f2334c-644c-11ed-868d-43cf9bf574ef/image/99fbeb.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by return guest Ken Goldberg, a professor at UC Berkeley and the chief scientist at Ambi Robotics. It’s been a few years since our initial conversation with Ken, so we spent a bit of time talking through the progress robotics has made in the intervening years. We discuss Ken’s recent work, including the paper Autonomously Untangling Long Cables, which won Best Systems Paper at the RSS conference earlier this year, covering the complexity of the problem, why it is classified as a systems challenge, and the advancements in hardware that made solving it possible. We also explore Ken’s thoughts on the push towards simulation by research entities and large tech companies, and the potential for causal modeling to find its way into robotics. Finally, we discuss the recent showcase of Optimus, Tesla and Elon Musk’s “humanoid” robot, and how far we are from it being a viable piece of technology.

The complete show notes for this episode can be found at twimlai.com/go/599.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by return guest Ken Goldberg, a professor at UC Berkeley and the chief scientist at Ambi Robotics. It’s been a few years since our initial conversation with Ken, so we spent a bit of time talking through the progress robotics has made in the intervening years. We discuss Ken’s recent work, including the paper <em>Autonomously Untangling Long Cables</em>, which won Best Systems Paper at the RSS conference earlier this year, covering the complexity of the problem, why it is classified as a systems challenge, and the advancements in hardware that made solving it possible. We also explore Ken’s thoughts on the push towards simulation by research entities and large tech companies, and the potential for causal modeling to find its way into robotics. Finally, we discuss the recent showcase of Optimus, Tesla and Elon Musk’s “humanoid” robot, and how far we are from it being a viable piece of technology.</p><p><br></p><p>The complete show notes for this episode can be found at twimlai.com/go/599.</p>]]>
      </content:encoded>
      <itunes:duration>2831</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[52f2334c-644c-11ed-868d-43cf9bf574ef]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5044224980.mp3?updated=1668453151"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Evolution of the NLP Landscape with Oren Etzioni - #598</title>
      <link>https://twimlai.com/go/598</link>
      <description>Today, friend of the show and esteemed guest host John Bohannon is back with another great interview, this time joined by Oren Etzioni, former CEO of the Allen Institute for AI, where he is currently an advisor. In our conversation with Oren, we discuss his philosophy as a researcher and how that has manifested in his pivot to institution builder. We also explore his thoughts on the current landscape of NLP, including the emergence of LLMs and the hype being built up around AI systems from folks like Elon Musk. Finally, we explore some of the research coming out of AI2, including Semantic Scholar, an AI-powered research tool analogous to arXiv, and the somewhat controversial Delphi project, a research prototype designed to model people’s moral judgments on a variety of everyday situations.</description>
      <pubDate>Mon, 07 Nov 2022 20:37:54 -0000</pubDate>
      <itunes:title>The Evolution of the NLP Landscape with Oren Etzioni</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>598</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3a5fb4c4-5edb-11ed-b887-f7b803cc0320/image/1bc32e.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today, friend of the show and esteemed guest host John Bohannon is back with another great interview, this time joined by Oren Etzioni, former CEO of the Allen Institute for AI, where he is currently an advisor. In our conversation with Oren, we discuss his philosophy as a researcher and how that has manifested in his pivot to institution builder. We also explore his thoughts on the current landscape of NLP, including the emergence of LLMs and the hype being built up around AI systems from folks like Elon Musk. Finally, we explore some of the research coming out of AI2, including Semantic Scholar, an AI-powered research tool analogous to arXiv, and the somewhat controversial Delphi project, a research prototype designed to model people’s moral judgments on a variety of everyday situations.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, friend of the show and esteemed guest host John Bohannon is back with another great interview, this time joined by Oren Etzioni, former CEO of the Allen Institute for AI, where he is currently an advisor. In our conversation with Oren, we discuss his philosophy as a researcher and how that has manifested in his pivot to institution builder. We also explore his thoughts on the current landscape of NLP, including the emergence of LLMs and the hype being built up around AI systems from folks like Elon Musk. Finally, we explore some of the research coming out of AI2, including Semantic Scholar, an AI-powered research tool analogous to arXiv, and the somewhat controversial Delphi project, a research prototype designed to model people’s moral judgments on a variety of everyday situations.</p>]]>
      </content:encoded>
      <itunes:duration>3195</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[3a5fb4c4-5edb-11ed-b887-f7b803cc0320]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7359885583.mp3?updated=1667853796"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Live from TWIMLcon! The Great MLOps Debate: End-to-End ML Platforms vs Specialized Tools - #597</title>
      <link>https://twimlai.com/podcast/twimlai/the-great-mlops-debate-end-to-end-ml-platforms-vs-specialized-tools/</link>
      <description>Over the last few years, it’s been established that your ML team needs at least some basic tooling in order to be effective, providing support for various aspects of the machine learning workflow, from data acquisition and management, to model development and optimization, to model deployment and monitoring.
But how do you get there? Many tools available off the shelf, both commercial and open source, can help.
At the extremes, these tools fall into one of two buckets: end-to-end platforms that try to provide support for many aspects of the ML lifecycle, and specialized tools that offer deep functionality in a particular domain or area.
At TWIMLcon: AI Platforms 2022, our panelists debated the merits of these approaches in The Great MLOps Debate: End-to-End ML Platforms vs Specialized Tools.</description>
      <pubDate>Mon, 31 Oct 2022 19:22:33 -0000</pubDate>
      <itunes:title>The Great MLOps Debate: End-to-End ML Platforms vs Specialized Tools</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>597</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/41e47772-56dc-11ed-94f1-b3cc9ec67e39/image/3e782c.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Over the last few years, it’s been established that your ML team needs at least some basic tooling in order to be effective, providing support for various aspects of the machine learning workflow, from data acquisition and management, to model development and optimization, to model deployment and monitoring.
But how do you get there? Many tools available off the shelf, both commercial and open source, can help.
At the extremes, these tools fall into one of two buckets: end-to-end platforms that try to provide support for many aspects of the ML lifecycle, and specialized tools that offer deep functionality in a particular domain or area.
At TWIMLcon: AI Platforms 2022, our panelists debated the merits of these approaches in The Great MLOps Debate: End-to-End ML Platforms vs Specialized Tools.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Over the last few years, it’s been established that your ML team needs at least some basic tooling in order to be effective, providing support for various aspects of the machine learning workflow, from data acquisition and management, to model development and optimization, to model deployment and monitoring.</p><p>But how do you get there? Many tools available off the shelf, both commercial and open source, can help.</p><p>At the extremes, these tools fall into one of two buckets: end-to-end platforms that try to provide support for many aspects of the ML lifecycle, and specialized tools that offer deep functionality in a particular domain or area.</p><p>At TWIMLcon: AI Platforms 2022, our panelists debated the merits of these approaches in <em>The Great MLOps Debate: End-to-End ML Platforms vs Specialized Tools</em>.</p>]]>
      </content:encoded>
      <itunes:duration>2879</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[41e47772-56dc-11ed-94f1-b3cc9ec67e39]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4359601730.mp3?updated=1667243800"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Live from TWIMLcon! You're not Facebook. Architecting MLOps for B2B Use Cases with Jacopo Tagliabue - #596</title>
      <link>https://twimlai.com/podcast/twimlai/youre-not-facebook-architecting-mlops-for-b2b-use-cases-with-jacopo-tagliabue/</link>
      <description>Much of the way we talk and think about MLOps comes from the perspective of large consumer internet companies like Facebook or Google. If you work at a FAANG company, these approaches might work well for you. But what if you work at one of the many small, B2B companies that stand to benefit from the use of machine learning? How should you be thinking about MLOps and the ML lifecycle in that case? In this live podcast interview from TWIMLcon: AI Platforms 2022, Sam Charrington explores these questions with Jacopo Tagliabue, whose perspectives and contributions on scaling down MLOps have served to make the field more accessible and relevant to a wider array of practitioners.</description>
      <pubDate>Mon, 24 Oct 2022 17:37:00 -0000</pubDate>
      <itunes:title>Live from TWIMLcon: AI Platforms 2022 - You're not Facebook. Architecting MLOps for B2B Use Cases with Jacopo Tagliabue</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>596</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/0e675856-53a3-11ed-9678-13021d801ef0/image/99a89e.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Much of the way we talk and think about MLOps comes from the perspective of large consumer internet companies like Facebook or Google. If you work at a FAANG company, these approaches might work well for you. But what if you work at one of the many small, B2B companies that stand to benefit from the use of machine learning? How should you be thinking about MLOps and the ML lifecycle in that case? In this live podcast interview from TWIMLcon: AI Platforms 2022, Sam Charrington explores these questions with Jacopo Tagliabue, whose perspectives and contributions on scaling down MLOps have served to make the field more accessible and relevant to a wider array of practitioners.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Much of the way we talk and think about MLOps comes from the perspective of large consumer internet companies like Facebook or Google. If you work at a FAANG company, these approaches might work well for you. But what if you work at one of the many small, B2B companies that stand to benefit from the use of machine learning? How should you be thinking about MLOps and the ML lifecycle in that case? In this live podcast interview from TWIMLcon: AI Platforms 2022, Sam Charrington explores these questions with Jacopo Tagliabue, whose perspectives and contributions on scaling down MLOps have served to make the field more accessible and relevant to a wider array of practitioners.</p>]]>
      </content:encoded>
      <itunes:duration>2982</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0e675856-53a3-11ed-9678-13021d801ef0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6913215986.mp3?updated=1667243776"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building Foundational ML Platforms with Kubernetes and Kubeflow with Ali Rodell - #595</title>
      <link>https://twimlai.com/podcast/twimlai/building-foundational-ml-platforms-with-kubernetes-and-kubeflow-with-ali-rodell/</link>
      <description>Today we’re joined by Ali Rodell, a senior director of machine learning engineering at Capital One. In our conversation with Ali, we explore his role as the head of model development platforms at Capital One, including how his 25+ years in software development have shaped his view on building platforms and the evolution of the platforms space over the last 10 years. We discuss the importance of a healthy open source tooling ecosystem, Capital One’s use of open source tools like Kubeflow and Kubernetes to build out platforms, and some of the challenges that come along with modifying and customizing these tools to work for him and his teams. Finally, we explore the range of user personas that need to be accounted for when making decisions about tooling, supporting things like Jupyter notebooks and other low-level tools, and how that can be challenging in a highly regulated environment like the financial industry.
The complete show notes for this episode can be found at twimlai.com/go/595</description>
      <pubDate>Mon, 17 Oct 2022 16:57:57 -0000</pubDate>
      <itunes:title>Building Foundational ML Platforms with Kubernetes and Kubeflow with Ali Rodell</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>595</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d52414d4-4e26-11ed-af68-13b3f8b0406b/image/4d3860.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Ali Rodell, a senior director of machine learning engineering at Capital One. In our conversation with Ali, we explore his role as the head of model development platforms at Capital One, including how his 25+ years in software development have shaped his view on building platforms and the evolution of the platforms space over the last 10 years. We discuss the importance of a healthy open source tooling ecosystem, Capital One’s use of open source tools like Kubeflow and Kubernetes to build out platforms, and some of the challenges that come along with modifying and customizing these tools to work for him and his teams. Finally, we explore the range of user personas that need to be accounted for when making decisions about tooling, supporting things like Jupyter notebooks and other low-level tools, and how that can be challenging in a highly regulated environment like the financial industry.
The complete show notes for this episode can be found at twimlai.com/go/595</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Ali Rodell, a senior director of machine learning engineering at Capital One. In our conversation with Ali, we explore his role as the head of model development platforms at Capital One, including how his 25+ years in software development have shaped his view on building platforms and the evolution of the platforms space over the last 10 years. We discuss the importance of a healthy open source tooling ecosystem, Capital One’s use of open source tools like Kubeflow and Kubernetes to build out platforms, and some of the challenges that come along with modifying and customizing these tools to work for him and his teams. Finally, we explore the range of user personas that need to be accounted for when making decisions about tooling, supporting things like Jupyter notebooks and other low-level tools, and how that can be challenging in a highly regulated environment like the financial industry.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/595">twimlai.com/go/595</a></p>]]>
      </content:encoded>
      <itunes:duration>2604</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d52414d4-4e26-11ed-af68-13b3f8b0406b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3593135305.mp3?updated=1666025886"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI-Powered Peer Programming with Vasi Philomin - #594</title>
      <link>https://twimlai.com/podcast/twimlai/ai-powered-peer-programming-with-vasi-philomin/</link>
      <description>Today we’re joined by Vasi Philomin, vice president of AI services at AWS, for our first in-person interview since 2019! In our conversation with Vasi, we discuss the recently released Amazon CodeWhisperer, a developer-focused coding companion. We begin by exploring Vasi’s role and the various products under the banner of cognitive and non-cognitive services, how those came together, where CodeWhisperer fits into the equation, and some of the differences between CodeWhisperer and other recently released coding companions like GitHub Copilot. We also discuss the training corpus for the model, how they’ve dealt with the potential issues of bias that arise when training LLMs on crawled web data, and Vasi’s thoughts on what the path of innovation looks like for CodeWhisperer.
At the end of our conversation, Vasi was gracious enough to share a quick live demo of CodeWhisperer, so you can catch that here.</description>
      <pubDate>Mon, 10 Oct 2022 16:58:58 -0000</pubDate>
      <itunes:title>AI-Powered Peer Programming with Vasi Philomin</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>594</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/dbcf889c-489d-11ed-b702-1b6b64e24184/image/ee7bf1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Vasi Philomin, vice president of AI services at AWS, for our first in-person interview since 2019! In our conversation with Vasi, we discuss the recently released Amazon CodeWhisperer, a developer-focused coding companion. We begin by exploring Vasi’s role and the various products under the banner of cognitive and non-cognitive services, how those came together, where CodeWhisperer fits into the equation, and some of the differences between CodeWhisperer and other recently released coding companions like GitHub Copilot. We also discuss the training corpus for the model, how they’ve dealt with the potential issues of bias that arise when training LLMs on crawled web data, and Vasi’s thoughts on what the path of innovation looks like for CodeWhisperer.
At the end of our conversation, Vasi was gracious enough to share a quick live demo of CodeWhisperer, so you can catch that here.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Vasi Philomin, vice president of AI services at AWS, for our first in-person interview since 2019! In our conversation with Vasi, we discuss the recently released Amazon CodeWhisperer, a developer-focused coding companion. We begin by exploring Vasi’s role and the various products under the banner of cognitive and non-cognitive services, how those came together, where CodeWhisperer fits into the equation, and some of the differences between CodeWhisperer and other recently released coding companions like GitHub Copilot. We also discuss the training corpus for the model, how they’ve dealt with the potential issues of bias that arise when training LLMs on crawled web data, and Vasi’s thoughts on what the path of innovation looks like for CodeWhisperer.</p><p>At the end of our conversation, Vasi was gracious enough to share a quick live demo of CodeWhisperer, so you can catch that <a href="https://www.youtube.com/watch?v=hesiEjgF7Jo">here</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2153</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[dbcf889c-489d-11ed-b702-1b6b64e24184]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9749131712.mp3?updated=1665420057"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Top 10 Reasons to Register for TWIMLcon: AI Platforms 2022!</title>
      <link>https://twimlcon.com</link>
      <description>TWIMLcon: AI Platforms 2022 is just a day away! If you're interested in all things MLOps and Platforms/Infrastructure technology, this is the event for you! Register now at https://twimlcon.com/attend for FREE!</description>
      <pubDate>Mon, 03 Oct 2022 21:26:15 -0000</pubDate>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:subtitle/>
      <itunes:summary>TWIMLcon: AI Platforms 2022 is just a day away! If you're interested in all things MLOps and Platforms/Infrastructure technology, this is the event for you! Register now at https://twimlcon.com/attend for FREE!</itunes:summary>
      <content:encoded>
        <![CDATA[<p>TWIMLcon: AI Platforms 2022 is just a day away! If you're interested in all things MLOps and Platforms/Infrastructure technology, this is the event for you! Register now at https://twimlcon.com/attend for FREE!</p>]]>
      </content:encoded>
      <itunes:duration>243</itunes:duration>
      <guid isPermaLink="false"><![CDATA[90e35eaa-435a-11ed-8a48-eb8016a57d44]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5908005502.mp3?updated=1664829616"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:explicit>no</itunes:explicit><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Applied AI/ML Research at PayPal with Vidyut Naware - #593</title>
      <link>https://twimlai.com/podcast/twimlai/applied-ai-ml-research-at-paypal-with-vidyut-naware/</link>
      <description>Today we’re joined by Vidyut Naware, the director of machine learning and artificial intelligence at PayPal. As the leader of the ML/AI organization at PayPal, Vidyut is responsible for all things applied, from R&amp;D to MLOps infrastructure. In our conversation, we explore the work being done in four major categories: hardware/compute, data, applied responsible AI, and tools, frameworks, and platforms. We also discuss their use of federated learning and delayed supervision models for use cases like anomaly detection and fraud prevention, research into quantum computing and causal inference, as well as applied use cases like graph machine learning and collusion detection.
The complete show notes for this episode can be found at twimlai.com/go/593</description>
      <pubDate>Mon, 26 Sep 2022 20:02:00 -0000</pubDate>
      <itunes:title>Applied AI/ML Research at PayPal with Vidyut Naware</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>593</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d32f4826-3da3-11ed-8a0d-9bc92209a1d9/image/twiml-vidyut-naware-applied-ai-ml-research-at-paypal-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Vidyut Naware, the director of machine learning and artificial intelligence at PayPal. As the leader of the ML/AI organization at PayPal, Vidyut is responsible for all things applied, from R&amp;D to MLOps infrastructure. In our conversation, we explore the work being done in four major categories: hardware/compute, data, applied responsible AI, and tools, frameworks, and platforms. We also discuss their use of federated learning and delayed supervision models for use cases like anomaly detection and fraud prevention, research into quantum computing and causal inference, as well as applied use cases like graph machine learning and collusion detection.
The complete show notes for this episode can be found at twimlai.com/go/593</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Vidyut Naware, the director of machine learning and artificial intelligence at PayPal. As the leader of the ML/AI organization at PayPal, Vidyut is responsible for all things applied, from R&amp;D to MLOps infrastructure. In our conversation, we explore the work being done in four major categories: hardware/compute, data, applied responsible AI, and tools, frameworks, and platforms. We also discuss their use of federated learning and delayed supervision models for use cases like anomaly detection and fraud prevention, research into quantum computing and causal inference, as well as applied use cases like graph machine learning and collusion detection.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/593">twimlai.com/go/593</a></p>]]>
      </content:encoded>
      <itunes:duration>1909</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d32f4826-3da3-11ed-8a0d-9bc92209a1d9]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9579201730.mp3?updated=1664210836"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Assessing Data Quality at Shopify with Wendy Foster - #592</title>
      <link>https://twimlai.com/podcast/twimlai/assessing-data-quality-at-shopify-with-wendy-foster/</link>
      <description>Today we’re back with another installment of our Data-Centric AI series, joined by Wendy Foster, a director of engineering &amp; data science at Shopify. In our conversation with Wendy, we explore the differences between data-centric and model-centric approaches and how they manifest at Shopify, including on her team, which is responsible for utilizing merchant and product data to assist individual vendors on the platform. We discuss how they address, maintain, and improve data quality, emphasizing the importance of coverage and data “freshness” when solving constantly evolving use cases. Finally, we discuss how data is taxonomized at the company and the challenges that present themselves when producing large-scale ML models, future use cases that Wendy expects her team to tackle, and we briefly explore Merlin, Shopify’s new ML platform (that you can hear more about at TWIMLcon!), and how it fits into the broader scope of ML at the company.
The complete show notes for this episode can be found at twimlai.com/go/592</description>
      <pubDate>Mon, 19 Sep 2022 16:48:26 -0000</pubDate>
      <itunes:title>Assessing Data Quality at Shopify with Wendy Foster</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>592</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/b8868b52-3825-11ed-8139-a76ebd2eed22/image/twiml-wendy-foster-assessing-data-quality-at-shopify-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re back with another installment of our Data-Centric AI series, joined by Wendy Foster, a director of engineering &amp; data science at Shopify. In our conversation with Wendy, we explore the differences between data-centric and model-centric approaches and how they manifest at Shopify, including on her team, which is responsible for utilizing merchant and product data to assist individual vendors on the platform. We discuss how they address, maintain, and improve data quality, emphasizing the importance of coverage and data “freshness” when solving constantly evolving use cases. Finally, we discuss how data is taxonomized at the company and the challenges that present themselves when producing large-scale ML models, future use cases that Wendy expects her team to tackle, and we briefly explore Merlin, Shopify’s new ML platform (that you can hear more about at TWIMLcon!), and how it fits into the broader scope of ML at the company.
The complete show notes for this episode can be found at twimlai.com/go/592</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re back with another installment of our Data-Centric AI series, joined by Wendy Foster, a director of engineering &amp; data science at Shopify. In our conversation with Wendy, we explore the differences between data-centric and model-centric approaches and how they manifest at Shopify, including on her team, which is responsible for utilizing merchant and product data to assist individual vendors on the platform. We discuss how they address, maintain, and improve data quality, emphasizing the importance of coverage and data “freshness” when solving constantly evolving use cases. Finally, we discuss how data is taxonomized at the company and the challenges that present themselves when producing large-scale ML models, future use cases that Wendy expects her team to tackle, and we briefly explore Merlin, Shopify’s new ML platform (that you can hear more about at <a href="http://twimlcon.com">TWIMLcon</a>!), and how it fits into the broader scope of ML at the company.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/592">twimlai.com/go/592</a></p>]]>
      </content:encoded>
      <itunes:duration>2189</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b8868b52-3825-11ed-8139-a76ebd2eed22]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9843111618.mp3?updated=1663602973"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Transformers for Tabular Data at Capital One with Bayan Bruss - #591</title>
      <link>https://twimlai.com/podcast/twimlai/transformers-for-tabular-data-at-capital-one-with-bayan-bruss/</link>
      <description>Today we’re joined by Bayan Bruss, a Sr. director of applied ML research at Capital One. In our conversation with Bayan, we dig into his work applying various deep learning techniques to tabular data, including taking advancements made in other areas like graph CNNs and other traditional graph mining algorithms and applying them to financial services applications. We discuss why, despite a “flood” of innovation in the field and its broad use across businesses, work on tabular data doesn’t elicit much fanfare, Bayan’s experience with the difficulty of making deep learning work on tabular data, and what opportunities the emergence of multi-modality and transformer models has presented for the field. We also explore a pair of papers from Bayan’s team, focused on both transformers and transfer learning for tabular data.
The complete show notes for this episode can be found at twimlai.com/go/591</description>
      <pubDate>Mon, 12 Sep 2022 18:20:46 -0000</pubDate>
      <itunes:title>Transformers for Tabular Data at Capital One with Bayan Bruss</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>591</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4c938dce-32ac-11ed-8d82-ef0e71360142/image/twiml-bayan-bruss-transformers-for-tabular-data-at-capital-one-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Bayan Bruss, a Sr. director of applied ML research at Capital One. In our conversation with Bayan, we dig into his work applying various deep learning techniques to tabular data, including taking advancements made in other areas like graph CNNs and other traditional graph mining algorithms and applying them to financial services applications. We discuss why, despite a “flood” of innovation in the field and its broad use across businesses, work on tabular data doesn’t elicit much fanfare, Bayan’s experience with the difficulty of making deep learning work on tabular data, and what opportunities the emergence of multi-modality and transformer models has presented for the field. We also explore a pair of papers from Bayan’s team, focused on both transformers and transfer learning for tabular data.
The complete show notes for this episode can be found at twimlai.com/go/591</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Bayan Bruss, a Sr. director of applied ML research at Capital One. In our conversation with Bayan, we dig into his work applying various deep learning techniques to tabular data, including taking advancements made in other areas like graph CNNs and other traditional graph mining algorithms and applying them to financial services applications. We discuss why, despite a “flood” of innovation in the field and its broad use across businesses, work on tabular data doesn’t elicit much fanfare, Bayan’s experience with the difficulty of making deep learning work on tabular data, and what opportunities the emergence of multi-modality and transformer models has presented for the field. We also explore a pair of papers from Bayan’s team, focused on both transformers and transfer learning for tabular data.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/591">twimlai.com/go/591</a></p>]]>
      </content:encoded>
      <itunes:duration>2815</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4c938dce-32ac-11ed-8d82-ef0e71360142]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5255571464.mp3?updated=1662995412"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Understanding Collective Insect Communication with ML, w/ Orit Peleg - #590</title>
      <link>https://twimlai.com/podcast/twimlai/understanding-collective-insect-communication-with-ml-w-orig-peleg/</link>
      <description>Today we’re joined by Orit Peleg, an assistant professor at the University of Colorado, Boulder. Orit’s work focuses on understanding the behavior of disordered living systems, by merging tools from physics, biology, engineering, and computer science. In our conversation, we discuss how Orit found herself exploring problems of swarming behaviors and their relationship to distributed computing system architecture and spiking neurons. We look at two specific areas of research, the first focused on the patterns observed in firefly species, how the data is collected, and the types of algorithms used for optimization. Finally, we look at how Orit’s research with fireflies translates to a completely different insect, the honeybee, and what the next steps are for investigating these and other insect families.

The complete show notes for this episode can be found at twimlai.com/go/590</description>
      <pubDate>Mon, 05 Sep 2022 16:00:00 -0000</pubDate>
      <itunes:title>Understanding Collective Insect Communication with ML, w/ Orit Peleg</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>590</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6bb64696-2c72-11ed-8e18-a788b665f662/image/twiml-orit-peleg-understanding-collective-insect-communication-with-ml-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Orit Peleg, an assistant professor at the University of Colorado, Boulder. Orit’s work focuses on understanding the behavior of disordered living systems, by merging tools from physics, biology, engineering, and computer science. In our conversation, we discuss how Orit found herself exploring problems of swarming behaviors and their relationship to distributed computing system architecture and spiking neurons. We look at two specific areas of research, the first focused on the patterns observed in firefly species, how the data is collected, and the types of algorithms used for optimization. Finally, we look at how Orit’s research with fireflies translates to a completely different insect, the honeybee, and what the next steps are for investigating these and other insect families.

The complete show notes for this episode can be found at twimlai.com/go/590</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Orit Peleg, an assistant professor at the University of Colorado, Boulder. Orit’s work focuses on understanding the behavior of disordered living systems by merging tools from physics, biology, engineering, and computer science. In our conversation, we discuss how Orit found herself exploring problems of swarming behaviors and their relationship to distributed computing system architecture and spiking neurons. We look at two specific areas of research, the first focused on the patterns observed in firefly species, how the data is collected, and the types of algorithms used for optimization. Finally, we look at how Orit’s research with fireflies translates to a completely different insect, the honeybee, and what the next steps are for investigating these and other insect families.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/590">twimlai.com/go/590</a></p>]]>
      </content:encoded>
      <itunes:duration>2233</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6bb64696-2c72-11ed-8e18-a788b665f662]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2203349859.mp3?updated=1662386514"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Multimodal, Multi-Lingual NLP at Hugging Face with John Bohannon and Douwe Kiela - #589</title>
      <link>https://twimlai.com/podcast/twimlai/multimodal-multi-lingual-nlp-at-hugging-face-with-john-bohannon-and-douwe-kiela/</link>
      <description>In this extra special episode of the TWIML AI Podcast, friend of the show John Bohannon leads a jam-packed conversation with Hugging Face’s recently appointed head of research, Douwe Kiela. In our conversation with Douwe, we explore his role at the company, how his perception of Hugging Face has changed since joining, and what research entails at the company. We discuss the emergence of the transformer model and the rise of “BERT-ology,” the recent shift to solving more multimodal problems, the importance of this subfield as one of the “Grand Directions” of Hugging Face’s research agenda, and the significance of BLOOM, the open-access Multilingual Language Model that was the output of the BigScience project. Finally, we get into how Douwe’s background in philosophy shapes his view of current projects, as well as his projections for the future of NLP and multimodal ML.
The complete show notes for this episode can be found at twimlai.com/go/589</description>
      <pubDate>Mon, 29 Aug 2022 15:59:56 -0000</pubDate>
      <itunes:title>Multimodal, Multi-Lingual NLP at Hugging Face with John Bohannon and Douwe Kiela</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>589</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/fd0086a0-27a1-11ed-bb43-0b53ac3031a3/image/twiml-douwe-kiela-multimodal-multi-lingual-nlp-at-hugging-face-b-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In this extra special episode of the TWIML AI Podcast, friend of the show John Bohannon leads a jam-packed conversation with Hugging Face’s recently appointed head of research, Douwe Kiela. In our conversation with Douwe, we explore his role at the company, how his perception of Hugging Face has changed since joining, and what research entails at the company. We discuss the emergence of the transformer model and the rise of “BERT-ology,” the recent shift to solving more multimodal problems, the importance of this subfield as one of the “Grand Directions” of Hugging Face’s research agenda, and the significance of BLOOM, the open-access Multilingual Language Model that was the output of the BigScience project. Finally, we get into how Douwe’s background in philosophy shapes his view of current projects, as well as his projections for the future of NLP and multimodal ML.
The complete show notes for this episode can be found at twimlai.com/go/589</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this extra special episode of the TWIML AI Podcast, friend of the show John Bohannon leads a jam-packed conversation with Hugging Face’s recently appointed head of research, Douwe Kiela. In our conversation with Douwe, we explore his role at the company, how his perception of Hugging Face has changed since joining, and what research entails at the company. We discuss the emergence of the transformer model and the rise of “BERT-ology,” the recent shift to solving more multimodal problems, the importance of this subfield as one of the “Grand Directions” of Hugging Face’s research agenda, and the significance of BLOOM, the open-access Multilingual Language Model that was the output of the BigScience project. Finally, we get into how Douwe’s background in philosophy shapes his view of current projects, as well as his projections for the future of NLP and multimodal ML.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/589">twimlai.com/go/589</a></p>]]>
      </content:encoded>
      <itunes:duration>3192</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fd0086a0-27a1-11ed-bb43-0b53ac3031a3]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1809197965.mp3?updated=1661781520"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Synthetic Data Generation for Robotics with Bill Vass - #588</title>
      <link>https://twimlai.com/podcast/twimlai/synthetic-data-generation-for-robotics-with-bill-vass/</link>
      <description>Today we’re joined by Bill Vass, a VP of engineering at Amazon Web Services. Bill spoke at the most recent AWS re:MARS conference, where he delivered an engineering keynote focused on some recent updates to Amazon SageMaker, including its support for synthetic data generation. In our conversation, we discuss all things synthetic data, including the importance of data quality when creating synthetic data and some of the use cases this data is being created for, including warehouses and, in the case of one of their more recent acquisitions, iRobot, synthetic house generation. We also explore Astro, the household robot for home monitoring, including the types of models it is running, what type of on-device sensor suite it has, the relationship between the robot and the cloud, and the role of simulation.
The complete show notes for this episode can be found at twimlai.com/go/588</description>
      <pubDate>Mon, 22 Aug 2022 18:02:15 -0000</pubDate>
      <itunes:title>Synthetic Data Generation for Robotics with Bill Vass</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>588</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/faedcda0-2238-11ed-bc35-178e4332fa33/image/twiml-bill-vass-synthetic-data-generation-for-robotics-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Bill Vass, a VP of engineering at Amazon Web Services. Bill spoke at the most recent AWS re:MARS conference, where he delivered an engineering keynote focused on some recent updates to Amazon SageMaker, including its support for synthetic data generation. In our conversation, we discuss all things synthetic data, including the importance of data quality when creating synthetic data and some of the use cases this data is being created for, including warehouses and, in the case of one of their more recent acquisitions, iRobot, synthetic house generation. We also explore Astro, the household robot for home monitoring, including the types of models it is running, what type of on-device sensor suite it has, the relationship between the robot and the cloud, and the role of simulation.
The complete show notes for this episode can be found at twimlai.com/go/588</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Bill Vass, a VP of engineering at Amazon Web Services. Bill spoke at the most recent AWS re:MARS conference, where he delivered an engineering keynote focused on some recent updates to Amazon SageMaker, including its support for synthetic data generation. In our conversation, we discuss all things synthetic data, including the importance of data quality when creating synthetic data and some of the use cases this data is being created for, including warehouses and, in the case of one of their more recent acquisitions, iRobot, synthetic house generation. We also explore Astro, the household robot for home monitoring, including the types of models it is running, what type of on-device sensor suite it has, the relationship between the robot and the cloud, and the role of simulation.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/588">twimlai.com/go/588</a></p>]]>
      </content:encoded>
      <itunes:duration>2177</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[faedcda0-2238-11ed-bc35-178e4332fa33]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3680125629.mp3?updated=1661186664"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Multi-Device, Multi-Use-Case Optimization with Jeff Gehlhaar - #587</title>
      <link>https://twimlai.com/podcast/twimlai/multi-device-multi-use-case-optimization-with-jeff-gehlhaar/</link>
      <description>Today we’re joined by Jeff Gehlhaar, vice president of technology at Qualcomm Technologies. In our annual conversation with Jeff, we dig into the relationship between Jeff’s team on the product side and the research team, many of whom we’ve had on the podcast over the last few years. We discuss the challenges of real-world neural network deployment and doing quantization on-device, as well as a look at the tools that power their AI Stack. We also explore a few interesting automotive use cases, including automated driver assistance, and what advancements Jeff is looking forward to seeing in the next year.
The complete show notes for this episode can be found at twimlai.com/go/587</description>
      <pubDate>Mon, 15 Aug 2022 18:17:25 -0000</pubDate>
      <itunes:title>Multi-Device, Multi-Use-Case Optimization with Jeff Gehlhaar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>587</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5e2743f0-1ca4-11ed-bcd8-0f6a4c96d5f6/image/twiml-jeff-gehlhaar-multi-device-multi-use-case-optimization-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Jeff Gehlhaar, vice president of technology at Qualcomm Technologies. In our annual conversation with Jeff, we dig into the relationship between Jeff’s team on the product side and the research team, many of whom we’ve had on the podcast over the last few years. We discuss the challenges of real-world neural network deployment and doing quantization on-device, as well as a look at the tools that power their AI Stack. We also explore a few interesting automotive use cases, including automated driver assistance, and what advancements Jeff is looking forward to seeing in the next year.
The complete show notes for this episode can be found at twimlai.com/go/587</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Jeff Gehlhaar, vice president of technology at Qualcomm Technologies. In our annual conversation with Jeff, we dig into the relationship between Jeff’s team on the product side and the research team, many of whom we’ve had on the podcast over the last few years. We discuss the challenges of real-world neural network deployment and doing quantization on-device, as well as a look at the tools that power their AI Stack. We also explore a few interesting automotive use cases, including automated driver assistance, and what advancements Jeff is looking forward to seeing in the next year.</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/587">twimlai.com/go/587</a></p>]]>
      </content:encoded>
      <itunes:duration>2610</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5e2743f0-1ca4-11ed-bcd8-0f6a4c96d5f6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9167325985.mp3?updated=1660573080"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Causal Conceptions of Fairness and their Consequences with Sharad Goel - #586</title>
      <link>https://twimlai.com/podcast/twimlai/causal-conceptions-of-fairness-and-their-consequences-with-sharad-goel/</link>
      <description>Today we close out our ICML 2022 coverage joined by Sharad Goel, a professor of public policy at Harvard University. In our conversation with Sharad, we discuss his Outstanding Paper award winner Causal Conceptions of Fairness and their Consequences, which seeks to understand what it means to apply causality to the idea of fairness in ML. We explore the two broad classes of intent that have been conceptualized under the subfield of causal fairness and how they differ, the distinct ways causality is treated in economic and statistical contexts vs a computer science and algorithmic context, and why policies created in the context of causal definitions are broadly suboptimal.
The complete show notes for this episode can be found at twimlai.com/go/586</description>
      <pubDate>Mon, 08 Aug 2022 16:57:58 -0000</pubDate>
      <itunes:title>Causal Conceptions of Fairness and their Consequences with Sharad Goel</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>586</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/c9092da2-1725-11ed-9472-035d8ba1f039/image/twiml-sharad-goel-causal-conceptions-of-fairness-and-their-consequences-sq__1_.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we close out our ICML 2022 coverage joined by Sharad Goel, a professor of public policy at Harvard University. In our conversation with Sharad, we discuss his Outstanding Paper award winner Causal Conceptions of Fairness and their Consequences, which seeks to understand what it means to apply causality to the idea of fairness in ML. We explore the two broad classes of intent that have been conceptualized under the subfield of causal fairness and how they differ, the distinct ways causality is treated in economic and statistical contexts vs a computer science and algorithmic context, and why policies created in the context of causal definitions are broadly suboptimal.
The complete show notes for this episode can be found at twimlai.com/go/586</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we close out our ICML 2022 coverage joined by Sharad Goel, a professor of public policy at Harvard University. In our conversation with Sharad, we discuss his Outstanding Paper award winner <a href="https://arxiv.org/abs/2207.05302"><em>Causal Conceptions of Fairness and their Consequences</em></a>, which seeks to understand what it means to apply causality to the idea of fairness in ML. We explore the two broad classes of intent that have been conceptualized under the subfield of causal fairness and how they differ, the distinct ways causality is treated in economic and statistical contexts vs a computer science and algorithmic context, and why policies created in the context of causal definitions are broadly suboptimal.</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/586">twimlai.com/go/586</a></p>]]>
      </content:encoded>
      <itunes:duration>2237</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c9092da2-1725-11ed-9472-035d8ba1f039]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2714753035.mp3?updated=1659976341"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Brain-Inspired Hardware and Algorithm Co-Design with Melika Payvand - #585</title>
      <link>https://twimlai.com/podcast/twimlai/brain-inspired-hardware-and-algorithm-co-design-with-melika-payvand/</link>
      <description>Today we continue our ICML coverage joined by Melika Payvand, a research scientist at the Institute of Neuroinformatics at the University of Zurich and ETH Zurich. Melika spoke at the Hardware Aware Efficient Training (HAET) Workshop, delivering a keynote on Brain-inspired hardware and algorithm co-design for low power online training on the edge. In our conversation with Melika, we explore her work at the intersection of ML and neuroinformatics, what makes the proposed architecture “brain-inspired”, and how techniques like online learning fit into the picture. We also discuss the characteristics of the devices that are running the algorithms she’s creating, and the challenges of adapting online learning-style algorithms to this hardware.
The complete show notes for this episode can be found at twimlai.com/go/585</description>
      <pubDate>Mon, 01 Aug 2022 18:01:25 -0000</pubDate>
      <itunes:title>Brain-Inspired Hardware and Algorithm Co-Design with Melika Payvand</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>585</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/11b3f74e-11a4-11ed-a60b-03c84b0bddf8/image/twiml-melika-payvand-brain-inspired-hardware-and-algorithm-co-design-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we continue our ICML coverage joined by Melika Payvand, a research scientist at the Institute of Neuroinformatics at the University of Zurich and ETH Zurich. Melika spoke at the Hardware Aware Efficient Training (HAET) Workshop, delivering a keynote on Brain-inspired hardware and algorithm co-design for low power online training on the edge. In our conversation with Melika, we explore her work at the intersection of ML and neuroinformatics, what makes the proposed architecture “brain-inspired”, and how techniques like online learning fit into the picture. We also discuss the characteristics of the devices that are running the algorithms she’s creating, and the challenges of adapting online learning-style algorithms to this hardware.
The complete show notes for this episode can be found at twimlai.com/go/585</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our ICML coverage joined by Melika Payvand, a research scientist at the Institute of Neuroinformatics at the University of Zurich and ETH Zurich. Melika spoke at the Hardware Aware Efficient Training (HAET) Workshop, delivering a keynote on <strong><em>Brain-inspired hardware and algorithm co-design for low power online training on the edge. </em></strong>In our conversation with Melika, we explore her work at the intersection of ML and neuroinformatics, what makes the proposed architecture “brain-inspired”, and how techniques like online learning fit into the picture. We also discuss the characteristics of the devices that are running the algorithms she’s creating, and the challenges of adapting online learning-style algorithms to this hardware.</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/585">twimlai.com/go/585</a></p>]]>
      </content:encoded>
      <itunes:duration>2640</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[11b3f74e-11a4-11ed-a60b-03c84b0bddf8]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6945094266.mp3?updated=1659367900"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Equivariant Priors for Compressed Sensing with Arash Behboodi - #584</title>
      <link>https://twimlai.com/podcast/twimlai/equivariant-priors-for-compressed-sensing-with-arash-behboodi/</link>
      <description>Today we’re joined by Arash Behboodi, a machine learning researcher at Qualcomm Technologies. In our conversation with Arash, we explore his paper Equivariant Priors for Compressed Sensing with Unknown Orientation, which proposes using equivariant generative models as a prior, showing that signals with unknown orientations can be recovered with iterative gradient descent on the latent space of these models, and provides additional theoretical recovery guarantees. We discuss the differences between compression and compressed sensing, how he was able to evolve a traditional VAE architecture to understand equivariance, and some of the research areas he’s applying this work to, including cryo-electron microscopy. We also discuss a few of the other papers that his colleagues have submitted to the conference, including Overcoming Oscillations in Quantization-Aware Training, Variational On-the-Fly Personalization, and CITRIS: Causal Identifiability from Temporal Intervened Sequences.

The complete show notes for this episode can be found at twimlai.com/go/584</description>
      <pubDate>Mon, 25 Jul 2022 17:26:00 -0000</pubDate>
      <itunes:title>Equivariant Priors for Compressed Sensing with Arash Behboodi</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>584</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/fb00227a-0c21-11ed-b351-7b0ee305a5e2/image/twiml-arash-behboodi-equivariant-priors-for-compressed-sensing-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Arash Behboodi, a machine learning researcher at Qualcomm Technologies. In our conversation with Arash, we explore his paper Equivariant Priors for Compressed Sensing with Unknown Orientation, which proposes using equivariant generative models as a prior, showing that signals with unknown orientations can be recovered with iterative gradient descent on the latent space of these models, and provides additional theoretical recovery guarantees. We discuss the differences between compression and compressed sensing, how he was able to evolve a traditional VAE architecture to understand equivariance, and some of the research areas he’s applying this work to, including cryo-electron microscopy. We also discuss a few of the other papers that his colleagues have submitted to the conference, including Overcoming Oscillations in Quantization-Aware Training, Variational On-the-Fly Personalization, and CITRIS: Causal Identifiability from Temporal Intervened Sequences.

The complete show notes for this episode can be found at twimlai.com/go/584</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Arash Behboodi, a machine learning researcher at Qualcomm Technologies. In our conversation with Arash, we explore his paper <a href="https://arxiv.org/abs/2206.14069">Equivariant Priors for Compressed Sensing with Unknown Orientation</a>, which proposes using equivariant generative models as a prior, showing that signals with unknown orientations can be recovered with iterative gradient descent on the latent space of these models, and provides additional theoretical recovery guarantees. We discuss the differences between compression and compressed sensing, how he was able to evolve a traditional VAE architecture to understand equivariance, and some of the research areas he’s applying this work to, including cryo-electron microscopy. We also discuss a few of the other papers that his colleagues have submitted to the conference, including <a href="https://arxiv.org/abs/2203.11086">Overcoming Oscillations in Quantization-Aware Training</a>, Variational On-the-Fly Personalization, and <a href="https://arxiv.org/abs/2202.03169">CITRIS: Causal Identifiability from Temporal Intervened Sequences</a>.</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/584">twimlai.com/go/584</a></p>]]>
      </content:encoded>
      <itunes:duration>2370</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fb00227a-0c21-11ed-b351-7b0ee305a5e2]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3343945308.mp3?updated=1658766769"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Managing Data Labeling Ops for Success with Audrey Smith - #583</title>
      <link>https://twimlai.com/podcast/twimlai/managing-data-labeling-ops-for-success-with-audrey-smith/</link>
      <description>Today we continue our Data-Centric AI Series joined by Audrey Smith, the COO at MLtwist, and a recent participant in our panel on DCAI. In our conversation, we do a deep dive into data labeling for ML, exploring the typical journey for an organization to get started with labeling, her experience when making decisions around in-house vs outsourced labeling, and what commitments need to be made to achieve high-quality labels. We discuss how organizations that have made significant investments in labelops typically function, how someone working on an in-house labeling team approaches new projects, the ethical considerations that need to be taken for remote labeling workforces, and much more!
The complete show notes for this episode can be found at twimlai.com/go/583</description>
      <pubDate>Mon, 18 Jul 2022 17:18:28 -0000</pubDate>
      <itunes:title>Managing Data Labeling Ops for Success with Audrey Smith</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>583</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/a9f7cc4a-06b1-11ed-b695-079edc0cdf95/image/twiml-audrey-smith-managing-data-labeling-ops-for-success-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we continue our Data-Centric AI Series joined by Audrey Smith, the COO at MLtwist, and a recent participant in our panel on DCAI. In our conversation, we do a deep dive into data labeling for ML, exploring the typical journey for an organization to get started with labeling, her experience when making decisions around in-house vs outsourced labeling, and what commitments need to be made to achieve high-quality labels. We discuss how organizations that have made significant investments in labelops typically function, how someone working on an in-house labeling team approaches new projects, the ethical considerations that need to be taken for remote labeling workforces, and much more!
The complete show notes for this episode can be found at twimlai.com/go/583</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our Data-Centric AI Series joined by Audrey Smith, the COO at MLtwist, and a recent participant in our <a href="http://twimlai.com/dcaipanel">panel</a> on DCAI. In our conversation, we do a deep dive into data labeling for ML, exploring the typical journey for an organization to get started with labeling, her experience when making decisions around in-house vs outsourced labeling, and what commitments need to be made to achieve high-quality labels. We discuss how organizations that have made significant investments in labelops typically function, how someone working on an in-house labeling team approaches new projects, the ethical considerations that need to be taken for remote labeling workforces, and much more!</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/583">twimlai.com/go/583</a></p>]]>
      </content:encoded>
      <itunes:duration>2846</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a9f7cc4a-06b1-11ed-b695-079edc0cdf95]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2169604639.mp3?updated=1658160205"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Engineering an ML-Powered Developer-First Search Engine with Richard Socher - #582</title>
      <link>https://twimlai.com/podcast/twimlai/engineering-an-ml-powered-developer-first-search-engine-with-richard-socher/</link>
      <description>Today we’re joined by Richard Socher, the CEO of You.com. In our conversation with Richard, we explore the inspiration and motivation behind the You.com search engine, and how it differs from the traditional Google search engine experience. We discuss some of the various ways that machine learning is used across the platform, including how they surface relevant search results, and some of the recent additions like code completion and a text generator that can write complete essays and blog posts. Finally, we talk through some of the projects we covered in our last conversation with Richard, namely his work on Salesforce’s AI Economist project.
The complete show notes for this episode can be found at twimlai.com/go/582</description>
      <pubDate>Mon, 11 Jul 2022 17:09:00 -0000</pubDate>
      <itunes:title>Engineering an ML-Powered Developer-First Search Engine with Richard Socher</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>582</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/673e73fc-0122-11ed-b8ea-97a77182534a/image/twiml-richard-socher-engineering-an-ml-powered-developer-first-search-engine-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Richard Socher, the CEO of You.com. In our conversation with Richard, we explore the inspiration and motivation behind the You.com search engine, and how it differs from the traditional Google search engine experience. We discuss some of the various ways that machine learning is used across the platform, including how they surface relevant search results, and some of the recent additions like code completion and a text generator that can write complete essays and blog posts. Finally, we talk through some of the projects we covered in our last conversation with Richard, namely his work on Salesforce’s AI Economist project.
The complete show notes for this episode can be found at twimlai.com/go/582</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Richard Socher, the CEO of You.com. In our conversation with Richard, we explore the inspiration and motivation behind the You.com search engine, and how it differs from the traditional Google search engine experience. We discuss some of the various ways that machine learning is used across the platform, including how they surface relevant search results, and some of the recent additions like code completion and a text generator that can write complete essays and blog posts. Finally, we talk through some of the projects we covered in our last conversation with Richard, namely his work on Salesforce’s AI Economist project.</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/582">twimlai.com/go/582</a></p>]]>
      </content:encoded>
      <itunes:duration>2792</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[673e73fc-0122-11ed-b8ea-97a77182534a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6476385861.mp3?updated=1657816487"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>On The Path Towards Robot Vision with Aljosa Osep - #581</title>
      <link>https://twimlai.com/podcast/twimlai/on-the-path-towards-robot-vision-with-aljosa-osep/</link>
      <description>Today we wrap up our coverage of the 2022 CVPR conference joined by Aljosa Osep, a postdoc at the Technical University of Munich &amp; Carnegie Mellon University. In our conversation with Aljosa, we explore his broader research interests in achieving robot vision, and his vision for what it will look like when that goal is achieved. The first paper we dig into is Text2Pos: Text-to-Point-Cloud Cross-Modal Localization, which proposes a cross-modal localization module that learns to align textual descriptions with localization cues in a coarse-to-fine manner. Next up, we explore the paper Forecasting from LiDAR via Future Object Detection, which proposes an end-to-end approach for detection and motion forecasting based on raw sensor measurement as opposed to ground truth tracks. Finally, we discuss Aljosa’s third and final paper Opening up Open-World Tracking, which proposes a new benchmark to analyze existing efforts in multi-object tracking and constructs a baseline for these tasks.
The complete show notes for this episode can be found at twimlai.com/go/581</description>
      <pubDate>Mon, 04 Jul 2022 14:55:42 -0000</pubDate>
      <itunes:title>On The Path Towards Robot Vision with Aljosa Osep</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>581</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/392c1a38-fba4-11ec-9efc-9b39834f8698/image/twiml-aljosa-osep-on-the-path-towards-robot-vision-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we wrap up our coverage of the 2022 CVPR conference joined by Aljosa Osep, a postdoc at the Technical University of Munich &amp; Carnegie Mellon University. In our conversation with Aljosa, we explore his broader research interests in achieving robot vision, and his vision for what it will look like when that goal is achieved. The first paper we dig into is Text2Pos: Text-to-Point-Cloud Cross-Modal Localization, which proposes a cross-modal localization module that learns to align textual descriptions with localization cues in a coarse-to-fine manner. Next up, we explore the paper Forecasting from LiDAR via Future Object Detection, which proposes an end-to-end approach for detection and motion forecasting based on raw sensor measurement as opposed to ground truth tracks. Finally, we discuss Aljosa’s third and final paper Opening up Open-World Tracking, which proposes a new benchmark to analyze existing efforts in multi-object tracking and constructs a baseline for these tasks.
The complete show notes for this episode can be found at twimlai.com/go/581</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we wrap up our coverage of the 2022 CVPR conference joined by Aljosa Osep, a postdoc at the Technical University of Munich &amp; Carnegie Mellon University. In our conversation with Aljosa, we explore his broader research interests in achieving robot vision, and his vision for what it will look like when that goal is achieved. The first paper we dig into is <a href="https://arxiv.org/abs/2203.15125">Text2Pos: Text-to-Point-Cloud Cross-Modal Localization</a>, which proposes a cross-modal localization module that learns to align textual descriptions with localization cues in a coarse-to-fine manner. Next up, we explore the paper <a href="https://arxiv.org/abs/2203.16297">Forecasting from LiDAR via Future Object Detection</a>, which proposes an end-to-end approach for detection and motion forecasting based on raw sensor measurement as opposed to ground truth tracks. Finally, we discuss Aljosa’s third and final paper <a href="https://arxiv.org/abs/2104.11221">Opening up Open-World Tracking</a>, which proposes a new benchmark to analyze existing efforts in multi-object tracking and constructs a baseline for these tasks.</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/581">twimlai.com/go/581</a></p>]]>
      </content:encoded>
      <itunes:duration>2853</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[392c1a38-fba4-11ec-9efc-9b39834f8698]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5601181899.mp3?updated=1656944629"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>More Language, Less Labeling with Kate Saenko - #580</title>
      <link>https://twimlai.com/podcast/twimlai/more-language-less-labeling-with-kate-saenko/</link>
      <description>Today we continue our CVPR series joined by Kate Saenko, an associate professor at Boston University and a consulting professor for the MIT-IBM Watson AI Lab. In our conversation with Kate, we explore her research in multimodal learning, which she spoke about at the Multimodal Learning and Applications Workshop, one of a whopping 6 workshops she spoke at. We discuss the emergence of multimodal learning, the current research frontier, and Kate’s thoughts on the inherent bias in LLMs and how to deal with it. We also talk through some of the challenges that come up when building out applications, including the cost of labeling, and some of the methods she’s had success with. Finally, we discuss Kate’s perspective on the monopolizing of computing resources for “foundational” models, and her paper Unsupervised Domain Generalization by Learning a Bridge Across Domains.
The complete show notes for this episode can be found at twimlai.com/go/580</description>
      <pubDate>Mon, 27 Jun 2022 16:30:45 -0000</pubDate>
      <itunes:title>More Language, Less Labeling with Kate Saenko</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>580</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/f74f4d50-f62c-11ec-870a-479f9506d8d8/image/twiml-kate-saenko-more-language-less-labeling-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we continue our CVPR series joined by Kate Saenko, an associate professor at Boston University and a consulting professor for the MIT-IBM Watson AI Lab. In our conversation with Kate, we explore her research in multimodal learning, which she spoke about at the Multimodal Learning and Applications Workshop, one of a whopping 6 workshops she spoke at. We discuss the emergence of multimodal learning, the current research frontier, and Kate’s thoughts on the inherent bias in LLMs and how to deal with it. We also talk through some of the challenges that come up when building out applications, including the cost of labeling, and some of the methods she’s had success with. Finally, we discuss Kate’s perspective on the monopolizing of computing resources for “foundational” models, and her paper Unsupervised Domain Generalization by Learning a Bridge Across Domains.
The complete show notes for this episode can be found at twimlai.com/go/580</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our CVPR series joined by Kate Saenko, an associate professor at Boston University and a consulting professor for the MIT-IBM Watson AI Lab. In our conversation with Kate, we explore her research in multimodal learning, which she spoke about at the <a href="https://mula-workshop.github.io/">Multimodal Learning and Applications Workshop</a>, one of a whopping 6 workshops she spoke at. We discuss the emergence of multimodal learning, the current research frontier, and Kate’s thoughts on the inherent bias in LLMs and how to deal with it. We also talk through some of the challenges that come up when building out applications, including the cost of labeling, and some of the methods she’s had success with. Finally, we discuss Kate’s perspective on the monopolizing of computing resources for “foundational” models, and her paper <a href="http://arxiv.org/abs/2112.02300"><em>Unsupervised Domain Generalization by Learning a Bridge Across Domains</em></a>.</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/580">twimlai.com/go/580</a></p>]]>
      </content:encoded>
      <itunes:duration>2821</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f74f4d50-f62c-11ec-870a-479f9506d8d8]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1978141586.mp3?updated=1656343653"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Optical Flow Estimation, Panoptic Segmentation, and Vision Transformers with Fatih Porikli - #579</title>
      <link>https://twimlai.com/podcast/twimlai/optical-flow-estimation-panoptic-segmentation-and-vision-transformers-with-fatih-porikli/</link>
      <description>Today we kick off our annual coverage of the CVPR conference joined by Fatih Porikli, Senior Director of Engineering at Qualcomm AI Research. In our conversation with Fatih, we explore a trio of CVPR-accepted papers, as well as a pair of upcoming workshops at the event. The first paper, Panoptic, Instance and Semantic Relations: A Relational Context Encoder to Enhance Panoptic Segmentation, presents a novel framework to integrate semantic and instance contexts for panoptic segmentation. Next up, we discuss Imposing Consistency for Optical Flow Estimation, a paper that introduces novel and effective consistency strategies for optical flow estimation. The final paper we discuss is IRISformer: Dense Vision Transformers for Single-Image Inverse Rendering in Indoor Scenes, which proposes a transformer architecture to simultaneously estimate depths, normals, spatially-varying albedo, roughness, and lighting from a single image of an indoor scene. For each paper, we explore the motivations and challenges and get concrete examples to demonstrate each problem and solution presented.

The complete show notes for this episode can be found at twimlai.com/go/579</description>
      <pubDate>Mon, 20 Jun 2022 17:18:00 -0000</pubDate>
      <itunes:title>Optical Flow Estimation, Panoptic Segmentation, and Vision Transformers with Fatih Porikli</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>579</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/aab69e3c-f0ad-11ec-9f09-1b7537a718da/image/twiml-faith-porikli-optical-flow-estimation-panoptic-segmentation-and-vision-transformers-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we kick off our annual coverage of the CVPR conference joined by Fatih Porikli, Senior Director of Engineering at Qualcomm AI Research. In our conversation with Fatih, we explore a trio of CVPR-accepted papers, as well as a pair of upcoming workshops at the event. The first paper, Panoptic, Instance and Semantic Relations: A Relational Context Encoder to Enhance Panoptic Segmentation, presents a novel framework to integrate semantic and instance contexts for panoptic segmentation. Next up, we discuss Imposing Consistency for Optical Flow Estimation, a paper that introduces novel and effective consistency strategies for optical flow estimation. The final paper we discuss is IRISformer: Dense Vision Transformers for Single-Image Inverse Rendering in Indoor Scenes, which proposes a transformer architecture to simultaneously estimate depths, normals, spatially-varying albedo, roughness, and lighting from a single image of an indoor scene. For each paper, we explore the motivations and challenges and get concrete examples to demonstrate each problem and solution presented.

The complete show notes for this episode can be found at twimlai.com/go/579</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we kick off our annual coverage of the CVPR conference joined by Fatih Porikli, Senior Director of Engineering at Qualcomm AI Research. In our conversation with Fatih, we explore a trio of CVPR-accepted papers, as well as a pair of upcoming workshops at the event. The first paper, <a href="https://arxiv.org/abs/2204.05370">Panoptic, Instance and Semantic Relations: A Relational Context Encoder to Enhance Panoptic Segmentation</a>, presents a novel framework to integrate semantic and instance contexts for panoptic segmentation. Next up, we discuss <a href="https://arxiv.org/abs/2204.07262">Imposing Consistency for Optical Flow Estimation</a>, a paper that introduces novel and effective consistency strategies for optical flow estimation. The final paper we discuss is <a href="http://www.porikli.com/mysite/pdfs/porikli%202022%20-%20IRISformer%20Dense%20vision%20transformers%20for%20single-image%20inverse%20rendering%20in%20indoor%20scenes.pdf">IRISformer: Dense Vision Transformers for Single-Image Inverse Rendering in Indoor Scenes</a>, which proposes a transformer architecture to simultaneously estimate depths, normals, spatially-varying albedo, roughness, and lighting from a single image of an indoor scene. For each paper, we explore the motivations and challenges and get concrete examples to demonstrate each problem and solution presented.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/579">twimlai.com/go/579</a></p>]]>
      </content:encoded>
      <itunes:duration>3077</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[aab69e3c-f0ad-11ec-9f09-1b7537a718da]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7412560198.mp3?updated=1656004317"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Data Governance for Data Science with Adam Wood - #578</title>
      <link>https://twimlai.com/podcast/twimlai/data-governance-for-data-science-with-adam-wood/</link>
      <description>Today we’re joined by Adam Wood, Director of Data Governance and Data Quality at Mastercard. In our conversation with Adam, we explore the challenges that come along with data governance at a global scale, including dealing with regional regulations like GDPR and federating records at scale. We discuss the role of feature stores in keeping track of data lineage and how Adam and his team have dealt with the challenges of metadata management, how large organizations like Mastercard are dealing with enabling feature reuse, and the steps they take to alleviate bias, especially in scenarios like acquisitions. Finally, we explore data quality for data science and why Adam sees it as an encouraging area of growth within the company, as well as the investments they’ve made in tooling around data management, catalog, feature management, and more.
The complete show notes for this episode can be found at twimlai.com/go/578</description>
      <pubDate>Mon, 13 Jun 2022 16:38:42 -0000</pubDate>
      <itunes:title>Data Governance for Data Science with Adam Wood</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>578</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/f31fff22-eb1e-11ec-b791-6b716e5f2dbe/image/twiml-adam-wood-data-governance-for-data-science-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Adam Wood, Director of Data Governance and Data Quality at Mastercard. In our conversation with Adam, we explore the challenges that come along with data governance at a global scale, including dealing with regional regulations like GDPR and federating records at scale. We discuss the role of feature stores in keeping track of data lineage and how Adam and his team have dealt with the challenges of metadata management, how large organizations like Mastercard are dealing with enabling feature reuse, and the steps they take to alleviate bias, especially in scenarios like acquisitions. Finally, we explore data quality for data science and why Adam sees it as an encouraging area of growth within the company, as well as the investments they’ve made in tooling around data management, catalog, feature management, and more.
The complete show notes for this episode can be found at twimlai.com/go/578</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Adam Wood, Director of Data Governance and Data Quality at Mastercard. In our conversation with Adam, we explore the challenges that come along with data governance at a global scale, including dealing with regional regulations like GDPR and federating records at scale. We discuss the role of feature stores in keeping track of data lineage and how Adam and his team have dealt with the challenges of metadata management, how large organizations like Mastercard are dealing with enabling feature reuse, and the steps they take to alleviate bias, especially in scenarios like acquisitions. Finally, we explore data quality for data science and why Adam sees it as an encouraging area of growth within the company, as well as the investments they’ve made in tooling around data management, catalog, feature management, and more.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/578">twimlai.com/go/578</a></p>]]>
      </content:encoded>
      <itunes:duration>2390</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f31fff22-eb1e-11ec-b791-6b716e5f2dbe]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6420733548.mp3?updated=1655131926"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Feature Platforms for Data-Centric AI with Mike Del Balso - #577</title>
      <link>https://twimlai.com/podcast/twimlai/feature-platforms-for-data-centric-ai-mike-del-balso/</link>
      <description>In the latest installment of our Data-Centric AI series, we’re joined by a friend of the show, Mike Del Balso, Co-founder and CEO of Tecton. If you’ve heard any of our other conversations with Mike, you know we spend a lot of time discussing feature stores, or as he now refers to them, feature platforms. We explore the current complexity of data infrastructure broadly and how that has changed over the last five years, as well as the maturation of streaming data platforms. We discuss the wide vs deep paradox that exists around ML tooling, and the idea around the “ML Flywheel”, a strategy that leverages data to accelerate machine learning. Finally, we spend time discussing internal ML team construction, some of the challenges that organizations face when building their ML platforms teams, and how they can avoid the pitfalls as they arise.
The complete show notes for this episode can be found at twimlai.com/go/577</description>
      <pubDate>Mon, 06 Jun 2022 19:28:59 -0000</pubDate>
      <itunes:title>Feature Platforms for Data-Centric AI with Mike Del Balso</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>577</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/78a141bc-e5a5-11ec-b74c-d7b1ae07956f/image/twiml-mike-del-balso-feature-platforms-for-data-centric-ai-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In the latest installment of our Data-Centric AI series, we’re joined by a friend of the show, Mike Del Balso, Co-founder and CEO of Tecton. If you’ve heard any of our other conversations with Mike, you know we spend a lot of time discussing feature stores, or as he now refers to them, feature platforms. We explore the current complexity of data infrastructure broadly and how that has changed over the last five years, as well as the maturation of streaming data platforms. We discuss the wide vs deep paradox that exists around ML tooling, and the idea around the “ML Flywheel”, a strategy that leverages data to accelerate machine learning. Finally, we spend time discussing internal ML team construction, some of the challenges that organizations face when building their ML platforms teams, and how they can avoid the pitfalls as they arise.
The complete show notes for this episode can be found at twimlai.com/go/577</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In the latest installment of our Data-Centric AI series, we’re joined by a friend of the show, Mike Del Balso, Co-founder and CEO of Tecton. If you’ve heard any of our other conversations with Mike, you know we spend a lot of time discussing feature stores, or as he now refers to them, feature platforms. We explore the current complexity of data infrastructure broadly and how that has changed over the last five years, as well as the maturation of streaming data platforms. We discuss the wide vs deep paradox that exists around ML tooling, and the idea around the “ML Flywheel”, a strategy that leverages data to accelerate machine learning. Finally, we spend time discussing internal ML team construction, some of the challenges that organizations face when building their ML platforms teams, and how they can avoid the pitfalls as they arise.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/577">twimlai.com/go/577</a></p>]]>
      </content:encoded>
      <itunes:duration>2763</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[78a141bc-e5a5-11ec-b74c-d7b1ae07956f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1943469301.mp3?updated=1654531488"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Fallacy of "Ground Truth" with Shayan Mohanty - #576</title>
      <link>https://twimlai.com/podcast/twimlai/the-fallacy-of-ground-turth-with-shayan-mohanty/</link>
      <description>Today we continue our Data-centric AI series joined by Shayan Mohanty, CEO at Watchful. In our conversation with Shayan, we focus on the data labeling aspect of the machine learning process, and ways that a data-centric approach could add value and reduce cost by multiple orders of magnitude. Shayan helps us define “data-centric”, while discussing the main challenges that organizations face when dealing with labeling, how these problems are currently being solved, and how techniques like active learning and weak supervision could be used to more effectively label. We also explore the idea of machine teaching, which focuses on using techniques that make the model training process more efficient, and what organizations need to be successful when trying to make the aforementioned mindset shift to DCAI. 

The complete show notes for this episode can be found at twimlai.com/go/576</description>
      <pubDate>Mon, 30 May 2022 19:21:51 -0000</pubDate>
      <itunes:title>The Fallacy of "Ground Truth" with Shayan Mohanty</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>576</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/af8cc574-e024-11ec-ac97-df971b319981/image/twiml-shayan-mohanty-fallacy_of_ground_truth-sq.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we continue our Data-centric AI series joined by Shayan Mohanty, CEO at Watchful. In our conversation with Shayan, we focus on the data labeling aspect of the machine learning process, and ways that a data-centric approach could add value and reduce cost by multiple orders of magnitude. Shayan helps us define “data-centric”, while discussing the main challenges that organizations face when dealing with labeling, how these problems are currently being solved, and how techniques like active learning and weak supervision could be used to more effectively label. We also explore the idea of machine teaching, which focuses on using techniques that make the model training process more efficient, and what organizations need to be successful when trying to make the aforementioned mindset shift to DCAI. 

The complete show notes for this episode can be found at twimlai.com/go/576</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our Data-centric AI series joined by Shayan Mohanty, CEO at Watchful. In our conversation with Shayan, we focus on the data labeling aspect of the machine learning process, and ways that a data-centric approach could add value and reduce cost by multiple orders of magnitude. Shayan helps us define “data-centric”, while discussing the main challenges that organizations face when dealing with labeling, how these problems are currently being solved, and how techniques like active learning and weak supervision could be used to more effectively label. We also explore the idea of machine teaching, which focuses on using techniques that make the model training process more efficient, and what organizations need to be successful when trying to make the aforementioned mindset shift to DCAI.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/576">twimlai.com/go/576</a></p>]]>
      </content:encoded>
      <itunes:duration>3070</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[af8cc574-e024-11ec-ac97-df971b319981]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3166472959.mp3?updated=1653938198"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Principle-centric AI with Adrien Gaidon - #575</title>
      <link>https://twimlai.com/podcast/twimlai/principal-centric-ai/</link>
      <description>This week, we continue our conversations around the topic of Data-Centric AI joined by a friend of the show, Adrien Gaidon, the head of ML research at the Toyota Research Institute (TRI). In our chat, Adrien expresses a fourth, somewhat contrarian, viewpoint to the three prominent schools of thought that organizations tend to fall into, as well as a great story about how the breakthrough came via an unlikely source. We explore his principle-centric approach to machine learning as well as the role of self-supervised machine learning and synthetic data in this and other research threads. Make sure you’re following along with the entire DCAI series at twimlai.com/go/dcai.
The complete show notes for this episode can be found at twimlai.com/go/575</description>
      <pubDate>Mon, 23 May 2022 18:49:46 -0000</pubDate>
      <itunes:title>Principle-centric AI with Adrien Gaidon</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>575</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/df5aeac4-daac-11ec-963f-67ce2b6e0070/image/twiml-adrien-gaidon-principal-centric-ai-sq__1_.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>This week, we continue our conversations around the topic of Data-Centric AI joined by a friend of the show, Adrien Gaidon, the head of ML research at the Toyota Research Institute (TRI). In our chat, Adrien expresses a fourth, somewhat contrarian, viewpoint to the three prominent schools of thought that organizations tend to fall into, as well as a great story about how the breakthrough came via an unlikely source. We explore his principle-centric approach to machine learning as well as the role of self-supervised machine learning and synthetic data in this and other research threads. Make sure you’re following along with the entire DCAI series at twimlai.com/go/dcai.
The complete show notes for this episode can be found at twimlai.com/go/575</itunes:summary>
      <content:encoded>
        <![CDATA[<p>This week, we continue our conversations around the topic of Data-Centric AI joined by a friend of the show, Adrien Gaidon, the head of ML research at the Toyota Research Institute (TRI). In our chat, Adrien expresses a fourth, somewhat contrarian, viewpoint to the three prominent schools of thought that organizations tend to fall into, as well as a great story about how the breakthrough came via an unlikely source. We explore his principle-centric approach to machine learning as well as the role of self-supervised machine learning and synthetic data in this and other research threads. Make sure you’re following along with the entire DCAI series at <a href="https://twimlai.com/go/dcai">twimlai.com/go/dcai</a>.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/575">twimlai.com/go/575</a></p>]]>
      </content:encoded>
      <itunes:duration>2862</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[df5aeac4-daac-11ec-963f-67ce2b6e0070]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6448544023.mp3?updated=1653319957"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Data Debt in Machine Learning with D. Sculley - #574</title>
      <link>https://twimlai.com/podcast/twimlai/data-debt-in-machine-learning/</link>
      <description>Today we kick things off with a conversation with D. Sculley, a director on the Google Brain team. Many listeners of today’s show will know D. from his work on the paper, Hidden Technical Debt in Machine Learning Systems, and of course, the infamous diagram. D. has recently translated the idea of technical debt into data debt, something we spend a bit of time on in the interview.
We discuss his view of the concept of DCAI, where debt fits into the conversation of data quality, and what a shift towards data-centrism looks like in a world of increasingly large models, e.g., GPT-3 and the recent PaLM models. We also explore common sources of data debt, what the community can do and has done to mitigate these issues, the usefulness of causal inference graphs in this work, and much more! If you enjoyed this interview or want to hear more on this topic, check back on the DCAI series page weekly at https://twimlai.com/podcast/twimlai/series/data-centric-ai.
The complete show notes for this episode can be found at twimlai.com/go/574</description>
      <pubDate>Thu, 19 May 2022 19:31:00 -0000</pubDate>
      <itunes:title>Data Debt in Machine Learning with D. Sculley</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>574</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/102cf3c6-d789-11ec-b799-679b3b5a7051/image/twiml-d-sculley-data-debt-machine-learning-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we kick things off with a conversation with D. Sculley, a director on the Google Brain team. Many listeners of today’s show will know D. from his work on the paper, Hidden Technical Debt in Machine Learning Systems, and of course, the infamous diagram. D. has recently translated the idea of technical debt into data debt, something we spend a bit of time on in the interview.
We discuss his view of the concept of DCAI, where debt fits into the conversation of data quality, and what a shift towards data-centrism looks like in a world of increasingly large models, e.g., GPT-3 and the recent PaLM models. We also explore common sources of data debt, what the community can do and has done to mitigate these issues, the usefulness of causal inference graphs in this work, and much more! If you enjoyed this interview or want to hear more on this topic, check back on the DCAI series page weekly at https://twimlai.com/podcast/twimlai/series/data-centric-ai.
The complete show notes for this episode can be found at twimlai.com/go/574</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we kick things off with a conversation with D. Sculley, a director on the Google Brain team. Many listeners of today’s show will know D. from his work on the paper, <a href="http://papers.neurips.cc/paper/5656-hidden-technical-debt-in-machine-learning-systems.pdf"><em>Hidden Technical Debt in Machine Learning Systems</em></a>, and of course, the infamous diagram. D. has recently translated the idea of technical debt into data debt, something we spend a bit of time on in the interview.</p><p>We discuss his view of the concept of DCAI, where debt fits into the conversation of data quality, and what a shift towards data-centrism looks like in a world of increasingly large models, e.g., GPT-3 and the recent PaLM models. We also explore common sources of data debt, what the community can do and has done to mitigate these issues, the usefulness of causal inference graphs in this work, and much more! If you enjoyed this interview or want to hear more on this topic, check back on the DCAI series page weekly at <a href="https://twimlai.com/podcast/twimlai/series/data-centric-ai">https://twimlai.com/podcast/twimlai/series/data-centric-ai</a>.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/574">twimlai.com/go/574</a></p>]]>
      </content:encoded>
      <itunes:duration>2215</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[102cf3c6-d789-11ec-b799-679b3b5a7051]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9846508088.mp3?updated=1652989017"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI for Enterprise Decisioning at Scale with Rob Walker - #573</title>
      <link>https://twimlai.com/podcast/twimlai/ai-for-enterprise-decisioning-at-scale/</link>
      <description>Today we’re joined by Rob Walker, VP of decisioning &amp; analytics and GM of one-to-one customer engagement at Pegasystems. Rob, who you might know from his previous appearances on the podcast, joins us to discuss his work on AI and ML in the context of customer engagement and decisioning, the various problems that need to be solved, including the “next best” problem. We explore the distinction between the idea of the next best action and determining it from a recommender system, how machine learning and heuristics currently coexist in engagements, scaling model evaluation, and some of the challenges they’re facing when dealing with problems of responsible AI and how they’re managed. Finally, we spend a few minutes digging into the upcoming PegaWorld conference, and what attendees should anticipate at the event.
The complete show notes for this episode can be found at twimlai.com/go/573</description>
      <pubDate>Mon, 16 May 2022 15:36:00 -0000</pubDate>
      <itunes:title>AI for Enterprise Decisioning at Scale with Rob Walker</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>573</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d5472c68-d521-11ec-a690-9feac6c2665f/image/twiml-rob-walker-ai-enterprise-decisioning-scale-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Rob Walker, VP of decisioning &amp; analytics and GM of one-to-one customer engagement at Pegasystems. Rob, who you might know from his previous appearances on the podcast, joins us to discuss his work on AI and ML in the context of customer engagement and decisioning, the various problems that need to be solved, including the “next best” problem. We explore the distinction between the idea of the next best action and determining it from a recommender system, how machine learning and heuristics currently coexist in engagements, scaling model evaluation, and some of the challenges they’re facing when dealing with problems of responsible AI and how they’re managed. Finally, we spend a few minutes digging into the upcoming PegaWorld conference, and what attendees should anticipate at the event.
The complete show notes for this episode can be found at twimlai.com/go/573</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Rob Walker, VP of decisioning &amp; analytics and GM of one-to-one customer engagement at Pegasystems. Rob, who you might know from his previous appearances on the podcast, joins us to discuss his work on AI and ML in the context of customer engagement and decisioning, the various problems that need to be solved, including the “next best” problem. We explore the distinction between the idea of the next best action and determining it from a recommender system, how machine learning and heuristics currently coexist in engagements, scaling model evaluation, and some of the challenges they’re facing when dealing with problems of responsible AI and how they’re managed. Finally, we spend a few minutes digging into the upcoming PegaWorld conference, and what attendees should anticipate at the event.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/573">twimlai.com/go/573</a></p>]]>
      </content:encoded>
      <itunes:duration>2356</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d5472c68-d521-11ec-a690-9feac6c2665f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2318949961.mp3?updated=1652715169"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Data Rights, Quantification and Governance for Ethical AI with Margaret Mitchell - #572</title>
      <link>https://twimlai.com/podcast/twimlai/data-rights-quantification-and-governance-for-ethical-ai-with-margaret-mitchell/</link>
      <description>Today we close out our coverage of the ICLR series joined by Meg Mitchell, chief ethics scientist and researcher at Hugging Face. In our conversation with Meg, we discuss her participation in the WikiM3L Workshop, as well as her transition into her new role at Hugging Face, which has afforded her the ability to prioritize coding in her work around AI ethics. We explore her thoughts on the work happening in the fields of data curation and data governance, her interest in the inclusive sharing of datasets and creation of models that don't disproportionately underperform or exploit subpopulations, and how data collection practices have changed over the years.
We also touch on changes to data protection laws happening in some pretty uncertain places, the evolution of her work on Model Cards, and how she’s using this and recent Data Cards work to lower the barrier to entry to responsibly informed development and sharing of data.
The complete show notes for this episode can be found at twimlai.com/go/572</description>
      <pubDate>Thu, 12 May 2022 16:43:39 -0000</pubDate>
      <itunes:title>Data Rights, Quantification and Governance for Ethical AI with Margaret Mitchell</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>572</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/a9df273e-d20f-11ec-b834-1b5f8fc44474/image/twiml-margaret-mitchell-data-rights-quantification-governance-ethical-ai-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we close out our coverage of the ICLR series joined by Meg Mitchell, chief ethics scientist and researcher at Hugging Face. In our conversation with Meg, we discuss her participation in the WikiM3L Workshop, as well as her transition into her new role at Hugging Face, which has afforded her the ability to prioritize coding in her work around AI ethics. We explore her thoughts on the work happening in the fields of data curation and data governance, her interest in the inclusive sharing of datasets and creation of models that don't disproportionately underperform or exploit subpopulations, and how data collection practices have changed over the years. 
We also touch on changes to data protection laws happening in some pretty uncertain places, the evolution of her work on Model Cards, and how she’s using this and recent Data Cards work to lower the barrier to entry to responsibly informed development of data and sharing of data.
The complete show notes for this episode can be found at twimlai.com/go/572</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we close out our coverage of the ICLR series joined by Meg Mitchell, chief ethics scientist and researcher at Hugging Face. In our conversation with Meg, we discuss her participation in the WikiM3L Workshop, as well as her transition into her new role at Hugging Face, which has afforded her the ability to prioritize coding in her work around AI ethics. We explore her thoughts on the work happening in the fields of data curation and data governance, her interest in the inclusive sharing of datasets and creation of models that don't disproportionately underperform or exploit subpopulations, and how data collection practices have changed over the years. </p><p>We also touch on changes to data protection laws happening in some pretty uncertain places, the evolution of her work on Model Cards, and how she’s using this and recent Data Cards work to lower the barrier to entry to responsibly informed development of data and sharing of data.</p><p>The complete show notes for this episode can be found at twimlai.com/go/572</p>]]>
      </content:encoded>
      <itunes:duration>2516</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a9df273e-d20f-11ec-b834-1b5f8fc44474]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3630063058.mp3?updated=1652373889"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Studying Machine Intelligence with Been Kim - #571</title>
      <link>https://twimlai.com/podcast/twimlai/studying-machine-intelligence-with-been-kim/</link>
      <description>Today we continue our ICLR coverage joined by Been Kim, a staff research scientist at Google Brain and an ICLR 2022 Invited Speaker. Been, whose research has historically focused on interpretability in machine learning, delivered the keynote Beyond interpretability: developing a language to shape our relationships with AI, which explores the need to study AI machines as scientific objects, both in isolation and with humans, an approach that will not only provide principles for building tools but is also necessary to take our working relationship with AI to the next level.
Before we dig into Been’s talk, she characterizes where we are as an industry and community with interpretability, and what the current state of the art is for interpretability techniques. We explore how the Gestalt principles appear in neural networks, Been’s choice to characterize communication with machines as a language as opposed to a set of principles or foundational understanding, and much much more.
The complete show notes for this episode can be found at twimlai.com/go/571</description>
      <pubDate>Mon, 09 May 2022 15:59:00 -0000</pubDate>
      <itunes:title>Studying Machine Intelligence with Been Kim</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>571</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/fca24638-cfa1-11ec-9040-bf1ee1eeafee/image/twiml-been-kim-studying-machine-intelligence-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we continue our ICLR coverage joined by Been Kim, a staff research scientist at Google Brain and an ICLR 2022 Invited Speaker. Been, whose research has historically focused on interpretability in machine learning, delivered the keynote Beyond interpretability: developing a language to shape our relationships with AI, which explores the need to study AI machines as scientific objects, both in isolation and with humans, an approach that will not only provide principles for building tools but is also necessary to take our working relationship with AI to the next level.
Before we dig into Been’s talk, she characterizes where we are as an industry and community with interpretability, and what the current state of the art is for interpretability techniques. We explore how the Gestalt principles appear in neural networks, Been’s choice to characterize communication with machines as a language as opposed to a set of principles or foundational understanding, and much much more.
The complete show notes for this episode can be found at twimlai.com/go/571</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our ICLR coverage joined by Been Kim, a staff research scientist at Google Brain and an ICLR 2022 Invited Speaker. Been, whose research has historically focused on interpretability in machine learning, delivered the keynote <em>Beyond interpretability: developing a language to shape our relationships with AI</em>, which explores the need to study AI machines as scientific objects, both in isolation and with humans, an approach that will not only provide principles for building tools but is also necessary to take our working relationship with AI to the next level.</p><p>Before we dig into Been’s talk, she characterizes where we are as an industry and community with interpretability, and what the current state of the art is for interpretability techniques. We explore how the Gestalt principles appear in neural networks, Been’s choice to characterize communication with machines as a language as opposed to a set of principles or foundational understanding, and much much more.</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/571">twimlai.com/go/571</a></p>]]>
      </content:encoded>
      <itunes:duration>3163</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fca24638-cfa1-11ec-9040-bf1ee1eeafee]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4267556716.mp3?updated=1652106035"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Advances in Neural Compression with Auke Wiggers - #570</title>
      <link>https://twimlai.com/podcast/twimlai/advances-in-neural-compression-with-auke-wiggers/</link>
      <description>Today we’re joined by Auke Wiggers, an AI research scientist at Qualcomm. In our conversation with Auke, we discuss his team’s recent research on data compression using generative models. We discuss the relationship between historical compression research and the current trend of neural compression, and the benefit of neural codecs, which learn to compress data from examples. We also explore the performance evaluation process and the recent developments that show that these models can operate in real-time on a mobile device. Finally, we discuss another ICLR paper, “Transformer-based transform coding”, that proposes a vision transformer-based architecture for image and video coding, and some of his team’s other accepted works at the conference. 
The complete show notes for this episode can be found at twimlai.com/go/570</description>
      <pubDate>Mon, 02 May 2022 16:00:00 -0000</pubDate>
      <itunes:title>Advances in Neural Compression with Auke Wiggers</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>570</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5069bdf8-c72a-11ec-af81-9391e70b46cd/image/twiml-auke-wiggers-advances-neural-compression-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Auke Wiggers, an AI research scientist at Qualcomm. In our conversation with Auke, we discuss his team’s recent research on data compression using generative models. We discuss the relationship between historical compression research and the current trend of neural compression, and the benefit of neural codecs, which learn to compress data from examples. We also explore the performance evaluation process and the recent developments that show that these models can operate in real-time on a mobile device. Finally, we discuss another ICLR paper, “Transformer-based transform coding”, that proposes a vision transformer-based architecture for image and video coding, and some of his team’s other accepted works at the conference. 
The complete show notes for this episode can be found at twimlai.com/go/570</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Auke Wiggers, an AI research scientist at Qualcomm. In our conversation with Auke, we discuss his team’s recent research on data compression using generative models. We discuss the relationship between historical compression research and the current trend of neural compression, and the benefit of neural codecs, which learn to compress data from examples. We also explore the performance evaluation process and the recent developments that show that these models can operate in real-time on a mobile device. Finally, we discuss another ICLR paper, “Transformer-based transform coding”, that proposes a vision transformer-based architecture for image and video coding, and some of his team’s other accepted works at the conference. </p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/570">twimlai.com/go/570</a></p>]]>
      </content:encoded>
      <itunes:duration>2259</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5069bdf8-c72a-11ec-af81-9391e70b46cd]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4995340338.mp3?updated=1653338112"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Mixture-of-Experts and Trends in Large-Scale Language Modeling with Irwan Bello - #569</title>
      <link>https://twimlai.com/mixture-of-experts-and-trends-in-large-scale-language-modeling-with-irwan-bello</link>
      <description>Today we’re joined by Irwan Bello, formerly a research scientist at Google Brain, and now on the founding team at a stealth AI startup. We begin our conversation with an exploration of Irwan’s recent paper, Designing Effective Sparse Expert Models, which acts as a design guide for building sparse large language model architectures. We discuss mixture of experts as a technique, the scalability of this method, and its applicability beyond the NLP tasks and datasets this experiment was benchmarked against. We also explore Irwan’s interest in the research areas of alignment and retrieval, talking through interesting lines of work for each area, including instruction tuning and direct alignment.
The complete show notes for this episode can be found at twimlai.com/go/569</description>
      <pubDate>Mon, 25 Apr 2022 16:55:00 -0000</pubDate>
      <itunes:title>Mixture-of-Experts and Trends in Large-Scale Language Modeling with Irwan Bello</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>569</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/16725c28-c49a-11ec-8d33-0344711978a8/image/twiml-irwan-bello-mixture-experts-trends-large-scale-language-modeling-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Irwan Bello, formerly a research scientist at Google Brain, and now on the founding team at a stealth AI startup. We begin our conversation with an exploration of Irwan’s recent paper, Designing Effective Sparse Expert Models, which acts as a design guide for building sparse large language model architectures. We discuss mixture of experts as a technique, the scalability of this method, and its applicability beyond the NLP tasks and datasets this experiment was benchmarked against. We also explore Irwan’s interest in the research areas of alignment and retrieval, talking through interesting lines of work for each area, including instruction tuning and direct alignment.
The complete show notes for this episode can be found at twimlai.com/go/569</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Irwan Bello, formerly a research scientist at Google Brain, and now on the founding team at a stealth AI startup. We begin our conversation with an exploration of Irwan’s recent paper, <a href="https://arxiv.org/abs/2202.08906">Designing Effective Sparse Expert Models</a>, which acts as a design guide for building sparse large language model architectures. We discuss mixture of experts as a technique, the scalability of this method, and its applicability beyond the NLP tasks and datasets this experiment was benchmarked against. We also explore Irwan’s interest in the research areas of alignment and retrieval, talking through interesting lines of work for each area, including instruction tuning and direct alignment.</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/569">twimlai.com/go/569</a></p>]]>
      </content:encoded>
      <itunes:duration>2782</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[16725c28-c49a-11ec-8d33-0344711978a8]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8997480332.mp3?updated=1650905108"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Daring to DAIR: Distributed AI Research with Timnit Gebru - #568</title>
      <link>https://twimlai.com/daring-to-dair-distributed-ai-research-with-timnit-gebru</link>
      <description>Today we’re joined by friend of the show Timnit Gebru, the founder and executive director of DAIR, the Distributed Artificial Intelligence Research Institute. In our conversation with Timnit, we discuss her journey to create DAIR, its goals, and some of the challenges she’s faced along the way. We start in the obvious place: Timnit being “resignated” from Google after writing and publishing a paper detailing the dangers of large language models, the fallout from that paper and her firing, and the eventual founding of DAIR. We discuss the importance of the “distributed” nature of the institute, how they’re going about figuring out what is in scope and out of scope for the institute’s research charter, and what building an institution means to her. We also explore the importance of independent alternatives to traditional research structures, whether we should be pessimistic about the impact of internal ethics and responsible AI teams in industry due to the overwhelming power they wield, examples she looks to of what not to do when building out the institute, and much much more!

The complete show notes for this episode can be found at twimlai.com/go/568</description>
      <pubDate>Mon, 18 Apr 2022 16:00:00 -0000</pubDate>
      <itunes:title>Daring to DAIR: Distributed AI Research with Timnit Gebru</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>568</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/21ee40f8-bc16-11ec-a489-5b97557dd51e/image/twiml-timnit-gebru-daring-to-dair-distributed-ai-research-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by friend of the show Timnit Gebru, the founder and executive director of DAIR, the Distributed Artificial Intelligence Research Institute. In our conversation with Timnit, we discuss her journey to create DAIR, its goals, and some of the challenges she’s faced along the way. We start in the obvious place: Timnit being “resignated” from Google after writing and publishing a paper detailing the dangers of large language models, the fallout from that paper and her firing, and the eventual founding of DAIR. We discuss the importance of the “distributed” nature of the institute, how they’re going about figuring out what is in scope and out of scope for the institute’s research charter, and what building an institution means to her. We also explore the importance of independent alternatives to traditional research structures, whether we should be pessimistic about the impact of internal ethics and responsible AI teams in industry due to the overwhelming power they wield, examples she looks to of what not to do when building out the institute, and much much more!

The complete show notes for this episode can be found at twimlai.com/go/568</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by friend of the show Timnit Gebru, the founder and executive director of DAIR, the Distributed Artificial Intelligence Research Institute. In our conversation with Timnit, we discuss her journey to create DAIR, its goals, and some of the challenges she’s faced along the way. We start in the obvious place: Timnit being “resignated” from Google after writing and publishing a paper detailing the dangers of large language models, the fallout from that paper and her firing, and the eventual founding of DAIR. We discuss the importance of the “distributed” nature of the institute, how they’re going about figuring out what is in scope and out of scope for the institute’s research charter, and what building an institution means to her. We also explore the importance of independent alternatives to traditional research structures, whether we should be pessimistic about the impact of internal ethics and responsible AI teams in industry due to the overwhelming power they wield, examples she looks to of what <strong>not</strong> to do when building out the institute, and much much more!</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/568">twimlai.com/go/568</a></p>]]>
      </content:encoded>
      <itunes:duration>3091</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[21ee40f8-bc16-11ec-a489-5b97557dd51e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4393614459.mp3?updated=1650294617"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Hierarchical and Continual RL with Doina Precup - #567</title>
      <link>https://twimlai.com/hierarchical-and-continual-rl-with-doina-precup</link>
      <description>Today we’re joined by Doina Precup, a research team lead at DeepMind Montreal, and a professor at McGill University. In our conversation with Doina, we discuss her recent research interests, including her work in hierarchical reinforcement learning, with the goal of agents learning abstract representations, especially over time. We also explore her work on reward specification for RL agents, where she hypothesizes that a reward signal in a complex environment could lead an agent to develop attributes of intuitive intelligence. We also dig into quite a few of her papers, including On the Expressivity of Markov Reward, which won a NeurIPS 2021 outstanding paper award. Finally, we discuss the analogy between hierarchical RL and CNNs, her work in continual RL, and her thoughts on the evolution of RL in the recent past and present, and the biggest challenges facing the field going forward.

The complete show notes for this episode can be found at twimlai.com/go/567</description>
      <pubDate>Mon, 11 Apr 2022 16:38:00 -0000</pubDate>
      <itunes:title>Hierarchical and Continual RL with Doina Precup</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>567</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/cacfd330-b9a0-11ec-9c82-63f526d39084/image/twiml-doina-precup-hierarchical-continual-rl-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Doina Precup, a research team lead at DeepMind Montreal, and a professor at McGill University. In our conversation with Doina, we discuss her recent research interests, including her work in hierarchical reinforcement learning, with the goal of agents learning abstract representations, especially over time. We also explore her work on reward specification for RL agents, where she hypothesizes that a reward signal in a complex environment could lead an agent to develop attributes of intuitive intelligence. We also dig into quite a few of her papers, including On the Expressivity of Markov Reward, which won a NeurIPS 2021 outstanding paper award. Finally, we discuss the analogy between hierarchical RL and CNNs, her work in continual RL, and her thoughts on the evolution of RL in the recent past and present, and the biggest challenges facing the field going forward.

The complete show notes for this episode can be found at twimlai.com/go/567</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Doina Precup, a research team lead at DeepMind Montreal, and a professor at McGill University. In our conversation with Doina, we discuss her recent research interests, including her work in hierarchical reinforcement learning, with the goal of agents learning abstract representations, especially over time. We also explore her work on reward specification for RL agents, where she hypothesizes that a reward signal in a complex environment could lead an agent to develop attributes of intuitive intelligence. We also dig into quite a few of her papers, including <a href="https://openreview.net/forum?id=9DlCh34E1bN">On the Expressivity of Markov Reward</a>, which won a NeurIPS 2021 outstanding paper award. Finally, we discuss the analogy between hierarchical RL and CNNs, her work in continual RL, and her thoughts on the evolution of RL in the recent past and present, and the biggest challenges facing the field going forward.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/567">twimlai.com/go/567</a></p>]]>
      </content:encoded>
      <itunes:duration>3014</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cacfd330-b9a0-11ec-9c82-63f526d39084]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8692483452.mp3?updated=1649695228"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Open-Source Drug Discovery with DeepChem with Bharath Ramsundar - #566</title>
      <link>https://twimlai.com/open-source-drug-discovery-with-deepchem-with-bharath-ramsundar</link>
      <description>Today we’re joined by Bharath Ramsundar, founder and CEO of Deep Forest Sciences. In our conversation with Bharath, we explore his work on DeepChem, an open-source library of tools for drug discovery, materials science, quantum chemistry, and biology. We discuss the challenges that biotech and pharmaceutical companies are facing as they attempt to incorporate AI into the drug discovery process, where the innovation frontier is, and what the promise is for AI in this field in the near term. We also dig into the origins of DeepChem and the problems it’s solving for practitioners, the capabilities that are enabled when using this library as opposed to others, and MoleculeNet, a dataset and benchmark focused on molecular design that lives within the DeepChem suite.

The complete show notes for this episode can be found at twimlai.com/go/566</description>
      <pubDate>Mon, 04 Apr 2022 16:01:00 -0000</pubDate>
      <itunes:title>Open-Source Drug Discovery with DeepChem with Bharath Ramsundar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>566</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ffcbc766-b42c-11ec-950b-3f09f7b9d1ef/image/twiml-bharath-ramsundar-open-source-drug-discovery-with-deepchem-sq.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Bharath Ramsundar, founder and CEO of Deep Forest Sciences. In our conversation with Bharath, we explore his work on DeepChem, an open-source library of tools for drug discovery, materials science, quantum chemistry, and biology. We discuss the challenges that biotech and pharmaceutical companies are facing as they attempt to incorporate AI into the drug discovery process, where the innovation frontier is, and what the promise is for AI in this field in the near term. We also dig into the origins of DeepChem and the problems it’s solving for practitioners, the capabilities that are enabled when using this library as opposed to others, and MoleculeNet, a dataset and benchmark focused on molecular design that lives within the DeepChem suite.

The complete show notes for this episode can be found at twimlai.com/go/566</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Bharath Ramsundar, founder and CEO of Deep Forest Sciences. In our conversation with Bharath, we explore his work on DeepChem, an open-source library of tools for drug discovery, materials science, quantum chemistry, and biology. We discuss the challenges that biotech and pharmaceutical companies are facing as they attempt to incorporate AI into the drug discovery process, where the innovation frontier is, and what the promise is for AI in this field in the near term. We also dig into the origins of DeepChem and the problems it’s solving for practitioners, the capabilities that are enabled when using this library as opposed to others, and MoleculeNet, a dataset and benchmark focused on molecular design that lives within the DeepChem suite.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/566">twimlai.com/go/566</a></p>]]>
      </content:encoded>
      <itunes:duration>1781</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ffcbc766-b42c-11ec-950b-3f09f7b9d1ef]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1027529188.mp3?updated=1649088203"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Advancing Hands-On Machine Learning Education with Sebastian Raschka - #565</title>
      <link>https://twimlai.com/advancing-hands-on-machine-learning-education-with-sebastian-raschka</link>
      <description>Today we’re joined by Sebastian Raschka, an assistant professor at the University of Wisconsin-Madison and lead AI educator at Grid.ai. In our conversation with Sebastian, we explore his work around AI education, including the “hands-on” philosophy that he takes when building these courses, his recent book Machine Learning with PyTorch and Scikit-Learn, his advice to beginners in the field when they’re trying to choose tools and frameworks, and more. 
We also discuss his work on PyTorch Lightning, a platform that allows users to organize their code and integrate it into other technologies, before switching gears to discuss his recent research efforts around ordinal regression, including a ton of great references that we’ll link on the show notes page below! 
The complete show notes for this episode can be found at twimlai.com/go/565</description>
      <pubDate>Mon, 28 Mar 2022 16:18:00 -0000</pubDate>
      <itunes:title>Advancing Hands-On Machine Learning Education with Sebastian Raschka</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>565</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/8360f79c-aea4-11ec-8f34-8389704d0219/image/twiml-sebastian-raschka-advancing-hands-on-machine-learning-education-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Sebastian Raschka, an assistant professor at the University of Wisconsin-Madison and lead AI educator at Grid.ai. In our conversation with Sebastian, we explore his work around AI education, including the “hands-on” philosophy that he takes when building these courses, his recent book Machine Learning with PyTorch and Scikit-Learn, his advice to beginners in the field when they’re trying to choose tools and frameworks, and more. 
We also discuss his work on PyTorch Lightning, a platform that allows users to organize their code and integrate it into other technologies, before switching gears to discuss his recent research efforts around ordinal regression, including a ton of great references that we’ll link on the show notes page below! 
The complete show notes for this episode can be found at twimlai.com/go/565</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Sebastian Raschka, an assistant professor at the University of Wisconsin-Madison and lead AI educator at Grid.ai. In our conversation with Sebastian, we explore his work around AI education, including the “hands-on” philosophy that he takes when building his courses, his recent book Machine Learning with PyTorch and Scikit-Learn, his advice to beginners in the field when they’re trying to choose tools and frameworks, and more. </p><p>We also discuss his work on PyTorch Lightning, a platform that allows users to organize their code and integrate it into other technologies, before switching gears to discuss his recent research efforts around ordinal regression, including a ton of great references that we’ll link on the show notes page below! </p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/565">twimlai.com/go/565</a></p>]]>
      </content:encoded>
      <itunes:duration>2456</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[8360f79c-aea4-11ec-8f34-8389704d0219]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2914060869.mp3?updated=1648478514"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Big Science and Embodied Learning at Hugging Face &#129303; with Thomas Wolf - #564</title>
      <link>https://twimlai.com/big-science-and-embodied-learning-at-hugging-face-%F0%9F%A4%97-with-thomas-wolf</link>
      <description>Today we’re joined by Thomas Wolf, co-founder and chief science officer at Hugging Face &#129303;. We cover a ton of ground in our conversation, starting with Thomas’ interesting backstory as a quantum physicist and patent lawyer, and how that led him to a career in machine learning. We explore how Hugging Face began, what the current direction is for the company, and how much of their focus is NLP and language models versus other disciplines. We also discuss the BigScience project, a year-long research workshop where 1000+ researchers of all backgrounds and disciplines have come together to create an 800GB multilingual dataset and model. We talk through their approach to curating the dataset, model evaluation at this scale, and how they differentiate their work from projects like EleutherAI. Finally, we dig into Thomas’ work on multimodality, his thoughts on the metaverse, his new book NLP with Transformers, and much more!
The complete show notes for this episode can be found at twimlai.com/go/564</description>
      <pubDate>Mon, 21 Mar 2022 16:00:00 -0000</pubDate>
      <itunes:title>Big Science and Embodied Learning at Hugging Face &#129303; with Thomas Wolf</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>564</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/43caa848-a923-11ec-8852-47b78ede9717/image/twiml-thomas-wolf-big-science-embodied-learning-hugging-face-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Thomas Wolf, co-founder and chief science officer at Hugging Face &#129303;. We cover a ton of ground in our conversation, starting with Thomas’ interesting backstory as a quantum physicist and patent lawyer, and how that led him to a career in machine learning. We explore how Hugging Face began, what the current direction is for the company, and how much of their focus is NLP and language models versus other disciplines. We also discuss the BigScience project, a year-long research workshop where 1000+ researchers of all backgrounds and disciplines have come together to create an 800GB multilingual dataset and model. We talk through their approach to curating the dataset, model evaluation at this scale, and how they differentiate their work from projects like EleutherAI. Finally, we dig into Thomas’ work on multimodality, his thoughts on the metaverse, his new book NLP with Transformers, and much more!
The complete show notes for this episode can be found at twimlai.com/go/564</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Thomas Wolf, co-founder and chief science officer at Hugging Face 🤗. We cover a ton of ground in our conversation, starting with Thomas’ interesting backstory as a quantum physicist and patent lawyer, and how that led him to a career in machine learning. We explore how Hugging Face began, what the current direction is for the company, and how much of their focus is NLP and language models versus other disciplines. We also discuss the BigScience project, a year-long research workshop where 1000+ researchers of all backgrounds and disciplines have come together to create an 800GB multilingual dataset and model. We talk through their approach to curating the dataset, model evaluation at this scale, and how they differentiate their work from projects like EleutherAI. Finally, we dig into Thomas’ work on multimodality, his thoughts on the metaverse, his new book NLP with Transformers, and much more!</p><p>The complete show notes for this episode can be found at twimlai.com/go/564</p>]]>
      </content:encoded>
      <itunes:duration>2844</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[43caa848-a923-11ec-8852-47b78ede9717]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3025083723.mp3?updated=1647874472"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Full-Stack AI Systems Development with Murali Akula - #563</title>
      <link>https://twimlai.com/full-stack-ai-systems-development-with-murali-akula</link>
      <description>Today we’re joined by Murali Akula, a Sr. Director of Software Engineering at Qualcomm. In our conversation with Murali, we explore his role at Qualcomm, where he leads the corporate research team focused on the development and deployment of AI onto Snapdragon chips, their unique definition of “full stack”, and how that philosophy permeates into every step of the software development process. We explore the complexities that are unique to doing machine learning on resource-constrained devices, some of the techniques that are being applied to get complex models working on mobile devices, and the process for taking these models from research into real-world applications. We also discuss a few more tools and recent developments, including DONNA for neural architecture search, X-Distill, a method of improving the self-supervised training of monocular depth, and the AI Model Efficiency Toolkit, a library that provides advanced quantization and compression techniques for trained neural network models.
The complete show notes for this episode can be found at twimlai.com/go/563</description>
      <pubDate>Mon, 14 Mar 2022 16:07:00 -0000</pubDate>
      <itunes:title>Full-Stack AI Systems Development with Murali Akula</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>563</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/bca1d8bc-a3a2-11ec-a086-1b5661e80123/image/twiml-murali-akula-full-stack-ai-systems-development-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Murali Akula, a Sr. Director of Software Engineering at Qualcomm. In our conversation with Murali, we explore his role at Qualcomm, where he leads the corporate research team focused on the development and deployment of AI onto Snapdragon chips, their unique definition of “full stack”, and how that philosophy permeates into every step of the software development process. We explore the complexities that are unique to doing machine learning on resource-constrained devices, some of the techniques that are being applied to get complex models working on mobile devices, and the process for taking these models from research into real-world applications. We also discuss a few more tools and recent developments, including DONNA for neural architecture search, X-Distill, a method of improving the self-supervised training of monocular depth, and the AI Model Efficiency Toolkit, a library that provides advanced quantization and compression techniques for trained neural network models.
The complete show notes for this episode can be found at twimlai.com/go/563</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Murali Akula, a Sr. Director of Software Engineering at Qualcomm. In our conversation with Murali, we explore his role at Qualcomm, where he leads the corporate research team focused on the development and deployment of AI onto Snapdragon chips, their unique definition of “full stack”, and how that philosophy permeates into every step of the software development process. We explore the complexities that are unique to doing machine learning on resource-constrained devices, some of the techniques that are being applied to get complex models working on mobile devices, and the process for taking these models from research into real-world applications. We also discuss a few more tools and recent developments, including DONNA for neural architecture search, X-Distill, a method of improving the self-supervised training of monocular depth, and the AI Model Efficiency Toolkit, a library that provides advanced quantization and compression techniques for trained neural network models.</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/563">twimlai.com/go/563</a></p>]]>
      </content:encoded>
      <itunes:duration>2641</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[bca1d8bc-a3a2-11ec-a086-1b5661e80123]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6447919312.mp3?updated=1647274353"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>100x Improvements in Deep Learning Performance with Sparsity, w/ Subutai Ahmad - #562</title>
      <link>https://twimlai.com/100x-improvements-in-deep-learning-performance-with-sparsity-with-subutai-ahmad</link>
      <description>Today we’re joined by Subutai Ahmad, VP of research at Numenta. While we’ve had numerous conversations about the biological inspirations of deep learning models with folks working at the intersection of deep learning and neuroscience, we dig into uncharted territory with Subutai. We set the stage by digging into some of the fundamental ideas behind Numenta’s research and the present landscape of neuroscience, before exploring our first big topic of the podcast: the cortical column. Cortical columns are groups of neurons in the cortex of the brain with nearly identical receptive fields; we discuss the behavior of these columns, why they’re a structure worth mimicking computationally, how far along we are in understanding the cortical column, and how these columns relate to neurons.
 
We also discuss what it means for a model to have inherent 3D understanding and for computational models to be inherently sensorimotor, and where we are with these lines of research. Finally, we dig into our other big idea, sparsity. We explore the fundamental ideas of sparsity and the differences between sparse and dense networks, and how sparsity and optimization can be applied to drive greater efficiency in current deep learning networks, including transformers and other large language models.

The complete show notes for this episode can be found at twimlai.com/go/562</description>
      <pubDate>Mon, 07 Mar 2022 17:08:00 -0000</pubDate>
      <itunes:title>100x Improvements in Deep Learning Performance with Sparsity, w/ Subutai Ahmad</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>562</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/2b4b2140-9e23-11ec-8fcb-4f9cdabe554b/image/twiml-subutai-ahmad-100x-improvements-in-deep-learning-performance-with-sparsity-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Subutai Ahmad, VP of research at Numenta. While we’ve had numerous conversations about the biological inspirations of deep learning models with folks working at the intersection of deep learning and neuroscience, we dig into uncharted territory with Subutai. We set the stage by digging into some of the fundamental ideas behind Numenta’s research and the present landscape of neuroscience, before exploring our first big topic of the podcast: the cortical column. Cortical columns are groups of neurons in the cortex of the brain with nearly identical receptive fields; we discuss the behavior of these columns, why they’re a structure worth mimicking computationally, how far along we are in understanding the cortical column, and how these columns relate to neurons.
 
We also discuss what it means for a model to have inherent 3D understanding and for computational models to be inherently sensorimotor, and where we are with these lines of research. Finally, we dig into our other big idea, sparsity. We explore the fundamental ideas of sparsity and the differences between sparse and dense networks, and how sparsity and optimization can be applied to drive greater efficiency in current deep learning networks, including transformers and other large language models.

The complete show notes for this episode can be found at twimlai.com/go/562</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Subutai Ahmad, VP of research at Numenta. While we’ve had numerous conversations about the biological inspirations of deep learning models with folks working at the intersection of deep learning and neuroscience, we dig into uncharted territory with Subutai. We set the stage by digging into some of the fundamental ideas behind Numenta’s research and the present landscape of neuroscience, before exploring our first big topic of the podcast: the cortical column. Cortical columns are groups of neurons in the cortex of the brain with nearly identical receptive fields; we discuss the behavior of these columns, why they’re a structure worth mimicking computationally, how far along we are in understanding the cortical column, and how these columns relate to neurons.</p><p> </p><p>We also discuss what it means for a model to have inherent 3D understanding and for computational models to be inherently sensorimotor, and where we are with these lines of research. Finally, we dig into our other big idea, sparsity. We explore the fundamental ideas of sparsity and the differences between sparse and dense networks, and how sparsity and optimization can be applied to drive greater efficiency in current deep learning networks, including transformers and other large language models. </p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/562">twimlai.com/go/562</a></p>]]>
      </content:encoded>
      <itunes:duration>3057</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2b4b2140-9e23-11ec-8fcb-4f9cdabe554b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8969271295.mp3?updated=1646757287"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scaling BERT and GPT for Financial Services with Jennifer Glore - #561</title>
      <link>https://twimlai.com/scaling-bert-and-gpt-3-for-financial-services-with-jennifer-glore</link>
      <description>Today we’re joined by Jennifer Glore, VP of customer engineering at SambaNova Systems. In our conversation with Jennifer, we discuss how, and why, SambaNova, which is primarily focused on building hardware to support machine learning applications, has built a GPT language model for the financial services industry. Jennifer shares her thoughts on the progress of industries like banking and finance, as well as other traditional organizations, in their attempts at using transformers and other models, and where they’ve begun to see success, as well as some of the hidden challenges that organizations run into that impede their progress. Finally, we explore their experience replicating the GPT-3 paper from an R&amp;D perspective, how they’re addressing issues of predictability, controllability, and governance, and much more.

The complete show notes for this episode can be found at twimlai.com/go/561</description>
      <pubDate>Mon, 28 Feb 2022 16:55:00 -0000</pubDate>
      <itunes:title>Scaling BERT and GPT for Financial Services with Jennifer Glore</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>561</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/39979608-98a0-11ec-9782-af0ff7ea19dc/image/_twiml-jennifer-glore-scaling-bert-and-gpt-for-financial-services-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Jennifer Glore, VP of customer engineering at SambaNova Systems. In our conversation with Jennifer, we discuss how, and why, SambaNova, which is primarily focused on building hardware to support machine learning applications, has built a GPT language model for the financial services industry. Jennifer shares her thoughts on the progress of industries like banking and finance, as well as other traditional organizations, in their attempts at using transformers and other models, and where they’ve begun to see success, as well as some of the hidden challenges that organizations run into that impede their progress. Finally, we explore their experience replicating the GPT-3 paper from an R&amp;D perspective, how they’re addressing issues of predictability, controllability, and governance, and much more.

The complete show notes for this episode can be found at twimlai.com/go/561</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Jennifer Glore, VP of customer engineering at SambaNova Systems. In our conversation with Jennifer, we discuss how, and why, SambaNova, which is primarily focused on building hardware to support machine learning applications, has built a GPT language model for the financial services industry. Jennifer shares her thoughts on the progress of industries like banking and finance, as well as other traditional organizations, in their attempts at using transformers and other models, and where they’ve begun to see success, as well as some of the hidden challenges that organizations run into that impede their progress. Finally, we explore their experience replicating the GPT-3 paper from an R&amp;D perspective, how they’re addressing issues of predictability, controllability, and governance, and much more.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/561">twimlai.com/go/561</a></p>]]>
      </content:encoded>
      <itunes:duration>2650</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[39979608-98a0-11ec-9782-af0ff7ea19dc]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2647368573.mp3?updated=1647457811"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Deep Reinforcement Learning with Kamyar Azizzadenesheli - #560</title>
      <link>https://twimlai.com/trends-in-deep-reinforcement-learning-with-kamyar-azizzadenesheli</link>
      <description>Today we’re joined by Kamyar Azizzadenesheli, an assistant professor at Purdue University, to close out our AI Rewind 2021 series! In this conversation, we focused on all things deep reinforcement learning, starting with a general overview of the direction of the field, and though it might seem to be slowing, that’s just a product of the spotlight constantly shining on the CV and NLP spaces. We dig into themes like the convergence of RL methodology with both robotics and control theory, as well as a few trends that Kamyar sees over the horizon, such as self-supervised learning approaches in RL. We also talk through Kamyar’s predictions for RL in 2022 and beyond. This was a fun conversation, and I encourage you to look through all the great resources that Kamyar shared on the show notes page at twimlai.com/go/560!</description>
      <pubDate>Mon, 21 Feb 2022 17:05:54 -0000</pubDate>
      <itunes:title>Trends in Deep Reinforcement Learning with Kamyar Azizzadenesheli</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>560</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5ac42dd0-9320-11ec-86f9-57751217a757/image/twiml-kamyar-azizzadenesheli-trends-deep-reinforcement-learning-sq.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Kamyar Azizzadenesheli, an assistant professor at Purdue University, to close out our AI Rewind 2021 series! In this conversation, we focused on all things deep reinforcement learning, starting with a general overview of the direction of the field, and though it might seem to be slowing, that’s just a product of the spotlight constantly shining on the CV and NLP spaces. We dig into themes like the convergence of RL methodology with both robotics and control theory, as well as a few trends that Kamyar sees over the horizon, such as self-supervised learning approaches in RL. We also talk through Kamyar’s predictions for RL in 2022 and beyond. This was a fun conversation, and I encourage you to look through all the great resources that Kamyar shared on the show notes page at twimlai.com/go/560!</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Kamyar Azizzadenesheli, an assistant professor at Purdue University, to close out our AI Rewind 2021 series! In this conversation, we focused on all things deep reinforcement learning, starting with a general overview of the direction of the field, and though it might <em>seem</em> to be slowing, that’s just a product of the spotlight constantly shining on the CV and NLP spaces. We dig into themes like the convergence of RL methodology with both robotics and control theory, as well as a few trends that Kamyar sees over the horizon, such as self-supervised learning approaches in RL. We also talk through Kamyar’s predictions for RL in 2022 and beyond. This was a fun conversation, and I encourage you to look through all the great resources that Kamyar shared on the show notes page at <a href="twimlai.com/go/560">twimlai.com/go/560</a>!</p>]]>
      </content:encoded>
      <itunes:duration>4677</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5ac42dd0-9320-11ec-86f9-57751217a757]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8379623439.mp3?updated=1645456168"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Reinforcement Learning at the Edge of the Statistical Precipice with Rishabh Agarwal - #559</title>
      <link>https://twimlai.com/deep-reinforcement-learning-at-the-edge-of-the-statistical-precipice-with-rishabh-agarwal</link>
      <description>Today we’re joined by Rishabh Agarwal, a research scientist at Google Brain in Montreal. In our conversation with Rishabh, we discuss his recent paper Deep Reinforcement Learning at the Edge of the Statistical Precipice, which won an outstanding paper award at the most recent NeurIPS conference. In this paper, Rishabh and his coauthors call for a change in how deep RL performance is reported on benchmarks when using only a few runs, acknowledging that deep RL algorithms are typically evaluated by their performance on a large suite of tasks. Using the Atari 100k benchmark, they found substantial disparities between the conclusions drawn from point estimates alone and those drawn from statistical analysis. We explore the reception of this paper by the research community, some of the more surprising results, what incentives researchers have to implement these types of changes in self-reporting when publishing, and much more.

The complete show notes for this episode can be found at twimlai.com/go/559</description>
      <pubDate>Mon, 14 Feb 2022 17:57:14 -0000</pubDate>
      <itunes:title>Deep Reinforcement Learning at the Edge of the Statistical Precipice with Rishabh Agarwal</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>559</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/e9dc4502-8db7-11ec-b3ab-07b979d35e07/image/TWIML_COVER_800x800_RA.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Rishabh Agarwal, a research scientist at Google Brain in Montreal. In our conversation with Rishabh, we discuss his recent paper Deep Reinforcement Learning at the Edge of the Statistical Precipice, which won an outstanding paper award at the most recent NeurIPS conference. In this paper, Rishabh and his coauthors call for a change in how deep RL performance is reported on benchmarks when using only a few runs, acknowledging that deep RL algorithms are typically evaluated by their performance on a large suite of tasks. Using the Atari 100k benchmark, they found substantial disparities between the conclusions drawn from point estimates alone and those drawn from statistical analysis. We explore the reception of this paper by the research community, some of the more surprising results, what incentives researchers have to implement these types of changes in self-reporting when publishing, and much more.

The complete show notes for this episode can be found at twimlai.com/go/559</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Rishabh Agarwal, a research scientist at Google Brain in Montreal. In our conversation with Rishabh, we discuss his recent paper <a href="https://openreview.net/forum?id=uqv8-U4lKBe">Deep Reinforcement Learning at the Edge of the Statistical Precipice</a>, which won an outstanding paper award at the most recent NeurIPS conference. In this paper, Rishabh and his coauthors call for a change in how deep RL performance is reported on benchmarks when using only a few runs, acknowledging that deep RL algorithms are typically evaluated by their performance on a large suite of tasks. Using the Atari 100k benchmark, they found substantial disparities between the conclusions drawn from point estimates alone and those drawn from statistical analysis. We explore the reception of this paper by the research community, some of the more surprising results, what incentives researchers have to implement these types of changes in self-reporting when publishing, and much more.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/559">twimlai.com/go/559</a></p>]]>
      </content:encoded>
      <itunes:duration>3111</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e9dc4502-8db7-11ec-b3ab-07b979d35e07]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2730828604.mp3?updated=1644861670"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Designing New Energy Materials with Machine Learning with Rafael Gomez-Bombarelli - #558</title>
      <link>https://twimlai.com/designing-new-energy-materials-with-machine-learning-with-rafael-gomez-bombarelli</link>
      <description>Today we’re joined by Rafael Gomez-Bombarelli, an assistant professor in the Department of Materials Science and Engineering at MIT. In our conversation with Rafa, we explore his goal of fusing machine learning and atomistic simulations for designing materials, a topic he spoke about at the recent SigOpt AI &amp; HPC Summit. We discuss the two ways in which he thinks about materials design, virtual screening and inverse design, as well as the unique challenges each technique presents. We also talk through the use of generative models for simulation, the type of training data necessary for these tasks, and whether he’s building hand-coded simulations or using existing packages and tools. Finally, we explore the dynamic relationship between simulation and modeling, how the results of one drive the other’s efforts, and how hyperparameter optimization gets incorporated into the various projects.
The complete show notes for this episode can be found at twimlai.com/go/558</description>
      <pubDate>Mon, 07 Feb 2022 17:00:00 -0000</pubDate>
      <itunes:title>Designing New Energy Materials with Machine Learning with Rafael Gomez-Bombarelli</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>558</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/229e87de-8826-11ec-8c0c-1fb77dcd59dc/image/TWIML_COVER_800x800_RGB.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Rafael Gomez-Bombarelli, an assistant professor in the Department of Materials Science and Engineering at MIT. In our conversation with Rafa, we explore his goal of fusing machine learning and atomistic simulations for designing materials, a topic he spoke about at the recent SigOpt AI &amp; HPC Summit. We discuss the two ways in which he thinks about materials design, virtual screening and inverse design, as well as the unique challenges each technique presents. We also talk through the use of generative models for simulation, the type of training data necessary for these tasks, and whether he’s building hand-coded simulations or using existing packages and tools. Finally, we explore the dynamic relationship between simulation and modeling, how the results of one drive the other’s efforts, and how hyperparameter optimization gets incorporated into the various projects.
The complete show notes for this episode can be found at twimlai.com/go/558</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Rafael Gomez-Bombarelli, an assistant professor in the department of materials science and engineering at MIT. In our conversation with Rafa, we explore his goal of fusing machine learning and atomistic simulations for designing materials, a topic he spoke about at the recent SigOpt AI &amp; HPC Summit. We discuss the two ways in which he thinks of material design, virtual screening and inverse design, as well as the unique challenges each technique presents. We also talk through the use of generative models for simulation, the type of training data necessary for these tasks, and whether he builds hand-coded simulations or relies on existing packages and tools. Finally, we explore the dynamic relationship between simulation and modeling, how the results of one drive the other’s efforts, and how hyperparameter optimization gets incorporated into the various projects.</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/558">twimlai.com/go/558</a></p>]]>
      </content:encoded>
      <itunes:duration>3209</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[229e87de-8826-11ec-8c0c-1fb77dcd59dc]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3976996749.mp3?updated=1644246268"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Differentiable Programming for Oceanography with Patrick Heimbach - #557</title>
      <link>https://twimlai.com/differentiable-programming-for-oceanography-with-patrick-heimbach</link>
      <description>Today we’re joined by Patrick Heimbach, a professor at the University of Texas working at the intersection of ML and oceanography. In our conversation with Patrick, we explore some of the challenges of computational oceanography, the potential use cases for machine learning in this field and how it can support scientists in solving simulation problems, and the role of differentiable programming and how it is expressed in his work.
The complete show notes for this episode can be found at twimlai.com/go/557</description>
      <pubDate>Mon, 31 Jan 2022 17:42:00 -0000</pubDate>
      <itunes:title>Differentiable Programming for Oceanography with Patrick Heimbach</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>557</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ae51d752-82b0-11ec-a29b-1717865a83a0/image/TWIML_COVER_800x800_PH_3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Patrick Heimbach, a professor at the University of Texas working at the intersection of ML and oceanography. In our conversation with Patrick, we explore some of the challenges of computational oceanography, the potential use cases for machine learning in this field and how it can support scientists in solving simulation problems, and the role of differentiable programming and how it is expressed in his work.
The complete show notes for this episode can be found at twimlai.com/go/557</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Patrick Heimbach, a professor at the University of Texas working at the intersection of ML and oceanography. In our conversation with Patrick, we explore some of the challenges of computational oceanography, the potential use cases for machine learning in this field and how it can support scientists in solving simulation problems, and the role of differentiable programming and how it is expressed in his work.</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/557">twimlai.com/go/557</a></p>]]>
      </content:encoded>
      <itunes:duration>2050</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ae51d752-82b0-11ec-a29b-1717865a83a0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2566750606.mp3?updated=1643646243"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Machine Learning &amp; Deep Learning with Zachary Lipton - #556</title>
      <link>https://twimlai.com/trends-in-machine-learning-deep-learning-with-zachary-lipton</link>
      <description>Today we continue our AI Rewind 2021 series joined by a friend of the show, assistant professor at Carnegie Mellon University, and AI Rewind veteran, Zack Lipton! In our conversation with Zack, we touch on recurring themes like “NLP Eating AI” and the recent slowdown in innovation in the field, the redistribution of resources across research problems, and where the opportunities for real breakthroughs lie. We also discuss problems facing the current peer-review system, notable research from last year like the introduction of the WILDS library, and the evolution of problems (and potential solutions) in fairness, bias, and equity. Of course, we explore some of the use cases and application areas that made notable progress in 2021, what Zack is looking forward to in 2022 and beyond, and much more!

The complete show notes for this episode can be found at twimlai.com/go/556</description>
      <pubDate>Thu, 27 Jan 2022 17:31:53 -0000</pubDate>
      <itunes:title>Trends in Machine Learning &amp; Deep Learning with Zachary Lipton</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>556</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5659bd54-7edb-11ec-a5f0-13168b49f394/image/TWIML_COVER_800x800_ZL2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we continue our AI Rewind 2021 series joined by a friend of the show, assistant professor at Carnegie Mellon University, and AI Rewind veteran, Zack Lipton! In our conversation with Zack, we touch on recurring themes like “NLP Eating AI” and the recent slowdown in innovation in the field, the redistribution of resources across research problems, and where the opportunities for real breakthroughs lie. We also discuss problems facing the current peer-review system, notable research from last year like the introduction of the WILDS library, and the evolution of problems (and potential solutions) in fairness, bias, and equity. Of course, we explore some of the use cases and application areas that made notable progress in 2021, what Zack is looking forward to in 2022 and beyond, and much more!

The complete show notes for this episode can be found at twimlai.com/go/556</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our AI Rewind 2021 series joined by a friend of the show, assistant professor at Carnegie Mellon University, and AI Rewind veteran, Zack Lipton! In our conversation with Zack, we touch on recurring themes like “NLP Eating AI” and the recent slowdown in innovation in the field, the redistribution of resources across research problems, and where the opportunities for real breakthroughs lie. We also discuss problems facing the current peer-review system, notable research from last year like the introduction of the WILDS library, and the evolution of problems (and potential solutions) in fairness, bias, and equity. Of course, we explore some of the use cases and application areas that made notable progress in 2021, what Zack is looking forward to in 2022 and beyond, and much more!</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/556">twimlai.com/go/556</a></p>]]>
      </content:encoded>
      <itunes:duration>4127</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5659bd54-7edb-11ec-a5f0-13168b49f394]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2239640387.mp3?updated=1643304907"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Solving the Cocktail Party Problem with Machine Learning, w/ Jonathan Le Roux - #555</title>
      <link>https://twimlai.com/solving-the-cocktail-party-problem-with-machine-learning-w-jonathan-le-roux/</link>
      <description>Today we’re joined by Jonathan Le Roux, a senior principal research scientist at Mitsubishi Electric Research Laboratories (MERL). At MERL, Jonathan and his team use machine learning to tackle the “cocktail party problem”, working not only on the separation of speech from noise, but also the separation of speech from speech. In our conversation with Jonathan, we focus on his paper The Cocktail Fork Problem: Three-Stem Audio Separation For Real-World Soundtracks, which looks to separate and enhance a complex acoustic scene into three distinct categories: speech, music, and sound effects. We explore the challenges of working with such noisy data, the model architecture used to solve this problem, how ML/DL fits into solving the larger cocktail party problem, future directions for this line of research, and much more!

The complete show notes for this episode can be found at twimlai.com/go/555</description>
      <pubDate>Mon, 24 Jan 2022 17:14:00 -0000</pubDate>
      <itunes:title>Solving the Cocktail Party Problem with Machine Learning with Jonathan Le Roux</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>555</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/f9558cde-7d34-11ec-a52b-23992b739957/image/TWIML_COVER_800x800_JLR.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Jonathan Le Roux, a senior principal research scientist at Mitsubishi Electric Research Laboratories (MERL). At MERL, Jonathan and his team use machine learning to tackle the “cocktail party problem”, working not only on the separation of speech from noise, but also the separation of speech from speech. In our conversation with Jonathan, we focus on his paper The Cocktail Fork Problem: Three-Stem Audio Separation For Real-World Soundtracks, which looks to separate and enhance a complex acoustic scene into three distinct categories: speech, music, and sound effects. We explore the challenges of working with such noisy data, the model architecture used to solve this problem, how ML/DL fits into solving the larger cocktail party problem, future directions for this line of research, and much more!

The complete show notes for this episode can be found at twimlai.com/go/555</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Jonathan Le Roux, a senior principal research scientist at Mitsubishi Electric Research Laboratories (MERL). At MERL, Jonathan and his team use machine learning to tackle the “cocktail party problem”, working not only on the separation of speech from noise, but also the separation of speech from speech. In our conversation with Jonathan, we focus on his paper <a href="https://arxiv.org/abs/2110.09958"><em>The Cocktail Fork Problem: Three-Stem Audio Separation For Real-World Soundtracks</em></a>, which looks to separate and enhance a complex acoustic scene into three distinct categories: speech, music, and sound effects. We explore the challenges of working with such noisy data, the model architecture used to solve this problem, how ML/DL fits into solving the larger cocktail party problem, future directions for this line of research, and much more!</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/555">twimlai.com/go/555</a></p>]]>
      </content:encoded>
      <itunes:duration>2136</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f9558cde-7d34-11ec-a52b-23992b739957]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8311241228.mp3?updated=1643044781"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Machine Learning for Earthquake Seismology with Karianne Bergen - #554</title>
      <link>https://twimlai.com/machine-learning-for-earthquake-seismology-with-karianne-bergen</link>
      <description>Today we’re joined by Karianne Bergen, an assistant professor at Brown University. In our conversation with Karianne, we explore her work at the intersection of earthquake seismology and machine learning, where she’s working on interpretable data classification for seismology. We discuss some of the challenges that present themselves when trying to solve this problem, and the state of applying machine learning to seismological events and earth sciences. Karianne also shares her thoughts on the different relationships that computer scientists and natural scientists have with machine learning, and how to bridge that gap to create tools that work broadly for all scientists.

The complete show notes for this episode can be found at twimlai.com/go/554</description>
      <pubDate>Thu, 20 Jan 2022 17:12:57 -0000</pubDate>
      <itunes:title>Machine Learning for Earthquake Seismology with Karianne Bergen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>554</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/715579cc-7a0d-11ec-8e27-4f4c4b400248/image/TWIML_COVER_800x800_KB5.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Karianne Bergen, an assistant professor at Brown University. In our conversation with Karianne, we explore her work at the intersection of earthquake seismology and machine learning, where she’s working on interpretable data classification for seismology. We discuss some of the challenges that present themselves when trying to solve this problem, and the state of applying machine learning to seismological events and earth sciences. Karianne also shares her thoughts on the different relationships that computer scientists and natural scientists have with machine learning, and how to bridge that gap to create tools that work broadly for all scientists.

The complete show notes for this episode can be found at twimlai.com/go/554</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Karianne Bergen, an assistant professor at Brown University. In our conversation with Karianne, we explore her work at the intersection of earthquake seismology and machine learning, where she’s working on interpretable data classification for seismology. We discuss some of the challenges that present themselves when trying to solve this problem, and the state of applying machine learning to seismological events and earth sciences. Karianne also shares her thoughts on the different relationships that computer scientists and natural scientists have with machine learning, and how to bridge that gap to create tools that work broadly for all scientists.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/554">twimlai.com/go/554</a></p>]]>
      </content:encoded>
      <itunes:duration>2145</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[715579cc-7a0d-11ec-8e27-4f4c4b400248]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6724353404.mp3?updated=1642696550"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The New DBfication of ML/AI with Arun Kumar - #553</title>
      <link>https://twimlai.com/the-new-dbfication-of-ml-ai-with-arun-kumar</link>
      <description>Today we’re joined by Arun Kumar, an associate professor at UC San Diego. We had the pleasure of catching up with Arun prior to the Workshop on Databases and AI at NeurIPS 2021, where he delivered the talk “The New DBfication of ML/AI.” In our conversation, we explore this “database-ification” of machine learning, a concept analogous to the transformation that relational databases and SQL brought to computation. We discuss the relationship between the ML and database fields, how merging the two could benefit the end-to-end ML workflow, a few tools his team has developed, including Cerebro, a tool for reproducible model selection, and SortingHat, a tool for automating data prep, and how tools like these shape Arun’s outlook on the future of machine learning platforms and MLOps.

The complete show notes for this episode can be found at twimlai.com/go/553</description>
      <pubDate>Mon, 17 Jan 2022 17:22:40 -0000</pubDate>
      <itunes:title>The New DBfication of ML/AI with Arun Kumar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>553</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d169c12a-77af-11ec-a563-87fb602f1e11/image/TWIML_COVER_800x800_AK4.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Arun Kumar, an associate professor at UC San Diego. We had the pleasure of catching up with Arun prior to the Workshop on Databases and AI at NeurIPS 2021, where he delivered the talk “The New DBfication of ML/AI.” In our conversation, we explore this “database-ification” of machine learning, a concept analogous to the transformation that relational databases and SQL brought to computation. We discuss the relationship between the ML and database fields, how merging the two could benefit the end-to-end ML workflow, a few tools his team has developed, including Cerebro, a tool for reproducible model selection, and SortingHat, a tool for automating data prep, and how tools like these shape Arun’s outlook on the future of machine learning platforms and MLOps.

The complete show notes for this episode can be found at twimlai.com/go/553</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Arun Kumar, an associate professor at UC San Diego. We had the pleasure of catching up with Arun prior to the Workshop on Databases and AI at NeurIPS 2021, where he delivered the talk <em>“The New DBfication of ML/AI.” </em>In our conversation, we explore this “database-ification” of machine learning, a concept analogous to the transformation that relational databases and SQL brought to computation. We discuss the relationship between the ML and database fields, how merging the two could benefit the end-to-end ML workflow, a few tools his team has developed, including Cerebro, a tool for reproducible model selection, and SortingHat, a tool for automating data prep, and how tools like these shape Arun’s outlook on the future of machine learning platforms and MLOps.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/553">twimlai.com/go/553</a></p>]]>
      </content:encoded>
      <itunes:duration>2768</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d169c12a-77af-11ec-a563-87fb602f1e11]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5271518840.mp3?updated=1642440195"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building Public Interest Technology with Meredith Broussard - #552</title>
      <link>https://twimlai.com/building-public-interest-technology-with-meredith-broussard</link>
      <description>Today we’re joined by Meredith Broussard, an associate professor at NYU &amp; research director at the NYU Alliance for Public Interest Technology. Meredith was a keynote speaker at the recent NeurIPS conference, and we had the pleasure of speaking with her to discuss her talk from the event, and her upcoming book, tentatively titled More Than A Glitch: What Everyone Needs To Know About Making Technology Anti-Racist, Accessible, And Otherwise Useful To All.
In our conversation, we explore Meredith’s work in the field of public interest technology, and her view of the relationship between technology and artificial intelligence. Meredith and Sam talk through real-world scenarios where an emphasis on monitoring bias and responsibility would positively impact outcomes, and how this type of monitoring parallels the infrastructure that many organizations are already building out. Finally, we talk through the main takeaways from Meredith’s NeurIPS talk, and how practitioners can get involved in the work of building and deploying public interest technology.
The complete show notes for this episode can be found at twimlai.com/go/552</description>
      <pubDate>Thu, 13 Jan 2022 18:05:00 -0000</pubDate>
      <itunes:title>Building Public Interest Technology with Meredith Broussard</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>552</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3430854c-7485-11ec-b7f7-1b9c82558ca0/image/TWIML_COVER_800x800_MB3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Meredith Broussard, an associate professor at NYU &amp; research director at the NYU Alliance for Public Interest Technology. Meredith was a keynote speaker at the recent NeurIPS conference, and we had the pleasure of speaking with her to discuss her talk from the event, and her upcoming book, tentatively titled More Than A Glitch: What Everyone Needs To Know About Making Technology Anti-Racist, Accessible, And Otherwise Useful To All.
In our conversation, we explore Meredith’s work in the field of public interest technology, and her view of the relationship between technology and artificial intelligence. Meredith and Sam talk through real-world scenarios where an emphasis on monitoring bias and responsibility would positively impact outcomes, and how this type of monitoring parallels the infrastructure that many organizations are already building out. Finally, we talk through the main takeaways from Meredith’s NeurIPS talk, and how practitioners can get involved in the work of building and deploying public interest technology.
The complete show notes for this episode can be found at twimlai.com/go/552</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Meredith Broussard, an associate professor at NYU &amp; research director at the NYU Alliance for Public Interest Technology. Meredith was a keynote speaker at the recent NeurIPS conference, and we had the pleasure of speaking with her to discuss her talk from the event, and her upcoming book, tentatively titled <em>More Than A Glitch: What Everyone Needs To Know About Making Technology Anti-Racist, Accessible, And Otherwise Useful To All.</em></p><p>In our conversation, we explore Meredith’s work in the field of public interest technology, and her view of the relationship between technology and artificial intelligence. Meredith and Sam talk through real-world scenarios where an emphasis on monitoring bias and responsibility would positively impact outcomes, and how this type of monitoring parallels the infrastructure that many organizations are already building out. Finally, we talk through the main takeaways from Meredith’s NeurIPS talk, and how practitioners can get involved in the work of building and deploying public interest technology.</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/552">twimlai.com/go/552</a></p>]]>
      </content:encoded>
      <itunes:duration>1816</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[3430854c-7485-11ec-b7f7-1b9c82558ca0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9824697941.mp3?updated=1642088065"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>A Universal Law of Robustness via Isoperimetry with Sebastien Bubeck - #551</title>
      <link>https://twimlai.com/a-universal-law-of-robustness-via-isoperimetry-with-sebastien-bubeck</link>
      <description>Today we’re joined by Sebastien Bubeck, a senior principal research manager at Microsoft and author of the paper A Universal Law of Robustness via Isoperimetry, a NeurIPS 2021 Outstanding Paper Award recipient. We begin our conversation with Sebastien with a bit of a primer on convex optimization, a topic that hasn’t come up much in previous interviews. We explore the problem that convex optimization is trying to solve, and the application of convex optimization to multi-armed bandit problems, metrical task systems, and the K-server problem. We then dig into Sebastien’s paper, which looks to prove that for a broad class of data distributions and model classes, overparameterization is necessary if one wants to interpolate the data. Finally, we discuss the relationship between the paper and the work being done in the adversarial robustness community.

The complete show notes for this episode can be found at twimlai.com/go/551</description>
      <pubDate>Mon, 10 Jan 2022 17:23:00 -0000</pubDate>
      <itunes:title>A Universal Law of Robustness via Isoperimetry with Sebastien Bubeck</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>551</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/473e2380-7235-11ec-b316-57b5d5fb5114/image/TWIML_COVER_800x800_SB3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Sebastien Bubeck, a senior principal research manager at Microsoft and author of the paper A Universal Law of Robustness via Isoperimetry, a NeurIPS 2021 Outstanding Paper Award recipient. We begin our conversation with Sebastien with a bit of a primer on convex optimization, a topic that hasn’t come up much in previous interviews. We explore the problem that convex optimization is trying to solve, and the application of convex optimization to multi-armed bandit problems, metrical task systems, and the K-server problem. We then dig into Sebastien’s paper, which looks to prove that for a broad class of data distributions and model classes, overparameterization is necessary if one wants to interpolate the data. Finally, we discuss the relationship between the paper and the work being done in the adversarial robustness community.

The complete show notes for this episode can be found at twimlai.com/go/551</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Sebastien Bubeck, a senior principal research manager at Microsoft and author of the paper <a href="https://openreview.net/forum?id=z71OSKqTFh7">A Universal Law of Robustness via Isoperimetry</a>, a NeurIPS 2021 Outstanding Paper Award recipient. We begin our conversation with Sebastien with a bit of a primer on convex optimization, a topic that hasn’t come up much in previous interviews. We explore the problem that convex optimization is trying to solve, and the application of convex optimization to multi-armed bandit problems, metrical task systems, and the K-server problem. We then dig into Sebastien’s paper, which looks to prove that for a broad class of data distributions and model classes, overparameterization is necessary if one wants to interpolate the data. Finally, we discuss the relationship between the paper and the work being done in the adversarial robustness community.</p><p><br></p><p>The complete show notes for this episode can be found at twimlai.com/go/551</p>]]>
      </content:encoded>
      <itunes:duration>2344</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[473e2380-7235-11ec-b316-57b5d5fb5114]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4474024742.mp3?updated=1641836796"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in NLP with John Bohannon - #550</title>
      <link>https://twimlai.com/trends-in-nlp-with-john-bohannon</link>
      <description>Today we’re joined by friend of the show John Bohannon, the director of science at Primer AI, to help us showcase all of the great achievements and accomplishments in NLP in 2021! In our conversation, John shares his two major takeaways from last year: 1) NLP as we know it has changed, and we’re back in the incremental phase of the science, and 2) NLP is “eating” the rest of machine learning. We explore the implications of these two major themes across the discipline, as well as best papers, up-and-coming startups, great things that did happen, and even a few bad things that didn’t. Finally, we explore what 2022 and beyond will look like for NLP, from multilingual NLP to use cases for the influx of large auto-regressive language models like GPT-3 and others, as well as the ethical implications that are reverberating across domains and the changes they have ushered in.

The complete show notes for this episode can be found at twimlai.com/go/550</description>
      <pubDate>Thu, 06 Jan 2022 18:07:55 -0000</pubDate>
      <itunes:title>Trends in NLP with John Bohannon</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>550</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ac1448d2-6f17-11ec-899b-7f1f4f7f0f1f/image/TWIML_COVER_800x800_JB3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by friend of the show John Bohannon, the director of science at Primer AI, to help us showcase all of the great achievements and accomplishments in NLP in 2021! In our conversation, John shares his two major takeaways from last year: 1) NLP as we know it has changed, and we’re back into the incremental phase of the science, and 2) NLP is “eating” the rest of machine learning. We explore the implications of these two major themes across the discipline, as well as best papers, up-and-coming startups, great things that did happen, and even a few bad things that didn’t. Finally, we explore what 2022 and beyond will look like for NLP, from multilingual NLP to use cases for the influx of large auto-regressive language models like GPT-3 and others, as well as the ethical implications that are reverberating across domains and the changes that have been ushered in as a result.

The complete show notes for this episode can be found at twimlai.com/go/550</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by friend of the show John Bohannon, the director of science at Primer AI, to help us showcase all of the great achievements and accomplishments in NLP in 2021! In our conversation, John shares his two major takeaways from last year: 1) NLP as we know it has changed, and we’re back into the incremental phase of the science, and 2) NLP is “eating” the rest of machine learning. We explore the implications of these two major themes across the discipline, as well as best papers, up-and-coming startups, great things that <strong>did</strong> happen, and even a few bad things that <strong>didn’t</strong>. Finally, we explore what 2022 and beyond will look like for NLP, from multilingual NLP to use cases for the influx of large auto-regressive language models like GPT-3 and others, as well as the ethical implications that are reverberating across domains and the changes that have been ushered in as a result.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/550">twimlai.com/go/550</a></p>]]>
      </content:encoded>
      <itunes:duration>4706</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ac1448d2-6f17-11ec-899b-7f1f4f7f0f1f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6865397821.mp3?updated=1641491337"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Computer Vision with Georgia Gkioxari - #549</title>
      <link>https://twimlai.com/trends-in-computer-vision-with-georgia-gkioxari</link>
      <description>Happy New Year! We’re excited to kick off 2022 joined by Georgia Gkioxari, a research scientist at Meta AI, to showcase the best advances in the field of computer vision over the past 12 months, and what the future holds for this domain. 
Welcome back to AI Rewind!
In our conversation, Georgia highlights the emergence of the transformer model in CV research, what kind of performance results we’re seeing vs CNNs, and the immediate impact of NeRF, amongst a host of other great research. We also explore ImageNet’s place in the current landscape, and whether it's time to make big changes to push the boundaries of what is possible with image, video and even 3D data, with challenges like the Metaverse, amongst others, on the horizon. Finally, we touch on the startups to keep an eye on, the collaborative efforts of software and hardware researchers, and the vibe of the “ImageNet moment” being upon us once again.
The complete show notes for this episode can be found at twimlai.com/go/549</description>
      <pubDate>Mon, 03 Jan 2022 20:09:00 -0000</pubDate>
      <itunes:title>Trends in Computer Vision with Georgia Gkioxari</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>549</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ca95e0ce-6cc2-11ec-91bc-fb5c84eb9e0f/image/TWIML_COVER_800x800_GG2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Happy New Year! We’re excited to kick off 2022 joined by Georgia Gkioxari, a research scientist at Meta AI, to showcase the best advances in the field of computer vision over the past 12 months, and what the future holds for this domain. 
Welcome back to AI Rewind!
In our conversation, Georgia highlights the emergence of the transformer model in CV research, what kind of performance results we’re seeing vs CNNs, and the immediate impact of NeRF, amongst a host of other great research. We also explore ImageNet’s place in the current landscape, and whether it's time to make big changes to push the boundaries of what is possible with image, video and even 3D data, with challenges like the Metaverse, amongst others, on the horizon. Finally, we touch on the startups to keep an eye on, the collaborative efforts of software and hardware researchers, and the vibe of the “ImageNet moment” being upon us once again.
The complete show notes for this episode can be found at twimlai.com/go/549</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Happy New Year! We’re excited to kick off 2022 joined by Georgia Gkioxari, a research scientist at Meta AI, to showcase the best advances in the field of computer vision over the past 12 months, and what the future holds for this domain. </p><p>Welcome back to AI Rewind!</p><p>In our conversation, Georgia highlights the emergence of the transformer model in CV research, what kind of performance results we’re seeing vs CNNs, and the immediate impact of NeRF, amongst a host of other great research. We also explore ImageNet’s place in the current landscape, and whether it's time to make big changes to push the boundaries of what is possible with image, video and even 3D data, with challenges like the Metaverse, amongst others, on the horizon. Finally, we touch on the startups to keep an eye on, the collaborative efforts of software and hardware researchers, and the vibe of the “ImageNet moment” being upon us once again.</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/549">twimlai.com/go/549</a></p>]]>
      </content:encoded>
      <itunes:duration>3497</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ca95e0ce-6cc2-11ec-91bc-fb5c84eb9e0f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6045719325.mp3?updated=1641235093"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Kids Run the Darndest Experiments: Causal Learning in Children with Alison Gopnik - #548</title>
      <link>https://twimlai.com/kids-run-the-darndest-experiments-causal-learning-in-children-with-alison-gopnik</link>
      <description>Today we close out the 2021 NeurIPS series joined by Alison Gopnik, a professor at UC Berkeley and an invited speaker at the Causal Inference &amp; Machine Learning: Why now? Workshop. In our conversation with Alison, we explore the question, “how is it that we can know so much about the world around us from so little information?” and how her background in psychology, philosophy, and epistemology has guided her along the path to finding this answer through the actions of children. We discuss the role of causality as a means to extract representations of the world and how the “theory theory” came about, and how it was demonstrated to have merit. We also explore the complexity of causal relationships that children are able to deal with and what that can tell us about our current ML models, how the training and inference stages of the ML lifecycle are akin to childhood and adulthood, and much more!
The complete show notes for this episode can be found at twimlai.com/go/548</description>
      <pubDate>Mon, 27 Dec 2021 17:10:00 -0000</pubDate>
      <itunes:title>Kids Run the Darndest Experiments: Causal Learning in Children with Alison Gopnik</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>548</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/1920c30e-64c5-11ec-ac22-eb223aaad2ce/image/TWIML_COVER_800x800_AG3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we close out the 2021 NeurIPS series joined by Alison Gopnik, a professor at UC Berkeley and an invited speaker at the Causal Inference &amp; Machine Learning: Why now? Workshop. In our conversation with Alison, we explore the question, “how is it that we can know so much about the world around us from so little information?” and how her background in psychology, philosophy, and epistemology has guided her along the path to finding this answer through the actions of children. We discuss the role of causality as a means to extract representations of the world and how the “theory theory” came about, and how it was demonstrated to have merit. We also explore the complexity of causal relationships that children are able to deal with and what that can tell us about our current ML models, how the training and inference stages of the ML lifecycle are akin to childhood and adulthood, and much more!
The complete show notes for this episode can be found at twimlai.com/go/548</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we close out the 2021 NeurIPS series joined by Alison Gopnik, a professor at UC Berkeley and an invited speaker at the <em>Causal Inference &amp; Machine Learning: Why now? Workshop. </em>In our conversation with Alison, we explore the question, “<strong><em>how is it that we can know so much about the world around us from so little information?”</em></strong> and how her background in psychology, philosophy, and epistemology has guided her along the path to finding this answer through the actions of children. We discuss the role of causality as a means to extract representations of the world and how the “theory theory” came about, and how it was demonstrated to have merit. We also explore the complexity of causal relationships that children are able to deal with and what that can tell us about our current ML models, how the training and inference stages of the ML lifecycle are akin to childhood and adulthood, and much more!</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/548">twimlai.com/go/548</a></p>]]>
      </content:encoded>
      <itunes:duration>2214</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1920c30e-64c5-11ec-ac22-eb223aaad2ce]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9628929497.mp3?updated=1640368884"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Hypergraphs, Simplicial Complexes and Graph Representations of Complex Systems with Tina Eliassi-Rad - #547</title>
      <link>https://twimlai.com/hypergraphs-simplicial-complexes-and-graph-representations-of-complex-systems-with-tina-eliassi-rad</link>
      <description>Today we continue our NeurIPS coverage joined by Tina Eliassi-Rad, a professor at Northeastern University, and an invited speaker at the I Still Can't Believe It's Not Better! Workshop. In our conversation with Tina, we explore her research at the intersection of network science, complex networks, and machine learning, how graphs are used in her work, and how that differs from typical graph machine learning use cases. We also discuss her talk from the workshop, “The Why, How, and When of Representations for Complex Systems”, in which Tina argues that one of the reasons practitioners have struggled to model complex systems is the lack of connection to the data sourcing and generation process. This is definitely a NERD ALERT approved interview!

The complete show notes for this episode can be found at twimlai.com/go/547</description>
      <pubDate>Thu, 23 Dec 2021 17:46:34 -0000</pubDate>
      <itunes:title>Hypergraphs, Simplicial Complexes and Graph Representations of Complex Systems with Tina Eliassi-Rad</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>547</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ff3ddd26-6412-11ec-84ee-fbd59697abd4/image/TWIML_COVER_800x800_TER.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we continue our NeurIPS coverage joined by Tina Eliassi-Rad, a professor at Northeastern University, and an invited speaker at the I Still Can't Believe It's Not Better! Workshop. In our conversation with Tina, we explore her research at the intersection of network science, complex networks, and machine learning, how graphs are used in her work, and how that differs from typical graph machine learning use cases. We also discuss her talk from the workshop, “The Why, How, and When of Representations for Complex Systems”, in which Tina argues that one of the reasons practitioners have struggled to model complex systems is the lack of connection to the data sourcing and generation process. This is definitely a NERD ALERT approved interview!

The complete show notes for this episode can be found at twimlai.com/go/547</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our NeurIPS coverage joined by Tina Eliassi-Rad, a professor at Northeastern University, and an invited speaker at the <a href="https://i-cant-believe-its-not-better.github.io/neurips2021/">I Still Can't Believe It's Not Better! Workshop</a>. In our conversation with Tina, we explore her research at the intersection of network science, complex networks, and machine learning, how graphs are used in her work, and how that differs from typical graph machine learning use cases. We also discuss her talk from the workshop, “<em>The Why, How, and When of Representations for Complex Systems</em>”, in which Tina argues that one of the reasons practitioners have struggled to model complex systems is the lack of connection to the data sourcing and generation process. This is definitely a NERD ALERT approved interview!</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/547">twimlai.com/go/547</a></p>]]>
      </content:encoded>
      <itunes:duration>2140</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ff3ddd26-6412-11ec-84ee-fbd59697abd4]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4465302159.mp3?updated=1640279981"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Learning, Transformers, and the Consequences of Scale with Oriol Vinyals - #546</title>
      <link>https://twimlai.com/deep-learning-transformers-and-the-consequences-of-scale-with-oriol-vinyals</link>
      <description>Today we’re excited to kick off our annual NeurIPS series, joined by Oriol Vinyals, the lead of the deep learning team at DeepMind. We cover a lot of ground in our conversation with Oriol, beginning with a look at his research agenda and why its scope has remained wide even as the field has matured, his thoughts on transformer models and whether they will get us beyond the current state of DL, or whether some other model architecture would be more advantageous. We also touch on his thoughts on the large language model craze, before jumping into his recent paper StarCraft II Unplugged: Large Scale Offline Reinforcement Learning, a follow-up to their popular AlphaStar work from a few years ago. Finally, we discuss the degree to which the work that DeepMind and others are doing around games actually translates into real-world, non-game scenarios, recent work on multimodal few-shot learning, and we close with a discussion of the consequences of the level of scale that we’ve achieved thus far.

The complete show notes for this episode can be found at twimlai.com/go/546</description>
      <pubDate>Mon, 20 Dec 2021 16:29:33 -0000</pubDate>
      <itunes:title>Deep Learning, Transformers, and the Consequences of Scale with Oriol Vinyals</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>546</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/27d24614-61b1-11ec-8b02-738e831bf079/image/TWIML_COVER_800x800_OV.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re excited to kick off our annual NeurIPS series, joined by Oriol Vinyals, the lead of the deep learning team at DeepMind. We cover a lot of ground in our conversation with Oriol, beginning with a look at his research agenda and why its scope has remained wide even as the field has matured, his thoughts on transformer models and whether they will get us beyond the current state of DL, or whether some other model architecture would be more advantageous. We also touch on his thoughts on the large language model craze, before jumping into his recent paper StarCraft II Unplugged: Large Scale Offline Reinforcement Learning, a follow-up to their popular AlphaStar work from a few years ago. Finally, we discuss the degree to which the work that DeepMind and others are doing around games actually translates into real-world, non-game scenarios, recent work on multimodal few-shot learning, and we close with a discussion of the consequences of the level of scale that we’ve achieved thus far.

The complete show notes for this episode can be found at twimlai.com/go/546</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re excited to kick off our annual NeurIPS series, joined by Oriol Vinyals, the lead of the deep learning team at DeepMind. We cover a lot of ground in our conversation with Oriol, beginning with a look at his research agenda and why its scope has remained wide even as the field has matured, his thoughts on transformer models and whether they will get us beyond the current state of DL, or whether some other model architecture would be more advantageous. We also touch on his thoughts on the large language model craze, before jumping into his recent paper <a href="https://neurips.cc/Conferences/2021/ScheduleMultitrack?event=35708">StarCraft II Unplugged: Large Scale Offline Reinforcement Learning</a>, a follow-up to their popular AlphaStar work from a few years ago. Finally, we discuss the degree to which the work that DeepMind and others are doing around games actually translates into real-world, non-game scenarios, recent work on multimodal few-shot learning, and we close with a discussion of the consequences of the level of scale that we’ve achieved thus far.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/546">twimlai.com/go/546</a></p>]]>
      </content:encoded>
      <itunes:duration>3163</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[27d24614-61b1-11ec-8b02-738e831bf079]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8066118479.mp3?updated=1640018051"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Optimization, Machine Learning and Intelligent Experimentation with Michael McCourt - #545</title>
      <link>https://twimlai.com/optimization-machine-learning-and-intelligent-experimentation-with-michael-mccourt</link>
      <description>Today we’re joined by Michael McCourt, the head of engineering at SigOpt. In our conversation with Michael, we explore the vast space around the topic of optimization, including the technical differences between ML and optimization and where they’re applied, what the path to increasing complexity looks like for a practitioner, and the relationship between optimization and active learning. We also discuss the research frontier for optimization and how folks think about the interesting challenges and open questions for this field, how optimization approaches appeared at the latest NeurIPS conference, and Mike’s excitement for the emergence of interdisciplinary work between the machine learning community and other fields like the natural sciences.
The complete show notes for this episode can be found at twimlai.com/go/545</description>
      <pubDate>Thu, 16 Dec 2021 17:49:07 -0000</pubDate>
      <itunes:title>Optimization, Machine Learning and Intelligent Experimentation with Michael McCourt</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>545</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d8ed3cca-5e90-11ec-9f73-43f0d5a8d2b3/image/TWIML_COVER_800x800_MM2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Michael McCourt, the head of engineering at SigOpt. In our conversation with Michael, we explore the vast space around the topic of optimization, including the technical differences between ML and optimization and where they’re applied, what the path to increasing complexity looks like for a practitioner, and the relationship between optimization and active learning. We also discuss the research frontier for optimization and how folks think about the interesting challenges and open questions for this field, how optimization approaches appeared at the latest NeurIPS conference, and Mike’s excitement for the emergence of interdisciplinary work between the machine learning community and other fields like the natural sciences.
The complete show notes for this episode can be found at twimlai.com/go/545</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Michael McCourt, the head of engineering at SigOpt. In our conversation with Michael, we explore the vast space around the topic of optimization, including the technical differences between ML and optimization and where they’re applied, what the path to increasing complexity looks like for a practitioner, and the relationship between optimization and active learning. We also discuss the research frontier for optimization and how folks think about the interesting challenges and open questions for this field, how optimization approaches appeared at the latest NeurIPS conference, and Mike’s excitement for the emergence of interdisciplinary work between the machine learning community and other fields like the natural sciences.</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/545">twimlai.com/go/545</a></p>]]>
      </content:encoded>
      <itunes:duration>2757</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d8ed3cca-5e90-11ec-9f73-43f0d5a8d2b3]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8341176772.mp3?updated=1639674802"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Jupyter and the Evolution of ML Tooling with Brian Granger - #544</title>
      <link>https://twimlai.com/jupyter-and-the-evolution-of-ml-tooling-with-brian-granger</link>
      <description>Today we conclude our AWS re:Invent coverage joined by Brian Granger, a senior principal technologist at Amazon Web Services, and a co-creator of Project Jupyter. In our conversation with Brian, we discuss the inception and early vision of Project Jupyter, including how the explosion of machine learning and deep learning shifted the landscape for the notebook, and how they balanced the needs of these new user bases vs their existing community of scientific computing users. We also explore AWS’s role with Jupyter and why they’ve decided to invest resources in the project, Brian's thoughts on the broader ML tooling space, and how they’ve applied (and the impact of) HCI principles to the building of these tools. Finally, we dig into the recent SageMaker Canvas and Studio Lab releases and Brian’s perspective on the future of notebooks and the Jupyter community at large.
The complete show notes for this episode can be found at twimlai.com/go/544</description>
      <pubDate>Mon, 13 Dec 2021 17:00:00 -0000</pubDate>
      <itunes:title>Jupyter and the Evolution of ML Tooling with Brian Granger</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>544</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/8fcd2492-5c2e-11ec-9d2c-4f4ff466eeb3/image/TWIML_COVER_800x800_BG2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we conclude our AWS re:Invent coverage joined by Brian Granger, a senior principal technologist at Amazon Web Services, and a co-creator of Project Jupyter. In our conversation with Brian, we discuss the inception and early vision of Project Jupyter, including how the explosion of machine learning and deep learning shifted the landscape for the notebook, and how they balanced the needs of these new user bases vs their existing community of scientific computing users. We also explore AWS’s role with Jupyter and why they’ve decided to invest resources in the project, Brian's thoughts on the broader ML tooling space, and how they’ve applied (and the impact of) HCI principles to the building of these tools. Finally, we dig into the recent SageMaker Canvas and Studio Lab releases and Brian’s perspective on the future of notebooks and the Jupyter community at large.
The complete show notes for this episode can be found at twimlai.com/go/544</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we conclude our AWS re:Invent coverage joined by Brian Granger, a senior principal technologist at Amazon Web Services, and a co-creator of Project Jupyter. In our conversation with Brian, we discuss the inception and early vision of Project Jupyter, including how the explosion of machine learning and deep learning shifted the landscape for the notebook, and how they balanced the needs of these new user bases vs their existing community of scientific computing users. We also explore AWS’s role with Jupyter and why they’ve decided to invest resources in the project, Brian's thoughts on the broader ML tooling space, and how they’ve applied (and the impact of) HCI principles to the building of these tools. Finally, we dig into the recent SageMaker Canvas and Studio Lab releases and Brian’s perspective on the future of notebooks and the Jupyter community at large.</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/544">twimlai.com/go/544</a></p>]]>
      </content:encoded>
      <itunes:duration>3429</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[8fcd2492-5c2e-11ec-9d2c-4f4ff466eeb3]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8689987287.mp3?updated=1639412134"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Creating a Data-Driven Culture at ADP with Jack Berkowitz - #543</title>
      <link>https://twimlai.com/creating-a-data-driven-culture-at-adp-with-jack-berkowitz</link>
      <description>Today we continue our 2021 re:Invent series joined by Jack Berkowitz, chief data officer at ADP. In our conversation with Jack, we explore the ever-evolving role and growth of machine learning at the company, from the evolution of their ML platform to the unique team structure. We discuss Jack’s perspective on data governance, the broad use cases for ML, how they approached the decision to move to the cloud, and the impact of scale in the way they deal with data. Finally, we touch on where innovation comes from at ADP, and the challenge of getting the talent it needs to innovate as a large “legacy” company.

The complete show notes for this episode can be found at twimlai.com/go/543</description>
      <pubDate>Thu, 09 Dec 2021 16:46:00 -0000</pubDate>
      <itunes:title>Creating a Data-Driven Culture at ADP with Jack Berkowitz</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>543</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/fcb5480a-5904-11ec-928b-bfb9249deb0d/image/TWIML_COVER_800x800_JB2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we continue our 2021 re:Invent series joined by Jack Berkowitz, chief data officer at ADP. In our conversation with Jack, we explore the ever-evolving role and growth of machine learning at the company, from the evolution of their ML platform to the unique team structure. We discuss Jack’s perspective on data governance, the broad use cases for ML, how they approached the decision to move to the cloud, and the impact of scale in the way they deal with data. Finally, we touch on where innovation comes from at ADP, and the challenge of getting the talent it needs to innovate as a large “legacy” company.

The complete show notes for this episode can be found at twimlai.com/go/543</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our 2021 re:Invent series joined by Jack Berkowitz, chief data officer at ADP. In our conversation with Jack, we explore the ever-evolving role and growth of machine learning at the company, from the evolution of their ML platform to the unique team structure. We discuss Jack’s perspective on data governance, the broad use cases for ML, how they approached the decision to move to the cloud, and the impact of scale in the way they deal with data. Finally, we touch on where innovation comes from at ADP, and the challenge of getting the talent it needs to innovate as a large “legacy” company.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/543">twimlai.com/go/543</a></p>]]>
      </content:encoded>
      <itunes:duration>2084</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fcb5480a-5904-11ec-928b-bfb9249deb0d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2746439881.mp3?updated=1639172238"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>re:Invent Roundup 2021 with Bratin Saha - #542</title>
      <link>http://twimlai.com/reinvent-roundup-2021-with-bratin-saha</link>
      <description>Today we’re joined by Bratin Saha, vice president and general manager at Amazon.
In our conversation with Bratin, we discuss quite a few of the recent ML-focused announcements coming out of last week’s re:Invent conference, including new products like Canvas and Studio Lab, as well as upgrades to existing services like Ground Truth Plus. We explore what no-code environments like the aforementioned Canvas mean for the democratization of ML tooling, and some of the key challenges to delivering it as a consumable product. We also discuss industrialization as a subset of MLOps, how customer patterns inform the creation of these tools, and much more!

The complete show notes for this episode can be found at twimlai.com/go/542.</description>
      <pubDate>Mon, 06 Dec 2021 18:33:35 -0000</pubDate>
      <itunes:title>re:Invent Roundup 2021 with Bratin Saha</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>542</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/e12bd87c-56b8-11ec-9aa2-3fa9bf4c220a/image/TWIML_COVER_800x800_BS3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Bratin Saha, vice president and general manager at Amazon.
In our conversation with Bratin, we discuss quite a few of the recent ML-focused announcements coming out of last week’s re:Invent conference, including new products like Canvas and Studio Lab, as well as upgrades to existing services like Ground Truth Plus. We explore what no-code environments like the aforementioned Canvas mean for the democratization of ML tooling, and some of the key challenges to delivering it as a consumable product. We also discuss industrialization as a subset of MLOps, how customer patterns inform the creation of these tools, and much more!

The complete show notes for this episode can be found at twimlai.com/go/542.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Bratin Saha, vice president and general manager at Amazon.</p><p>In our conversation with Bratin, we discuss quite a few of the recent ML-focused announcements coming out of last week’s re:Invent conference, including new products like Canvas and Studio Lab, as well as upgrades to existing services like Ground Truth Plus. We explore what no-code environments like the aforementioned Canvas mean for the democratization of ML tooling, and some of the key challenges to delivering it as a consumable product. We also discuss industrialization as a subset of MLOps, how customer patterns inform the creation of these tools, and much more!</p><p><br></p><p>The complete show notes for this episode can be found at twimlai.com/go/542.</p>]]>
      </content:encoded>
      <itunes:duration>2494</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e12bd87c-56b8-11ec-9aa2-3fa9bf4c220a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4464933497.mp3?updated=1638814479"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Multi-modal Deep Learning for Complex Document Understanding with Doug Burdick - #541</title>
      <link>https://twimlai.com/multi-modal-deep-learning-for-complex-document-understanding-with-doug-burdick</link>
      <description>Today we’re joined by Doug Burdick, a principal research staff member at IBM Research. In a recent interview, Doug’s colleague Yunyao Li joined us to talk through some of the broader enterprise NLP problems she’s working on. One of those problems is making documents machine consumable, especially with the traditionally archival file type, the PDF. That’s where Doug and his team come in.
In our conversation, we discuss the multimodal approach they’ve taken to identify, interpret, contextualize and extract things like tables from a document, the challenges they’ve faced when dealing with tables, and how they evaluate the performance of models on them. We also explore how he’s handled generalizing across different formats, how much fine-tuning is required to be effective, the problems that appear on the NLP side of things, and how deep learning models are being leveraged within the group.
The complete show notes for this episode can be found at twimlai.com/go/541</description>
      <pubDate>Thu, 02 Dec 2021 16:31:39 -0000</pubDate>
      <itunes:title>Multi-modal Deep Learning for Complex Document Understanding with Doug Burdick</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>541</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/89b906de-537d-11ec-80c8-4719b6ad8285/image/TWIML_COVER_800x800_DB.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Doug Burdick, a principal research staff member at IBM Research. In a recent interview, Doug’s colleague Yunyao Li joined us to talk through some of the broader enterprise NLP problems she’s working on. One of those problems is making documents machine consumable, especially with the traditionally archival file type, the PDF. That’s where Doug and his team come in.
In our conversation, we discuss the multimodal approach they’ve taken to identify, interpret, contextualize and extract things like tables from a document, the challenges they’ve faced when dealing with tables, and how they evaluate the performance of models on them. We also explore how he’s handled generalizing across different formats, how much fine-tuning is required to be effective, the problems that appear on the NLP side of things, and how deep learning models are being leveraged within the group.
The complete show notes for this episode can be found at twimlai.com/go/541</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Doug Burdick, a principal research staff member at IBM Research. In a recent interview, Doug’s colleague Yunyao Li joined us to talk through some of the broader enterprise NLP problems she’s working on. One of those problems is making documents machine consumable, especially with the traditionally archival file type, the PDF. That’s where Doug and his team come in.</p><p>In our conversation, we discuss the multimodal approach they’ve taken to identify, interpret, contextualize and extract things like tables from a document, the challenges they’ve faced when dealing with tables, and how they evaluate the performance of models on them. We also explore how he’s handled generalizing across different formats, how much fine-tuning is required to be effective, the problems that appear on the NLP side of things, and how deep learning models are being leveraged within the group.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/541">twimlai.com/go/541</a></p>]]>
      </content:encoded>
      <itunes:duration>2732</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[89b906de-537d-11ec-80c8-4719b6ad8285]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7086620209.mp3?updated=1638463010"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Predictive Maintenance Using Deep Learning and Reliability Engineering with Shayan Mortazavi - #540</title>
      <link>https://twimlai.com/predictive-maintenance-using-deep-learning-and-reliability-engineering-with-shayan-mortazavi</link>
      <description>Today we’re joined by Shayan Mortazavi, a data science manager at Accenture. 
In our conversation with Shayan, we discuss his talk from the recent SigOpt HPC &amp; AI Summit, titled A Novel Framework for Predictive Maintenance Using DL and Reliability Engineering. In the talk, Shayan proposes a novel deep learning-based approach for prognosis prediction of oil and gas plant equipment in an effort to prevent critical damage or failure. We explore the evolution of reliability engineering, the decision to use a residual-based approach rather than traditional anomaly detection to determine when an anomaly is occurring, the challenges of using LSTMs when building these models, the amount of human labeling required to build the models, and much more!
The complete show notes for this episode can be found at twimlai.com/go/540</description>
      <pubDate>Mon, 29 Nov 2021 18:58:41 -0000</pubDate>
      <itunes:title>Predictive Maintenance Using Deep Learning and Reliability Engineering with Shayan Mortazavi</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>540</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/8e28c1f8-5131-11ec-9d9d-ab8430aca506/image/TWIML_COVER_800x800_SM3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Shayan Mortazavi, a data science manager at Accenture. 
In our conversation with Shayan, we discuss his talk from the recent SigOpt HPC &amp; AI Summit, titled A Novel Framework for Predictive Maintenance Using DL and Reliability Engineering. In the talk, Shayan proposes a novel deep learning-based approach for prognosis prediction of oil and gas plant equipment in an effort to prevent critical damage or failure. We explore the evolution of reliability engineering, the decision to use a residual-based approach rather than traditional anomaly detection to determine when an anomaly is occurring, the challenges of using LSTMs when building these models, the amount of human labeling required to build the models, and much more!
The complete show notes for this episode can be found at twimlai.com/go/540</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Shayan Mortazavi, a data science manager at Accenture. </p><p>In our conversation with Shayan, we discuss his talk from the recent SigOpt HPC &amp; AI Summit, titled A Novel Framework for Predictive Maintenance Using DL and Reliability Engineering. In the talk, Shayan proposes a novel deep learning-based approach for prognosis prediction of oil and gas plant equipment in an effort to prevent critical damage or failure. We explore the evolution of reliability engineering, the decision to use a residual-based approach rather than traditional anomaly detection to determine when an anomaly is occurring, the challenges of using LSTMs when building these models, the amount of human labeling required to build the models, and much more!</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/540">twimlai.com/go/540</a></p>]]>
      </content:encoded>
      <itunes:duration>2941</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[8e28c1f8-5131-11ec-9d9d-ab8430aca506]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9126478953.mp3?updated=1638204336"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building a Deep Tech Startup in NLP with Nasrin Mostafazadeh - #539</title>
      <link>https://twimlai.com/building-a-deep-tech-startup-in-nlp-with-nasrin-mostafazadeh</link>
      <description>Today we’re joined by friend-of-the-show Nasrin Mostafazadeh, co-founder of Verneek. 
Though Verneek is still in stealth, Nasrin was gracious enough to share a bit about the company, including their goal of enabling anyone to make data-informed decisions without the need for a technical background, through the use of innovative human-machine interfaces. In our conversation, we explore the state of AI research in the domains relevant to the problem they’re trying to solve and how they use those insights to inform and prioritize their research agenda. We also discuss what advice Nasrin would give to someone thinking about starting a deep tech startup or going from research to product development. 
The complete show notes for today’s show can be found at twimlai.com/go/539.</description>
      <pubDate>Wed, 24 Nov 2021 17:17:27 -0000</pubDate>
      <itunes:title>Building a Deep Tech Startup in NLP with Nasrin Mostafazadeh</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>539</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/90aa51d2-4d3d-11ec-8ea4-cf0b586ab568/image/TWIML_COVER_800x800_NM2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by friend-of-the-show Nasrin Mostafazadeh, co-founder of Verneek. 
Though Verneek is still in stealth, Nasrin was gracious enough to share a bit about the company, including their goal of enabling anyone to make data-informed decisions without the need for a technical background, through the use of innovative human-machine interfaces. In our conversation, we explore the state of AI research in the domains relevant to the problem they’re trying to solve and how they use those insights to inform and prioritize their research agenda. We also discuss what advice Nasrin would give to someone thinking about starting a deep tech startup or going from research to product development. 
The complete show notes for today’s show can be found at twimlai.com/go/539.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by friend-of-the-show Nasrin Mostafazadeh, co-founder of Verneek. </p><p>Though Verneek is still in stealth, Nasrin was gracious enough to share a bit about the company, including their goal of enabling anyone to make data-informed decisions without the need for a technical background, through the use of innovative human-machine interfaces. In our conversation, we explore the state of AI research in the domains relevant to the problem they’re trying to solve and how they use those insights to inform and prioritize their research agenda. We also discuss what advice Nasrin would give to someone thinking about starting a deep tech startup or going from research to product development. </p><p>The complete show notes for today’s show can be found at <a href="https://twimlai.com/go/539">twimlai.com/go/539</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3080</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[90aa51d2-4d3d-11ec-8ea4-cf0b586ab568]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2316771707.mp3?updated=1637772968"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Models for Human-Robot Collaboration with Julie Shah - #538</title>
      <link>https://twimlai.com/models-for-human-robot-collaboration-with-julie-shah</link>
      <description>Today we’re joined by Julie Shah, a professor at the Massachusetts Institute of Technology (MIT). Julie’s work lies at the intersection of aeronautics, astronautics, and robotics, with a specific focus on collaborative and interactive robotics. In our conversation, we explore how robots would achieve the ability to predict what their human collaborators are thinking, what the process of building knowledge into these systems looks like, and her big picture idea of developing a field robot that doesn’t “require a human to be a robot” to work with it. We also discuss work Julie has done on cross-training between humans and robots with the focus on getting them to co-learn how to work together, as well as future projects that she’s excited about.

The complete show notes for this episode can be found at twimlai.com/go/538.</description>
      <pubDate>Mon, 22 Nov 2021 19:07:30 -0000</pubDate>
      <itunes:title>Models for Human-Robot Collaboration with Julie Shah</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>538</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/9a592886-4bb0-11ec-8e96-6bb0ace874c6/image/TWIML_COVER_800x800_JS3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Julie Shah, a professor at the Massachusetts Institute of Technology (MIT). Julie’s work lies at the intersection of aeronautics, astronautics, and robotics, with a specific focus on collaborative and interactive robotics. In our conversation, we explore how robots would achieve the ability to predict what their human collaborators are thinking, what the process of building knowledge into these systems looks like, and her big picture idea of developing a field robot that doesn’t “require a human to be a robot” to work with it. We also discuss work Julie has done on cross-training between humans and robots with the focus on getting them to co-learn how to work together, as well as future projects that she’s excited about.

The complete show notes for this episode can be found at twimlai.com/go/538.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Julie Shah, a professor at the Massachusetts Institute of Technology (MIT). Julie’s work lies at the intersection of aeronautics, astronautics, and robotics, with a specific focus on collaborative and interactive robotics. In our conversation, we explore how robots might achieve the ability to predict what their human collaborators are thinking, what the process of building knowledge into these systems looks like, and her big picture idea of developing a field robot that doesn’t “require a human to be a robot” to work with it. We also discuss work Julie has done on cross-training between humans and robots with the focus on getting them to co-learn how to work together, as well as future projects that she’s excited about.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/538">twimlai.com/go/538</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2531</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9a592886-4bb0-11ec-8e96-6bb0ace874c6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7932649064.mp3?updated=1637608300"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Four Key Tools for Robust Enterprise NLP with Yunyao Li - #537</title>
      <link>https://twimlai.com/four-key-tools-for-robust-enterprise-nlp-with-yunyao-li</link>
      <description>Today we’re joined by Yunyao Li, a senior research manager at IBM Research. 
Yunyao is in a somewhat unique position at IBM, addressing the challenges of enterprise NLP in a traditional research environment while also having customer engagement responsibilities. In our conversation with Yunyao, we explore the challenges associated with productizing NLP in the enterprise, and whether she focuses on solving these problems independently of one another or through a more unified approach. 
We then ground the conversation with real-world examples of these enterprise challenges, including enabling document-level discovery at scale using combinations of techniques like deep neural networks and supervised and/or unsupervised learning, and entity extraction and semantic parsing to identify text. Finally, we talk through data augmentation in the context of NLP, and how we enable humans in the loop to generate high-quality data.
The complete show notes for this episode can be found at twimlai.com/go/537</description>
      <pubDate>Thu, 18 Nov 2021 18:29:49 -0000</pubDate>
      <itunes:title>Four Key Tools for Robust Enterprise NLP with Yunyao Li</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>537</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/907241a4-488b-11ec-a926-9bcb15a8c105/image/TWIML_COVER_800x800_YL.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Yunyao Li, a senior research manager at IBM Research. 
Yunyao is in a somewhat unique position at IBM, addressing the challenges of enterprise NLP in a traditional research environment while also having customer engagement responsibilities. In our conversation with Yunyao, we explore the challenges associated with productizing NLP in the enterprise, and whether she focuses on solving these problems independently of one another or through a more unified approach. 
We then ground the conversation with real-world examples of these enterprise challenges, including enabling document-level discovery at scale using combinations of techniques like deep neural networks and supervised and/or unsupervised learning, and entity extraction and semantic parsing to identify text. Finally, we talk through data augmentation in the context of NLP, and how we enable humans in the loop to generate high-quality data.
The complete show notes for this episode can be found at twimlai.com/go/537</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Yunyao Li, a senior research manager at IBM Research. </p><p>Yunyao is in a somewhat unique position at IBM, addressing the challenges of enterprise NLP in a traditional research environment while also having customer engagement responsibilities. In our conversation with Yunyao, we explore the challenges associated with productizing NLP in the enterprise, and whether she focuses on solving these problems independently of one another or through a more unified approach. </p><p>We then ground the conversation with real-world examples of these enterprise challenges, including enabling document-level discovery at scale using combinations of techniques like deep neural networks and supervised and/or unsupervised learning, and entity extraction and semantic parsing to identify text. Finally, we talk through data augmentation in the context of NLP, and how we enable humans in the loop to generate high-quality data.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/537">twimlai.com/go/537</a></p>]]>
      </content:encoded>
      <itunes:duration>3481</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[907241a4-488b-11ec-a926-9bcb15a8c105]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4167430111.mp3?updated=1637254403"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Machine Learning at GSK with Kim Branson - #536</title>
      <link>https://twimlai.com/machine-learning-at-gsk-with-kim-branson</link>
      <description>Today we’re joined by Kim Branson, the SVP and global head of artificial intelligence and machine learning at GSK. 
We cover a lot of ground in our conversation, starting with a breakdown of GSK’s core pharmaceutical business and how ML/AI fits into that equation, and use cases that draw on genetics data, including sequential learning for drug discovery. We also explore the 500-billion-node knowledge graph Kim’s team built to mine scientific literature, and their “AI Hub”, the ML/AI infrastructure team that handles all tooling and engineering problems within their organization. Finally, we explore their recent cancer research collaboration with King’s College, which is tasked with understanding the individualized needs of high- and low-risk cancer patients using ML/AI amongst other technologies. 
The complete show notes for this episode can be found at twimlai.com/go/536.</description>
      <pubDate>Mon, 15 Nov 2021 19:30:00 -0000</pubDate>
      <itunes:title>Machine Learning at GSK with Kim Branson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>536</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/de4fad7e-4630-11ec-a4fb-a3891b78d907/image/TWIML_COVER_800x800_KB4.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Kim Branson, the SVP and global head of artificial intelligence and machine learning at GSK. 
We cover a lot of ground in our conversation, starting with a breakdown of GSK’s core pharmaceutical business and how ML/AI fits into that equation, and use cases that draw on genetics data, including sequential learning for drug discovery. We also explore the 500-billion-node knowledge graph Kim’s team built to mine scientific literature, and their “AI Hub”, the ML/AI infrastructure team that handles all tooling and engineering problems within their organization. Finally, we explore their recent cancer research collaboration with King’s College, which is tasked with understanding the individualized needs of high- and low-risk cancer patients using ML/AI amongst other technologies. 
The complete show notes for this episode can be found at twimlai.com/go/536.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Kim Branson, the SVP and global head of artificial intelligence and machine learning at GSK. </p><p>We cover a lot of ground in our conversation, starting with a breakdown of GSK’s core pharmaceutical business and how ML/AI fits into that equation, and use cases that draw on genetics data, including sequential learning for drug discovery. We also explore the 500-billion-node knowledge graph Kim’s team built to mine scientific literature, and their “AI Hub”, the ML/AI infrastructure team that handles all tooling and engineering problems within their organization. Finally, we explore their recent cancer research collaboration with King’s College, which is tasked with understanding the individualized needs of high- and low-risk cancer patients using ML/AI amongst other technologies. </p><p>The complete show notes for this episode can be found at twimlai.com/go/536.</p>]]>
      </content:encoded>
      <itunes:duration>3636</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[de4fad7e-4630-11ec-a4fb-a3891b78d907]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6932992097.mp3?updated=1645156627"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Benefit of Bottlenecks in Evolving Artificial Intelligence with David Ha - #535</title>
      <link>https://twimlai.com/the-benefit-of-bottlenecks-in-evolving-artificial-intelligence-with-david-ha</link>
      <description>Today we’re joined by David Ha, a research scientist at Google. 
In nature, there are many examples of “bottlenecks”, or constraints, that have shaped our development as a species. Building upon this idea, David posits that these same evolutionary bottlenecks could work when training neural network models as well. In our conversation with David, we cover a TON of ground, including the aforementioned biological inspiration for his work, before digging deeper into the different types of constraints he’s applied to ML systems. We explore abstract generative models and how advanced the training of agents inside generative models has become, and quite a few papers including Neuroevolution of self-interpretable agents, World Models and Attention for Reinforcement Learning, and The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning.
This interview is Nerd Alert certified, so get your notes ready! 
PS. David is one of our favorite follows on Twitter (@hardmaru), so check him out and share your thoughts on this interview and his work!
The complete show notes for this episode can be found at twimlai.com/go/535</description>
      <pubDate>Thu, 11 Nov 2021 17:57:25 -0000</pubDate>
      <itunes:title>The Benefit of Bottlenecks in Evolving Artificial Intelligence with David Ha</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>535</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/f04379fc-4310-11ec-9f39-233e1b20cafc/image/TWIML_COVER_800x800_DH.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by David Ha, a research scientist at Google. 
In nature, there are many examples of “bottlenecks”, or constraints, that have shaped our development as a species. Building upon this idea, David posits that these same evolutionary bottlenecks could work when training neural network models as well. In our conversation with David, we cover a TON of ground, including the aforementioned biological inspiration for his work, before digging deeper into the different types of constraints he’s applied to ML systems. We explore abstract generative models and how advanced the training of agents inside generative models has become, and quite a few papers including Neuroevolution of self-interpretable agents, World Models and Attention for Reinforcement Learning, and The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning.
This interview is Nerd Alert certified, so get your notes ready! 
PS. David is one of our favorite follows on Twitter (@hardmaru), so check him out and share your thoughts on this interview and his work!
The complete show notes for this episode can be found at twimlai.com/go/535</itunes:summary>
      <content:encoded>
<![CDATA[<p>Today we’re joined by David Ha, a research scientist at Google. </p><p>In nature, there are many examples of “bottlenecks”, or constraints, that have shaped our development as a species. Building upon this idea, David posits that similar evolutionary bottlenecks could benefit the training of neural network models as well. In our conversation with David, we cover a TON of ground, starting with the aforementioned biological inspiration for his work before digging deeper into the different types of constraints he’s applied to ML systems. We explore abstract generative models and how advanced the training of agents inside generative models has become, as well as quite a few papers, including <a href="https://dl.acm.org/doi/abs/10.1145/3377930.3389847">Neuroevolution of self-interpretable agents</a>, <a href="https://direct.mit.edu/isal/proceedings/isal/33/8/102973">World Models and Attention for Reinforcement Learning</a>, and <a href="https://arxiv.org/abs/2109.02869">The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning</a>.</p><p>This interview is Nerd Alert certified, so get your notes ready! </p><p>PS. David is one of our favorite follows on Twitter (<a href="https://twitter.com/hardmaru">@hardmaru</a>), so check him out and share your thoughts on this interview and his work!</p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/535">twimlai.com/go/535</a></p>]]>
      </content:encoded>
      <itunes:duration>3544</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f04379fc-4310-11ec-9f39-233e1b20cafc]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8752647669.mp3?updated=1636651092"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Facebook Abandons Facial Recognition. Should Everyone Else Follow Suit? With Luke Stark - #534</title>
      <link>https://twimlai.com/facebook-abandons-facial-recognition-should-everyone-else-follow-suit-with-luke-stark</link>
      <description>Today we’re joined by Luke Stark, an assistant professor at Western University in London, Ontario. 
In our conversation with Luke, we explore the existence and use of facial recognition technology, something Luke has been critical of in his work over the past few years, comparing it to plutonium. We discuss Luke’s recent paper, “Physiognomic Artificial Intelligence”, in which he critiques studies that attempt to use faces, facial expressions, and features to make determinations about people, a practice fundamental to facial recognition, and one that Luke believes is inherently racist at its core. 
Finally, we briefly discuss the recent wave of hires at the FTC, and the news that broke (mid-recording) that Facebook will be shutting down its facial recognition system, and why it's not necessarily the game-changing announcement it seemed on its… face. 
The complete show notes for this episode can be found at twimlai.com/go/534.</description>
      <pubDate>Mon, 08 Nov 2021 18:24:53 -0000</pubDate>
      <itunes:title>Facebook Abandons Facial Recognition. Should Everyone Else Follow Suit? With Luke Stark</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>534</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6ad0be54-40ba-11ec-a1a1-cbf12806a313/image/TWIML_COVER_800x800_LS.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Luke Stark, an assistant professor at Western University in London, Ontario. 
In our conversation with Luke, we explore the existence and use of facial recognition technology, something Luke has been critical of in his work over the past few years, comparing it to plutonium. We discuss Luke’s recent paper, “Physiognomic Artificial Intelligence”, in which he critiques studies that attempt to use faces, facial expressions, and features to make determinations about people, a practice fundamental to facial recognition, and one that Luke believes is inherently racist at its core. 
Finally, we briefly discuss the recent wave of hires at the FTC, and the news that broke (mid-recording) that Facebook will be shutting down its facial recognition system, and why it's not necessarily the game-changing announcement it seemed on its… face. 
The complete show notes for this episode can be found at twimlai.com/go/534.</itunes:summary>
      <content:encoded>
<![CDATA[<p>Today we’re joined by Luke Stark, an assistant professor at Western University in London, Ontario. </p><p>In our conversation with Luke, we explore the existence and use of facial recognition technology, something Luke has been critical of in his work over the past few years, comparing it to plutonium. We discuss Luke’s recent paper, “<a href="https://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID3927300_code1466832.pdf?abstractid=3927300&amp;mirid=1">Physiognomic Artificial Intelligence</a>”, in which he critiques studies that attempt to use faces, facial expressions, and features to make determinations about people, a practice fundamental to facial recognition, and one that Luke believes is inherently racist at its core. </p><p>Finally, we briefly discuss the recent wave of hires at the FTC, and the news that broke (mid-recording) that Facebook will be shutting down its facial recognition system, and why it's not necessarily the game-changing announcement it seemed on its… face. </p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/534">twimlai.com/go/534</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2528</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6ad0be54-40ba-11ec-a1a1-cbf12806a313]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6837127374.mp3?updated=1636394075"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building Blocks of Machine Learning at LEGO with Francesc Joan Riera - #533</title>
      <link>https://twimlai.com/building-blocks-of-machine-learning-at-lego-with-francesc-joan-riera</link>
      <description>Today we’re joined by Francesc Joan Riera, an applied machine learning engineer at The LEGO Group. 

In our conversation, we explore the ML infrastructure at LEGO, specifically around two use cases, content moderation and user engagement. Content moderation is not a new or novel task, but because their apps and products are marketed towards children, their need for heightened levels of moderation makes it very interesting. 

We discuss whether the moderation system is built specifically to weed out bad actors or passive behaviors, whether their system has a human-in-the-loop component, why they built a feature store as opposed to a traditional database, and the challenges they faced along that journey. We also talk through the range of skill sets on their team, the use of MLflow for experimentation, the adoption of AWS for serverless, and so much more!

The complete show notes for this episode can be found at twimlai.com/go/533.</description>
      <pubDate>Thu, 04 Nov 2021 17:05:18 -0000</pubDate>
      <itunes:title>Building Blocks of Machine Learning at LEGO with Francesc Joan Riera</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>533</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/0c29a2f6-3d83-11ec-a0e9-5b573385faa6/image/TWIML_COVER_800x800_FJR.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Francesc Joan Riera, an applied machine learning engineer at The LEGO Group. 

In our conversation, we explore the ML infrastructure at LEGO, specifically around two use cases, content moderation and user engagement. Content moderation is not a new or novel task, but because their apps and products are marketed towards children, their need for heightened levels of moderation makes it very interesting. 

We discuss whether the moderation system is built specifically to weed out bad actors or passive behaviors, whether their system has a human-in-the-loop component, why they built a feature store as opposed to a traditional database, and the challenges they faced along that journey. We also talk through the range of skill sets on their team, the use of MLflow for experimentation, the adoption of AWS for serverless, and so much more!

The complete show notes for this episode can be found at twimlai.com/go/533.</itunes:summary>
      <content:encoded>
<![CDATA[<p>Today we’re joined by Francesc Joan Riera, an applied machine learning engineer at The LEGO Group. </p><p><br></p><p>In our conversation, we explore the ML infrastructure at LEGO, specifically around two use cases, content moderation and user engagement. Content moderation is not a new or novel task, but because their apps and products are marketed towards children, their need for heightened levels of moderation makes it very interesting. </p><p><br></p><p>We discuss whether the moderation system is built specifically to weed out bad actors or passive behaviors, whether their system has a human-in-the-loop component, why they built a feature store as opposed to a traditional database, and the challenges they faced along that journey. We also talk through the range of skill sets on their team, the use of MLflow for experimentation, the adoption of AWS for serverless, and so much more!</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/533">twimlai.com/go/533</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2593</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0c29a2f6-3d83-11ec-a0e9-5b573385faa6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7096052689.mp3?updated=1636039752"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Exploring the FastAI Tooling Ecosystem with Hamel Husain - #532</title>
      <link>https://twimlai.com/exploring-the-fastai-tooling-ecosystem-with-hamel-husain</link>
      <description>Today we’re joined by Hamel Husain, Staff Machine Learning Engineer at GitHub. 
Over the last few years, Hamel has had the opportunity to work on some of the most popular open source projects in the ML world, including fast.ai, nbdev, fastpages, and fastcore, just to name a few. In our conversation with Hamel, we discuss his journey into Silicon Valley, and how he discovered that the ML tooling and infrastructure weren’t quite as advanced as he’d assumed, and how that led him to help build some of the foundational pieces of Airbnb’s Bighead Platform. 
We also spend time exploring Hamel’s time working with Jeremy Howard and the team creating fast.ai, how nbdev came about, and how it aims to change the way practitioners interact with traditional Jupyter notebooks. Finally, we talk through a few more tools in the fast.ai ecosystem, fastpages and fastcore, how these tools interact with GitHub Actions, and the up-and-coming ML tools that Hamel is excited about. 
The complete show notes for this episode can be found at twimlai.com/go/532.</description>
      <pubDate>Mon, 01 Nov 2021 18:33:00 -0000</pubDate>
      <itunes:title>Exploring the FastAI Tooling Ecosystem with Hamel Husain</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>532</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/ced936fc-3b1e-11ec-91ef-736f64d9a9d8/image/TWIML_COVER_800x800_HH.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Hamel Husain, Staff Machine Learning Engineer at GitHub. 
Over the last few years, Hamel has had the opportunity to work on some of the most popular open source projects in the ML world, including fast.ai, nbdev, fastpages, and fastcore, just to name a few. In our conversation with Hamel, we discuss his journey into Silicon Valley, and how he discovered that the ML tooling and infrastructure weren’t quite as advanced as he’d assumed, and how that led him to help build some of the foundational pieces of Airbnb’s Bighead Platform. 
We also spend time exploring Hamel’s time working with Jeremy Howard and the team creating fast.ai, how nbdev came about, and how it aims to change the way practitioners interact with traditional Jupyter notebooks. Finally, we talk through a few more tools in the fast.ai ecosystem, fastpages and fastcore, how these tools interact with GitHub Actions, and the up-and-coming ML tools that Hamel is excited about. 
The complete show notes for this episode can be found at twimlai.com/go/532.</itunes:summary>
      <content:encoded>
<![CDATA[<p>Today we’re joined by Hamel Husain, Staff Machine Learning Engineer at GitHub. </p><p>Over the last few years, Hamel has had the opportunity to work on some of the most popular open source projects in the ML world, including fast.ai, nbdev, fastpages, and fastcore, just to name a few. In our conversation with Hamel, we discuss his journey into Silicon Valley, and how he discovered that the ML tooling and infrastructure weren’t quite as advanced as he’d assumed, and how that led him to help build some of the foundational pieces of Airbnb’s Bighead Platform. </p><p>We also spend time exploring Hamel’s time working with Jeremy Howard and the team creating fast.ai, how nbdev came about, and how it aims to change the way practitioners interact with traditional Jupyter notebooks. Finally, we talk through a few more tools in the fast.ai ecosystem, fastpages and fastcore, how these tools interact with GitHub Actions, and the up-and-coming ML tools that Hamel is excited about. </p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/532">twimlai.com/go/532</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2378</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ced936fc-3b1e-11ec-91ef-736f64d9a9d8]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8772542123.mp3?updated=1635790862"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Multi-task Learning for Melanoma Detection with Julianna Ianni - #531</title>
      <link>https://twimlai.com/multi-task-learning-for-melanoma-detection-with-julianna-ianni</link>
      <description>In today’s episode, we are joined by Julianna Ianni, vice president of AI research &amp; development at Proscia.

In our conversation, Julianna shares her and her team’s research focused on developing applications that help make pathologists’ lives easier by enabling specimens to be diagnosed quickly and accurately using deep learning and AI.

We also explore their paper “A Pathology Deep Learning System Capable of Triage of Melanoma Specimens Utilizing Dermatopathologist Consensus as Ground Truth”, talking through how ML aids pathologists in diagnosing melanoma by building a multitask classifier to distinguish between low-risk and high-risk cases. Finally, we discuss the challenges involved in designing a model to help identify and classify melanoma, the results they’ve achieved, and what the future of this work could look like.

The complete show notes for this episode can be found at twimlai.com/go/531.</description>
      <pubDate>Thu, 28 Oct 2021 18:50:00 -0000</pubDate>
      <itunes:title>Multi-task Learning for Melanoma Detection with Julianna Ianni</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>531</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/dc4d06f2-380c-11ec-bdba-4fc07800e8f6/image/TWIML_COVER_800x800_JI.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In today’s episode, we are joined by Julianna Ianni, vice president of AI research &amp; development at Proscia.

In our conversation, Julianna shares her and her team’s research focused on developing applications that help make pathologists’ lives easier by enabling specimens to be diagnosed quickly and accurately using deep learning and AI.

We also explore their paper “A Pathology Deep Learning System Capable of Triage of Melanoma Specimens Utilizing Dermatopathologist Consensus as Ground Truth”, talking through how ML aids pathologists in diagnosing melanoma by building a multitask classifier to distinguish between low-risk and high-risk cases. Finally, we discuss the challenges involved in designing a model to help identify and classify melanoma, the results they’ve achieved, and what the future of this work could look like.

The complete show notes for this episode can be found at twimlai.com/go/531.</itunes:summary>
      <content:encoded>
<![CDATA[<p>In today’s episode, we are joined by Julianna Ianni, vice president of AI research &amp; development at Proscia.</p><p><br></p><p>In our conversation, Julianna shares her and her team’s research focused on developing applications that help make pathologists’ lives easier by enabling specimens to be diagnosed quickly and accurately using deep learning and AI.</p><p><br></p><p>We also explore their paper “<a href="https://drive.google.com/open?id=1bixnqGN7yV18X0O3XaP1dcNq0DRJbHEF&amp;authuser=imari%40cloudpulsestrat.com&amp;usp=drive_fs">A Pathology Deep Learning System Capable of Triage of Melanoma Specimens Utilizing Dermatopathologist Consensus as Ground Truth</a>”, talking through how ML aids pathologists in diagnosing melanoma by building a multitask classifier to distinguish between low-risk and high-risk cases. Finally, we discuss the challenges involved in designing a model to help identify and classify melanoma, the results they’ve achieved, and what the future of this work could look like.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/531">twimlai.com/go/531</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2253</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[dc4d06f2-380c-11ec-bdba-4fc07800e8f6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8690549328.mp3?updated=1635444920"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>House Hunters: Machine Learning at Redfin with Akshat Kaul - #530</title>
      <link>https://twimlai.com/house-hunters-ml-at-redfin-with-akshat-kaul</link>
      <description>Today we’re joined by Akshat Kaul, the head of data science and machine learning at Redfin. We’re all familiar with Redfin, but did you know that redfin.com is the largest real estate brokerage site in the US? In our conversation with Akshat, we discuss the history of ML at Redfin and a few of the key use cases that ML is currently being applied to, including recommendations, price estimates, and their “hot homes” feature. We explore their recent foray into building their own internal platform, which they’ve coined “Redeye”, how they’ve built Redeye to support modeling across the business, and how Akshat thinks about the role of the cloud when building and delivering their platform. Finally, we discuss the impact the pandemic has had on ML at the company, and Akshat’s vision for the future of their platform and machine learning at the company more broadly. 

The complete show notes for this episode can be found at twimlai.com/go/530.</description>
      <pubDate>Tue, 26 Oct 2021 06:20:00 -0000</pubDate>
      <itunes:title>House Hunters: Machine Learning at Redfin with Akshat Kaul</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>530</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/82fd5bf8-35a3-11ec-a477-5bb6c9524d0c/image/TWIML_COVER_800x800_AK3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Akshat Kaul, the head of data science and machine learning at Redfin. We’re all familiar with Redfin, but did you know that redfin.com is the largest real estate brokerage site in the US? In our conversation with Akshat, we discuss the history of ML at Redfin and a few of the key use cases that ML is currently being applied to, including recommendations, price estimates, and their “hot homes” feature. We explore their recent foray into building their own internal platform, which they’ve coined “Redeye”, how they’ve built Redeye to support modeling across the business, and how Akshat thinks about the role of the cloud when building and delivering their platform. Finally, we discuss the impact the pandemic has had on ML at the company, and Akshat’s vision for the future of their platform and machine learning at the company more broadly. 

The complete show notes for this episode can be found at twimlai.com/go/530.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Akshat Kaul, the head of data science and machine learning at Redfin. We’re all familiar with Redfin, but did you know that redfin.com is the largest real estate brokerage site in the US? In our conversation with Akshat, we discuss the history of ML at Redfin and a few of the key use cases that ML is currently being applied to, including recommendations, price estimates, and their “hot homes” feature. We explore their recent foray into building their own internal platform, which they’ve coined “Redeye”, how they’ve built Redeye to support modeling across the business, and how Akshat thinks about the role of the cloud when building and delivering their platform. Finally, we discuss the impact the pandemic has had on ML at the company, and Akshat’s vision for the future of their platform and machine learning at the company more broadly. </p><p><br></p><p>The complete show notes for this episode can be found at twimlai.com/go/530.</p>]]>
      </content:encoded>
      <itunes:duration>2674</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[82fd5bf8-35a3-11ec-a477-5bb6c9524d0c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8521017012.mp3?updated=1635436308"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Attacking Malware with Adversarial Machine Learning, w/ Edward Raff - #529</title>
      <link>https://twimlai.com/attacking-malware-with-adversarial-machine-learning-w-edward-raff</link>
      <description>Today we’re joined by Edward Raff, chief scientist and head of the machine learning research group at Booz Allen Hamilton. Edward’s work sits at the intersection of machine learning and cybersecurity, with a particular interest in malware analysis and detection. In our conversation, we look at the evolution of adversarial ML over the last few years before digging into Edward’s recently released paper, Adversarial Transfer Attacks With Unknown Data and Class Overlap. In this paper, Edward and his team explore the use of adversarial transfer attacks and how they’re able to lower their success rate by simulating class disparity. Finally, we talk through quite a few future directions for adversarial attacks, including his interest in graph neural networks.

The complete show notes for this episode can be found at twimlai.com/go/529.</description>
      <pubDate>Thu, 21 Oct 2021 16:36:00 -0000</pubDate>
      <itunes:title>Attacking Malware with Adversarial Machine Learning, w/ Edward Raff</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>529</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/89313ace-3280-11ec-8e02-33fbec9be05a/image/TWIML_COVER_800x800_ER2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Edward Raff, chief scientist and head of the machine learning research group at Booz Allen Hamilton. Edward’s work sits at the intersection of machine learning and cybersecurity, with a particular interest in malware analysis and detection. In our conversation, we look at the evolution of adversarial ML over the last few years before digging into Edward’s recently released paper, Adversarial Transfer Attacks With Unknown Data and Class Overlap. In this paper, Edward and his team explore the use of adversarial transfer attacks and how they’re able to lower their success rate by simulating class disparity. Finally, we talk through quite a few future directions for adversarial attacks, including his interest in graph neural networks.

The complete show notes for this episode can be found at twimlai.com/go/529.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Edward Raff, chief scientist and head of the machine learning research group at Booz Allen Hamilton. Edward’s work sits at the intersection of machine learning and cybersecurity, with a particular interest in malware analysis and detection. In our conversation, we look at the evolution of adversarial ML over the last few years before digging into Edward’s recently released paper, <a href="https://arxiv.org/abs/2109.11125">Adversarial Transfer Attacks With Unknown Data and Class Overlap</a>. In this paper, Edward and his team explore the use of adversarial transfer attacks and how they’re able to lower their success rate by simulating class disparity. Finally, we talk through quite a few future directions for adversarial attacks, including his interest in graph neural networks.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/529">twimlai.com/go/529</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2798</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[89313ace-3280-11ec-8e02-33fbec9be05a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9823997975.mp3?updated=1634835318"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Learning to Ponder: Memory in Deep Neural Networks with Andrea Banino - #528</title>
      <link>https://twimlai.com/learning-to-ponder-memory-in-deep-neural-networks-with-andrea-banino</link>
      <description>Today we’re joined by Andrea Banino, a research scientist at DeepMind. In our conversation with Andrea, we explore his interest in artificial general intelligence by way of episodic memory, the relationship between memory and intelligence, the challenges of applying memory in the context of neural networks, and how to overcome problems of generalization. 

We also discuss his work on PonderNet, a neural network that “budgets” its computational investment in solving a problem according to the problem’s inherent complexity, the impetus and goals of this research, and how PonderNet connects to his memory research. 

The complete show notes for this episode can be found at twimlai.com/go/528.</description>
      <pubDate>Mon, 18 Oct 2021 17:47:25 -0000</pubDate>
      <itunes:title>Learning to Ponder: Memory in Deep Neural Networks with Andrea Banino</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>528</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6a448156-3033-11ec-a3f1-5771a93dda7b/image/TWIML_COVER_800x800_AB3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Andrea Banino, a research scientist at DeepMind. In our conversation with Andrea, we explore his interest in artificial general intelligence by way of episodic memory, the relationship between memory and intelligence, the challenges of applying memory in the context of neural networks, and how to overcome problems of generalization. 

We also discuss his work on PonderNet, a neural network that “budgets” its computational investment in solving a problem according to the problem’s inherent complexity, the impetus and goals of this research, and how PonderNet connects to his memory research. 

The complete show notes for this episode can be found at twimlai.com/go/528.</itunes:summary>
      <content:encoded>
<![CDATA[<p>Today we’re joined by Andrea Banino, a research scientist at DeepMind. In our conversation with Andrea, we explore his interest in artificial general intelligence by way of episodic memory, the relationship between memory and intelligence, the challenges of applying memory in the context of neural networks, and how to overcome problems of generalization. </p><p><br></p><p>We also discuss his work on PonderNet, a neural network that “budgets” its computational investment in solving a problem according to the problem’s inherent complexity, the impetus and goals of this research, and how PonderNet connects to his memory research. </p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/528">twimlai.com/go/528</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2232</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6a448156-3033-11ec-a3f1-5771a93dda7b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9695778611.mp3?updated=1634577579"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Advancing Deep Reinforcement Learning with NetHack, w/ Tim Rocktäschel - #527</title>
      <link>https://twimlai.com/advancing-deep-reinforcement-learning-with-nethack-w-tim-rocktaschel</link>
      <description>Take our survey at twimlai.com/survey21!

Today we’re joined by Tim Rocktäschel, a research scientist at Facebook AI Research and an associate professor at University College London (UCL). 

Tim’s work focuses on training RL agents in simulated environments, with the goal of these agents being able to generalize to novel situations. Typically, this is done in environments like OpenAI Gym, MuJoCo, or even Atari games, but these all come with constraints. In Tim’s approach, he utilizes a game called NetHack, which is much richer and more complex than the aforementioned environments.

In our conversation with Tim, we explore the ins and outs of using NetHack as a training environment, including how much control a user has when generating each individual game and the challenges he's faced when deploying the agents. We also discuss his work on MiniHack, an environment creation framework and suite of tasks that are based on NetHack, and future directions for this research.

The complete show notes for this episode can be found at twimlai.com/go/527.</description>
      <pubDate>Thu, 14 Oct 2021 15:51:00 -0000</pubDate>
      <itunes:title>Advancing Deep Reinforcement Learning with NetHack, w/ Tim Rocktäschel</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>527</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/23edb710-2c53-11ec-b3b4-f3ad924a8c17/image/TWIML_COVER_800x800_TR.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Take our survey at twimlai.com/survey21!

Today we’re joined by Tim Rocktäschel, a research scientist at Facebook AI Research and an associate professor at University College London (UCL). 

Tim’s work focuses on training RL agents in simulated environments, with the goal of these agents being able to generalize to novel situations. Typically, this is done in environments like OpenAI Gym, MuJoCo, or even Atari games, but these all come with constraints. In Tim’s approach, he utilizes a game called NetHack, which is much richer and more complex than the aforementioned environments.

In our conversation with Tim, we explore the ins and outs of using NetHack as a training environment, including how much control a user has when generating each individual game and the challenges he's faced when deploying the agents. We also discuss his work on MiniHack, an environment creation framework and suite of tasks that are based on NetHack, and future directions for this research.

The complete show notes for this episode can be found at twimlai.com/go/527.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Take our survey at <a href="https://twimlai.com/survey21">twimlai.com/survey21</a>!</p><p><br></p><p>Today we’re joined by Tim Rocktäschel, a research scientist at Facebook AI Research and an associate professor at University College London (UCL). </p><p><br></p><p>Tim’s work focuses on training RL agents in simulated environments, with the goal of these agents being able to generalize to novel situations. Typically, this is done in environments like OpenAI Gym, MuJoCo, or even Atari games, but these all come with constraints. In Tim’s approach, he utilizes a game called NetHack, which is much richer and more complex than the aforementioned environments.</p><p><br></p><p>In our conversation with Tim, we explore the ins and outs of using NetHack as a training environment, including how much control a user has when generating each individual game and the challenges he's faced when deploying the agents. We also discuss his work on MiniHack, an environment creation framework and suite of tasks that are based on NetHack, and future directions for this research.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/527">twimlai.com/go/527</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2577</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[23edb710-2c53-11ec-b3b4-f3ad924a8c17]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8105888186.mp3?updated=1634307872"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building Technical Communities at Stack Overflow with Prashanth Chandrasekar - #526</title>
      <link>https://twimlai.com/building-technical-communities-at-stack-overflow-with-prashanth-chandrasekar</link>
      <description>In this special episode of the show, we’re excited to bring you our conversation with Prashanth Chandrasekar, CEO of Stack Overflow. This interview was recorded as a part of the annual Prosus AI Marketplace event. 

In our discussion with Prashanth, we explore the impact the pandemic has had on Stack Overflow, how they think about community and enable collaboration among over 100 million monthly users from around the world, and some of the challenges they’ve dealt with when managing a community of this scale. We also examine where Stack Overflow is in their AI journey, use cases illustrating how they’re currently utilizing ML, what their role is in the future of AI-based code generation, what other trends they’ve picked up on over the last few years, and how they’re using those insights to forge the path forward.

The complete show notes for this episode can be found at twimlai.com/go/526.</description>
      <pubDate>Mon, 11 Oct 2021 17:58:14 -0000</pubDate>
      <itunes:title>Building Technical Communities at Stack Overflow with Prashanth Chandrasekar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>526</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/2ea99980-291a-11ec-898b-bf6e6ce44d14/image/TWIML_COVER_800x800_PC2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>In this special episode of the show, we’re excited to bring you our conversation with Prashanth Chandrasekar, CEO of Stack Overflow. This interview was recorded as a part of the annual Prosus AI Marketplace event. 

In our discussion with Prashanth, we explore the impact the pandemic has had on Stack Overflow, how they think about community and enable collaboration among over 100 million monthly users from around the world, and some of the challenges they’ve dealt with when managing a community of this scale. We also examine where Stack Overflow is in their AI journey, use cases illustrating how they’re currently utilizing ML, what their role is in the future of AI-based code generation, what other trends they’ve picked up on over the last few years, and how they’re using those insights to forge the path forward.

The complete show notes for this episode can be found at twimlai.com/go/526.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this special episode of the show, we’re excited to bring you our conversation with Prashanth Chandrasekar, CEO of Stack Overflow. This interview was recorded as a part of the annual Prosus AI Marketplace event. </p><p><br></p><p>In our discussion with Prashanth, we explore the impact the pandemic has had on Stack Overflow, how they think about community and enable collaboration among over 100 million monthly users from around the world, and some of the challenges they’ve dealt with when managing a community of this scale. We also examine where Stack Overflow is in their AI journey, use cases illustrating how they’re currently utilizing ML, what their role is in the future of AI-based code generation, what other trends they’ve picked up on over the last few years, and how they’re using those insights to forge the path forward.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/526">twimlai.com/go/526</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2445</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2ea99980-291a-11ec-898b-bf6e6ce44d14]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2244964253.mp3?updated=1633967726"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Learning is Eating 5G. Here’s How, w/ Joseph Soriaga - #525</title>
      <link>https://twimlai.com/deep-learning-is-eating-5g-heres-how-w-joseph-soriaga</link>
      <description>Today we’re joined by Joseph Soriaga, a senior director of technology at Qualcomm. 

In our conversation with Joseph, we focus on a pair of papers that he and his team will be presenting at Globecom later this year. The first, Neural Augmentation of Kalman Filter with Hypernetwork for Channel Tracking, details the use of deep learning to augment an algorithm to address mismatches in models, allowing for more efficient training and making models more interpretable and predictable. The second paper, WiCluster: Passive Indoor 2D/3D Positioning using WiFi without Precise Labels, explores the use of RF signals to infer what the environment looks like, allowing for estimation of a person’s movement. 

We also discuss the ability for machine learning and AI to help enable 5G and make it more efficient for these applications, as well as the scenarios in which ML would allow for more effective delivery of connected services, and look towards what might be possible in the near future. 

The complete show notes for this episode can be found at twimlai.com/go/525.</description>
      <pubDate>Thu, 07 Oct 2021 16:21:00 -0000</pubDate>
      <itunes:title>Deep Learning is Eating 5G. Here’s How. w/ Joseph Soriaga</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>525</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/cba3e77c-277b-11ec-aed7-0b96f3d110a2/image/TWIML_COVER_800x800_JS2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Joseph Soriaga, a senior director of technology at Qualcomm. 

In our conversation with Joseph, we focus on a pair of papers that he and his team will be presenting at Globecom later this year. The first, Neural Augmentation of Kalman Filter with Hypernetwork for Channel Tracking, details the use of deep learning to augment an algorithm to address mismatches in models, allowing for more efficient training and making models more interpretable and predictable. The second paper, WiCluster: Passive Indoor 2D/3D Positioning using WiFi without Precise Labels, explores the use of RF signals to infer what the environment looks like, allowing for estimation of a person’s movement. 

We also discuss the ability for machine learning and AI to help enable 5G and make it more efficient for these applications, as well as the scenarios in which ML would allow for more effective delivery of connected services, and look towards what might be possible in the near future. 

The complete show notes for this episode can be found at twimlai.com/go/525.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Joseph Soriaga, a senior director of technology at Qualcomm. </p><p><br></p><p>In our conversation with Joseph, we focus on a pair of papers that he and his team will be presenting at Globecom later this year. The first, <a href="https://arxiv.org/abs/2109.12561">Neural Augmentation of Kalman Filter with Hypernetwork for Channel Tracking</a>, details the use of deep learning to augment an algorithm to address mismatches in models, allowing for more efficient training and making models more interpretable and predictable. The second paper, <a href="https://arxiv.org/abs/2107.01002">WiCluster: Passive Indoor 2D/3D Positioning using WiFi without Precise Labels</a>, explores the use of RF signals to infer what the environment looks like, allowing for estimation of a person’s movement. </p><p><br></p><p>We also discuss the ability for machine learning and AI to help enable 5G and make it more efficient for these applications, as well as the scenarios in which ML would allow for more effective delivery of connected services, and look towards what might be possible in the near future. </p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/525">twimlai.com/go/525</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2378</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cba3e77c-277b-11ec-aed7-0b96f3d110a2]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4672052420.mp3?updated=1633627195"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Modeling Human Cognition with RNNs and Curriculum Learning, w/ Kanaka Rajan - #524</title>
      <link>https://twimlai.com/modeling-memory-with-rnns-and-curriculum-learning-w-kanaka-rajan</link>
      <description>Today we’re joined by Kanaka Rajan, an assistant professor at the Icahn School of Medicine at Mt Sinai. Kanaka, who is a recent recipient of the NSF Career Award, bridges the gap between the worlds of biology and artificial intelligence with her work in computer science. In our conversation, we explore how she builds “lego models” of the brain that mimic biological brain functions, then reverse engineers those models to answer the question “do these follow the same operating principles that the biological brain uses?”

We also discuss the relationship between memory and dynamically evolving system states, how close we are to understanding how memory actually works, how she uses RNNs for modeling these processes, and what training and data collection looks like. Finally, we touch on her use of curriculum learning (where the task you want a system to learn increases in complexity slowly), and of course, we look ahead at future directions for Kanaka’s research. 

The complete show notes for this episode can be found at twimlai.com/go/524.</description>
      <pubDate>Mon, 04 Oct 2021 16:36:00 -0000</pubDate>
      <itunes:title>Modeling Human Cognition with RNNs and Curriculum Learning w/ Kanaka Rajan</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>524</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d3fd4c1e-24ca-11ec-b7d4-2b0a979b1967/image/TWIML_COVER_800x800_KR2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Kanaka Rajan, an assistant professor at the Icahn School of Medicine at Mt Sinai. Kanaka, who is a recent recipient of the NSF Career Award, bridges the gap between the worlds of biology and artificial intelligence with her work in computer science. In our conversation, we explore how she builds “lego models” of the brain that mimic biological brain functions, then reverse engineers those models to answer the question “do these follow the same operating principles that the biological brain uses?”

We also discuss the relationship between memory and dynamically evolving system states, how close we are to understanding how memory actually works, how she uses RNNs for modeling these processes, and what training and data collection looks like. Finally, we touch on her use of curriculum learning (where the task you want a system to learn increases in complexity slowly), and of course, we look ahead at future directions for Kanaka’s research. 

The complete show notes for this episode can be found at twimlai.com/go/524.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Kanaka Rajan, an assistant professor at the Icahn School of Medicine at Mount Sinai. Kanaka, who is a recent recipient of the NSF Career Award, bridges the gap between the worlds of biology and artificial intelligence with her work in computer science. In our conversation, we explore how she builds “lego models” of the brain that mimic biological brain functions, then reverse engineers those models to answer the question “do these follow the same operating principles that the biological brain uses?”</p><p><br></p><p>We also discuss the relationship between memory and dynamically evolving system states, how close we are to understanding how memory actually works, how she uses RNNs for modeling these processes, and what training and data collection looks like. Finally, we touch on her use of curriculum learning (where the task you want a system to learn increases in complexity slowly), and of course, we look ahead at future directions for Kanaka’s research. </p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/524">twimlai.com/go/524</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2828</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d3fd4c1e-24ca-11ec-b7d4-2b0a979b1967]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3653817139.mp3?updated=1633627174"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Do You Dare Run Your ML Experiments in Production? with Ville Tuulos - #523</title>
      <link>https://twimlai.com/do-you-dare-run-your-ml-experiments-in-production-with-ville-tuulos</link>
      <description>Today we’re joined by a friend of the show and return guest Ville Tuulos, CEO and co-founder of Outerbounds. In our previous conversations with Ville, we explored his experience building and deploying the open-source framework, Metaflow, while working at Netflix. Since our last chat, Ville has embarked on a few new journeys, including writing the upcoming book Effective Data Science Infrastructure, and commercializing Metaflow, both of which we dig into quite a bit in this conversation. 

We reintroduce the problem that Metaflow was built to solve and discuss some of the unique use cases that Ville has seen since its release, the relationship between Metaflow and Kubernetes, and the maturity of services like batch and lambdas allowing a complete production ML system to be delivered. Finally, we discuss the degree to which Ville is focusing Outerbounds’ efforts on building tools for the MLOps community, and what the future looks like for him and Metaflow. 

The complete show notes for this episode can be found at twimlai.com/go/523.</description>
      <pubDate>Thu, 30 Sep 2021 16:15:24 -0000</pubDate>
      <itunes:title>Do You Dare Run Your ML Experiments in Production? with Ville Tuulos</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>523</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/d08bbc52-2158-11ec-8a0f-777ea641a9d1/image/TWIML_COVER_800x800_VT2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by a friend of the show and return guest Ville Tuulos, CEO and co-founder of Outerbounds. In our previous conversations with Ville, we explored his experience building and deploying the open-source framework, Metaflow, while working at Netflix. Since our last chat, Ville has embarked on a few new journeys, including writing the upcoming book Effective Data Science Infrastructure, and commercializing Metaflow, both of which we dig into quite a bit in this conversation. 

We reintroduce the problem that Metaflow was built to solve and discuss some of the unique use cases that Ville has seen since its release, the relationship between Metaflow and Kubernetes, and the maturity of services like batch and lambdas allowing a complete production ML system to be delivered. Finally, we discuss the degree to which Ville is focusing Outerbounds’ efforts on building tools for the MLOps community, and what the future looks like for him and Metaflow. 

The complete show notes for this episode can be found at twimlai.com/go/523.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by a friend of the show and return guest Ville Tuulos, CEO and co-founder of Outerbounds. In our <a href="https://twimlai.com/twiml-talk-326-metaflow-a-human-centric-framework-for-data-science-with-ville-tuulos/">previous</a> <a href="https://www.youtube.com/watch?v=2zbnJ37R7DQ">conversations</a> with Ville, we explored his experience building and deploying the open-source framework, Metaflow, while working at Netflix. Since our last chat, Ville has embarked on a few new journeys, including writing the upcoming book <a href="https://www.manning.com/books/effective-data-science-infrastructure">Effective Data Science Infrastructure</a>, and commercializing Metaflow, both of which we dig into quite a bit in this conversation. </p><p><br></p><p>We reintroduce the problem that Metaflow was built to solve and discuss some of the unique use cases that Ville has seen since its release, the relationship between Metaflow and Kubernetes, and the maturity of services like batch and lambdas allowing a complete production ML system to be delivered. Finally, we discuss the degree to which Ville is focusing Outerbounds’ efforts on building tools for the MLOps community, and what the future looks like for him and Metaflow. </p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/523">twimlai.com/go/523</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2441</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d08bbc52-2158-11ec-8a0f-777ea641a9d1]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6878174259.mp3?updated=1633018232"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Delivering Neural Speech Services at Scale with Li Jiang - #522</title>
      <link>https://twimlai.com/delivering-neural-speech-services-at-scale-with-li-jiang</link>
      <description>Today we’re joined by Li Jiang, a distinguished engineer at Microsoft working on Azure Speech. 

In our conversation with Li, we discuss his journey across 27 years at Microsoft, where he’s worked on, among other things, audio and speech recognition technologies. We explore his thoughts on the advancements in speech recognition over the past few years, and the challenges and advantages of using either end-to-end or hybrid models. 

We also discuss the trade-offs between delivering accuracy or quality and the kind of runtime characteristics that you require as a service provider, in the context of engineering and delivering a service at the scale of Azure Speech. Finally, we walk through the data collection process for customizing a voice for TTS, what languages are currently supported, managing the risks of threats like deepfakes, the future for services like these, and much more!

The complete show notes for this episode can be found at twimlai.com/go/522.</description>
      <pubDate>Mon, 27 Sep 2021 17:32:30 -0000</pubDate>
      <itunes:title>Delivering Neural Speech Services at Scale with Li Jiang</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>522</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/addaa4c0-1faf-11ec-8dae-ffe557d8f82e/image/TWIML_COVER_800x800_LJ.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Li Jiang, a distinguished engineer at Microsoft working on Azure Speech. 

In our conversation with Li, we discuss his journey across 27 years at Microsoft, where he’s worked on, among other things, audio and speech recognition technologies. We explore his thoughts on the advancements in speech recognition over the past few years, and the challenges and advantages of using either end-to-end or hybrid models. 

We also discuss the trade-offs between delivering accuracy or quality and the kind of runtime characteristics that you require as a service provider, in the context of engineering and delivering a service at the scale of Azure Speech. Finally, we walk through the data collection process for customizing a voice for TTS, what languages are currently supported, managing the risks of threats like deepfakes, the future for services like these, and much more!

The complete show notes for this episode can be found at twimlai.com/go/522.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Li Jiang, a distinguished engineer at Microsoft working on Azure Speech. </p><p><br></p><p>In our conversation with Li, we discuss his journey across 27 years at Microsoft, where he’s worked on, among other things, audio and speech recognition technologies. We explore his thoughts on the advancements in speech recognition over the past few years, and the challenges and advantages of using either end-to-end or hybrid models. </p><p><br></p><p>We also discuss the trade-offs between delivering accuracy or quality and the kind of runtime characteristics that you require as a service provider, in the context of engineering and delivering a service at the scale of Azure Speech. Finally, we walk through the data collection process for customizing a voice for TTS, what languages are currently supported, managing the risks of threats like deepfakes, the future for services like these, and much more!</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/522">twimlai.com/go/522</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2960</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[addaa4c0-1faf-11ec-8dae-ffe557d8f82e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5772250958.mp3?updated=1632761970"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI’s Legal and Ethical Implications with Sandra Wachter - #521</title>
      <link>https://twimlai.com/ais-legal-and-ethical-implications-with-sandra-wachter</link>
      <description>Today we’re joined by Sandra Wacther, an associate professor and senior research fellow at the University of Oxford. 

Sandra’s work lies at the intersection of law and AI, focused on what she likes to call “algorithmic accountability”. In our conversation, we explore algorithmic accountability in three segments: explainability/transparency; data protection; and bias, fairness, and discrimination. We discuss how the thinking around black boxes changes when applying regulation and law, as well as a breakdown of counterfactual explanations and how they’re created. We also explore why factors like the lack of oversight lead to poor self-regulation, and the conditional demographic disparity test that she helped develop to test bias in models, which was recently adopted by Amazon.

The complete show notes for this episode can be found at twimlai.com/go/521.</description>
      <pubDate>Thu, 23 Sep 2021 16:27:16 -0000</pubDate>
      <itunes:title>AI’s Legal and Ethical Implications with Sandra Wachter</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>521</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/aad717da-1bc3-11ec-b732-93e625d38f23/image/TWIML_COVER_800x800_SW.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Sandra Wacther, an associate professor and senior research fellow at the University of Oxford. 

Sandra’s work lies at the intersection of law and AI, focused on what she likes to call “algorithmic accountability”. In our conversation, we explore algorithmic accountability in three segments: explainability/transparency; data protection; and bias, fairness, and discrimination. We discuss how the thinking around black boxes changes when applying regulation and law, as well as a breakdown of counterfactual explanations and how they’re created. We also explore why factors like the lack of oversight lead to poor self-regulation, and the conditional demographic disparity test that she helped develop to test bias in models, which was recently adopted by Amazon.

The complete show notes for this episode can be found at twimlai.com/go/521.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Sandra Wachter, an associate professor and senior research fellow at the University of Oxford. </p><p><br></p><p>Sandra’s work lies at the intersection of law and AI, focused on what she likes to call “algorithmic accountability”. In our conversation, we explore algorithmic accountability in three segments: explainability/transparency; data protection; and bias, fairness, and discrimination. We discuss how the thinking around black boxes changes when applying regulation and law, as well as a breakdown of counterfactual explanations and how they’re created. We also explore why factors like the lack of oversight lead to poor self-regulation, and the conditional demographic disparity test that she helped develop to test bias in models, which was recently adopted by Amazon.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/521">twimlai.com/go/521</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2967</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[aad717da-1bc3-11ec-b732-93e625d38f23]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4579692160.mp3?updated=1632414266"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Compositional ML and the Future of Software Development with Dillon Erb - #520</title>
      <link>https://twimlai.com/compositional-ml-and-the-future-of-software-development-with-dillon-erb</link>
      <description>Today we’re joined by Dillon Erb, CEO of Paperspace. 

If you’re not familiar with Dillon, he joined us about a year ago to discuss Machine Learning as a Software Engineering Discipline; we strongly encourage you to check out that interview as well. In our conversation, we explore the idea of compositional AI, and whether it is the next frontier in a string of recent game-changing machine learning developments. We also discuss a source of constant back and forth in the community around the role of notebooks, and why Paperspace made the choice to pivot towards a more traditional engineering code artifact model after building a popular notebook service. Finally, we talk through their newest release, Workflows, an automation and build system for ML applications, which Dillon calls their “most ambitious and comprehensive project yet.”

The complete show notes for this episode can be found at twimlai.com/go/520.</description>
      <pubDate>Mon, 20 Sep 2021 19:46:36 -0000</pubDate>
      <itunes:title>Compositional ML and the Future of Software Development with Dillon Erb</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>520</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/1ace387c-1a28-11ec-801f-ef3d9460527c/image/TWIML_COVER_800x800_DE2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Dillon Erb, CEO of Paperspace. 

If you’re not familiar with Dillon, he joined us about a year ago to discuss Machine Learning as a Software Engineering Discipline; we strongly encourage you to check out that interview as well. In our conversation, we explore the idea of compositional AI, and whether it is the next frontier in a string of recent game-changing machine learning developments. We also discuss a source of constant back and forth in the community around the role of notebooks, and why Paperspace made the choice to pivot towards a more traditional engineering code artifact model after building a popular notebook service. Finally, we talk through their newest release, Workflows, an automation and build system for ML applications, which Dillon calls their “most ambitious and comprehensive project yet.”

The complete show notes for this episode can be found at twimlai.com/go/520.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Dillon Erb, CEO of Paperspace. </p><p><br></p><p>If you’re not familiar with Dillon, he joined us about a year ago to discuss <a href="https://twimlai.com/machine-learning-as-a-software-engineering-discipline-with-dillon-erb/">Machine Learning as a Software Engineering Discipline</a>; we strongly encourage you to check out that interview as well. In our conversation, we explore the idea of compositional AI, and whether it is the next frontier in a string of recent game-changing machine learning developments. We also discuss a source of constant back and forth in the community around the role of notebooks, and why Paperspace made the choice to pivot towards a more traditional engineering code artifact model after building a popular notebook service. Finally, we talk through their newest release, Workflows, an automation and build system for ML applications, which Dillon calls their “most ambitious and comprehensive project yet.”</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/520">twimlai.com/go/520</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2474</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1ace387c-1a28-11ec-801f-ef3d9460527c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3219202242.mp3?updated=1632157746"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Generating SQL Database Queries from Natural Language with Yanshuai Cao - #519</title>
      <link>https://twimlai.com/generating-sql-database-queries-from-natural-language-with-yanshuai-cao</link>
      <description>Today we’re joined by Yanshuai Cao, a senior research team lead at Borealis AI. In our conversation with Yanshuai, we explore his work on Turing, their natural language to SQL engine that allows users to get insights from relational databases without having to write code. We do a bit of compare and contrast with the recently released Codex Model from OpenAI, the role that reasoning plays in solving this problem, and how it is implemented in the model. We also talk through various challenges like data augmentation, the complexity of the queries that Turing can produce, and a paper that explores the explainability of this model.

The complete show notes for this episode can be found at twimlai.com/go/519.</description>
      <pubDate>Thu, 16 Sep 2021 16:32:00 -0000</pubDate>
      <itunes:title>Generating SQL [Database Queries] from Natural Language with Yanshuai Cao</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>519</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/020ae612-1705-11ec-9bb2-033560f4c414/image/TWIML_COVER_800x800_YC2__1_.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Yanshuai Cao, a senior research team lead at Borealis AI. In our conversation with Yanshuai, we explore his work on Turing, their natural language to SQL engine that allows users to get insights from relational databases without having to write code. We do a bit of compare and contrast with the recently released Codex Model from OpenAI, the role that reasoning plays in solving this problem, and how it is implemented in the model. We also talk through various challenges like data augmentation, the complexity of the queries that Turing can produce, and a paper that explores the explainability of this model.

The complete show notes for this episode can be found at twimlai.com/go/519.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Yanshuai Cao, a senior research team lead at Borealis AI. In our conversation with Yanshuai, we explore his work on Turing, their natural language to SQL engine that allows users to get insights from relational databases without having to write code. We do a bit of compare and contrast with the recently released <a href="https://twimlai.com/codex-openais-automated-code-generation-api-with-greg-brockman/">Codex Model from OpenAI</a>, the role that reasoning plays in solving this problem, and how it is implemented in the model. We also talk through various challenges like data augmentation, the complexity of the queries that Turing can produce, and a paper that explores the explainability of this model.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/519">twimlai.com/go/519</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2308</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[020ae612-1705-11ec-9bb2-033560f4c414]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3462229801.mp3?updated=1632147787"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Social Commonsense Reasoning with Yejin Choi - #518</title>
      <link>https://twimlai.com/social-commonsense-reasoning-with-yejin-choi</link>
      <description>Today we’re joined by Yejin Choi, a professor at the University of Washington. We had the pleasure of catching up with Yejin after her keynote interview at the recent Stanford HAI “Foundational Models” workshop. In our conversation, we explore her work at the intersection of natural language generation and common sense reasoning, including how she defines common sense, and what the current state of the world is for that research. We discuss how this could be used for creative storytelling, how transformers could be applied to these tasks, and we dig into the subfields of physical and social common sense reasoning. Finally, we talk through the future of Yejin’s research and the areas that she sees as most promising going forward. 

If you enjoyed this episode, check out our conversation on AI Storytelling Systems with Mark Riedl. The complete show notes for today’s episode can be found at twimlai.com/go/518.</description>
      <pubDate>Mon, 13 Sep 2021 18:01:18 -0000</pubDate>
      <itunes:title>Social Commonsense Reasoning with Yejin Choi</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>518</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/505360ca-14b1-11ec-96b0-63f9e250de43/image/TWIML_COVER_800x800_YC.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Yejin Choi, a professor at the University of Washington. We had the pleasure of catching up with Yejin after her keynote interview at the recent Stanford HAI “Foundational Models” workshop. In our conversation, we explore her work at the intersection of natural language generation and common sense reasoning, including how she defines common sense, and what the current state of the world is for that research. We discuss how this could be used for creative storytelling, how transformers could be applied to these tasks, and we dig into the subfields of physical and social common sense reasoning. Finally, we talk through the future of Yejin’s research and the areas that she sees as most promising going forward. 

If you enjoyed this episode, check out our conversation on AI Storytelling Systems with Mark Riedl. The complete show notes for today’s episode can be found at twimlai.com/go/518.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Yejin Choi, a professor at the University of Washington. We had the pleasure of catching up with Yejin after her keynote interview at the recent Stanford HAI “Foundation Models” workshop. In our conversation, we explore her work at the intersection of natural language generation and common sense reasoning, including how she defines common sense and the current state of that research. We discuss how this could be used for creative storytelling, how transformers could be applied to these tasks, and we dig into the subfields of physical and social common sense reasoning. Finally, we talk through the future of Yejin’s research and the areas that she sees as most promising going forward. </p><p><br></p><p>If you enjoyed this episode, check out our conversation on <a href="https://twimlai.com/ai-storytelling-systems-with-mark-riedl/">AI Storytelling Systems with Mark Riedl</a>. The complete show notes for today’s episode can be found at <a href="twimlai.com/go/518">twimlai.com/go/518</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3091</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[505360ca-14b1-11ec-96b0-63f9e250de43]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2005371453.mp3?updated=1631552137"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Reinforcement Learning for Game Testing at EA with Konrad Tollmar - #517</title>
      <link>https://twimlai.com/deep-reinforcement-learning-for-game-testing-at-ea-with-konrad-tollmar</link>
      <description>Today we’re joined by Konrad Tollmar, research director at Electronic Arts and an associate professor at KTH. 

In our conversation, we explore his role as the lead of EA’s applied research team SEED and the ways that they’re applying ML/AI across popular franchises like Apex Legends, Madden, and FIFA. We break down a few papers focused on the application of ML to game testing, discussing why deep reinforcement learning is at the top of their research agenda, the differences between training on Atari games and modern 3D games, and using CNNs to detect glitches in games. And of course, Konrad gives us his outlook on the future of ML for game testing.

The complete show notes for this episode can be found at twimlai.com/go/517.</description>
      <pubDate>Thu, 09 Sep 2021 17:35:00 -0000</pubDate>
      <itunes:title>Deep Reinforcement Learning for Game Testing at EA with Konrad Tollmar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>517</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/f748d854-1184-11ec-8f4d-334aa9dab0f2/image/TWIML_COVER_800x800_KT.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Konrad Tollmar, research director at Electronic Arts and an associate professor at KTH. 

In our conversation, we explore his role as the lead of EA’s applied research team SEED and the ways that they’re applying ML/AI across popular franchises like Apex Legends, Madden, and FIFA. We break down a few papers focused on the application of ML to game testing, discussing why deep reinforcement learning is at the top of their research agenda, the differences between training on Atari games and modern 3D games, and using CNNs to detect glitches in games. And of course, Konrad gives us his outlook on the future of ML for game testing.

The complete show notes for this episode can be found at twimlai.com/go/517.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Konrad Tollmar, research director at Electronic Arts and an associate professor at KTH. </p><p><br></p><p>In our conversation, we explore his role as the lead of EA’s applied research team SEED and the ways that they’re applying ML/AI across popular franchises like Apex Legends, Madden, and FIFA. We break down a few papers focused on the application of ML to game testing, discussing why deep reinforcement learning is at the top of their research agenda, the differences between training on Atari games and modern 3D games, and using CNNs to detect glitches in games. And of course, Konrad gives us his outlook on the future of ML for game testing.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/517">twimlai.com/go/517</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2421</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f748d854-1184-11ec-8f4d-334aa9dab0f2]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7801979096.mp3?updated=1631212447"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Exploring AI 2041 with Kai-Fu Lee - #516</title>
      <link>https://twimlai.com/exploring-ai-2041-with-kai-fu-lee</link>
      <description>Today we’re joined by Kai-Fu Lee, chairman and CEO of Sinovation Ventures and author of AI 2041: Ten Visions for Our Future. 

In AI 2041, Kai-Fu and co-author Chen Qiufan tell the story of how AI could shape our future through a series of 10 “scientific fiction” short stories. In our conversation with Kai-Fu, we explore why he chose 20 years as the time horizon for these stories, and dig into a few of the stories in more detail. We explore the potential for level 5 autonomous driving and what effect that will have on both established and developing nations, the potential outcomes when dealing with job displacement, and his perspective on how the book will be received. We also discuss the potential consequences of autonomous weapons, if we should actually worry about singularity or superintelligence, and the evolution of regulations around AI in 20 years.

We’d love to hear from you! What are your thoughts on any of the stories we discuss in the interview? Will you be checking this book out? Let us know in the comments on the show notes page at twimlai.com/go/516.</description>
      <pubDate>Mon, 06 Sep 2021 16:00:00 -0000</pubDate>
      <itunes:title>Exploring AI 2041 with Kai-Fu Lee</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/769ab41a-0cd0-11ec-9365-b799eee8e622/image/TWIML_COVER_800x800_KFL.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Kai-Fu Lee, chairman and CEO of Sinovation Ventures and author of AI 2041: Ten Visions for Our Future. 

In AI 2041, Kai-Fu and co-author Chen Qiufan tell the story of how AI could shape our future through a series of 10 “scientific fiction” short stories. In our conversation with Kai-Fu, we explore why he chose 20 years as the time horizon for these stories, and dig into a few of the stories in more detail. We explore the potential for level 5 autonomous driving and what effect that will have on both established and developing nations, the potential outcomes when dealing with job displacement, and his perspective on how the book will be received. We also discuss the potential consequences of autonomous weapons, if we should actually worry about singularity or superintelligence, and the evolution of regulations around AI in 20 years.

We’d love to hear from you! What are your thoughts on any of the stories we discuss in the interview? Will you be checking this book out? Let us know in the comments on the show notes page at twimlai.com/go/516.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Kai-Fu Lee, chairman and CEO of Sinovation Ventures and author of AI 2041: Ten Visions for Our Future. </p><p><br></p><p>In AI 2041, Kai-Fu and co-author Chen Qiufan tell the story of how AI could shape our future through a series of 10 “scientific fiction” short stories. In our conversation with Kai-Fu, we explore why he chose 20 years as the time horizon for these stories, and dig into a few of the stories in more detail. We explore the potential for level 5 autonomous driving and what effect that will have on both established and developing nations, the potential outcomes when dealing with job displacement, and his perspective on how the book will be received. We also discuss the potential consequences of autonomous weapons, if we should actually worry about singularity or superintelligence, and the evolution of regulations around AI in 20 years.</p><p><br></p><p>We’d love to hear from you! What are your thoughts on any of the stories we discuss in the interview? Will you be checking this book out? Let us know in the comments on the show notes page at <a href="twimlai.com/go/516">twimlai.com/go/516</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2832</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[769ab41a-0cd0-11ec-9365-b799eee8e622]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5690726360.mp3?updated=1631211180"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Advancing Robotic Brains and Bodies with Daniela Rus - #515</title>
      <link>https://twimlai.com/advancing-robotic-brains-and-bodies-with-daniela-rus</link>
      <description>Today we’re joined by Daniela Rus, director of CSAIL &amp; Deputy Dean of Research at MIT. 

In our conversation with Daniela, we explore the history of CSAIL, her role as director of one of the most prestigious computer science labs in the world, how she defines robots, and her take on the current AI for robotics landscape. We also discuss some of her recent research interests including soft robotics, adaptive control in autonomous vehicles, and a mini surgeon robot made with sausage casing(?!). 

The complete show notes for this episode can be found at twimlai.com/go/515.</description>
      <pubDate>Thu, 02 Sep 2021 17:43:22 -0000</pubDate>
      <itunes:title>Advancing Robotic Brains and Bodies with Daniela Rus</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>515</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/a1c1cfe4-0c07-11ec-b4e1-ff2c4a861be8/image/TWIML_COVER_800x800_DR2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Daniela Rus, director of CSAIL &amp; Deputy Dean of Research at MIT. 

In our conversation with Daniela, we explore the history of CSAIL, her role as director of one of the most prestigious computer science labs in the world, how she defines robots, and her take on the current AI for robotics landscape. We also discuss some of her recent research interests including soft robotics, adaptive control in autonomous vehicles, and a mini surgeon robot made with sausage casing(?!). 

The complete show notes for this episode can be found at twimlai.com/go/515.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Daniela Rus, director of CSAIL &amp; Deputy Dean of Research at MIT. </p><p><br></p><p>In our conversation with Daniela, we explore the history of CSAIL, her role as director of one of the most prestigious computer science labs in the world, how she defines robots, and her take on the current AI for robotics landscape. We also discuss some of her recent research interests including soft robotics, adaptive control in autonomous vehicles, and a mini surgeon robot made with sausage casing(?!). </p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/515">twimlai.com/go/515</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2736</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a1c1cfe4-0c07-11ec-b4e1-ff2c4a861be8]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9350746892.mp3?updated=1630604551"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Neural Synthesis of Binaural Speech From Mono Audio with Alexander Richard - #514</title>
      <link>https://twimlai.com/neural-synthesis-of-binaural-speech-from-mono-audio-with-alexander-richard</link>
      <description>Today we’re joined by Alexander Richard, a research scientist at Facebook Reality Labs, and recipient of the ICLR Best Paper Award for his paper “Neural Synthesis of Binaural Speech From Mono Audio.” 

We begin our conversation with a look into the charter of Facebook Reality Labs, and Alex’s specific Codec Avatar project, where they’re developing AR/VR for social telepresence (applications like this come to mind). Of course, we dig into the aforementioned paper, discussing the difficulty in improving the quality of audio and the role of dynamic time warping, as well as the challenges of creating this model. Finally, Alex shares his thoughts on 3D rendering for audio, and other future research directions. 

The complete show notes for this episode can be found at twimlai.com/go/514.</description>
      <pubDate>Mon, 30 Aug 2021 18:41:14 -0000</pubDate>
      <itunes:title>Neural Synthesis of Binaural Speech From Mono Audio with Alexander Richard</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>514</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/1279f886-09a3-11ec-86d7-7b4f262edddb/image/TWIML_COVER_800x800_AR2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Alexander Richard, a research scientist at Facebook Reality Labs, and recipient of the ICLR Best Paper Award for his paper “Neural Synthesis of Binaural Speech From Mono Audio.” 

We begin our conversation with a look into the charter of Facebook Reality Labs, and Alex’s specific Codec Avatar project, where they’re developing AR/VR for social telepresence (applications like this come to mind). Of course, we dig into the aforementioned paper, discussing the difficulty in improving the quality of audio and the role of dynamic time warping, as well as the challenges of creating this model. Finally, Alex shares his thoughts on 3D rendering for audio, and other future research directions. 

The complete show notes for this episode can be found at twimlai.com/go/514.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Alexander Richard, a research scientist at Facebook Reality Labs, and recipient of the ICLR Best Paper Award for his paper “<em>Neural Synthesis of Binaural Speech From Mono Audio.” </em></p><p><br></p><p>We begin our conversation with a look into the charter of Facebook Reality Labs, and Alex’s specific Codec Avatar project, where they’re developing AR/VR for social telepresence (<a href="https://about.fb.com/news/2021/08/introducing-horizon-workrooms-remote-collaboration-reimagined/">applications like this come to mind</a>). Of course, we dig into the aforementioned paper, discussing the difficulty in improving the quality of audio and the role of dynamic time warping, as well as the challenges of creating this model. Finally, Alex shares his thoughts on 3D rendering for audio, and other future research directions. </p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/514">twimlai.com/go/514</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2761</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1279f886-09a3-11ec-86d7-7b4f262edddb]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4439216324.mp3?updated=1630349522"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Using Brain Imaging to Improve Neural Networks with Alona Fyshe - #513</title>
      <link>https://twimlai.com/using-brain-imaging-to-improve-neural-networks-with-alona-fyshe</link>
      <description>Today we’re joined by Alona Fyshe, an assistant professor at the University of Alberta. 

We caught up with Alona on the heels of an interesting panel discussion that she participated in, centered around improving AI systems using research about brain activity. In our conversation, we explore the multiple types of brain images that are used in this research, what representations look like in these images, and how we can improve language models without knowing explicitly how the brain understands the language. We also discuss similar experiments that have incorporated vision, the relationship between computer vision models and the representations that language models create, and future projects like applying a reinforcement learning framework to improve language generation.

The complete show notes for this episode can be found at twimlai.com/go/513.</description>
      <pubDate>Thu, 26 Aug 2021 17:33:48 -0000</pubDate>
      <itunes:title>Using Brain Imaging to Improve Neural Networks with Alona Fyshe</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>513</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/b4c906e2-0690-11ec-b2d4-d33528ce76b9/image/TWIML_COVER_800x800_AF.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Alona Fyshe, an assistant professor at the University of Alberta. 

We caught up with Alona on the heels of an interesting panel discussion that she participated in, centered around improving AI systems using research about brain activity. In our conversation, we explore the multiple types of brain images that are used in this research, what representations look like in these images, and how we can improve language models without knowing explicitly how the brain understands the language. We also discuss similar experiments that have incorporated vision, the relationship between computer vision models and the representations that language models create, and future projects like applying a reinforcement learning framework to improve language generation.

The complete show notes for this episode can be found at twimlai.com/go/513.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by <a href="mailto:alona@ualberta.ca">Alona Fyshe</a>, an assistant professor at the University of Alberta. </p><p><br></p><p>We caught up with Alona on the heels of an interesting panel discussion that she participated in, centered around improving AI systems using research about brain activity. In our conversation, we explore the multiple types of brain images that are used in this research, what representations look like in these images, and how we can improve language models without knowing explicitly how the brain understands the language. We also discuss similar experiments that have incorporated vision, the relationship between computer vision models and the representations that language models create, and future projects like applying a reinforcement learning framework to improve language generation.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="twimlai.com/go/513">twimlai.com/go/513</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2185</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b4c906e2-0690-11ec-b2d4-d33528ce76b9]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1310133709.mp3?updated=1629999758"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Adaptivity in Machine Learning with Samory Kpotufe - #512</title>
      <link>https://twimlai.com/adaptivity-in-machine-learning-with-samory-kpotufe</link>
      <description>Today we’re joined by Samory Kpotufe, an associate professor at Columbia University and program chair of the 2021 Conference on Learning Theory (COLT). 

In our conversation with Samory, we explore his research at the intersection of machine learning, statistics, and learning theory, and his goal of reaching self-tuning, adaptive algorithms. We discuss Samory’s research in transfer learning and other potential procedures that could positively affect transfer, as well as his work on understanding unsupervised learning, including how clustering could be applied to real-world applications like cybersecurity and IoT (smart homes, smart city sensors, etc.) using methods like dimension reduction and random projection. If you enjoyed this interview, you should definitely check out our conversation with Jelani Nelson on the “Theory of Computation.”

The complete show notes for this episode can be found at https://twimlai.com/go/512.</description>
      <pubDate>Mon, 23 Aug 2021 18:27:14 -0000</pubDate>
      <itunes:title>Adaptivity in Machine Learning with Samory Kpotufe</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>512</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/b7034c34-042d-11ec-81aa-df4f80df01ed/image/TWIML_COVER_800x800_SK3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Samory Kpotufe, an associate professor at Columbia University and program chair of the 2021 Conference on Learning Theory (COLT). 

In our conversation with Samory, we explore his research at the intersection of machine learning, statistics, and learning theory, and his goal of reaching self-tuning, adaptive algorithms. We discuss Samory’s research in transfer learning and other potential procedures that could positively affect transfer, as well as his work on understanding unsupervised learning, including how clustering could be applied to real-world applications like cybersecurity and IoT (smart homes, smart city sensors, etc.) using methods like dimension reduction and random projection. If you enjoyed this interview, you should definitely check out our conversation with Jelani Nelson on the “Theory of Computation.”

The complete show notes for this episode can be found at https://twimlai.com/go/512.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Samory Kpotufe, an associate professor at Columbia University and program chair of the 2021 Conference on Learning Theory (COLT). </p><p><br></p><p>In our conversation with Samory, we explore his research at the intersection of machine learning, statistics, and learning theory, and his goal of reaching self-tuning, adaptive algorithms. We discuss Samory’s research in transfer learning and other potential procedures that could positively affect transfer, as well as his work on understanding unsupervised learning, including how clustering could be applied to real-world applications like cybersecurity and IoT (smart homes, smart city sensors, etc.) using methods like dimension reduction and random projection. If you enjoyed this interview, you should definitely check out our conversation with Jelani Nelson on the “<a href="https://twimlai.com/theory-of-computation-with-jelani-nelson/"><em>Theory of Computation</em></a>.”</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/512">https://twimlai.com/go/512</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2998</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b7034c34-042d-11ec-81aa-df4f80df01ed]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2401244713.mp3?updated=1629776673"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>A Social Scientist’s Perspective on AI with Eric Rice - #511</title>
      <link>https://twimlai.com/a-social-scientists-perspective-on-ai-with-eric-rice</link>
      <description>Today we’re joined by Eric Rice, associate professor at USC, and the co-director of the USC Center for Artificial Intelligence in Society. 

Eric is a sociologist by trade, and in our conversation, we explore how he has made extensive inroads within the machine learning community through collaborations with ML academics and researchers. We discuss some of the most important lessons Eric has learned while doing interdisciplinary projects, and how the social scientist’s approach to assessment and measurement differs from a computer scientist’s approach to assessing the algorithmic performance of a model. 

We specifically explore a few projects he’s worked on, including HIV prevention amongst the homeless youth population in LA, a project he spearheaded with former guest Milind Tambe, as well as a project focused on using ML techniques to assist in the identification of people in need of housing resources, and ensuring that they get the best interventions possible. 

If you enjoyed this conversation, I encourage you to check out our conversation with Milind Tambe from last year’s TWIMLfest on Why AI Innovation and Social Impact Go Hand in Hand.

The complete show notes for this episode can be found at https://twimlai.com/go/511.</description>
      <pubDate>Thu, 19 Aug 2021 16:09:49 -0000</pubDate>
      <itunes:title>A Social Scientist’s Perspective on AI with Eric Rice</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>511</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/63a20c6e-00ff-11ec-95c6-2fd120937e5f/image/TWIML_COVER_800x800_ER.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Eric Rice, associate professor at USC, and the co-director of the USC Center for Artificial Intelligence in Society. 

Eric is a sociologist by trade, and in our conversation, we explore how he has made extensive inroads within the machine learning community through collaborations with ML academics and researchers. We discuss some of the most important lessons Eric has learned while doing interdisciplinary projects, and how the social scientist’s approach to assessment and measurement differs from a computer scientist’s approach to assessing the algorithmic performance of a model. 

We specifically explore a few projects he’s worked on, including HIV prevention amongst the homeless youth population in LA, a project he spearheaded with former guest Milind Tambe, as well as a project focused on using ML techniques to assist in the identification of people in need of housing resources, and ensuring that they get the best interventions possible. 

If you enjoyed this conversation, I encourage you to check out our conversation with Milind Tambe from last year’s TWIMLfest on Why AI Innovation and Social Impact Go Hand in Hand.

The complete show notes for this episode can be found at https://twimlai.com/go/511.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Eric Rice, associate professor at USC, and the co-director of the USC Center for Artificial Intelligence in Society. </p><p><br></p><p>Eric is a sociologist by trade, and in our conversation, we explore how he has made extensive inroads within the machine learning community through collaborations with ML academics and researchers. We discuss some of the most important lessons Eric has learned while doing interdisciplinary projects, and how the social scientist’s approach to assessment and measurement differs from a computer scientist’s approach to assessing the algorithmic performance of a model. </p><p><br></p><p>We specifically explore a few projects he’s worked on, including HIV prevention amongst the homeless youth population in LA, a project he spearheaded with former guest <a href="https://twimlai.com/why-ai-innovation-and-social-impact-go-hand-in-hand-with-milind-tambe/">Milind Tambe</a>, as well as a project focused on using ML techniques to assist in the identification of people in need of housing resources, and ensuring that they get the best interventions possible. </p><p><br></p><p>If you enjoyed this conversation, I encourage you to check out our conversation with Milind Tambe from last year’s <a href="https://twimlai.com/twimlfest/">TWIMLfest</a> on <a href="https://twimlai.com/why-ai-innovation-and-social-impact-go-hand-in-hand-with-milind-tambe/"><em>Why AI Innovation and Social Impact Go Hand in Hand</em></a>.</p><p><br></p><p>The complete show notes for this episode can be found at https://twimlai.com/go/511.</p>]]>
      </content:encoded>
      <itunes:duration>2627</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[63a20c6e-00ff-11ec-95c6-2fd120937e5f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2769200761.mp3?updated=1629389502"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Applications of Variational Autoencoders and Bayesian Optimization with José Miguel Hernández Lobato - #510</title>
      <link>https://twimlai.com/applications-of-variational-autoencoders-and-bayesian-optimization-with-jose-miguel-hernandez-lobato</link>
      <description>Today we’re joined by José Miguel Hernández-Lobato, a university lecturer in machine learning at the University of Cambridge. In our conversation with Miguel, we explore his work at the intersection of Bayesian learning and deep learning. We discuss how he’s been applying this to the field of molecular design and discovery via two different methods, with one paper searching for possible chemical reactions, and the other doing the same, but in 3D space. We also discuss the challenges of sample efficiency, creating objective functions, and how those manifest themselves in these experiments, and how he has integrated the Bayesian approach into RL problems. We also talk through a handful of other papers that Miguel has presented at recent conferences, which are all linked at twimlai.com/go/510.</description>
      <pubDate>Mon, 16 Aug 2021 17:54:00 -0000</pubDate>
      <itunes:title>Applications of Variational Autoencoders and Bayesian Optimization with José Miguel Hernández Lobato</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>510</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/409719b8-feb0-11eb-9ef2-47b9b353ba29/image/TWIML_COVER_800x800_JMHL.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by José Miguel Hernández-Lobato, a university lecturer in machine learning at the University of Cambridge. In our conversation with Miguel, we explore his work at the intersection of Bayesian learning and deep learning. We discuss how he’s been applying this to the field of molecular design and discovery via two different methods, with one paper searching for possible chemical reactions, and the other doing the same, but in 3D space. We also discuss the challenges of sample efficiency, creating objective functions, and how those manifest themselves in these experiments, and how he has integrated the Bayesian approach into RL problems. We also talk through a handful of other papers that Miguel has presented at recent conferences, which are all linked at twimlai.com/go/510.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by <a href="mailto:jmh233@cam.ac.uk">José Miguel Hernández-Lobato</a>, a university lecturer in machine learning at the University of Cambridge. In our conversation with Miguel, we explore his work at the intersection of Bayesian learning and deep learning. We discuss how he’s been applying this to the field of molecular design and discovery via two different methods, with one paper searching for possible chemical reactions, and the other doing the same, but in 3D space. We also discuss the challenges of sample efficiency, creating objective functions, and how those manifest themselves in these experiments, and how he has integrated the Bayesian approach into RL problems. We also talk through a handful of other papers that Miguel has presented at recent conferences, which are all linked at twimlai.com/go/510.</p>]]>
      </content:encoded>
      <itunes:duration>2547</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[409719b8-feb0-11eb-9ef2-47b9b353ba29]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2500853835.mp3?updated=1629136194"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Codex, OpenAI’s Automated Code Generation API with Greg Brockman - #509</title>
      <link>https://twimlai.com/codex-openais-automated-code-generation-api-with-greg-brockman</link>
      <description>Today we’re joined by return guest Greg Brockman, co-founder and CTO of OpenAI. We had the pleasure of reconnecting with Greg on the heels of the announcement of Codex, OpenAI’s most recent release. Codex is a direct descendant of GPT-3 that allows users to do autocomplete tasks based on all of the publicly available text and code on the internet. In our conversation with Greg, we explore the distinct results Codex sees in comparison to GPT-3, relative to the prompts it's being given, how it could evolve given different types of training data, and how users and practitioners should think about interacting with the API to get the most out of it. We also discuss Copilot, their recent collaboration with GitHub that is built on Codex, as well as the implications of Codex for coding education, explainability, and broader societal issues like fairness and bias, copyright, and jobs. 

The complete show notes for this episode can be found at twimlai.com/go/509.</description>
      <pubDate>Thu, 12 Aug 2021 16:35:00 -0000</pubDate>
      <itunes:title>Codex, OpenAI’s Automated Code Generation API with Greg Brockman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>509</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/9a0f0e34-f9e6-11eb-b563-ab518cdb43a6/image/TWIML_COVER_800x800_GB.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode we're joined by OpenAI CTO and co-founder Greg Brockman to discuss their recent Codex and Copilot product releases. </itunes:subtitle>
      <itunes:summary>Today we’re joined by return guest Greg Brockman, co-founder and CTO of OpenAI. We had the pleasure of reconnecting with Greg on the heels of the announcement of Codex, OpenAI’s most recent release. Codex is a direct descendant of GPT-3 that allows users to do autocomplete tasks based on all of the publicly available text and code on the internet. In our conversation with Greg, we explore the distinct results Codex sees in comparison to GPT-3, relative to the prompts it's being given, how it could evolve given different types of training data, and how users and practitioners should think about interacting with the API to get the most out of it. We also discuss Copilot, their recent collaboration with GitHub that is built on Codex, as well as the implications of Codex for coding education, explainability, and broader societal issues like fairness and bias, copyright, and jobs. 

The complete show notes for this episode can be found at twimlai.com/go/509.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by return guest Greg Brockman, co-founder and CTO of OpenAI. We had the pleasure of <a href="https://twimlai.com/twiml-talk-74-towards-artificial-general-intelligence-greg-brockman/">reconnecting</a> with Greg on the heels of the announcement of Codex, OpenAI’s most recent release. Codex is a direct descendant of GPT-3 that allows users to do autocomplete tasks based on all of the publicly available text and code on the internet. In our conversation with Greg, we explore the distinct results Codex sees in comparison to GPT-3, relative to the prompts it's being given, how it could evolve given different types of training data, and how users and practitioners should think about interacting with the API to get the most out of it. We also discuss Copilot, their recent collaboration with GitHub that is built on Codex, as well as the implications of Codex for coding education, explainability, and broader societal issues like fairness and bias, copyright, and jobs. </p><p><br></p><p>The complete show notes for this episode can be found at twimlai.com/go/509.</p>]]>
      </content:encoded>
      <itunes:duration>2837</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9a0f0e34-f9e6-11eb-b563-ab518cdb43a6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8376207401.mp3?updated=1628787032"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Spatiotemporal Data Analysis with Rose Yu - #508</title>
      <link>https://twimlai.com/spatiotemporal-data-analysis-with-rose-yu</link>
      <description>Today we’re joined by Rose Yu, an assistant professor at the Jacobs School of Engineering at UC San Diego. 

Rose’s research focuses on advancing machine learning algorithms and methods for analyzing large-scale time-series and spatiotemporal data, then applying those developments to climate, transportation, and other physical sciences. We discuss how Rose incorporates physical knowledge and partial differential equations in these use cases and how symmetries are being exploited. We also explore her novel neural network design, which focuses on non-traditional convolution operators and allows for general symmetry, how we get from these representations to the network architectures she has developed, and another recent paper on deep spatiotemporal models. 

The complete show notes for this episode can be found at twimlai.com/go/508.</description>
      <pubDate>Mon, 09 Aug 2021 18:08:00 -0000</pubDate>
      <itunes:title>Spatiotemporal Data Analysis with Rose Yu</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>508</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/fac3eb46-f932-11eb-bb91-573243b9a237/image/TWIML_COVER_800x800_RY.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle/>
      <itunes:summary>Today we’re joined by Rose Yu, an assistant professor at the Jacobs School of Engineering at UC San Diego. 

Rose’s research focuses on advancing machine learning algorithms and methods for analyzing large-scale time-series and spatiotemporal data, then applying those developments to climate, transportation, and other physical sciences. We discuss how Rose incorporates physical knowledge and partial differential equations in these use cases and how symmetries are being exploited. We also explore her novel neural network design, which focuses on non-traditional convolution operators and allows for general symmetry, how we get from these representations to the network architectures she has developed, and another recent paper on deep spatiotemporal models. 

The complete show notes for this episode can be found at twimlai.com/go/508.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Rose Yu, an assistant professor at the Jacobs School of Engineering at UC San Diego. </p><p><br></p><p>Rose’s research focuses on advancing machine learning algorithms and methods for analyzing large-scale time-series and spatiotemporal data, then applying those developments to climate, transportation, and other physical sciences. We discuss how Rose incorporates physical knowledge and partial differential equations in these use cases and how symmetries are being exploited. We also explore her novel neural network design, which focuses on non-traditional convolution operators and allows for general symmetry, how we get from these representations to the network architectures she has developed, and another recent paper on deep spatiotemporal models. </p><p><br></p><p>The complete show notes for this episode can be found at twimlai.com/go/508.</p>]]>
      </content:encoded>
      <itunes:duration>1931</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fac3eb46-f932-11eb-bb91-573243b9a237]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4270281358.mp3?updated=1628532257"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Parallelism and Acceleration for Large Language Models with Bryan Catanzaro - #507</title>
      <link>https://twimlai.com/parallelism-and-acceleration-for-large-language-models-with-bryan-catanzaro</link>
      <description>Today we’re joined by Bryan Catanzaro, vice president of applied deep learning research at NVIDIA.

Most folks know Bryan as one of the founders/creators of cuDNN, the accelerated library for deep neural networks. In our conversation, we explore his interest in high-performance computing and its recent overlap with AI, his current work on Megatron, a framework for training giant language models, and the basic approach for distributing a large language model on DGX infrastructure. 
We also discuss the three kinds of parallelism that Megatron provides when training models: tensor parallelism, pipeline parallelism, and data parallelism. Finally, we explore his work on the Deep Learning Super Sampling project and the role it's playing in the present and future of game development via ray tracing. 

The complete show notes for this episode can be found at twimlai.com/go/507.</description>
      <pubDate>Thu, 05 Aug 2021 17:35:00 -0000</pubDate>
      <itunes:title>Parallelism and Acceleration for Large Language Models with Bryan Catanzaro</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>507</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/1265265a-f601-11eb-bc59-fb86c8f1a2c8/image/TWIML_COVER_800x800_BC4.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we're joined by Bryan Catanzaro of NVIDIA to discuss his work on cuDNN, high performance computing, parallelism for large language models, deep learning super sampling, and much more!</itunes:subtitle>
      <itunes:summary>Today we’re joined by Bryan Catanzaro, vice president of applied deep learning research at NVIDIA.

Most folks know Bryan as one of the founders/creators of cuDNN, the accelerated library for deep neural networks. In our conversation, we explore his interest in high-performance computing and its recent overlap with AI, his current work on Megatron, a framework for training giant language models, and the basic approach for distributing a large language model on DGX infrastructure. 
We also discuss the three kinds of parallelism that Megatron provides when training models: tensor parallelism, pipeline parallelism, and data parallelism. Finally, we explore his work on the Deep Learning Super Sampling project and the role it's playing in the present and future of game development via ray tracing. 

The complete show notes for this episode can be found at twimlai.com/go/507.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by <a href="mailto:bcatanzaro@nvidia.com">Bryan Catanzaro</a>, vice president of applied deep learning research at NVIDIA.</p><p><br></p><p>Most folks know Bryan as one of the founders/creators of cuDNN, the accelerated library for deep neural networks. In our conversation, we explore his interest in high-performance computing and its recent overlap with AI, his current work on Megatron, a framework for training giant language models, and the basic approach for distributing a large language model on DGX infrastructure. </p><p>We also discuss the three kinds of parallelism that Megatron provides when training models: tensor parallelism, pipeline parallelism, and data parallelism. Finally, we explore his work on the Deep Learning Super Sampling project and the role it's playing in the present and future of game development via ray tracing. </p><p><br></p><p>The complete show notes for this episode can be found at twimlai.com/go/507.</p>]]>
      </content:encoded>
      <itunes:duration>3033</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1265265a-f601-11eb-bc59-fb86c8f1a2c8]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6512516260.mp3?updated=1628185187"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Applying the Causal Roadmap to Optimal Dynamic Treatment Rules with Lina Montoya - #506</title>
      <link>https://twimlai.com/applying-the-causal-roadmap-to-optimal-dynamic-treatment-rules-with-lina-montoya</link>
      <description>Today we close out our 2021 ICML series joined by Lina Montoya, a postdoctoral researcher at UNC Chapel Hill. 
In our conversation with Lina, who was an invited speaker at the Neglected Assumptions in Causal Inference Workshop, we explore her work applying Optimal Dynamic Treatment (ODT) rules to understand which kinds of individuals respond best to specific interventions in the US criminal justice system. We discuss the concept of neglected assumptions and how it connects to ODT rule estimation, as well as a breakdown of the causal roadmap, developed by researchers at UC Berkeley. 
Finally, Lina talks us through the roadmap as applied to the ODT rule problem, how she’s applied a “superlearner” algorithm to this problem, how it was trained, and what the future of this research looks like.
The complete show notes for this episode can be found at twimlai.com/go/506.</description>
      <pubDate>Mon, 02 Aug 2021 17:20:00 -0000</pubDate>
      <itunes:title>Applying the Causal Roadmap to Optimal Dynamic Treatment Rules with Lina Montoya</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>506</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/44b2422c-f47c-11eb-98a3-c3d915bed7b6/image/TWIML_COVER_800x800_LM.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we close out our 2021 ICML series joined by Lina Montoya, a postdoctoral researcher at UNC Chapel Hill.  In our conversation with Lina, who was an invited speaker at the Neglected Assumptions in Causal Inference Workshop, we explored her...</itunes:subtitle>
      <itunes:summary>Today we close out our 2021 ICML series joined by Lina Montoya, a postdoctoral researcher at UNC Chapel Hill. 
In our conversation with Lina, who was an invited speaker at the Neglected Assumptions in Causal Inference Workshop, we explore her work applying Optimal Dynamic Treatment (ODT) rules to understand which kinds of individuals respond best to specific interventions in the US criminal justice system. We discuss the concept of neglected assumptions and how it connects to ODT rule estimation, as well as a breakdown of the causal roadmap, developed by researchers at UC Berkeley. 
Finally, Lina talks us through the roadmap as applied to the ODT rule problem, how she’s applied a “superlearner” algorithm to this problem, how it was trained, and what the future of this research looks like.
The complete show notes for this episode can be found at twimlai.com/go/506.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we close out our 2021 ICML series joined by Lina Montoya, a postdoctoral researcher at UNC Chapel Hill. </p><p>In our conversation with Lina, who was an invited speaker at the Neglected Assumptions in Causal Inference Workshop, we explore her work applying Optimal Dynamic Treatment (ODT) rules to understand which kinds of individuals respond best to specific interventions in the US criminal justice system. We discuss the concept of <em>neglected assumptions</em> and how it connects to ODT rule estimation, as well as a breakdown of the causal roadmap, developed by researchers at UC Berkeley. </p><p>Finally, Lina talks us through the roadmap as applied to the ODT rule problem, how she’s applied a “superlearner” algorithm to this problem, how it was trained, and what the future of this research looks like.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/506">twimlai.com/go/506</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3260</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fb81d937-c03c-40ea-9140-0bbc90f65eca]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8664152635.mp3?updated=1629221479"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Constraint Active Search for Human-in-the-Loop Optimization with Gustavo Malkomes - #505</title>
      <link>https://twimlai.com/constraint-active-search-for-human-in-the-loop-optimization-with-gustavo-malkomes</link>
      <description>Today we continue our ICML series joined by Gustavo Malkomes, a research engineer at Intel via their recent acquisition of SigOpt. 
In our conversation with Gustavo, we explore his paper Beyond the Pareto Efficient Frontier: Constraint Active Search for Multiobjective Experimental Design, which focuses on a novel algorithmic solution for the iterative model search process. This new algorithm empowers teams to run experiments where they are not optimizing particular metrics but instead identifying parameter configurations that satisfy constraints in the metric space. This allows users to explore multiple metrics at once in an efficient, informed, and intelligent way that lends itself to real-world, human-in-the-loop scenarios.
The complete show notes for this episode can be found at twimlai.com/go/505.</description>
      <pubDate>Thu, 29 Jul 2021 18:19:00 -0000</pubDate>
      <itunes:title>Constraint Active Search for Human-in-the-Loop Optimization with Gustavo Malkomes</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>505</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/44ddfc8c-f47c-11eb-98a3-9b981b6c37f0/image/TWIML_COVER_800x800_GM.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue our ICML series joined by Gustavo Malkomes, a research engineer at Intel via their recent acquisition of SigOpt.  In our conversation with Gustavo, we explore his paper, which focuses on a novel algorithmic solution for the...</itunes:subtitle>
      <itunes:summary>Today we continue our ICML series joined by Gustavo Malkomes, a research engineer at Intel via their recent acquisition of SigOpt. 
In our conversation with Gustavo, we explore his paper Beyond the Pareto Efficient Frontier: Constraint Active Search for Multiobjective Experimental Design, which focuses on a novel algorithmic solution for the iterative model search process. This new algorithm empowers teams to run experiments where they are not optimizing particular metrics but instead identifying parameter configurations that satisfy constraints in the metric space. This allows users to explore multiple metrics at once in an efficient, informed, and intelligent way that lends itself to real-world, human-in-the-loop scenarios.
The complete show notes for this episode can be found at twimlai.com/go/505.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our ICML series joined by Gustavo Malkomes, a research engineer at Intel via their recent acquisition of SigOpt. </p><p>In our conversation with Gustavo, we explore his paper <a href="https://public.sigopt.com/conference-experiments/icml-2021/CAS_ICML_2021.pdf">Beyond the Pareto Efficient Frontier: Constraint Active Search for Multiobjective Experimental Design</a>, which focuses on a novel algorithmic solution for the iterative model search process. This new algorithm empowers teams to run experiments where they are not optimizing particular metrics but instead identifying parameter configurations that satisfy constraints in the metric space. This allows users to explore multiple metrics at once in an efficient, informed, and intelligent way that lends itself to real-world, human-in-the-loop scenarios.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/505">twimlai.com/go/505</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3038</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2bb8a48c-4f92-42aa-87d9-737a778e9e38]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7630564136.mp3?updated=1628029315"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Fairness and Robustness in Federated Learning with Virginia Smith - #504</title>
      <link>https://twimlai.com/fairness-and-robustness-in-federated-learning-with-virginia-smith</link>
      <description>Today we kick off our ICML coverage joined by Virginia Smith, an assistant professor in the Machine Learning Department at Carnegie Mellon University. 
In our conversation with Virginia, we explore her work on cross-device federated learning applications, including how the distributed learning aspects of FL relate to the privacy techniques. We dig into her paper from ICML, Ditto: Fair and Robust Federated Learning Through Personalization, what fairness means in contrast to AI ethics, the particulars of the failure modes, the relationship between models and the things being optimized across devices, and the tradeoffs between fairness and robustness.
We also discuss a second paper, Heterogeneity for the Win: One-Shot Federated Clustering, how the proposed method makes heterogeneity beneficial in data, how the heterogeneity of data is classified, and some applications of FL in an unsupervised setting.
The complete show notes for this episode can be found at twimlai.com/go/504.</description>
      <pubDate>Mon, 26 Jul 2021 18:14:00 -0000</pubDate>
      <itunes:title>Fairness and Robustness in Federated Learning with Virginia Smith</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>504</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/36874f88-ee98-11eb-9502-ab5faff85bd7/image/TWIML_COVER_800x800_VS2_1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we kick off our ICML coverage joined by Virginia Smith, an assistant professor in the Machine Learning Department at Carnegie Mellon University.  In our conversation with Virginia, we explore her work on cross-device federated learning...</itunes:subtitle>
      <itunes:summary>Today we kick off our ICML coverage joined by Virginia Smith, an assistant professor in the Machine Learning Department at Carnegie Mellon University. 
In our conversation with Virginia, we explore her work on cross-device federated learning applications, including how the distributed learning aspects of FL relate to the privacy techniques. We dig into her paper from ICML, Ditto: Fair and Robust Federated Learning Through Personalization, what fairness means in contrast to AI ethics, the particulars of the failure modes, the relationship between models and the things being optimized across devices, and the tradeoffs between fairness and robustness.
We also discuss a second paper, Heterogeneity for the Win: One-Shot Federated Clustering, how the proposed method makes heterogeneity beneficial in data, how the heterogeneity of data is classified, and some applications of FL in an unsupervised setting.
The complete show notes for this episode can be found at twimlai.com/go/504.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we kick off our ICML coverage joined by Virginia Smith, an assistant professor in the Machine Learning Department at Carnegie Mellon University. </p><p>In our conversation with Virginia, we explore her work on cross-device federated learning applications, including how the distributed learning aspects of FL relate to the privacy techniques. We dig into her paper from ICML, Ditto: Fair and Robust Federated Learning Through Personalization, what fairness means in contrast to AI ethics, the particulars of the failure modes, the relationship between models and the things being optimized across devices, and the tradeoffs between fairness and robustness.</p><p>We also discuss a second paper, Heterogeneity for the Win: One-Shot Federated Clustering, how the proposed method makes heterogeneity beneficial in data, how the heterogeneity of data is classified, and some applications of FL in an unsupervised setting.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/504">twimlai.com/go/504</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2211</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[92e6d42e-aee9-4657-a777-5f153bbfddd1]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5992505586.mp3?updated=1629221508"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scaling AI at H&amp;M Group with Errol Koolmeister - #503</title>
      <link>https://twimlai.com/scaling-ai-at-hm-group-with-errol-koolmeister</link>
      <description>Today we’re joined by Errol Koolmeister, the head of AI foundation at H&amp;M Group.
In our conversation with Errol, we explore H&amp;M’s AI journey, including its wide adoption across the company in 2016, and the various use cases in which it's deployed, like fashion forecasting and pricing algorithms. We discuss Errol’s first steps in taking on the challenge of scaling AI broadly at the company, the value-added learning from proofs of concept, and how to align in a sustainable, long-term way. Of course, we dig into the infrastructure and models being used, the biggest challenges faced, and the importance of managing the project portfolio, while Errol shares their approach to building infra for a specific product with many products in mind.
      <pubDate>Thu, 22 Jul 2021 20:18:00 -0000</pubDate>
      <itunes:title>Scaling AI at H&amp;M Group with Errol Koolmeister</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>503</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/36ab357e-ee98-11eb-9502-1f05e37c35f1/image/TWIML_COVER_800x800_EK.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Errol Koolmeister, the head of AI foundation at H&amp;M Group. In our conversation with Errol, we explore H&amp;M’s AI journey, including its wide adoption across the company in 2016, and the various use cases in which...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Errol Koolmeister, the head of AI foundation at H&amp;M Group.
In our conversation with Errol, we explore H&amp;M’s AI journey, including its wide adoption across the company in 2016, and the various use cases in which it's deployed, like fashion forecasting and pricing algorithms. We discuss Errol’s first steps in taking on the challenge of scaling AI broadly at the company, the value-added learning from proofs of concept, and how to align in a sustainable, long-term way. Of course, we dig into the infrastructure and models being used, the biggest challenges faced, and the importance of managing the project portfolio, while Errol shares their approach to building infra for a specific product with many products in mind.
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Errol Koolmeister, the head of AI foundation at H&amp;M Group.</p><p>In our conversation with Errol, we explore H&amp;M’s AI journey, including its wide adoption across the company in 2016, and the various use cases in which it's deployed, like fashion forecasting and pricing algorithms. We discuss Errol’s first steps in taking on the challenge of scaling AI broadly at the company, the value-added learning from proofs of concept, and how to align in a sustainable, long-term way. Of course, we dig into the infrastructure and models being used, the biggest challenges faced, and the importance of managing the project portfolio, while Errol shares their approach to building infra for a specific product with many products in mind.</p>]]>
      </content:encoded>
      <itunes:duration>2477</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e48051f7-5b3d-4851-b1d7-80e50a65e68b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8063915134.mp3?updated=1629221538"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Evolving AI Systems Gracefully with Stefano Soatto - #502</title>
      <link>https://twimlai.com/evolving-ai-systems-gracefully-with-stefano-soatto</link>
      <description>Today we’re joined by Stefano Soatto, VP of AI applications science at AWS and a professor of computer science at UCLA. 

Our conversation with Stefano centers on recent research of his called Graceful AI, which focuses on how to make trained systems evolve gracefully. We discuss the broader motivation for this research and the potential dangers or negative effects of constantly retraining ML models in production. We also talk about research into error rate clustering, the importance of model architecture when dealing with problems of model compression, how they’ve solved problems of regression and reprocessing by utilizing existing models, and much more.
The complete show notes for this episode can be found at twimlai.com/go/502.</description>
      <pubDate>Mon, 19 Jul 2021 20:05:00 -0000</pubDate>
      <itunes:title>Evolving AI Systems Gracefully with Stefano Soatto</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>502</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/36c9a0e0-ee98-11eb-9502-1337a4c74c94/image/TWIML_COVER_800x800_SS_4.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Stefano Soatto, VP of AI applications science at AWS and a professor of computer science at UCLA.  Our conversation with Stefano centers on recent research of his called Graceful AI, which focuses on how to make trained...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Stefano Soatto, VP of AI applications science at AWS and a professor of computer science at UCLA. 

Our conversation with Stefano centers on recent research of his called Graceful AI, which focuses on how to make trained systems evolve gracefully. We discuss the broader motivation for this research and the potential dangers or negative effects of constantly retraining ML models in production. We also talk about research into error rate clustering, the importance of model architecture when dealing with problems of model compression, how they’ve solved problems of regression and reprocessing by utilizing existing models, and much more.
The complete show notes for this episode can be found at twimlai.com/go/502.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Stefano Soatto, VP of AI applications science at AWS and a professor of computer science at UCLA. </p><p>Our conversation with Stefano centers on recent research of his called <em>Graceful AI</em>, which focuses on how to make trained systems evolve gracefully. We discuss the broader motivation for this research and the potential dangers or negative effects of constantly retraining ML models in production. We also talk about research into error rate clustering, the importance of model architecture when dealing with problems of model compression, how they’ve solved problems of regression and reprocessing by utilizing existing models, and much more.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/502">twimlai.com/go/502</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2951</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d067a960-38ff-4fe5-8909-cf51c67ccd16]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3438710510.mp3?updated=1629821111"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>ML Innovation in Healthcare with Suchi Saria - #501</title>
      <link>https://twimlai.com/ml-innovation-in-healthcare-with-suchi-saria</link>
      <description>Today we’re joined by Suchi Saria, the founder and CEO of Bayesian Health, the John C. Malone associate professor of computer science, statistics, and health policy, and the director of the machine learning and healthcare lab at Johns Hopkins University. 
Suchi shares a bit about her journey to working at the intersection of machine learning and healthcare, and how her research has spanned both medical policy and discovery. We discuss why it has taken so long for machine learning to become accepted and adopted by the healthcare infrastructure and where exactly we stand in the adoption process, where there have been “pockets” of tangible success. 
Finally, we explore the state of healthcare data, and of course, we talk about Suchi’s recently announced startup Bayesian Health and their goals in the healthcare space, and an accompanying study that looks at real-time ML inference in an EMR setting.
The complete show notes for this episode can be found at twimlai.com/go/501.</description>
      <pubDate>Thu, 15 Jul 2021 20:32:00 -0000</pubDate>
      <itunes:title>ML Innovation in Healthcare with Suchi Saria</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>501</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/36ec9802-ee98-11eb-9502-0b7b93beeee9/image/TWIML_COVER_800x800_SS1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Suchi Saria, the founder and CEO of Bayesian Health, the John C. Malone associate professor of computer science, statistics, and health policy, and the director of the machine learning and healthcare lab at Johns Hopkins...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Suchi Saria, the founder and CEO of Bayesian Health, the John C. Malone associate professor of computer science, statistics, and health policy, and the director of the machine learning and healthcare lab at Johns Hopkins University. 
Suchi shares a bit about her journey to working at the intersection of machine learning and healthcare, and how her research has spanned both medical policy and discovery. We discuss why it has taken so long for machine learning to become accepted and adopted by the healthcare infrastructure and where exactly we stand in the adoption process, where there have been “pockets” of tangible success. 
Finally, we explore the state of healthcare data, and of course, we talk about Suchi’s recently announced startup Bayesian Health and their goals in the healthcare space, and an accompanying study that looks at real-time ML inference in an EMR setting.
The complete show notes for this episode can be found at twimlai.com/go/501.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Suchi Saria, the founder and CEO of Bayesian Health, the John C. Malone associate professor of computer science, statistics, and health policy, and the director of the machine learning and healthcare lab at Johns Hopkins University. </p><p>Suchi shares a bit about her journey to working at the intersection of machine learning and healthcare, and how her research has spanned both medical policy and discovery. We discuss why it has taken so long for machine learning to become accepted and adopted by the healthcare infrastructure and where exactly we stand in the adoption process, where there have been “pockets” of tangible success. </p><p>Finally, we explore the state of healthcare data, and of course, we talk about Suchi’s recently announced startup Bayesian Health and their goals in the healthcare space, and an accompanying study that looks at real-time ML inference in an EMR setting.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/501">twimlai.com/go/501</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2722</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[46ed5336-732e-4107-8661-461d253feed7]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6918351851.mp3?updated=1629388953"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Cross-Device AI Acceleration, Compilation &amp; Execution with Jeff Gehlhaar - #500</title>
      <link>https://twimlai.com/cross-device-ai-acceleration-compilation-execution-with-jeff-gehlhaar</link>
      <description>Today we’re joined by a friend of the show Jeff Gehlhaar, VP of technology and the head of AI software platforms at Qualcomm. 
In our conversation with Jeff, we cover a ton of ground, starting with a bit of exploration around ML compilers, what they are, and their role in solving issues of parallelism. We also dig into the latest additions to the Snapdragon platform, AI Engine Direct, and how it works as a bridge to bring more capabilities across their platform, how benchmarking works in the context of the platform, how the work of other researchers we’ve spoken to on compression and quantization finds its way from research to product, and much more! 
After you check out this interview, you can look below for some of the other conversations with researchers mentioned. 
The complete show notes for this episode can be found at twimlai.com/go/500.</description>
      <pubDate>Mon, 12 Jul 2021 22:25:00 -0000</pubDate>
      <itunes:title>Cross-Device AI Acceleration, Compilation &amp; Execution with Jeff Gehlhaar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>500</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/370eec36-ee98-11eb-9502-9feac149d9ae/image/TWIML_COVER_800x800_JG3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by a friend of the show Jeff Gehlhaar, VP of technology and the head of AI software platforms at Qualcomm.  In our conversation with Jeff, we cover a ton of ground, starting with a bit of exploration around ML compilers, what...</itunes:subtitle>
      <itunes:summary>Today we’re joined by a friend of the show Jeff Gehlhaar, VP of technology and the head of AI software platforms at Qualcomm. 
In our conversation with Jeff, we cover a ton of ground, starting with a bit of exploration around ML compilers, what they are, and their role in solving issues of parallelism. We also dig into the latest additions to the Snapdragon platform, AI Engine Direct, and how it works as a bridge to bring more capabilities across their platform, how benchmarking works in the context of the platform, how the work of other researchers we’ve spoken to on compression and quantization finds its way from research to product, and much more! 
After you check out this interview, you can look below for some of the other conversations with researchers mentioned. 
The complete show notes for this episode can be found at twimlai.com/go/500.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by a friend of the show Jeff Gehlhaar, VP of technology and the head of AI software platforms at Qualcomm. </p><p>In our conversation with Jeff, we cover a ton of ground, starting with a bit of exploration around ML compilers, what they are, and their role in solving issues of parallelism. We also dig into the latest additions to the Snapdragon platform, AI Engine Direct, and how it works as a bridge to bring more capabilities across their platform, how benchmarking works in the context of the platform, how the work of other researchers we’ve spoken to on compression and quantization finds its way from research to product, and much more! </p><p>After you check out this interview, you can look below for some of the other conversations with researchers mentioned. </p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/500">twimlai.com/go/500</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2514</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a3636e0a-0b24-473f-810a-e3443872dad9]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2887028783.mp3?updated=1629821004"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Future of Human-Machine Interaction with Dan Bohus and Siddhartha Sen - #499</title>
      <link>https://twimlai.com/the-future-of-human-machine-interaction-with-dan-bohus-and-siddhartha-sen</link>
      <description>Today we continue our AI in Innovation series joined by Dan Bohus, senior principal researcher at Microsoft Research, and Siddhartha Sen, a principal researcher at Microsoft Research. 
In this conversation, we use a pair of research projects, Maia Chess and Situated Interaction, to springboard us into a conversation about the evolution of human-AI interaction. We discuss both of these projects individually, as well as the commonalities they have, how themes like understanding the human experience appear in their work, the types of models being used, the various types of data, and the complexity of each of their setups. 
We explore some of the challenges associated with getting computers to better understand human behavior and interact in ways that are more fluid. Finally, we touch on what excites both Dan and Sid about their respective projects, and what they’re excited about for the future.  
The complete show notes for this episode can be found at https://twimlai.com/go/499.</description>
      <pubDate>Thu, 08 Jul 2021 17:38:00 -0000</pubDate>
      <itunes:title>The Future of Human-Machine Interaction with Dan Bohus and Siddhartha Sen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>499</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/372dae8c-ee98-11eb-9502-dbe4d2688c33/image/TWIML_COVER_800x800_DB_SS.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue our AI in Innovation series joined by Dan Bohus, senior principal researcher at Microsoft Research, and Siddhartha Sen, a principal researcher at Microsoft Research.  In this conversation, we use a pair of research projects, Maia Chess and...</itunes:subtitle>
      <itunes:summary>Today we continue our AI in Innovation series joined by Dan Bohus, senior principal researcher at Microsoft Research, and Siddhartha Sen, a principal researcher at Microsoft Research. 
In this conversation, we use a pair of research projects, Maia Chess and Situated Interaction, to springboard us into a conversation about the evolution of human-AI interaction. We discuss both of these projects individually, as well as the commonalities they have, how themes like understanding the human experience appear in their work, the types of models being used, the various types of data, and the complexity of each of their setups. 
We explore some of the challenges associated with getting computers to better understand human behavior and interact in ways that are more fluid. Finally, we touch on what excites both Dan and Sid about their respective projects, and what they’re excited about for the future.  
The complete show notes for this episode can be found at https://twimlai.com/go/499.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our AI in Innovation series joined by Dan Bohus, senior principal researcher at Microsoft Research, and Siddhartha Sen, a principal researcher at Microsoft Research. </p><p>In this conversation, we use a pair of research projects, Maia Chess and Situated Interaction, to springboard us into a conversation about the evolution of human-AI interaction. We discuss both of these projects individually, as well as the commonalities they have, how themes like understanding the human experience appear in their work, the types of models being used, the various types of data, and the complexity of each of their setups. </p><p>We explore some of the challenges associated with getting computers to better understand human behavior and interact in ways that are more fluid. Finally, we touch on what excites both Dan and Sid about their respective projects, and what they’re excited about for the future.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/499">https://twimlai.com/go/499</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2924</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[75a5116a-250f-420f-9fda-2bf479ac8f10]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2082220992.mp3?updated=1629820964"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Vector Quantization for NN Compression with Julieta Martinez - #498</title>
      <link>https://twimlai.com/vector-quantization-for-nn-compression-with-julieta-martinez</link>
      <description>Today we’re joined by Julieta Martinez, a senior research scientist at recently announced startup Waabi. 
Julieta was a keynote speaker at the recent LatinX in AI workshop at CVPR, and our conversation focuses on her talk “What do Large-Scale Visual Search and Neural Network Compression have in Common,” which shows that multiple ideas from large-scale visual search can be used to achieve state-of-the-art neural network compression. We explore the commonality between large databases and dealing with high dimensional, many-parameter neural networks, the advantages of using product quantization, and how that plays out when using it to compress a neural network. 
We also dig into another paper Julieta presented at the conference, Deep Multi-Task Learning for Joint Localization, Perception, and Prediction, which details an architecture that is able to reuse computation between the three tasks, and is thus able to correct localization errors efficiently.
The complete show notes for this episode can be found at twimlai.com/go/498.</description>
      <pubDate>Mon, 05 Jul 2021 16:49:00 -0000</pubDate>
      <itunes:title>Vector Quantization for NN Compression with Julieta Martinez</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>498</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3753b898-ee98-11eb-9502-57e5e8b227f5/image/TWIML_COVER_800x800_JM2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Julieta Martinez, a senior research scientist at recently announced startup Waabi.  Julieta was a keynote speaker at the recent LatinX in AI workshop at CVPR, and our conversation focuses on her talk “What do Large-Scale Visual Search...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Julieta Martinez, a senior research scientist at recently announced startup Waabi. 
Julieta was a keynote speaker at the recent LatinX in AI workshop at CVPR, and our conversation focuses on her talk “What do Large-Scale Visual Search and Neural Network Compression have in Common,” which shows that multiple ideas from large-scale visual search can be used to achieve state-of-the-art neural network compression. We explore the commonality between large databases and dealing with high dimensional, many-parameter neural networks, the advantages of using product quantization, and how that plays out when using it to compress a neural network. 
We also dig into another paper Julieta presented at the conference, Deep Multi-Task Learning for Joint Localization, Perception, and Prediction, which details an architecture that is able to reuse computation between the three tasks, and is thus able to correct localization errors efficiently.
The complete show notes for this episode can be found at twimlai.com/go/498.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by <a href="mailto:jmartinez@waabi.ai">Julieta Martinez</a>, a senior research scientist at recently announced startup Waabi. </p><p>Julieta was a keynote speaker at the recent LatinX in AI workshop at CVPR, and our conversation focuses on her talk “<em>What do Large-Scale Visual Search and Neural Network Compression have in Common</em>,” which shows that multiple ideas from large-scale visual search can be used to achieve state-of-the-art neural network compression. We explore the commonality between large databases and dealing with high dimensional, many-parameter neural networks, the advantages of using product quantization, and how that plays out when using it to compress a neural network. </p><p>We also dig into another paper Julieta presented at the conference, <em>Deep Multi-Task Learning for Joint Localization, Perception, and Prediction</em>, which details an architecture that is able to reuse computation between the three tasks, and is thus able to correct localization errors efficiently.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/498">twimlai.com/go/498</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2478</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b5ff8e15-c606-4cd3-a4be-571c29788382]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7007485335.mp3?updated=1629820930"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Unsupervised Learning for Climate Informatics with Claire Monteleoni - #497</title>
      <link>https://twimlai.com/deep-unsupervised-learning-for-climate-informatics-with-claire-monteleoni</link>
      <description>Today we continue our CVPR 2021 coverage joined by Claire Monteleoni, an associate professor at the University of Colorado Boulder. 
We cover quite a bit of ground in our conversation with Claire, including her journey down the path from environmental activist to one of the leading climate informatics researchers in the world. We explore her current research interests, and the available opportunities in applying machine learning to climate informatics, including the interesting position of doing ML in a data-rich environment. 
Finally, we dig into the evolution of climate science-focused events and conferences, as well as the keynote Claire gave at the EarthVision workshop at CVPR, “Deep Unsupervised Learning for Climate Informatics,” which focused on semi-supervised and unsupervised deep learning approaches to studying rare and extreme climate events.
The complete show notes for this episode can be found at twimlai.com/go/497.</description>
      <pubDate>Thu, 01 Jul 2021 18:31:00 -0000</pubDate>
      <itunes:title>Deep Unsupervised Learning for Climate Informatics with Claire Monteleoni</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>497</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3774a4c2-ee98-11eb-9502-6b1d361041f9/image/TWIML_COVER_800x800_CM.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue our CVPR 2021 coverage joined by Claire Monteleoni, an associate professor at the University of Colorado Boulder.  We cover quite a bit of ground in our conversation with Claire, including her journey down the path from...</itunes:subtitle>
      <itunes:summary>Today we continue our CVPR 2021 coverage joined by Claire Monteleoni, an associate professor at the University of Colorado Boulder. 
We cover quite a bit of ground in our conversation with Claire, including her journey down the path from environmental activist to one of the leading climate informatics researchers in the world. We explore her current research interests, and the available opportunities in applying machine learning to climate informatics, including the interesting position of doing ML in a data-rich environment. 
Finally, we dig into the evolution of climate science-focused events and conferences, as well as the keynote Claire gave at the EarthVision workshop at CVPR, “Deep Unsupervised Learning for Climate Informatics,” which focused on semi-supervised and unsupervised deep learning approaches to studying rare and extreme climate events.
The complete show notes for this episode can be found at twimlai.com/go/497.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our CVPR 2021 coverage joined by Claire Monteleoni, an associate professor at the University of Colorado Boulder. </p><p>We cover quite a bit of ground in our conversation with Claire, including her journey down the path from environmental activist to one of the leading climate informatics researchers in the world. We explore her current research interests, and the available opportunities in applying machine learning to climate informatics, including the interesting position of doing ML in a data-rich environment. </p><p>Finally, we dig into the evolution of climate science-focused events and conferences, as well as the keynote Claire gave at the EarthVision workshop at CVPR, “<a href="http://www.classic.grss-ieee.org/earthvision2021/program.html">Deep Unsupervised Learning for Climate Informatics</a>,” which focused on semi-supervised and unsupervised deep learning approaches to studying rare and extreme climate events.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/497">twimlai.com/go/497</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2534</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5ed5da11-cf83-4c0e-9fda-72a98431a992]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4466763972.mp3?updated=1629820871"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Skip-Convolutions for Efficient Video Processing with Amir Habibian - #496</title>
      <link>https://twimlai.com/skip-convolutions-for-efficient-video-processing-with-amir-habibian</link>
      <description>Today we kick off our CVPR coverage joined by Amir Habibian, a senior staff engineer manager at Qualcomm Technologies. 
In our conversation with Amir, whose research primarily focuses on video perception, we discuss a few papers his team presented at the event. We explore the paper Skip-Convolutions for Efficient Video Processing, which looks at training discrete variables end to end in visual neural networks. We also discuss his work on the FrameExit paper, which proposes a conditional early exiting framework for efficient video recognition. 
The complete show notes for this episode can be found at twimlai.com/go/496.</description>
      <pubDate>Mon, 28 Jun 2021 19:59:00 -0000</pubDate>
      <itunes:title>Skip-Convolutions for Efficient Video Processing with Amir Habibian</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>496</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/379753aa-ee98-11eb-9502-13431f9faf0e/image/TWIML_COVER_800x800_AH2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we kick off our CVPR coverage joined by Amir Habibian, a senior staff engineer manager at Qualcomm Technologies.  In our conversation with Amir, whose research primarily focuses on video perception, we discuss a few papers they presented at...</itunes:subtitle>
      <itunes:summary>Today we kick off our CVPR coverage joined by Amir Habibian, a senior staff engineer manager at Qualcomm Technologies. 
In our conversation with Amir, whose research primarily focuses on video perception, we discuss a few papers his team presented at the event. We explore the paper Skip-Convolutions for Efficient Video Processing, which looks at training discrete variables end to end in visual neural networks. We also discuss his work on the FrameExit paper, which proposes a conditional early exiting framework for efficient video recognition. 
The complete show notes for this episode can be found at twimlai.com/go/496.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we kick off our CVPR coverage joined by Amir Habibian, a senior staff engineer manager at Qualcomm Technologies. </p><p>In our conversation with Amir, whose research primarily focuses on video perception, we discuss a few papers his team presented at the event. We explore the paper Skip-Convolutions for Efficient Video Processing, which looks at training discrete variables end to end in visual neural networks. We also discuss his work on the FrameExit paper, which proposes a conditional early exiting framework for efficient video recognition. </p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/496">twimlai.com/go/496</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2879</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4a5f7d31-e0e9-402a-b26e-135bc4f5a341]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3647520432.mp3?updated=1629820803"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Advancing NLP with Project Debater w/ Noam Slonim - #495</title>
      <link>https://twimlai.com/advancing-nlp-with-project-debater-w-noam-slonim</link>
      <description>Today we’re joined by Noam Slonim, the principal investigator of Project Debater at IBM Research. 
In our conversation with Noam, we explore the history of Project Debater, the first AI system that can “debate” humans on complex topics. We also dig into the evolution of the project, which is the culmination of 7 years of work and over 50 research papers, eventually becoming a Nature cover paper, “An Autonomous Debating System,” which details the system in its entirety. 
Finally, Noam details many of the underlying capabilities of Debater, including the relationship between systems preparation and training, evidence detection, detecting the quality of arguments, narrative generation, the use of conventional NLP methods like entity linking, and much more.
The complete show notes for this episode can be found at twimlai.com/go/495.</description>
      <pubDate>Thu, 24 Jun 2021 18:27:00 -0000</pubDate>
      <itunes:title>Advancing NLP with Project Debater w/ Noam Slonim</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>495</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/37b6cf28-ee98-11eb-9502-8bc00f36b9ac/image/TWIML_COVER_800x800_NS3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Noam Slonim, the principal investigator of Project Debater at IBM Research.  In our conversation with Noam, we explore the history of Project Debater, the first AI system that can “debate” humans on complex topics. We...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Noam Slonim, the principal investigator of Project Debater at IBM Research. 
In our conversation with Noam, we explore the history of Project Debater, the first AI system that can “debate” humans on complex topics. We also dig into the evolution of the project, which is the culmination of 7 years of work and over 50 research papers, eventually becoming a Nature cover paper, “An Autonomous Debating System,” which details the system in its entirety. 
Finally, Noam details many of the underlying capabilities of Debater, including the relationship between systems preparation and training, evidence detection, detecting the quality of arguments, narrative generation, the use of conventional NLP methods like entity linking, and much more.
The complete show notes for this episode can be found at twimlai.com/go/495.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Noam Slonim, the principal investigator of Project Debater at IBM Research. </p><p>In our conversation with Noam, we explore the history of Project Debater, the first AI system that can “debate” humans on complex topics. We also dig into the evolution of the project, which is the culmination of 7 years of work and over 50 research papers, eventually becoming a Nature cover paper, “<a href="https://eorder.sheridan.com/3_0/app/orders/11030/files/assets/common/downloads/Slonim.pdf">An Autonomous Debating System</a>,” which details the system in its entirety. </p><p>Finally, Noam details many of the underlying capabilities of Debater, including the relationship between systems preparation and training, evidence detection, detecting the quality of arguments, narrative generation, the use of conventional NLP methods like entity linking, and much more.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/495">twimlai.com/go/495</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3105</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[dd7bc844-757b-442e-abc7-b6bd61526736]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7886309923.mp3?updated=1629820699"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Bringing AI Up to Speed with Autonomous Racing w/ Madhur Behl - #494</title>
      <link>https://twimlai.com/bringing-ai-up-to-speed-with-autonomous-racing-w-madhur-behl</link>
      <description>Today we’re joined by Madhur Behl, an Assistant Professor in the department of computer science at the University of Virginia. 
In our conversation with Madhur, we explore the super interesting work he’s doing at the intersection of autonomous driving, ML/AI, and motorsports, where he’s teaching self-driving cars how to drive in an agile manner. We talk through the differences between traditional self-driving problems and those encountered in a racing environment, and the challenges in solving planning, perception, and control. 
We also discuss their upcoming race at the Indianapolis Motor Speedway, where Madhur and his students will compete for 1 million dollars in the world’s first head-to-head fully autonomous race, and how they’re preparing for it.</description>
      <pubDate>Mon, 21 Jun 2021 23:52:00 -0000</pubDate>
      <itunes:title>Bringing AI Up to Speed with Autonomous Racing w/ Madhur Behl</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>494</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/37d674d6-ee98-11eb-9502-677ba9cb6f4f/image/TWIML_COVER_800x800_MB2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Madhur Behl, an Assistant Professor in the department of computer science at the University of Virginia.  In our conversation with Madhur, we explore the super interesting work he’s doing at the intersection of...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Madhur Behl, an Assistant Professor in the department of computer science at the University of Virginia. 
In our conversation with Madhur, we explore the super interesting work he’s doing at the intersection of autonomous driving, ML/AI, and motorsports, where he’s teaching self-driving cars how to drive in an agile manner. We talk through the differences between traditional self-driving problems and those encountered in a racing environment, and the challenges in solving planning, perception, and control. 
We also discuss their upcoming race at the Indianapolis Motor Speedway, where Madhur and his students will compete for 1 million dollars in the world’s first head-to-head fully autonomous race, and how they’re preparing for it.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Madhur Behl, an Assistant Professor in the department of computer science at the University of Virginia. </p><p>In our conversation with Madhur, we explore the super interesting work he’s doing at the intersection of autonomous driving, ML/AI, and motorsports, where he’s teaching self-driving cars how to drive in an agile manner. We talk through the differences between traditional self-driving problems and those encountered in a racing environment, and the challenges in solving planning, perception, and control. </p><p>We also discuss their upcoming race at the Indianapolis Motor Speedway, where Madhur and his students will compete for 1 million dollars in the world’s first head-to-head fully autonomous race, and how they’re preparing for it.</p>]]>
      </content:encoded>
      <itunes:duration>3106</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1b4acfcf-2c49-40b9-ac22-ed5cb91e9ac6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1115322051.mp3?updated=1629389053"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI and Society: Past, Present and Future with Eric Horvitz - #493</title>
      <link>https://twimlai.com/ai-and-society-past-present-and-future-with-eric-horvitz</link>
      <description>Today we continue our AI Innovation series joined by Microsoft’s Chief Scientific Officer, Eric Horvitz. 
In our conversation with Eric, we explore his tenure as AAAI president and his focus on the future of AI and its ethical implications, the scope of the study on the topic, and how drastically the AI and machine learning landscape has changed since 2009. We also discuss Eric’s role at Microsoft and the Aether committee that has advised the company on issues of responsible AI since 2017.
Finally, we talk through his recent work as a member of the National Security Commission on AI, where he helped commission a 750+ page report on topics including the Future of AI R&amp;D, Building Trustworthy AI systems, civil liberties and privacy, and the challenging area of AI and autonomous weapons.  
The complete show notes for this episode can be found at twimlai.com/go/493.</description>
      <pubDate>Thu, 17 Jun 2021 17:00:00 -0000</pubDate>
      <itunes:title>AI and Society: Past, Present and Future with Eric Horvitz</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>493</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/37f8ab6e-ee98-11eb-9502-038f846a99d7/image/TWIML_COVER_800x800_EH.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue our AI Innovation series joined by Microsoft’s Chief Scientific Officer, Eric Horvitz.  In our conversation with Eric, we explore his tenure as AAAI president and his focus on the future of AI and its ethical...</itunes:subtitle>
      <itunes:summary>Today we continue our AI Innovation series joined by Microsoft’s Chief Scientific Officer, Eric Horvitz. 
In our conversation with Eric, we explore his tenure as AAAI president and his focus on the future of AI and its ethical implications, the scope of the study on the topic, and how drastically the AI and machine learning landscape has changed since 2009. We also discuss Eric’s role at Microsoft and the Aether committee that has advised the company on issues of responsible AI since 2017.
Finally, we talk through his recent work as a member of the National Security Commission on AI, where he helped commission a 750+ page report on topics including the Future of AI R&amp;D, Building Trustworthy AI systems, civil liberties and privacy, and the challenging area of AI and autonomous weapons.  
The complete show notes for this episode can be found at twimlai.com/go/493.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our AI Innovation series joined by Microsoft’s Chief Scientific Officer, Eric Horvitz. </p><p>In our conversation with Eric, we explore his tenure as AAAI president and his focus on the future of AI and its ethical implications, the scope of the study on the topic, and how drastically the AI and machine learning landscape has changed since 2009. We also discuss Eric’s role at Microsoft and the Aether committee that has advised the company on issues of responsible AI since 2017.</p><p>Finally, we talk through his recent work as a member of the National Security Commission on AI, where he helped commission a 750+ page report on topics including the Future of AI R&amp;D, Building Trustworthy AI systems, civil liberties and privacy, and the challenging area of AI and autonomous weapons.  </p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/493">twimlai.com/go/493</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3233</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[8f92c17e-53f6-4f87-aa19-280cb7a08372]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7035637693.mp3?updated=1629820674"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Agile Applied AI Research with Parvez Ahammad - #492</title>
      <link>https://twimlai.com/agile-applied-ai-research-with-parvez-ahammad</link>
      <description>Today we’re joined by Parvez Ahammad, head of data science applied research at LinkedIn.
In our conversation, Parvez shares his interesting take on organizing principles for his organization, starting with how data science teams are broadly organized at LinkedIn. We explore how they ensure time investments on long-term projects are managed, how to identify products that can help in a cross-cutting way across multiple lines of business, quantitative methodologies to identify unintended consequences in experimentation, and navigating the tension between research and applied ML teams in an organization. Finally, we discuss differential privacy, and their recently released GreyKite library, an open-source Python library developed to support forecasting.
The complete show notes for this episode can be found at twimlai.com/go/492.</description>
      <pubDate>Mon, 14 Jun 2021 17:10:00 -0000</pubDate>
      <itunes:title>Agile Applied AI Research with Parvez Ahammad</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>492</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3823cbc8-ee98-11eb-9502-b3aab1cbf2c7/image/TWIML_COVER_800x800_PA2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Parvez Ahammad, head of data science applied research at LinkedIn. In our conversation, Parvez shares his interesting take on organizing principles for his organization, starting with how data science teams are broadly...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Parvez Ahammad, head of data science applied research at LinkedIn.
In our conversation, Parvez shares his interesting take on organizing principles for his organization, starting with how data science teams are broadly organized at LinkedIn. We explore how they ensure time investments on long-term projects are managed, how to identify products that can help in a cross-cutting way across multiple lines of business, quantitative methodologies to identify unintended consequences in experimentation, and navigating the tension between research and applied ML teams in an organization. Finally, we discuss differential privacy, and their recently released GreyKite library, an open-source Python library developed to support forecasting.
The complete show notes for this episode can be found at twimlai.com/go/492.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Parvez Ahammad, head of data science applied research at LinkedIn.</p><p>In our conversation, Parvez shares his interesting take on organizing principles for his organization, starting with how data science teams are broadly organized at LinkedIn. We explore how they ensure time investments on long-term projects are managed, how to identify products that can help in a cross-cutting way across multiple lines of business, quantitative methodologies to identify unintended consequences in experimentation, and navigating the tension between research and applied ML teams in an organization. Finally, we discuss differential privacy, and their recently released GreyKite library, an open-source Python library developed to support forecasting.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/492">twimlai.com/go/492</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2631</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fe63f280-caee-4733-b976-f7bd0c5c55d2]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5732259643.mp3?updated=1629820555"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Haptic Intelligence with Katherine J. Kuchenbecker - #491</title>
      <link>https://twimlai.com/haptic-intelligence-with-katherine-j-kuchenbecker</link>
      <description>Today we’re joined by Katherine J. Kuchenbecker, a director at the Max Planck Institute for Intelligent Systems, where she leads the Haptic Intelligence Department. 
In our conversation, we explore Katherine’s research interests, which lie at the intersection of haptics (physical interaction with the world) and machine learning, introducing us to the concept of “haptic intelligence.” We discuss how ML, mainly computer vision, has been integrated to work together with robots, and some of the devices that Katherine’s lab is developing to take advantage of this research.
We also talk about hugging robots, augmented reality in robotic surgery, and the degree to which she studies human-robot interaction. Finally, Katherine shares with us her passion for mentoring and the importance of diversity and inclusion in robotics and machine learning. 
The complete show notes for this episode can be found at twimlai.com/go/491.</description>
      <pubDate>Thu, 10 Jun 2021 19:41:00 -0000</pubDate>
      <itunes:title>Haptic Intelligence with Katherine J. Kuchenbecker</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>491</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/384e468c-ee98-11eb-9502-0b4160c5caec/image/TWIML_COVER_800x800_KJK.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Katherine J. Kuchenbecker, a director at the Max Planck Institute for Intelligent Systems, where she leads the Haptic Intelligence Department.  In our conversation, we explore Katherine’s research interests, which lie at the...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Katherine J. Kuchenbecker, a director at the Max Planck Institute for Intelligent Systems, where she leads the Haptic Intelligence Department. 
In our conversation, we explore Katherine’s research interests, which lie at the intersection of haptics (physical interaction with the world) and machine learning, introducing us to the concept of “haptic intelligence.” We discuss how ML, mainly computer vision, has been integrated to work together with robots, and some of the devices that Katherine’s lab is developing to take advantage of this research.
We also talk about hugging robots, augmented reality in robotic surgery, and the degree to which she studies human-robot interaction. Finally, Katherine shares with us her passion for mentoring and the importance of diversity and inclusion in robotics and machine learning. 
The complete show notes for this episode can be found at twimlai.com/go/491.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Katherine J. Kuchenbecker, a director at the Max Planck Institute for Intelligent Systems, where she leads the Haptic Intelligence Department. </p><p>In our conversation, we explore Katherine’s research interests, which lie at the intersection of haptics (physical interaction with the world) and machine learning, introducing us to the concept of “haptic intelligence.” We discuss how ML, mainly computer vision, has been integrated to work together with robots, and some of the devices that Katherine’s lab is developing to take advantage of this research.</p><p>We also talk about hugging robots, augmented reality in robotic surgery, and the degree to which she studies human-robot interaction. Finally, Katherine shares with us her passion for mentoring and the importance of diversity and inclusion in robotics and machine learning. </p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/491">twimlai.com/go/491</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2296</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[54954962-f722-416d-a8c7-c68d44eab43f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3568647915.mp3?updated=1629390584"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Data Science on AWS with Chris Fregly and Antje Barth - #490</title>
      <link>https://twimlai.com/data-science-on-aws-with-chris-fregly-and-antje-barth</link>
      <description>Today we continue our coverage of the AWS ML Summit, joined by Chris Fregly, a principal developer advocate at AWS, and Antje Barth, a senior developer advocate at AWS. 
In our conversation with Chris and Antje, we explore their roles as community builders prior to, and since, joining AWS, as well as their recently released book Data Science on AWS. In the book, Chris and Antje demonstrate how to reduce cost and improve performance while successfully building and deploying data science projects. 
We also discuss the release of their new Practical Data Science Specialization on Coursera, managing the complexity that comes with building real-world projects, and some of their favorite sessions from the recent ML Summit.</description>
      <pubDate>Mon, 07 Jun 2021 19:02:00 -0000</pubDate>
      <itunes:title>Data Science on AWS with Chris Fregly and Antje Barth</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>490</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3870a72c-ee98-11eb-9502-33ee05c8559e/image/TWIML_COVER_800x800_AB_CF.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue our coverage of the AWS ML Summit, joined by Chris Fregly, a principal developer advocate at AWS, and Antje Barth, a senior developer advocate at AWS.  In our conversation with Chris and Antje, we explore their roles as community...</itunes:subtitle>
      <itunes:summary>Today we continue our coverage of the AWS ML Summit, joined by Chris Fregly, a principal developer advocate at AWS, and Antje Barth, a senior developer advocate at AWS. 
In our conversation with Chris and Antje, we explore their roles as community builders prior to, and since, joining AWS, as well as their recently released book Data Science on AWS. In the book, Chris and Antje demonstrate how to reduce cost and improve performance while successfully building and deploying data science projects. 
We also discuss the release of their new Practical Data Science Specialization on Coursera, managing the complexity that comes with building real-world projects, and some of their favorite sessions from the recent ML Summit.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our coverage of the AWS ML Summit, joined by Chris Fregly, a principal developer advocate at AWS, and Antje Barth, a senior developer advocate at AWS. </p><p>In our conversation with Chris and Antje, we explore their roles as community builders prior to, and since, joining AWS, as well as their recently released book Data Science on AWS. In the book, Chris and Antje demonstrate how to reduce cost and improve performance while successfully building and deploying data science projects. </p><p>We also discuss the release of their new <a href="https://www.coursera.org/specializations/practical-data-science">Practical Data Science Specialization</a> on Coursera, managing the complexity that comes with building real-world projects, and some of their favorite sessions from the recent ML Summit.</p>]]>
      </content:encoded>
      <itunes:duration>2426</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[7143c4cc-2dfe-432a-8459-6c85c074d2a4]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4745533187.mp3?updated=1629820187"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Accelerating Distributed AI Applications at Qualcomm with Ziad Asghar - #489</title>
      <link>https://twimlai.com/accelerating-distributed-ai-applications-at-qualcomm-with-ziad-asghar</link>
      <description>Today we’re joined by Ziad Asghar, vice president of product management for Snapdragon technologies &amp; roadmap at Qualcomm Technologies. 
We begin our conversation with Ziad exploring the symbiosis between 5G and AI and what is enabling developers to take full advantage of AI on mobile devices. We also discuss the balance between product evolution and incorporating research concepts, the evolution of their Cloud AI 100 hardware infrastructure, and their role in the deployment of Ingenuity, the robotic helicopter that flew on Mars earlier this year. 
Finally, we talk about specialization in building IoT applications like autonomous vehicles and smart cities, the degree to which federated learning is being deployed across the industry, and the importance of privacy and security of personal data. 
The complete show notes can be found at https://twimlai.com/go/489.</description>
      <pubDate>Thu, 03 Jun 2021 17:54:00 -0000</pubDate>
      <itunes:title>Accelerating Distributed AI Applications at Qualcomm with Ziad Asghar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>489</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3894945c-ee98-11eb-9502-c7867daf6439/image/TWIML_COVER_800x800_ZA.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Ziad Asghar, vice president of product management for Snapdragon technologies &amp; roadmap at Qualcomm Technologies.  We begin our conversation with Ziad exploring the symbiosis between 5G and AI and what is enabling...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Ziad Asghar, vice president of product management for Snapdragon technologies &amp; roadmap at Qualcomm Technologies. 
We begin our conversation with Ziad exploring the symbiosis between 5G and AI and what is enabling developers to take full advantage of AI on mobile devices. We also discuss the balance between product evolution and incorporating research concepts, the evolution of their Cloud AI 100 hardware infrastructure, and their role in the deployment of Ingenuity, the robotic helicopter that flew on Mars earlier this year. 
Finally, we talk about specialization in building IoT applications like autonomous vehicles and smart cities, the degree to which federated learning is being deployed across the industry, and the importance of privacy and security of personal data. 
The complete show notes can be found at https://twimlai.com/go/489.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Ziad Asghar, vice president of product management for Snapdragon technologies &amp; roadmap at Qualcomm Technologies. </p><p>We begin our conversation with Ziad exploring the symbiosis between 5G and AI and what is enabling developers to take full advantage of AI on mobile devices. We also discuss the balance between product evolution and incorporating research concepts, the evolution of their Cloud AI 100 hardware infrastructure, and their role in the deployment of Ingenuity, the robotic helicopter that flew on Mars earlier this year. </p><p>Finally, we talk about specialization in building IoT applications like autonomous vehicles and smart cities, the degree to which federated learning is being deployed across the industry, and the importance of privacy and security of personal data. </p><p>The complete show notes can be found at <a href="https://twimlai.com/go/489">https://twimlai.com/go/489</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2376</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cfb6734a-3e73-484d-86a1-59da16b692be]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2191923114.mp3?updated=1629820187"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Buy AND Build for Production Machine Learning with Nir Bar-Lev - #488</title>
      <link>https://twimlai.com/buy-and-build-for-production-machine-learning-with-nir-bar-lev</link>
      <description>Today we’re joined by Nir Bar-Lev, co-founder and CEO of ClearML.

In our conversation with Nir, we explore how his view of the wide vs deep machine learning platforms paradox has changed and evolved over time, how companies should think about building vs buying and integration, and his thoughts on why experiment management has become an automatic buy, be it open source or otherwise. 
We also discuss the disadvantages of using a cloud vendor as opposed to a software-based approach, the balance between MLOps and data science when addressing issues of overfitting, and how ClearML is applying techniques like federated machine learning and transfer learning to their solutions.

The complete show notes for this episode can be found at https://twimlai.com/go/488.</description>
      <pubDate>Mon, 31 May 2021 17:54:00 -0000</pubDate>
      <itunes:title>Buy AND Build for Production Machine Learning with Nir Bar-Lev</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>488</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/38c26576-ee98-11eb-9502-1f9685773a8f/image/TWIML_COVER_800x800_NBL.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Nir Bar-Lev, co-founder and CEO of ClearML.  In our conversation with Nir, we explore how his view of the wide vs deep machine learning platforms paradox has changed and evolved over time, how companies should think about...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Nir Bar-Lev, co-founder and CEO of ClearML.

In our conversation with Nir, we explore how his view of the wide vs deep machine learning platforms paradox has changed and evolved over time, how companies should think about building vs buying and integration, and his thoughts on why experiment management has become an automatic buy, be it open source or otherwise. 
We also discuss the disadvantages of using a cloud vendor as opposed to a software-based approach, the balance between MLOps and data science when addressing issues of overfitting, and how ClearML is applying techniques like federated machine learning and transfer learning to their solutions.

The complete show notes for this episode can be found at https://twimlai.com/go/488.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Nir Bar-Lev, co-founder and CEO of ClearML.</p><p><br></p><p>In our conversation with Nir, we explore how his view of the wide vs deep machine learning platforms paradox has changed and evolved over time, how companies should think about building vs buying and integration, and his thoughts on why experiment management has become an automatic buy, be it open source or otherwise. </p><p>We also discuss the disadvantages of using a cloud vendor as opposed to a software-based approach, the balance between MLOps and data science when addressing issues of overfitting, and how ClearML is applying techniques like federated machine learning and transfer learning to their solutions.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/488">https://twimlai.com/go/488</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2604</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[be7ab8df-88e3-475f-a8f5-d0a4ef6f7456]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9839821163.mp3?updated=1629820069"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Applied AI Research at AWS with Alex Smola - #487</title>
      <link>https://twimlai.com/applied-ai-research-at-aws-with-alex-smola</link>
      <description>Today we’re joined by Alex Smola, Vice President and Distinguished Scientist at AWS AI.
We had the pleasure of catching up with Alex prior to the upcoming AWS Machine Learning Summit, and we covered a TON of ground in the conversation. We start by focusing on his research in the domain of deep learning on graphs, including a few examples showcasing its function, and an interesting discussion around the relationship between large language models and graphs. Next up, we discuss their focus on AutoML research and how it's the key to lowering the barrier to entry for machine learning research.
Alex also shares a bit about his work on causality and causal modeling, introducing us to the concept of Granger causality. Finally, we talk about the aforementioned ML Summit, its exponential growth since its inception a few years ago, and what speakers he's most excited about hearing from.
The complete show notes for this episode can be found at https://twimlai.com/go/487.</description>
      <pubDate>Thu, 27 May 2021 16:42:00 -0000</pubDate>
      <itunes:title>Applied AI Research at AWS with Alex Smola</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>487</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/38e6fe22-ee98-11eb-9502-bb7c06a9e7ec/image/TWIML_COVER_800x800_AS.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Alex Smola, Vice President and Distinguished Scientist at AWS AI. We had the pleasure of catching up with Alex prior to the upcoming AWS Machine Learning Summit, and we covered a TON of ground in the conversation. We start by...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Alex Smola, Vice President and Distinguished Scientist at AWS AI.
We had the pleasure of catching up with Alex prior to the upcoming AWS Machine Learning Summit, and we covered a TON of ground in the conversation. We start by focusing on his research in the domain of deep learning on graphs, including a few examples showcasing its function, and an interesting discussion around the relationship between large language models and graphs. Next up, we discuss their focus on AutoML research and how it's the key to lowering the barrier to entry for machine learning research.
Alex also shares a bit about his work on causality and causal modeling, introducing us to the concept of Granger causality. Finally, we talk about the aforementioned ML Summit, its exponential growth since its inception a few years ago, and what speakers he's most excited about hearing from.
The complete show notes for this episode can be found at https://twimlai.com/go/487.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Alex Smola, Vice President and Distinguished Scientist at AWS AI.</p><p>We had the pleasure of catching up with Alex prior to the upcoming AWS Machine Learning Summit, and we covered a TON of ground in the conversation. We start by focusing on his research in the domain of deep learning on graphs, including a few examples showcasing its function, and an interesting discussion around the relationship between large language models and graphs. Next up, we discuss their focus on AutoML research and how it's the key to lowering the barrier to entry for machine learning research.</p><p>Alex also shares a bit about his work on causality and causal modeling, introducing us to the concept of Granger causality. Finally, we talk about the aforementioned ML Summit, its exponential growth since its inception a few years ago, and what speakers he's most excited about hearing from.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/487">https://twimlai.com/go/487</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3355</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f8c45160-6b68-4852-8b06-ba502238f0df]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2661499376.mp3?updated=1629820069"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Causal Models in Practice at Lyft with Sean Taylor - #486</title>
      <link>https://twimlai.com/causal-models-in-practice-at-lyft-with-sean-taylor</link>
      <description>Today we’re joined by Sean Taylor, Staff Data Scientist at Lyft Rideshare Labs.
We cover a lot of ground with Sean, starting with his recent decision to step away from his previous role as the lab director to take a more hands-on role, and what inspired that change. We also discuss his research at Rideshare Labs, where they take a more “moonshot” approach to solving the typical problems like forecasting and planning, marketplace experimentation, and decision making, and how his statistical approach manifests itself in his work.
Finally, we spend quite a bit of time exploring the role of causality in the work at Rideshare Labs, including how systems like the aforementioned forecasting system are designed around causal models, whether driving model development with business metrics is more effective, challenges associated with hierarchical modeling, and much more.

The complete show notes for this episode can be found at twimlai.com/go/486.</description>
      <pubDate>Mon, 24 May 2021 20:25:00 -0000</pubDate>
      <itunes:title>Causal Models in Practice at Lyft with Sean Taylor</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>486</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3911beb4-ee98-11eb-9502-b7fecfd1a851/image/TWIML_COVER_800x800_ST2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Sean Taylor, Staff Data Scientist at Lyft Rideshare Labs. We cover a lot of ground with Sean, starting with his recent decision to step away from his previous role as the lab director to take a more hands-on role, and what...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Sean Taylor, Staff Data Scientist at Lyft Rideshare Labs.
We cover a lot of ground with Sean, starting with his recent decision to step away from his previous role as the lab director to take a more hands-on role, and what inspired that change. We also discuss his research at Rideshare Labs, where they take a more “moonshot” approach to solving the typical problems like forecasting and planning, marketplace experimentation, and decision making, and how his statistical approach manifests itself in his work.
Finally, we spend quite a bit of time exploring the role of causality in the work at Rideshare Labs, including how systems like the aforementioned forecasting system are designed around causal models, whether driving model development with business metrics is more effective, challenges associated with hierarchical modeling, and much more.

The complete show notes for this episode can be found at twimlai.com/go/486.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Sean Taylor, Staff Data Scientist at Lyft Rideshare Labs.</p><p>We cover a lot of ground with Sean, starting with his recent decision to step away from his previous role as the lab director to take a more hands-on role, and what inspired that change. We also discuss his research at Rideshare Labs, where they take a more “moonshot” approach to solving the typical problems like forecasting and planning, marketplace experimentation, and decision making, and how his statistical approach manifests itself in his work.</p><p>Finally, we spend quite a bit of time exploring the role of causality in the work at Rideshare Labs, including how systems like the aforementioned forecasting system are designed around causal models, whether driving model development with business metrics is more effective, challenges associated with hierarchical modeling, and much more.</p><p><br></p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/486">twimlai.com/go/486</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2426</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1d0488ba-3fe4-4ea6-906d-a799eb715625]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2504058090.mp3?updated=1629318572"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Using AI to Map the Human Immune System w/ Jabran Zahid - #485</title>
      <link>https://twimlai.com/using-ai-to-map-the-human-immune-system-w-jabran-zahid</link>
      <description>Today we’re joined by Jabran Zahid, a Senior Researcher at Microsoft Research.
In our conversation with Jabran, we explore their recent endeavor into the complete mapping of which T-cells bind to which antigens through the Antigen Map Project. We discuss how Jabran’s background in astrophysics and cosmology has translated to his current work in immunology and biology, the origins of the antigen map, and how the project’s focus was changed by the emergence of the coronavirus pandemic.
We talk through the biological advancements, and the challenges of using machine learning in this setting, some of the more advanced ML techniques that they’ve tried that have not panned out (as of yet), the path forward for the antigen map to make a broader impact, and much more.
The complete show notes for this episode can be found at twimlai.com/go/485.</description>
      <pubDate>Thu, 20 May 2021 16:05:00 -0000</pubDate>
      <itunes:title>Using AI to Map the Human Immune System w/ Jabran Zahid</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>485</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/394aacb0-ee98-11eb-9502-1725616930c0/image/TWIML_COVER_800x800_JZ.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Jabran Zahid, a Senior Researcher at Microsoft Research. In our conversation with Jabran, we explore their recent endeavor into the complete mapping of which T-cells bind to which antigens through the Antigen Map Project. We...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Jabran Zahid, a Senior Researcher at Microsoft Research.
In our conversation with Jabran, we explore their recent endeavor into the complete mapping of which T-cells bind to which antigens through the Antigen Map Project. We discuss how Jabran’s background in astrophysics and cosmology has translated to his current work in immunology and biology, the origins of the antigen map, and how the project’s focus was changed by the emergence of the coronavirus pandemic.
We talk through the biological advancements, and the challenges of using machine learning in this setting, some of the more advanced ML techniques that they’ve tried that have not panned out (as of yet), the path forward for the antigen map to make a broader impact, and much more.
The complete show notes for this episode can be found at twimlai.com/go/485.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Jabran Zahid, a Senior Researcher at Microsoft Research.</p><p>In our conversation with Jabran, we explore their recent endeavor into the complete mapping of which T-cells bind to which antigens through the Antigen Map Project. We discuss how Jabran’s background in astrophysics and cosmology has translated to his current work in immunology and biology, the origins of the antigen map, and how the project’s focus was changed by the emergence of the coronavirus pandemic.</p><p>We talk through the biological advancements, and the challenges of using machine learning in this setting, some of the more advanced ML techniques that they’ve tried that have not panned out (as of yet), the path forward for the antigen map to make a broader impact, and much more.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/485">twimlai.com/go/485</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2514</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[34c2eaa0-68fc-41e1-ab67-dfa6e196693e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1598763998.mp3?updated=1629818863"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Learning Long-Time Dependencies with RNNs w/ Konstantin Rusch - #484</title>
      <link>https://twimlai.com/learning-long-time-dependencies-with-rnns-w-thorben-konstantin-rusch</link>
      <description>Today we conclude our 2021 ICLR coverage, joined by Konstantin Rusch, a PhD Student at ETH Zurich.
In our conversation with Konstantin, we explore his recent papers, titled coRNN and uniCORNN respectively, which focus on a novel architecture of recurrent neural networks for learning long-time dependencies.
We explore the inspiration he drew from neuroscience when tackling this problem, how the performance results compared to networks like LSTMs and others that have been proven to work on this problem, and Konstantin’s future research goals.
The complete show notes for this episode can be found at twimlai.com/go/484.</description>
      <pubDate>Mon, 17 May 2021 16:28:00 -0000</pubDate>
      <itunes:title>Learning Long-Time Dependencies with RNNs w/ Konstantin Rusch</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>484</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/397640aa-ee98-11eb-9502-c3351a490f1e/image/TWIML_COVER_800x800_TKR_2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we conclude our 2021 ICLR coverage, joined by Konstantin Rusch, a PhD Student at ETH Zurich. In our conversation with Konstantin, we explore his recent papers, titled coRNN and uniCORNN respectively, which focus on a novel architecture of...</itunes:subtitle>
      <itunes:summary>Today we conclude our 2021 ICLR coverage, joined by Konstantin Rusch, a PhD Student at ETH Zurich.
In our conversation with Konstantin, we explore his recent papers, titled coRNN and uniCORNN respectively, which focus on a novel architecture of recurrent neural networks for learning long-time dependencies.
We explore the inspiration he drew from neuroscience when tackling this problem, how the performance results compared to networks like LSTMs and others that have been proven to work on this problem, and Konstantin’s future research goals.
The complete show notes for this episode can be found at twimlai.com/go/484.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we conclude our 2021 ICLR coverage, joined by Konstantin Rusch, a PhD Student at ETH Zurich.</p><p>In our conversation with Konstantin, we explore his recent papers, titled coRNN and uniCORNN respectively, which focus on a novel architecture of recurrent neural networks for learning long-time dependencies.</p><p>We explore the inspiration he drew from neuroscience when tackling this problem, how the performance results compared to networks like LSTMs and others that have been proven to work on this problem, and Konstantin’s future research goals.</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/484">twimlai.com/go/484</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2263</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ae6b6586-9691-4191-b4dd-600dffa7ffaf]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2670737521.mp3?updated=1629818863"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>What the Human Brain Can Tell Us About NLP Models with Allyson Ettinger - #483</title>
      <link>https://twimlai.com/what-the-human-brain-can-tell-us-about-nlp-models-with-allyson-ettinger</link>
      <description>Today we continue our ICLR ‘21 series, joined by Allyson Ettinger, an Assistant Professor at the University of Chicago. 
One of our favorite recurring conversations on the podcast is the two-way street that lies between machine learning and neuroscience, which Allyson explores through the modeling of cognitive processes that pertain to language. In our conversation, we discuss how she approaches assessing the competencies of AI, the value of controlling for confounding variables in AI research, and how the pattern-matching traits of ML/DL models are not necessarily exclusive to these systems. 
Allyson also participated in a recent panel discussion at the ICLR workshop How Can Findings About The Brain Improve AI Systems?, centered around the utility of brain inspiration for developing AI models. We discuss ways in which we can try to more closely simulate the functioning of a brain, where her work fits into the analysis and interpretability area of NLP, and much more!
The complete show notes for this episode can be found at twimlai.com/go/483. </description>
      <pubDate>Thu, 13 May 2021 15:28:00 -0000</pubDate>
      <itunes:title>What the Human Brain Can Tell Us About NLP Models with Allyson Ettinger</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>483</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/39b59070-ee98-11eb-9502-0bb252578304/image/TWIML_COVER_800x800_AE_1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue our ICLR ‘21 series joined by Allyson Ettinger, an Assistant Professor at the University of Chicago.  One of our favorite recurring conversations on the podcast is the two-way street that lies between machine learning and...</itunes:subtitle>
      <itunes:summary>Today we continue our ICLR ‘21 series joined by Allyson Ettinger, an Assistant Professor at the University of Chicago. 
One of our favorite recurring conversations on the podcast is the two-way street between machine learning and neuroscience, which Allyson explores through the modeling of cognitive processes that pertain to language. In our conversation, we discuss how she approaches assessing the competencies of AI, the value of controlling for confounding variables in AI research, and how the pattern-matching traits of ML/DL models are not necessarily exclusive to these systems. 
Allyson also participated in a recent panel discussion at the ICLR workshop How Can Findings About The Brain Improve AI Systems?, centered around the utility of brain inspiration for developing AI models. We discuss ways in which we can try to more closely simulate the functioning of a brain, where her work fits into the analysis and interpretability area of NLP, and much more!
The complete show notes for this episode can be found at twimlai.com/go/483. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our ICLR ‘21 series joined by Allyson Ettinger, an Assistant Professor at the University of Chicago. </p><p>One of our favorite recurring conversations on the podcast is the two-way street between machine learning and neuroscience, which Allyson explores through the modeling of cognitive processes that pertain to language. In our conversation, we discuss how she approaches assessing the competencies of AI, the value of controlling for confounding variables in AI research, and how the pattern-matching traits of ML/DL models are not necessarily exclusive to these systems. </p><p>Allyson also participated in a recent panel discussion at the ICLR workshop <a href="https://iclrbrain2ai.github.io/">How Can Findings About The Brain Improve AI Systems?</a>, centered around the utility of brain inspiration for developing AI models. We discuss ways in which we can try to more closely simulate the functioning of a brain, where her work fits into the analysis and interpretability area of NLP, and much more!</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/483">twimlai.com/go/483</a>. </p>]]>
      </content:encoded>
      <itunes:duration>2280</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9b1ed7ae-d85d-4bf8-b173-9b9d5eaafe12]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3445724419.mp3?updated=1629818863"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Probabilistic Numeric CNNs with Roberto Bondesan - #482</title>
      <link>https://twimlai.com/probabilistic-numeric-cnns-with-roberto-bondesan</link>
      <description>Today we kick off our ICLR 2021 coverage joined by Roberto Bondesan, an AI Researcher at Qualcomm. 
In our conversation with Roberto, we explore his paper Probabilistic Numeric Convolutional Neural Networks, which represents features as Gaussian processes, providing a probabilistic description of discretization error. We discuss some of the other work the team at Qualcomm presented at the conference, including a paper called Adaptive Neural Compression, as well as work on Gauge Equivariant Mesh CNNs. Finally, we briefly discuss quantum deep learning, and what excites Roberto and his team about the future of their research in combinatorial optimization.  
The complete show notes for this episode can be found at https://twimlai.com/go/482</description>
      <pubDate>Mon, 10 May 2021 17:36:00 -0000</pubDate>
      <itunes:title>Probabilistic Numeric CNNs with Roberto Bondesan </itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>482</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/39df5784-ee98-11eb-9502-13e3640d5022/image/TWIML_COVER_800x800_RB.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we kick off our ICLR 2021 coverage joined by Roberto Bondesan, an AI Researcher at Qualcomm.  In our conversation with Roberto, we explore his paper Probabilistic Numeric Convolutional Neural Networks, which represents features as Gaussian processes, providing a probabilistic description of...</itunes:subtitle>
      <itunes:summary>Today we kick off our ICLR 2021 coverage joined by Roberto Bondesan, an AI Researcher at Qualcomm. 
In our conversation with Roberto, we explore his paper Probabilistic Numeric Convolutional Neural Networks, which represents features as Gaussian processes, providing a probabilistic description of discretization error. We discuss some of the other work the team at Qualcomm presented at the conference, including a paper called Adaptive Neural Compression, as well as work on Gauge Equivariant Mesh CNNs. Finally, we briefly discuss quantum deep learning, and what excites Roberto and his team about the future of their research in combinatorial optimization.  
The complete show notes for this episode can be found at https://twimlai.com/go/482</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we kick off our ICLR 2021 coverage joined by Roberto Bondesan, an AI Researcher at Qualcomm. </p><p>In our conversation with Roberto, we explore his paper <a href="https://arxiv.org/pdf/2010.10876.pdf">Probabilistic Numeric Convolutional Neural Networks</a>, which represents features as Gaussian processes, providing a probabilistic description of discretization error. We discuss some of the other work the team at Qualcomm presented at the conference, including a paper called Adaptive Neural Compression, as well as work on Gauge Equivariant Mesh CNNs. Finally, we briefly discuss quantum deep learning, and what excites Roberto and his team about the future of their research in combinatorial optimization.  </p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/482">https://twimlai.com/go/482</a></p>]]>
      </content:encoded>
      <itunes:duration>2488</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ef20a947-160a-4da9-a03c-9fa124cc785e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8706590949.mp3?updated=1629818863"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building a Unified NLP Framework at LinkedIn with Huiji Gao - #481</title>
      <link>https://twimlai.com/building-a-unified-nlp-framework-at-linkedin-with-huiji-gao</link>
      <description>Today we’re joined by Huiji Gao, a Senior Engineering Manager of Machine Learning and AI at LinkedIn. 
In our conversation with Huiji, we dig into his interest in building NLP tools and systems, including a recent open-source project called DeText, a framework for generating models for ranking, classification, and language generation. We explore the motivation behind DeText, the landscape at LinkedIn before and after it was put into broad use, and the various contexts it’s being used in at the company. We also discuss the relationship between BERT and DeText via LiBERT, a version of BERT that is trained and calibrated on LinkedIn data, the practical use of these tools from an engineering perspective, the approach they’ve taken to optimization, and much more!
The complete show notes for this episode can be found at https://twimlai.com/go/481. </description>
      <pubDate>Thu, 06 May 2021 19:18:00 -0000</pubDate>
      <itunes:title>Building a Unified NLP Framework at LinkedIn with Huiji Gao</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>481</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3a0130fc-ee98-11eb-9502-9778b1a18fb6/image/TWIML_COVER_800x800_HG.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Huiji Gao, a Senior Engineering Manager of Machine Learning and AI at LinkedIn.  In our conversation with Huiji, we dig into his interest in building NLP tools and systems, including a recent open-source project called...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Huiji Gao, a Senior Engineering Manager of Machine Learning and AI at LinkedIn. 
In our conversation with Huiji, we dig into his interest in building NLP tools and systems, including a recent open-source project called DeText, a framework for generating models for ranking, classification, and language generation. We explore the motivation behind DeText, the landscape at LinkedIn before and after it was put into broad use, and the various contexts it’s being used in at the company. We also discuss the relationship between BERT and DeText via LiBERT, a version of BERT that is trained and calibrated on LinkedIn data, the practical use of these tools from an engineering perspective, the approach they’ve taken to optimization, and much more!
The complete show notes for this episode can be found at https://twimlai.com/go/481. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Huiji Gao, a Senior Engineering Manager of Machine Learning and AI at LinkedIn. </p><p>In our conversation with Huiji, we dig into his interest in building NLP tools and systems, including a recent open-source project called DeText, a framework for generating models for ranking, classification, and language generation. We explore the motivation behind DeText, the landscape at LinkedIn before and after it was put into broad use, and the various contexts it’s being used in at the company. We also discuss the relationship between BERT and DeText via LiBERT, a version of BERT that is trained and calibrated on LinkedIn data, the practical use of these tools from an engineering perspective, the approach they’ve taken to optimization, and much more!</p><p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/481">https://twimlai.com/go/481</a>. </p>]]>
      </content:encoded>
      <itunes:duration>2083</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[dd2228c1-c56a-4e5f-9335-202fd04fd931]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3460072255.mp3?updated=1629817053"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Dask + Data Science Careers with Jacqueline Nolis - #480</title>
      <link>https://twimlai.com/dask-data-science-careers-with-jacqueline-nolis</link>
      <description>Today we’re joined by Jacqueline Nolis, Head of Data Science at Saturn Cloud, and co-host of the Build a Career in Data Science Podcast. 
 You might remember Jacqueline from our Advancing Your Data Science Career During the Pandemic panel, where she shared her experience trying to navigate the suddenly hectic data science job market. Now, a year removed from that panel, we explore her book on data science careers, top insights for folks just getting into the field, ways that job seekers should be signaling that they have the required background, and how to approach and navigate failure as a data scientist. 
 We also spend quite a bit of time discussing Dask, an open-source library for parallel computing in Python, as well as use cases for the tool, the relationship between Dask, Kubernetes, and Docker containers, where data scientists fit in the software development toolchain, and much more!
 The complete show notes for this episode can be found at https://twimlai.com/go/480.
  </description>
      <pubDate>Mon, 03 May 2021 15:17:09 -0000</pubDate>
      <itunes:title>Dask + Data Science Careers with Jacqueline Nolis</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>480</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3a1c9f22-ee98-11eb-9502-a3072a566225/image/TWIML_COVER_800x800_JN2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Jacqueline Nolis, Head of Data Science at Saturn Cloud, and co-host of the Build a Career in Data Science Podcast.  You might remember Jacqueline from our Advancing Your Data Science Career During the Pandemic panel, where she shared her experience trying to navigate the suddenly hectic data science job market....</itunes:subtitle>
      <itunes:summary>Today we’re joined by Jacqueline Nolis, Head of Data Science at Saturn Cloud, and co-host of the Build a Career in Data Science Podcast. 
 You might remember Jacqueline from our Advancing Your Data Science Career During the Pandemic panel, where she shared her experience trying to navigate the suddenly hectic data science job market. Now, a year removed from that panel, we explore her book on data science careers, top insights for folks just getting into the field, ways that job seekers should be signaling that they have the required background, and how to approach and navigate failure as a data scientist. 
 We also spend quite a bit of time discussing Dask, an open-source library for parallel computing in Python, as well as use cases for the tool, the relationship between Dask, Kubernetes, and Docker containers, where data scientists fit in the software development toolchain, and much more!
 The complete show notes for this episode can be found at https://twimlai.com/go/480.
  </itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Jacqueline Nolis, Head of Data Science at Saturn Cloud, and co-host of the <a href="https://podcast.bestbook.cool/">Build a Career in Data Science Podcast</a>. </p> <p>You might remember Jacqueline from our <a href="https://twimlai.com/advancingds/">Advancing Your Data Science Career During the Pandemic</a> panel, where she shared her experience trying to navigate the suddenly hectic data science job market. Now, a year removed from that panel, we explore her book on data science careers, top insights for folks just getting into the field, ways that job seekers should be signaling that they have the required background, and how to approach and navigate failure as a data scientist. </p> <p>We also spend quite a bit of time discussing Dask, an open-source library for parallel computing in Python, as well as use cases for the tool, the relationship between Dask, Kubernetes, and Docker containers, where data scientists fit in the software development toolchain, and much more!</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/480">https://twimlai.com/go/480</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2099</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cea56279-3655-4d00-8188-67ac2e527cd1]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4272685946.mp3?updated=1629244891"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Machine Learning for Equitable Healthcare Outcomes with Irene Chen - #479</title>
      <link>https://twimlai.com/machine-learning-for-equitable-healthcare-outcomes-with-irene-chen</link>
      <description>Today we’re joined by Irene Chen, a Ph.D. student at MIT. 
 Irene’s research is focused on developing new machine learning methods specifically for healthcare, through the lens of questions of equity and inclusion. In our conversation, we explore some of the various projects that Irene has worked on, including an early detection program for intimate partner violence. 
 We also discuss how she thinks about the long-term implications of predictions in the healthcare domain, how she’s learned to communicate across the interface between ML researchers and clinicians, probabilistic approaches to machine learning for healthcare, and finally, key takeaways for those of you interested in this area of research.
 The complete show notes for this episode can be found at https://twimlai.com/go/479.</description>
      <pubDate>Thu, 29 Apr 2021 16:36:15 -0000</pubDate>
      <itunes:title>Machine Learning for Equitable Healthcare Outcomes with Irene Chen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>479</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3a3e38a8-ee98-11eb-9502-5bd8e2f17baf/image/TWIML_COVER_800x800_IC.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Irene Chen, a Ph.D. student at MIT.  Irene’s research is focused on developing new machine learning methods specifically for healthcare, through the lens of questions of equity and inclusion. In our conversation, we...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Irene Chen, a Ph.D. student at MIT. 
 Irene’s research is focused on developing new machine learning methods specifically for healthcare, through the lens of questions of equity and inclusion. In our conversation, we explore some of the various projects that Irene has worked on, including an early detection program for intimate partner violence. 
 We also discuss how she thinks about the long-term implications of predictions in the healthcare domain, how she’s learned to communicate across the interface between ML researchers and clinicians, probabilistic approaches to machine learning for healthcare, and finally, key takeaways for those of you interested in this area of research.
 The complete show notes for this episode can be found at https://twimlai.com/go/479.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Irene Chen, a Ph.D. student at MIT. </p> <p>Irene’s research is focused on developing new machine learning methods specifically for healthcare, through the lens of questions of equity and inclusion. In our conversation, we explore some of the various projects that Irene has worked on, including an early detection program for intimate partner violence. </p> <p>We also discuss how she thinks about the long-term implications of predictions in the healthcare domain, how she’s learned to communicate across the interface between ML researchers and clinicians, probabilistic approaches to machine learning for healthcare, and finally, key takeaways for those of you interested in this area of research.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/479">https://twimlai.com/go/479</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2219</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[8bbfaeb7-3f29-4982-ae39-008a6244682a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7222865558.mp3?updated=1629244894"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Storytelling Systems with Mark Riedl - #478</title>
      <link>https://twimlai.com/ai-storytelling-systems-with-mark-riedl</link>
      <description>Today we’re joined by Mark Riedl, a Professor in the School of Interactive Computing at Georgia Tech. In our conversation with Mark, we explore his work building AI storytelling systems, mainly those that try to predict what listeners think will happen next in a story, and how he brings many different threads of ML/AI together to solve these problems. We discuss how theory of mind is layered into his research, the use of large language models like GPT-3, and his push toward being able to generate suspenseful stories with these systems. 
 We also discuss the concept of intentional creativity and the lack of good theory on the subject, the adjacent areas in ML that he’s most excited about for their potential contribution to his research, his recent focus on model explainability, how he approaches problems of common sense, and much more! 
 The complete show notes for this episode can be found at https://twimlai.com/go/478.</description>
      <pubDate>Mon, 26 Apr 2021 18:02:05 -0000</pubDate>
      <itunes:title>AI Storytelling Systems with Mark Riedl</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>478</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3a5fb0f0-ee98-11eb-9502-8f31cce47a3f/image/TWIML_COVER_800x800_MR2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Mark Riedl, a Professor in the School of Interactive Computing at Georgia Tech. In our conversation with Mark, we explore his work building AI storytelling systems, mainly those that try to predict what listeners think will...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Mark Riedl, a Professor in the School of Interactive Computing at Georgia Tech. In our conversation with Mark, we explore his work building AI storytelling systems, mainly those that try to predict what listeners think will happen next in a story, and how he brings many different threads of ML/AI together to solve these problems. We discuss how theory of mind is layered into his research, the use of large language models like GPT-3, and his push toward being able to generate suspenseful stories with these systems. 
 We also discuss the concept of intentional creativity and the lack of good theory on the subject, the adjacent areas in ML that he’s most excited about for their potential contribution to his research, his recent focus on model explainability, how he approaches problems of common sense, and much more! 
 The complete show notes for this episode can be found at https://twimlai.com/go/478.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Mark Riedl, a Professor in the School of Interactive Computing at Georgia Tech. In our conversation with Mark, we explore his work building AI storytelling systems, mainly those that try to predict what listeners think will happen next in a story, and how he brings many different threads of ML/AI together to solve these problems. We discuss how theory of mind is layered into his research, the use of large language models like GPT-3, and his push toward being able to generate suspenseful stories with these systems. </p> <p>We also discuss the concept of intentional creativity and the lack of good theory on the subject, the adjacent areas in ML that he’s most excited about for their potential contribution to his research, his recent focus on model explainability, how he approaches problems of common sense, and much more! </p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/478">https://twimlai.com/go/478</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2488</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0dfbc4fd-ade3-469a-bcdd-9d9556a66f2d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9650012336.mp3?updated=1629244921"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Creating Robust Language Representations with Jamie Macbeth - #477</title>
      <link>https://twimlai.com/creating-robust-language-representations-with-jamie-macbeth</link>
      <description>Today we’re joined by Jamie Macbeth, an assistant professor in the department of computer science at Smith College. 
 In our conversation with Jamie, we explore his work at the intersection of cognitive systems and natural language understanding, and how to use AI as a vehicle for better understanding human intelligence. We discuss the tie that binds these domains together, whether the tasks are the same as traditional NLU tasks, and the specific things he’s trying to gain deeper insight into.
 One of the unique aspects of Jamie’s research is that he takes an “old-school AI” approach, and to that end, we discuss the models he handcrafts to generate language. Finally, we examine how he evaluates the performance of his representations if he’s not playing the SOTA “game,” what he benchmarks against, identifying deficiencies in deep learning systems, and the exciting directions for his upcoming research. 
 The complete show notes for this episode can be found at https://twimlai.com/go/477.</description>
      <pubDate>Wed, 21 Apr 2021 21:11:56 -0000</pubDate>
      <itunes:title>Creating Robust Language Representations with Jamie Macbeth</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>477</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3a88d0f2-ee98-11eb-9502-d39fe5370ff1/image/TWIML_COVER_800x800_JM.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Jamie Macbeth, an assistant professor in the department of computer science at Smith College.  In our conversation with Jamie, we explore his work at the intersection of cognitive systems and natural language...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Jamie Macbeth, an assistant professor in the department of computer science at Smith College. 
 In our conversation with Jamie, we explore his work at the intersection of cognitive systems and natural language understanding, and how to use AI as a vehicle for better understanding human intelligence. We discuss the tie that binds these domains together, whether the tasks are the same as traditional NLU tasks, and the specific things he’s trying to gain deeper insight into.
 One of the unique aspects of Jamie’s research is that he takes an “old-school AI” approach, and to that end, we discuss the models he handcrafts to generate language. Finally, we examine how he evaluates the performance of his representations if he’s not playing the SOTA “game,” what he benchmarks against, identifying deficiencies in deep learning systems, and the exciting directions for his upcoming research. 
 The complete show notes for this episode can be found at https://twimlai.com/go/477.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Jamie Macbeth, an assistant professor in the department of computer science at Smith College. </p> <p>In our conversation with Jamie, we explore his work at the intersection of cognitive systems and natural language understanding, and how to use AI as a vehicle for better understanding human intelligence. We discuss the tie that binds these domains together, whether the tasks are the same as traditional NLU tasks, and the specific things he’s trying to gain deeper insight into.</p> <p>One of the unique aspects of Jamie’s research is that he takes an “old-school AI” approach, and to that end, we discuss the models he handcrafts to generate language. Finally, we examine how he evaluates the performance of his representations if he’s not playing the SOTA “game,” what he benchmarks against, identifying deficiencies in deep learning systems, and the exciting directions for his upcoming research. </p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/477">https://twimlai.com/go/477</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2404</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c36b6f3b-34ce-4b85-9fdc-0bf2a3e610f7]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6026881131.mp3?updated=1629244870"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Reinforcement Learning for Industrial AI with Pieter Abbeel - #476</title>
      <link>https://twimlai.com/reinforcement-learning-for-industrial-ai-with-pieter-abbeel</link>
      <description>Today we’re joined by Pieter Abbeel, a Professor at UC Berkeley, co-Director of the Berkeley AI Research Lab (BAIR), as well as Co-founder and Chief Scientist at Covariant.
 In our conversation with Pieter, we cover a ton of ground, starting with the specific goals and tasks of his work at Covariant, the shift in needs for industrial AI applications and robots, whether his experience solving real-world problems has changed his opinion on end-to-end deep learning, and the scope of the three problem domains of the models he’s building.
 We also explore his recent work at the intersection of unsupervised and reinforcement learning, goal-directed RL, his recent paper “Pretrained Transformers as Universal Computation Engines” and where that research thread is headed, and of course, his new podcast Robot Brains, which you can find on all streaming platforms today!
 The complete show notes for this episode can be found at twimlai.com/go/476.</description>
      <pubDate>Mon, 19 Apr 2021 18:09:44 -0000</pubDate>
      <itunes:title>Reinforcement Learning for Industrial AI with Pieter Abbeel</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>476</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3ab095b0-ee98-11eb-9502-bfca09b5d7d9/image/TWIML_COVER_800x800_PA.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Pieter Abbeel, a Professor at UC Berkeley, co-Director of the Berkeley AI Research Lab (BAIR), as well as Co-founder and Chief Scientist at Covariant. In our conversation with Pieter, we cover a ton of ground, starting with the...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Pieter Abbeel, a Professor at UC Berkeley, co-Director of the Berkeley AI Research Lab (BAIR), as well as Co-founder and Chief Scientist at Covariant.
 In our conversation with Pieter, we cover a ton of ground, starting with the specific goals and tasks of his work at Covariant, the shifting needs for industrial AI applications and robots, whether his experience solving real-world problems has changed his opinion on end-to-end deep learning, and the scope of the three problem domains of the models he’s building.
 We also explore his recent work at the intersection of unsupervised and reinforcement learning, goal-directed RL, his recent paper “Pretrained Transformers as Universal Computation Engines” and where that research thread is headed, and of course, his new podcast Robot Brains, which you can find on all streaming platforms today!
 The complete show notes for this episode can be found at twimlai.com/go/476.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Pieter Abbeel, a Professor at UC Berkeley, co-Director of the Berkeley AI Research Lab (BAIR), as well as Co-founder and Chief Scientist at Covariant.</p> <p>In our conversation with Pieter, we cover a ton of ground, starting with the specific goals and tasks of his work at Covariant, the shifting needs for industrial AI applications and robots, whether his experience solving real-world problems has changed his opinion on end-to-end deep learning, and the scope of the three problem domains of the models he’s building.</p> <p>We also explore his recent work at the intersection of unsupervised and reinforcement learning, goal-directed RL, his recent paper “<a href="https://arxiv.org/abs/2103.05247">Pretrained Transformers as Universal Computation Engines</a>” and where that research thread is headed, and of course, his new podcast Robot Brains, which you can find on all streaming platforms today!</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/476">twimlai.com/go/476</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3498</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[67a677a7-b33a-415e-8d7a-0de519f40357]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6771418704.mp3?updated=1629245042"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AutoML for Natural Language Processing with Abhishek Thakur - #475</title>
      <link>https://twimlai.com/automl-for-natural-language-processing-with-abhishek-thakur</link>
      <description>Today we’re joined by Abhishek Thakur, a machine learning engineer at Hugging Face, and the world’s first Quadruple Kaggle Grandmaster!
 In our conversation with Abhishek, we explore his Kaggle journey, including how his approach to competitions has evolved over time, what resources he used to prepare for his transition to a full-time practitioner, and the most important lessons he’s learned along the way.
 We also spend a great deal of time discussing his new role at Hugging Face, where he's building AutoNLP. We talk through the goals of the project, the primary problem domain, and how the results of AutoNLP compare with those from hand-crafted models. Finally, we discuss Abhishek’s book, Approaching (Almost) Any Machine Learning Problem.
 The complete show notes for this episode can be found at https://twimlai.com/go/475.</description>
      <pubDate>Thu, 15 Apr 2021 16:44:17 -0000</pubDate>
      <itunes:title>AutoML for Natural Language Processing with Abhishek Thakur</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>475</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3ad982fe-ee98-11eb-9502-033ef0205e20/image/TWIML_COVER_800x800_AT4.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Abhishek Thakur, a machine learning engineer at Hugging Face, and the world’s first Quadruple Kaggle Grandmaster! In our conversation with Abhishek, we explore his Kaggle journey, including how his approach to competitions...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Abhishek Thakur, a machine learning engineer at Hugging Face, and the world’s first Quadruple Kaggle Grandmaster!
 In our conversation with Abhishek, we explore his Kaggle journey, including how his approach to competitions has evolved over time, what resources he used to prepare for his transition to a full-time practitioner, and the most important lessons he’s learned along the way.
 We also spend a great deal of time discussing his new role at Hugging Face, where he's building AutoNLP. We talk through the goals of the project, the primary problem domain, and how the results of AutoNLP compare with those from hand-crafted models. Finally, we discuss Abhishek’s book, Approaching (Almost) Any Machine Learning Problem.
 The complete show notes for this episode can be found at https://twimlai.com/go/475.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Abhishek Thakur, a machine learning engineer at Hugging Face, and the world’s first Quadruple Kaggle Grandmaster!</p> <p>In our conversation with Abhishek, we explore his Kaggle journey, including how his approach to competitions has evolved over time, what resources he used to prepare for his transition to a full-time practitioner, and the most important lessons he’s learned along the way.</p> <p>We also spend a great deal of time discussing his new role at Hugging Face, where he's building AutoNLP. We talk through the goals of the project, the primary problem domain, and how the results of AutoNLP compare with those from hand-crafted models. Finally, we discuss Abhishek’s book, <a href="https://www.amazon.com/dp/8269211508">Approaching (Almost) Any Machine Learning Problem</a>.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/475">https://twimlai.com/go/475</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2176</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5d26dce8-7745-4195-bce4-382f5a0f6a98]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4089646507.mp3?updated=1629244903"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Inclusive Design for Seeing AI with Saqib Shaikh - #474</title>
      <link>https://twimlai.com/inclusive-design-for-seeing-ai-with-saqib-shaikh</link>
      <description>Today we’re joined by Saqib Shaikh, a Software Engineer at Microsoft, and the lead for the Seeing AI Project.
 In our conversation with Saqib, we explore the Seeing AI app, an app “that narrates the world around you.” We discuss the various technologies and use cases for the app, how it has evolved since the inception of the project, how the technology landscape supports projects like this one, and the technical challenges he faces when building out the app.
 We also discuss the relationship and trust between humans and robots, and how that translates to this app, what Saqib sees on the research horizon that will support his vision for the future of Seeing AI, and how the integration of tech like Apple’s upcoming “smart” glasses could change the way their app is used.
 The complete show notes for this episode can be found at twimlai.com/go/474.</description>
      <pubDate>Mon, 12 Apr 2021 17:00:00 -0000</pubDate>
      <itunes:title>Inclusive Design for Seeing AI with Saqib Shaikh</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>474</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3b02aa26-ee98-11eb-9502-37c57975768f/image/TWIML_COVER_800x800_SS2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Saqib Shaikh, a Software Engineer at Microsoft, and the lead for the Seeing AI Project. In our conversation with Saqib, we explore the Seeing AI app, an app “that narrates the world around you.” We discuss the various...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Saqib Shaikh, a Software Engineer at Microsoft, and the lead for the Seeing AI Project.
 In our conversation with Saqib, we explore the Seeing AI app, an app “that narrates the world around you.” We discuss the various technologies and use cases for the app, how it has evolved since the inception of the project, how the technology landscape supports projects like this one, and the technical challenges he faces when building out the app.
 We also discuss the relationship and trust between humans and robots, and how that translates to this app, what Saqib sees on the research horizon that will support his vision for the future of Seeing AI, and how the integration of tech like Apple’s upcoming “smart” glasses could change the way their app is used.
 The complete show notes for this episode can be found at twimlai.com/go/474.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Saqib Shaikh, a Software Engineer at Microsoft, and the lead for the Seeing AI Project.</p> <p>In our conversation with Saqib, we explore the Seeing AI app, an app “that narrates the world around you.” We discuss the various technologies and use cases for the app, how it has evolved since the inception of the project, how the technology landscape supports projects like this one, and the technical challenges he faces when building out the app.</p> <p>We also discuss the relationship and trust between humans and robots, and how that translates to this app, what Saqib sees on the research horizon that will support his vision for the future of Seeing AI, and how the integration of tech like Apple’s upcoming “smart” glasses could change the way their app is used.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/474">twimlai.com/go/474</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2137</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4f32d77e-6ce4-4da0-9b0a-987c26160e4f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4973031913.mp3?updated=1629244877"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Theory of Computation with Jelani Nelson - #473</title>
      <link>https://twimlai.com/theory-of-computation-with-jelani-nelson</link>
      <description>Today we’re joined by Jelani Nelson, a professor in the Theory Group at UC Berkeley.
 In our conversation with Jelani, we explore his research in computational theory, where he focuses on building streaming and sketching algorithms, random projections, and dimensionality reduction. We discuss how Jelani thinks about the balance between the innovation of new algorithms and the performance of existing ones, and some use cases where we’d see his work in action.
 Finally, we talk through how his work ties into machine learning, what tools from the theorist’s toolbox he’d suggest all ML practitioners know, and his nonprofit AddisCoder, a 4-week summer program that introduces high-school students to programming and algorithms.
 The complete show notes for this episode can be found at twimlai.com/go/473.</description>
      <pubDate>Thu, 08 Apr 2021 18:06:58 -0000</pubDate>
      <itunes:title>Theory of Computation with Jelani Nelson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>473</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3b28640a-ee98-11eb-9502-af5b65a681b4/image/TWIML_COVER_800x800_JN.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Jelani Nelson, a professor in the Theory Group at UC Berkeley. In our conversation with Jelani, we explore his research in computational theory, where he focuses on building streaming and sketching algorithms, random...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Jelani Nelson, a professor in the Theory Group at UC Berkeley.
 In our conversation with Jelani, we explore his research in computational theory, where he focuses on building streaming and sketching algorithms, random projections, and dimensionality reduction. We discuss how Jelani thinks about the balance between the innovation of new algorithms and the performance of existing ones, and some use cases where we’d see his work in action.
 Finally, we talk through how his work ties into machine learning, what tools from the theorist’s toolbox he’d suggest all ML practitioners know, and his nonprofit AddisCoder, a 4-week summer program that introduces high-school students to programming and algorithms.
 The complete show notes for this episode can be found at twimlai.com/go/473.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Jelani Nelson, a professor in the Theory Group at UC Berkeley.</p> <p>In our conversation with Jelani, we explore his research in computational theory, where he focuses on building streaming and sketching algorithms, random projections, and dimensionality reduction. We discuss how Jelani thinks about the balance between the innovation of new algorithms and the performance of existing ones, and some use cases where we’d see his work in action.</p> <p>Finally, we talk through how his work ties into machine learning, what tools from the theorist’s toolbox he’d suggest all ML practitioners know, and his nonprofit AddisCoder, a 4-week summer program that introduces high-school students to programming and algorithms.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/473">twimlai.com/go/473</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2019</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[bf67e335-625a-41bc-90d3-be5ceebb09f1]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6353401553.mp3?updated=1629244883"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Human-Centered ML for High-Risk Behaviors with Stevie Chancellor - #472</title>
      <link>https://twimlai.com/human-centered-ml-for-high-risk-behaviors-with-stevie-chancellor</link>
      <description>Today we’re joined by Stevie Chancellor, an Assistant Professor in the Department of Computer Science and Engineering at the University of Minnesota.
 In our conversation with Stevie, we explore her work at the intersection of human-centered computing, machine learning, and high-risk mental illness behaviors. We discuss how her background in HCC helps shape her perspective, how machine learning helps with understanding severity levels of mental illness, and some recent work where convolutional graph neural networks are applied to identify and discover new kinds of behaviors for people who struggle with opioid use disorder.
 We also explore the role of computational linguistics and NLP in her research, issues with using social media data as a data source, and finally, how people who are interested in an introduction to human-centered computing can get started.
 The complete show notes for this episode can be found at twimlai.com/go/472.</description>
      <pubDate>Mon, 05 Apr 2021 20:08:38 -0000</pubDate>
      <itunes:title>Human-Centered ML for High-Risk Behaviors with Stevie Chancellor</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>472</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3b507b5c-ee98-11eb-9502-abf4850e21e4/image/TWIML_COVER_800x800_SC2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Stevie Chancellor, an Assistant Professor in the Department of Computer Science and Engineering at the University of Minnesota. In our conversation with Stevie, we explore her work at the intersection of human-centered...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Stevie Chancellor, an Assistant Professor in the Department of Computer Science and Engineering at the University of Minnesota.
 In our conversation with Stevie, we explore her work at the intersection of human-centered computing, machine learning, and high-risk mental illness behaviors. We discuss how her background in HCC helps shape her perspective, how machine learning helps with understanding severity levels of mental illness, and some recent work where convolutional graph neural networks are applied to identify and discover new kinds of behaviors for people who struggle with opioid use disorder.
 We also explore the role of computational linguistics and NLP in her research, issues with using social media data as a data source, and finally, how people who are interested in an introduction to human-centered computing can get started.
 The complete show notes for this episode can be found at twimlai.com/go/472.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Stevie Chancellor, an Assistant Professor in the Department of Computer Science and Engineering at the University of Minnesota.</p> <p>In our conversation with Stevie, we explore her work at the intersection of human-centered computing, machine learning, and high-risk mental illness behaviors. We discuss how her background in HCC helps shape her perspective, how machine learning helps with understanding severity levels of mental illness, and some recent work where convolutional graph neural networks are applied to identify and discover new kinds of behaviors for people who struggle with opioid use disorder.</p> <p>We also explore the role of computational linguistics and NLP in her research, issues with using social media data as a data source, and finally, how people who are interested in an introduction to human-centered computing can get started.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/472">twimlai.com/go/472</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2445</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ed449b64-9e14-4bb0-9aa4-7fdd8ea1b234]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5407201681.mp3?updated=1629217047"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Operationalizing AI at Dataiku with Conor Jensen - #471</title>
      <link>https://twimlai.com/sponsorseries</link>
      <description>In this episode, we’re joined by Dataiku’s Director of Data Science, Conor Jensen. In our conversation, we explore the panel he led at TWIMLcon, “AI Operationalization: Where the AI Rubber Hits the Road for the Enterprise,” discussing the ML journey of each panelist’s company, and where Dataiku fits in the equation.
 The complete show notes for this episode can be found at https://twimlai.com/go/471. </description>
      <pubDate>Thu, 01 Apr 2021 18:49:19 -0000</pubDate>
      <itunes:title>Operationalizing AI at Dataiku with Conor Jensen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>471</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3b730a50-ee98-11eb-9502-6b0be03711c1/image/TWIML_COVER_800x800_CJ.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, we’re joined by Dataiku’s Director of Data Science, Conor Jensen. In our conversation, we explore the panel he led at TWIMLcon, “AI Operationalization: Where the AI Rubber Hits the Road for the Enterprise,” discussing the ML...</itunes:subtitle>
      <itunes:summary>In this episode, we’re joined by Dataiku’s Director of Data Science, Conor Jensen. In our conversation, we explore the panel he led at TWIMLcon, “AI Operationalization: Where the AI Rubber Hits the Road for the Enterprise,” discussing the ML journey of each panelist’s company, and where Dataiku fits in the equation.
 The complete show notes for this episode can be found at https://twimlai.com/go/471. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we’re joined by Dataiku’s Director of Data Science, Conor Jensen. In our conversation, we explore the panel he led at TWIMLcon, “AI Operationalization: Where the AI Rubber Hits the Road for the Enterprise,” discussing the ML journey of each panelist’s company, and where Dataiku fits in the equation.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/471">https://twimlai.com/go/471</a>.</p>]]>
      </content:encoded>
      <itunes:duration>1431</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[634c91e0-d3a7-4804-b4ce-e3b4c0ed2b5f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4843420548.mp3?updated=1629244828"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>ML Lifecycle Management at Algorithmia with Diego Oppenheimer - #470</title>
      <link>https://twimlai.com/sponsorseries</link>
      <description>In this episode, we’re joined by Diego Oppenheimer, Founder and CEO of Algorithmia. In our conversation, we discuss Algorithmia’s involvement with TWIMLcon, as well as the results of their recently conducted survey on the state of the AI market.
 The complete show notes for this episode can be found at twimlai.com/go/470.</description>
      <pubDate>Thu, 01 Apr 2021 18:37:07 -0000</pubDate>
      <itunes:title>ML Lifecycle Management at Algorithmia with Diego Oppenheimer</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>470</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3ba01a36-ee98-11eb-9502-5b6f9f84c684/image/TWIML_COVER_800x800_DO2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, we’re joined by Diego Oppenheimer, Founder and CEO of Algorithmia. In our conversation, we discuss Algorithmia’s involvement with TWIMLcon, as well as the results of their recently conducted survey on the state...</itunes:subtitle>
      <itunes:summary>In this episode, we’re joined by Diego Oppenheimer, Founder and CEO of Algorithmia. In our conversation, we discuss Algorithmia’s involvement with TWIMLcon, as well as the results of their recently conducted survey on the state of the AI market.
 The complete show notes for this episode can be found at twimlai.com/go/470.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we’re joined by Diego Oppenheimer, Founder and CEO of Algorithmia. In our conversation, we discuss Algorithmia’s involvement with TWIMLcon, as well as the results of their recently conducted survey on the state of the AI market.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/470">twimlai.com/go/470</a>.</p>]]>
      </content:encoded>
      <itunes:duration>1571</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a62b372c-f4e4-49e4-a013-b47acdf70117]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8350234051.mp3?updated=1629244842"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>End to End ML at Cloudera with Santiago Giraldo - #469 [TWIMLcon Sponsor Series]</title>
      <link>https://twimlai.com/sponsorseries</link>
      <description>In this episode, we’re joined by Santiago Giraldo, Director of Product Marketing for Data Engineering &amp; Machine Learning at Cloudera. In our conversation, we discuss Cloudera’s talks at TWIMLcon, as well as the various research efforts of their Fast Forward Labs arm.
 The complete show notes for this episode can be found at twimlai.com/sponsorseries.</description>
      <pubDate>Mon, 29 Mar 2021 20:28:56 -0000</pubDate>
      <itunes:title>End to End ML at Cloudera with Santiago Giraldo [TWIMLcon Sponsor Series]</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>469</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3bc4e168-ee98-11eb-9502-6bf37548c0d0/image/TWIML_COVER_800x800_SG2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, we’re joined by Santiago Giraldo, Director of Product Marketing for Data Engineering &amp; Machine Learning at Cloudera. In our conversation, we discuss Cloudera’s talks at TWIMLcon, as well as the various research efforts of...</itunes:subtitle>
      <itunes:summary>In this episode, we’re joined by Santiago Giraldo, Director of Product Marketing for Data Engineering &amp; Machine Learning at Cloudera. In our conversation, we discuss Cloudera’s talks at TWIMLcon, as well as the various research efforts of their Fast Forward Labs arm.
 The complete show notes for this episode can be found at twimlai.com/sponsorseries.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we’re joined by Santiago Giraldo, Director of Product Marketing for Data Engineering &amp; Machine Learning at Cloudera. In our conversation, we discuss Cloudera’s talks at TWIMLcon, as well as the various research efforts of their Fast Forward Labs arm.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/sponsorseries">twimlai.com/sponsorseries</a>.</p>]]>
      </content:encoded>
      <itunes:duration>1340</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[34683b59-d533-40eb-81d9-67d03ad759da]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6302000643.mp3?updated=1629216967"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>ML Platforms for Global Scale at Prosus with Paul van der Boor - #468 [TWIMLcon Sponsor Series]</title>
      <link>https://twimlai.com/sponsorseries</link>
      <description>In this episode, we’re joined by Paul van der Boor, Senior Director of Data Science at Prosus, to discuss his TWIMLcon experience and how they’re using ML platforms to manage machine learning at a global scale.
 The complete show notes for this episode can be found at twimlai.com/sponsorseries.</description>
      <pubDate>Mon, 29 Mar 2021 20:20:12 -0000</pubDate>
      <itunes:title>ML Platforms for Global Scale at Prosus with Paul van der Boor [TWIMLcon Sponsor Series]</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>468</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3be5507e-ee98-11eb-9502-2fee4ae1f5e3/image/TWIML_COVER_800x800_PVB.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, we’re joined by Paul van der Boor, Senior Director of Data Science at Prosus, to discuss his TWIMLcon experience and how they’re using ML platforms to manage machine learning at a global scale. The complete show notes for this...</itunes:subtitle>
      <itunes:summary>In this episode, we’re joined by Paul van der Boor, Senior Director of Data Science at Prosus, to discuss his TWIMLcon experience and how they’re using ML platforms to manage machine learning at a global scale.
 The complete show notes for this episode can be found at twimlai.com/sponsorseries.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode, we’re joined by Paul van der Boor, Senior Director of Data Science at Prosus, to discuss his TWIMLcon experience and how they’re using ML platforms to manage machine learning at a global scale.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/sponsorseries">twimlai.com/sponsorseries</a>.</p>]]>
      </content:encoded>
      <itunes:duration>1321</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[55f49b75-ff7c-4746-a1a7-04831deca943]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5732832437.mp3?updated=1629244840"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Can Language Models Be Too Big? &#129436; with Emily Bender and Margaret Mitchell - #467</title>
      <link>https://twimlai.com/can-language-models-be-too-big-with-emily-bender-and-margaret-mitchell</link>
      <description>Today we’re joined by Emily M. Bender, Professor at the University of Washington, and AI Researcher, Margaret Mitchell. 
 Emily and Meg, as well as Timnit Gebru and Angelina McMillan-Major, are co-authors on the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? &#129436;. As most of you undoubtedly know by now, there has been much controversy surrounding, and fallout from, this paper. In this conversation, our main priority was to focus on the message of the paper itself. We spend some time discussing the historical context for the paper, then turn to the goals of the paper, discussing the many reasons why the ever-growing datasets and models are not necessarily the direction we should be going.
 We explore the cost of these training datasets, both literal and environmental, as well as the bias implications of these models, and of course the perpetual debate about responsibility when building and deploying ML systems. Finally, we discuss the thin line between AI hype and useful AI systems, and the importance of doing pre-mortems to truly flesh out any issues you could potentially come across prior to building models, and much much more. 
 The complete show notes for this episode can be found at twimlai.com/go/467.</description>
      <pubDate>Wed, 24 Mar 2021 16:11:31 -0000</pubDate>
      <itunes:title>Can Language Models Be Too Big? &#129436; with Emily Bender and Margaret Mitchell</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>467</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3c027f1e-ee98-11eb-9502-7fb5edc0cc3e/image/TWIML_COVER_800x800_EB_MM_B.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Emily M. Bender, Professor at the University of Washington, and AI Researcher, Margaret Mitchell.  Emily and Meg, as well as Timnit Gebru and Angelina McMillan-Major, are co-authors on the paper  &#129436;. As most of you...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Emily M. Bender, Professor at the University of Washington, and AI Researcher, Margaret Mitchell. 
 Emily and Meg, as well as Timnit Gebru and Angelina McMillan-Major, are co-authors on the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? &#129436;. As most of you undoubtedly know by now, there has been much controversy surrounding, and fallout from, this paper. In this conversation, our main priority was to focus on the message of the paper itself. We spend some time discussing the historical context for the paper, then turn to the goals of the paper, discussing the many reasons why the ever-growing datasets and models are not necessarily the direction we should be going. 
 We explore the cost of these training datasets, both literal and environmental, as well as the bias implications of these models, and of course the perpetual debate about responsibility when building and deploying ML systems. Finally, we discuss the thin line between AI hype and useful AI systems, and the importance of doing pre-mortems to truly flesh out any issues you could potentially come across prior to building models, and much much more. 
 The complete show notes for this episode can be found at twimlai.com/go/467.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Emily M. Bender, Professor at the University of Washington, and AI Researcher, Margaret Mitchell. </p> <p>Emily and Meg, as well as Timnit Gebru and Angelina McMillan-Major, are co-authors on the paper <a href="http://faculty.washington.edu/ebender/papers/Stochastic_Parrots.pdf"> On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?</a> 🦜. As most of you undoubtedly know by now, there has been much controversy surrounding, and fallout from, this paper. In this conversation, our main priority was to focus on the message of the paper itself. We spend some time discussing the historical context for the paper, then turn to the goals of the paper, discussing the many reasons why the ever-growing datasets and models are not necessarily the direction we should be going. </p> <p>We explore the cost of these training datasets, both literal and environmental, as well as the bias implications of these models, and of course the perpetual debate about responsibility when building and deploying ML systems. Finally, we discuss the thin line between AI hype and useful AI systems, and the importance of doing pre-mortems to truly flesh out any issues you could potentially come across prior to building models, and much much more. </p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/467">twimlai.com/go/467</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3242</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[3bc6f90d-5e30-4a37-baf9-72ad1da1d098]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2960447183.mp3?updated=1629244947"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Applying RL to Real-World Robotics with Abhishek Gupta - #466</title>
      <link>https://twimlai.com/applying-rl-to-real-world-robotics-with-abhishek-gupta</link>
      <description>Today we’re joined by Abhishek Gupta, a PhD Student at UC Berkeley. 
 Abhishek, a member of the BAIR Lab, joined us to talk about his recent robotics and reinforcement learning research and interests, which focus on applying RL to real-world robotics applications. We explore the concept of reward supervision, and how to get robots to learn these reward functions from videos, and the rationale behind supervised experts in these experiments. 
 We also discuss the use of simulation for experiments, data collection, and the path to scalable robotic learning. Finally, we discuss gradient surgery vs gradient sledgehammering, and his ecological RL paper, which focuses on the “phenomena that exist in the real world” and how humans and robotics systems interface in those situations. 
 The complete show notes for this episode can be found at https://twimlai.com/go/466.</description>
      <pubDate>Mon, 22 Mar 2021 19:25:01 -0000</pubDate>
      <itunes:title>Applying RL to Real-World Robotics with Abhishek Gupta</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>466</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3c2bb6d6-ee98-11eb-9502-df1e2104b7c4/image/TWIML_COVER_800x800_AG2-2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Abhishek Gupta, a PhD Student at UC Berkeley.  Abhishek, a member of the BAIR Lab, joined us to talk about his recent robotics and reinforcement learning research and interests, which focus on applying RL to real-world...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Abhishek Gupta, a PhD Student at UC Berkeley. 
 Abhishek, a member of the BAIR Lab, joined us to talk about his recent robotics and reinforcement learning research and interests, which focus on applying RL to real-world robotics applications. We explore the concept of reward supervision, and how to get robots to learn these reward functions from videos, and the rationale behind supervised experts in these experiments. 
 We also discuss the use of simulation for experiments, data collection, and the path to scalable robotic learning. Finally, we discuss gradient surgery vs gradient sledgehammering, and his ecological RL paper, which focuses on the “phenomena that exist in the real world” and how humans and robotics systems interface in those situations. 
 The complete show notes for this episode can be found at https://twimlai.com/go/466.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Abhishek Gupta, a PhD Student at UC Berkeley. </p> <p>Abhishek, a member of the BAIR Lab, joined us to talk about his recent robotics and reinforcement learning research and interests, which focus on applying RL to real-world robotics applications. We explore the concept of reward supervision, and how to get robots to learn these reward functions from videos, and the rationale behind supervised experts in these experiments. </p> <p>We also discuss the use of simulation for experiments, data collection, and the path to scalable robotic learning. Finally, we discuss gradient surgery vs gradient sledgehammering, and his ecological RL paper, which focuses on the “phenomena that exist in the real world” and how humans and robotics systems interface in those situations. </p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/466">https://twimlai.com/go/466</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2170</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[81a0f097-a82c-440e-a4c5-31ad57fc1174]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5673534245.mp3?updated=1629244890"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Accelerating Innovation with AI at Scale with David Carmona - #465</title>
      <link>https://twimlai.com/accelerating-innovation-with-ai-at-scale-with-david-carmona</link>
      <description>Today we’re joined by David Carmona, General Manager of Artificial Intelligence &amp; Innovation at Microsoft. 
 In our conversation with David, we focus on his work on AI at Scale, an initiative focused on the change in the ways people are developing AI, driven in large part by the emergence of massive models. We explore David’s thoughts about the progression towards larger models, the focus on parameters and how it ties to the architecture of these models, and how we should assess how attention works in these models.
 We also discuss the different families of models (generation &amp; representation), the transition from CV to NLP tasks, and an interesting point about models “becoming a platform” via transfer learning.
 The complete show notes for this episode can be found at twimlai.com/go/465.</description>
      <pubDate>Thu, 18 Mar 2021 02:38:14 -0000</pubDate>
      <itunes:title>Accelerating Innovation with AI at Scale with David Carmona - #465</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>465</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3c7d6968-ee98-11eb-9502-6f419d6b495c/image/TWIML_COVER_800x800_DC2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by David Carmona, General Manager of Artificial Intelligence &amp; Innovation at Microsoft.  In our conversation with David, we focus on his work on AI at Scale, an initiative focused on the change in the ways people are...</itunes:subtitle>
      <itunes:summary>Today we’re joined by David Carmona, General Manager of Artificial Intelligence &amp; Innovation at Microsoft. 
 In our conversation with David, we focus on his work on AI at Scale, an initiative focused on the change in the ways people are developing AI, driven in large part by the emergence of massive models. We explore David’s thoughts about the progression towards larger models, the focus on parameters and how it ties to the architecture of these models, and how we should assess how attention works in these models.
 We also discuss the different families of models (generation &amp; representation), the transition from CV to NLP tasks, and an interesting point about models “becoming a platform” via transfer learning.
 The complete show notes for this episode can be found at twimlai.com/go/465.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by David Carmona, General Manager of Artificial Intelligence &amp; Innovation at Microsoft. </p> <p>In our conversation with David, we focus on his work on AI at Scale, an initiative focused on the change in the ways people are developing AI, driven in large part by the emergence of massive models. We explore David’s thoughts about the progression towards larger models, the focus on parameters and how it ties to the architecture of these models, and how we should assess how attention works in these models.</p> <p>We also discuss the different families of models (generation &amp; representation), the transition from CV to NLP tasks, and an interesting point about models “becoming a platform” via transfer learning.</p> <p>The complete show notes for this episode can be found at twimlai.com/go/465.</p>]]>
      </content:encoded>
      <itunes:duration>2916</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[7cc8ad69-c328-4a48-a751-dc17ac5ed7da]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8894787643.mp3?updated=1629244938"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Complexity and Intelligence with Melanie Mitchell - #464</title>
      <link>https://twimlai.com/complexity-and-intelligence-with-melanie-mitchell</link>
      <description>Today we’re joined by Melanie Mitchell, Davis Professor at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans. 
 While Melanie has had a long career with a myriad of research interests, we focus on a few: complex systems and the understanding of intelligence and complexity, and her recent work on getting AI systems to make analogies. We explore examples of social learning, how it applies to AI in context, and how intelligence is defined. 
 We discuss potential frameworks that would help machines understand analogies, established benchmarks for analogy, and if there is a social learning solution to help machines figure out analogy. Finally we talk through the overall state of AI systems, the progress we’ve made amid the limited concept of social learning, if we’re able to achieve intelligence with current approaches to AI, and much more!
 The complete show notes for this episode can be found at twimlai.com/go/464.</description>
      <pubDate>Mon, 15 Mar 2021 17:46:22 -0000</pubDate>
      <itunes:title>Complexity and Intelligence with Melanie Mitchell</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>464</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3c9d96f2-ee98-11eb-9502-fb8f41e429f7/image/TWIML_COVER_800x800_MM.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Melanie Mitchell, Davis Professor at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans.  While Melanie has had a long career with a myriad of research interests, we focus on a few,...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Melanie Mitchell, Davis Professor at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans. 
 While Melanie has had a long career with a myriad of research interests, we focus on a few: complex systems and the understanding of intelligence and complexity, and her recent work on getting AI systems to make analogies. We explore examples of social learning, how it applies to AI in context, and how intelligence is defined. 
 We discuss potential frameworks that would help machines understand analogies, established benchmarks for analogy, and if there is a social learning solution to help machines figure out analogy. Finally we talk through the overall state of AI systems, the progress we’ve made amid the limited concept of social learning, if we’re able to achieve intelligence with current approaches to AI, and much more!
 The complete show notes for this episode can be found at twimlai.com/go/464.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Melanie Mitchell, Davis Professor at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans. </p> <p>While Melanie has had a long career with a myriad of research interests, we focus on a few: complex systems and the understanding of intelligence and complexity, and her recent work on getting AI systems to make analogies. We explore examples of social learning, how it applies to AI in context, and how intelligence is defined. </p> <p>We discuss potential frameworks that would help machines understand analogies, established benchmarks for analogy, and if there is a social learning solution to help machines figure out analogy. Finally we talk through the overall state of AI systems, the progress we’ve made amid the limited concept of social learning, if we’re able to achieve intelligence with current approaches to AI, and much more!</p> <p>The complete show notes for this episode can be found at twimlai.com/go/464.</p>]]>
      </content:encoded>
      <itunes:duration>1968</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[55cab332-ab23-400f-ad23-54c821ad93ef]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3174132384.mp3?updated=1627362747"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Robust Visual Reasoning with Adriana Kovashka - #463</title>
      <link>https://twimlai.com/shortcut-effects-in-visual-commonsense-reasoning-with-adriana-kovashka</link>
      <description>Today we’re joined by Adriana Kovashka, an Assistant Professor at the University of Pittsburgh.
 In our conversation with Adriana, we explore her visual commonsense research, and how it intersects with her background in media studies. We discuss the idea of shortcuts, or faults in visual question answering data sets that appear in many SOTA results, as well as the concept of masking, a technique developed to assist in context prediction. Adriana then describes how these techniques fit into her broader goal of trying to understand the rhetoric of visual advertisements. 
 Finally, Adriana shares a bit about her work on robust visual reasoning, the parallels between this research and other work happening around explainability, and the vision for her work going forward. 
 The complete show notes for this episode can be found at twimlai.com/go/463.</description>
      <pubDate>Thu, 11 Mar 2021 15:08:08 -0000</pubDate>
      <itunes:title>Robust Visual Reasoning with Adriana Kovashka</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>463</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3cc6eeda-ee98-11eb-9502-3740892db18a/image/TWIML_COVER_800x800_AK2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Adriana Kovashka, an Assistant Professor at the University of Pittsburgh. In our conversation with Adriana, we explore her visual commonsense research, and how it intersects with her background in media studies. We discuss the...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Adriana Kovashka, an Assistant Professor at the University of Pittsburgh.
 In our conversation with Adriana, we explore her visual commonsense research, and how it intersects with her background in media studies. We discuss the idea of shortcuts, or faults in visual question answering data sets that appear in many SOTA results, as well as the concept of masking, a technique developed to assist in context prediction. Adriana then describes how these techniques fit into her broader goal of trying to understand the rhetoric of visual advertisements. 
 Finally, Adriana shares a bit about her work on robust visual reasoning, the parallels between this research and other work happening around explainability, and the vision for her work going forward. 
 The complete show notes for this episode can be found at twimlai.com/go/463.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Adriana Kovashka, an Assistant Professor at the University of Pittsburgh.</p> <p>In our conversation with Adriana, we explore her visual commonsense research, and how it intersects with her background in media studies. We discuss the idea of shortcuts, or faults in visual question answering data sets that appear in many SOTA results, as well as the concept of masking, a technique developed to assist in context prediction. Adriana then describes how these techniques fit into her broader goal of trying to understand the rhetoric of visual advertisements. </p> <p>Finally, Adriana shares a bit about her work on robust visual reasoning, the parallels between this research and other work happening around explainability, and the vision for her work going forward. </p> <p>The complete show notes for this episode can be found at twimlai.com/go/463.</p>]]>
      </content:encoded>
      <itunes:duration>2500</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e8daf4ef-5ce6-4e50-bbe4-1f0d9cd5edb7]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8692273462.mp3?updated=1629244922"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Architectural and Organizational Patterns in Machine Learning with Nishan Subedi - #462</title>
      <link>https://twimlai.com/architectural-and-organizational-patterns-in-machine-learning-with-nishan-subedi</link>
      <description>Today we’re joined by Nishan Subedi, VP of Algorithms at Overstock.com.
 In our conversation with Nishan, we discuss his interesting path to MLOps and how ML/AI is used at Overstock, primarily for search/recommendations and marketing/advertisement use cases. We spend a great deal of time exploring machine learning architecture and architectural patterns, how he perceives the differences between architectural patterns and algorithms, and emergent architectural patterns that standards have not yet been set for.
 Finally, we discuss how the idea of anti-patterns was innovative in early design pattern thinking and if those concepts are transferable to ML, if architectural patterns will bleed over into organizational patterns and culture, and Nishan introduces us to the concept of Squads within an organizational structure.
 The complete show notes for this episode can be found at https://twimlai.com/go/462.</description>
      <pubDate>Mon, 08 Mar 2021 20:13:40 -0000</pubDate>
      <itunes:title>Architectural and Organizational Patterns in Machine Learning with Nishan Subedi</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>462</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3cea0c12-ee98-11eb-9502-77034eed9c48/image/TWIML_COVER_800x800_NS2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Nishan Subedi, VP of Algorithms at Overstock.com. In our conversation with Nishan, we discuss his interesting path to MLOps and how ML/AI is used at Overstock, primarily for search/recommendations and marketing/advertisement...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Nishan Subedi, VP of Algorithms at Overstock.com.
 In our conversation with Nishan, we discuss his interesting path to MLOps and how ML/AI is used at Overstock, primarily for search/recommendations and marketing/advertisement use cases. We spend a great deal of time exploring machine learning architecture and architectural patterns, how he perceives the differences between architectural patterns and algorithms, and emergent architectural patterns that standards have not yet been set for.
 Finally, we discuss how the idea of anti-patterns was innovative in early design pattern thinking and if those concepts are transferable to ML, if architectural patterns will bleed over into organizational patterns and culture, and Nishan introduces us to the concept of Squads within an organizational structure.
 The complete show notes for this episode can be found at https://twimlai.com/go/462.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Nishan Subedi, VP of Algorithms at Overstock.com.</p> <p>In our conversation with Nishan, we discuss his interesting path to MLOps and how ML/AI is used at Overstock, primarily for search/recommendations and marketing/advertisement use cases. We spend a great deal of time exploring machine learning architecture and architectural patterns, how he perceives the differences between architectural patterns and algorithms, and emergent architectural patterns that standards have not yet been set for.</p> <p>Finally, we discuss how the idea of anti-patterns was innovative in early design pattern thinking and if those concepts are transferable to ML, if architectural patterns will bleed over into organizational patterns and culture, and Nishan introduces us to the concept of Squads within an organizational structure.</p> <p>The complete show notes for this episode can be found at https://twimlai.com/go/462.</p>]]>
      </content:encoded>
      <itunes:duration>3455</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[34247a7a-5972-43e1-9e65-49bdc569b1f5]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1488867008.mp3?updated=1629245019"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Common Sense Reasoning in NLP with Vered Shwartz - #461</title>
      <link>https://twimlai.com/common-sense-reasoning-in-nlp-with-vered-shwartz</link>
      <description>Today we’re joined by Vered Shwartz, a Postdoctoral Researcher at both the Allen Institute for AI and the Paul G. Allen School of Computer Science &amp; Engineering at the University of Washington.
 In our conversation with Vered, we explore her NLP research, where she focuses on teaching machines common sense reasoning in natural language. We discuss training using GPT models and the potential use of multimodal reasoning and incorporating images to augment the reasoning capabilities.
 Finally, we talk through some other noteworthy research in this field, how she deals with biases in the models, and Vered's future plans for incorporating some of the newer techniques into her future research.
 The complete show notes for this episode can be found at https://twimlai.com/go/461. </description>
      <pubDate>Thu, 04 Mar 2021 22:40:23 -0000</pubDate>
      <itunes:title>Common Sense Reasoning in NLP with Vered Shwartz</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>461</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3d0ac434-ee98-11eb-9502-e3fced509527/image/TWIML_COVER_800x800_VS1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Vered Shwartz, a Postdoctoral Researcher at both the Allen Institute for AI and the Paul G. Allen School of Computer Science &amp; Engineering at the University of Washington. In our conversation with Vered, we explore her NLP...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Vered Shwartz, a Postdoctoral Researcher at both the Allen Institute for AI and the Paul G. Allen School of Computer Science &amp; Engineering at the University of Washington.
 In our conversation with Vered, we explore her NLP research, where she focuses on teaching machines common sense reasoning in natural language. We discuss training using GPT models and the potential use of multimodal reasoning and incorporating images to augment the reasoning capabilities.
 Finally, we talk through some other noteworthy research in this field, how she deals with biases in the models, and Vered's future plans for incorporating some of the newer techniques into her future research.
 The complete show notes for this episode can be found at https://twimlai.com/go/461. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Vered Shwartz, a Postdoctoral Researcher at both the Allen Institute for AI and the Paul G. Allen School of Computer Science &amp; Engineering at the University of Washington.</p> <p>In our conversation with Vered, we explore her NLP research, where she focuses on teaching machines common sense reasoning in natural language. We discuss training using GPT models and the potential use of multimodal reasoning and incorporating images to augment the reasoning capabilities.</p> <p>Finally, we talk through some other noteworthy research in this field, how she deals with biases in the models, and Vered's future plans for incorporating some of the newer techniques into her future research.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/461">https://twimlai.com/go/461</a>. </p>]]>
      </content:encoded>
      <itunes:duration>2234</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1b370396-9768-416f-8746-ec37cc6049e0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4668199924.mp3?updated=1629244896"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>How to Be Human in the Age of AI with Ayanna Howard - #460</title>
      <link>https://twimlai.com/how-to-be-human-in-the-age-of-ai-with-ayanna-howard</link>
      <description>Today we’re joined by returning guest and newly appointed Dean of the College of Engineering at The Ohio State University, Ayanna Howard. 
 Our conversation with Dr. Howard focuses on her recently released book, Sex, Race, and Robots: How to Be Human in the Age of AI, which is an extension of her research on the relationships between humans and robots. We continue to explore this relationship through the themes of socialization introduced in the book, like associating genders with AI and robotic systems and the “self-fulfilling prophecy” that search engines have become. 
 We also discuss the recurring debate in the community over whether AI bias stems from the data alone or from both models and data, and the choices and responsibilities that come with the ethical aspects of building AI systems. Finally, we discuss Dr. Howard’s new role at OSU, how it will affect her research, and what the future holds for the applied AI field. 
 The complete show notes for this episode can be found at https://twimlai.com/go/460.</description>
      <pubDate>Mon, 01 Mar 2021 20:04:16 -0000</pubDate>
      <itunes:title>How to Be Human in the Age of AI with Ayanna Howard</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>460</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3d30952e-ee98-11eb-9502-d7409b52852e/image/TWIML_COVER_800x800_AH.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by returning guest and newly appointed Dean of the College of Engineering at The Ohio State University, Ayanna Howard.  Our conversation with Dr. Howard focuses on her recently released book, Sex, Race, and Robots: How to Be...</itunes:subtitle>
      <itunes:summary>Today we’re joined by returning guest and newly appointed Dean of the College of Engineering at The Ohio State University, Ayanna Howard. 
 Our conversation with Dr. Howard focuses on her recently released book, Sex, Race, and Robots: How to Be Human in the Age of AI, which is an extension of her research on the relationships between humans and robots. We continue to explore this relationship through the themes of socialization introduced in the book, like associating genders with AI and robotic systems and the “self-fulfilling prophecy” that search engines have become. 
 We also discuss a recurring conversation in the community around AI being biased because of data versus models and data, and the choices and responsibilities that come with the ethical aspects of building AI systems. Finally, we discuss Dr. Howard’s new role at OSU, how it will affect her research, and what the future holds for the applied AI field. 
 The complete show notes for this episode can be found at https://twimlai.com/go/460.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by returning guest and newly appointed Dean of the College of Engineering at The Ohio State University, Ayanna Howard. </p> <p>Our conversation with Dr. Howard focuses on her recently released book, <em>Sex, Race, and Robots: How to Be Human in the Age of AI,</em> which is an extension of her research on the relationships between humans and robots. We continue to explore this relationship through the themes of socialization introduced in the book, like associating genders with AI and robotic systems and the “self-fulfilling prophecy” that search engines have become. </p> <p>We also discuss a recurring conversation in the community around AI being biased because of data versus models and data, and the choices and responsibilities that come with the ethical aspects of building AI systems. Finally, we discuss Dr. Howard’s new role at OSU, how it will affect her research, and what the future holds for the applied AI field. </p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/460">https://twimlai.com/go/460</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2148</itunes:duration>
      <guid isPermaLink="false"><![CDATA[db14efa5-80f4-40ce-b997-4b0edab2816b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5960209989.mp3?updated=1627362749"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:explicit>no</itunes:explicit><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Evolution and Intelligence with Penousal Machado - #459</title>
      <link>https://twimlai.com/evolution-and-intelligence-with-penousal-machado</link>
      <description>Today we’re joined by Penousal Machado, Associate Professor and Head of the Computational Design and Visualization Lab in the Center for Informatics at the University of Coimbra. 
 In our conversation with Penousal, we explore his research in Evolutionary Computation, and how that work coincides with his passion for images and graphics. We also discuss the link between creativity and humanity, and have an interesting sidebar about the philosophy of Sci-Fi in popular culture. 
 Finally, we dig into Penousal’s evolutionary machine learning research, primarily in the context of the evolution of various animal species’ mating habits and practices.
 The complete show notes for this episode can be found at twimlai.com/go/459.  </description>
      <pubDate>Thu, 25 Feb 2021 21:20:36 -0000</pubDate>
      <itunes:title>Evolution and Intelligence with Penousal Machado</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>459</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3d715f46-ee98-11eb-9502-0fd6fa5a41fe/image/TWIML_COVER_800x800_PM2_1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Penousal Machado, Associate Professor and Head of the Computational Design and Visualization Lab in the Center for Informatics at the University of Coimbra.  In our conversation with Penousal, we explore his research in...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Penousal Machado, Associate Professor and Head of the Computational Design and Visualization Lab in the Center for Informatics at the University of Coimbra. 
 In our conversation with Penousal, we explore his research in Evolutionary Computation, and how that work coincides with his passion for images and graphics. We also discuss the link between creativity and humanity, and have an interesting sidebar about the philosophy of Sci-Fi in popular culture. 
 Finally, we dig into Penousal’s evolutionary machine learning research, primarily in the context of the evolution of various animal species’ mating habits and practices.
 The complete show notes for this episode can be found at twimlai.com/go/459.  </itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Penousal Machado, Associate Professor and Head of the Computational Design and Visualization Lab in the Center for Informatics at the University of Coimbra. </p> <p>In our conversation with Penousal, we explore his research in Evolutionary Computation, and how that work coincides with his passion for images and graphics. We also discuss the link between creativity and humanity, and have an interesting sidebar about the philosophy of Sci-Fi in popular culture. </p> <p>Finally, we dig into Penousal’s evolutionary machine learning research, primarily in the context of the evolution of various animal species’ mating habits and practices.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/459">twimlai.com/go/459</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3439</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[791cab95-562b-42c6-8949-914700caf6ae]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4856305145.mp3?updated=1629244970"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Innovating Neural Machine Translation with Arul Menezes - #458</title>
      <link>https://twimlai.com/innovating-neural-machine-translation-with-arul-menezes</link>
      <description>Today we’re joined by Arul Menezes, a Distinguished Engineer at Microsoft. 
 Arul, a 30-year veteran of Microsoft, manages the machine translation research and products in the Azure Cognitive Services group. In our conversation, we explore the historical evolution of machine translation, including breakthroughs in seq2seq and the emergence of transformer models. 
 We also discuss how they’re using multilingual transfer learning and combining what they’ve learned in translation with pre-trained language models like BERT. Finally, we explore what they’re doing to achieve domain-specific improvements in their models, and what excites Arul about the translation architecture going forward. 
 The complete show notes for this series can be found at twimlai.com/go/458.</description>
      <pubDate>Mon, 22 Feb 2021 20:11:04 -0000</pubDate>
      <itunes:title>Innovating Neural Machine Translation with Arul Menezes</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>458</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3d93de68-ee98-11eb-9502-db7264837f4e/image/TWIML_COVER_800x800_AM4.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Arul Menezes, a Distinguished Engineer at Microsoft.  Arul, a 30-year veteran of Microsoft, manages the machine translation research and products in the Azure Cognitive Services group. In our conversation, we explore the...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Arul Menezes, a Distinguished Engineer at Microsoft. 
 Arul, a 30-year veteran of Microsoft, manages the machine translation research and products in the Azure Cognitive Services group. In our conversation, we explore the historical evolution of machine translation, including breakthroughs in seq2seq and the emergence of transformer models. 
 We also discuss how they’re using multilingual transfer learning and combining what they’ve learned in translation with pre-trained language models like BERT. Finally, we explore what they’re doing to achieve domain-specific improvements in their models, and what excites Arul about the translation architecture going forward. 
 The complete show notes for this series can be found at twimlai.com/go/458.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Arul Menezes, a Distinguished Engineer at Microsoft. </p> <p>Arul, a 30-year veteran of Microsoft, manages the machine translation research and products in the Azure Cognitive Services group. In our conversation, we explore the historical evolution of machine translation, including breakthroughs in seq2seq and the emergence of transformer models. </p> <p>We also discuss how they’re using multilingual transfer learning and combining what they’ve learned in translation with pre-trained language models like BERT. Finally, we explore what they’re doing to achieve domain-specific improvements in their models, and what excites Arul about the translation architecture going forward. </p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/458">twimlai.com/go/458</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2665</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1f96a9de-4ae4-4fe1-b064-9e4adec7fcf7]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2406976180.mp3?updated=1629244893"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building the Product Knowledge Graph at Amazon with Luna Dong - #457</title>
      <link>https://twimlai.com/building-the-product-knowledge-graph-at-amazon-with-luna-dong</link>
      <description>Today we’re joined by Luna Dong, Sr. Principal Scientist at Amazon.
 In our conversation with Luna, we explore Amazon’s expansive product knowledge graph, and the various roles that machine learning plays throughout it. We also talk through the differences and synergies between the media and retail product knowledge graph use cases and how ML comes into play in search and recommendation use cases. Finally, we explore the similarities to relational databases and efforts to standardize the product knowledge graphs across the company and broadly in the research community.
 The complete show notes for this episode can be found at https://twimlai.com/go/457.</description>
      <pubDate>Thu, 18 Feb 2021 21:09:47 -0000</pubDate>
      <itunes:title>Building the Product Knowledge Graph at Amazon with Luna Dong</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>457</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3db3c688-ee98-11eb-9502-cb18d953592b/image/TWIML_COVER_800x800_LD.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Luna Dong, Sr. Principal Scientist at Amazon. In our conversation with Luna, we explore Amazon’s expansive product knowledge graph, and the various roles that machine learning plays throughout it. We also talk through the...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Luna Dong, Sr. Principal Scientist at Amazon.
 In our conversation with Luna, we explore Amazon’s expansive product knowledge graph, and the various roles that machine learning plays throughout it. We also talk through the differences and synergies between the media and retail product knowledge graph use cases and how ML comes into play in search and recommendation use cases. Finally, we explore the similarities to relational databases and efforts to standardize the product knowledge graphs across the company and broadly in the research community.
 The complete show notes for this episode can be found at https://twimlai.com/go/457.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Luna Dong, Sr. Principal Scientist at Amazon.</p> <p>In our conversation with Luna, we explore Amazon’s expansive product knowledge graph, and the various roles that machine learning plays throughout it. We also talk through the differences and synergies between the media and retail product knowledge graph use cases and how ML comes into play in search and recommendation use cases. Finally, we explore the similarities to relational databases and efforts to standardize the product knowledge graphs across the company and broadly in the research community.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/457">https://twimlai.com/go/457</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2631</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f382a3ec-6258-4e4c-ac0f-b907375cfa6c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8104579179.mp3?updated=1629244901"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Towards a Systems-Level Approach to Fair ML with Sarah M. Brown - #456</title>
      <link>https://twimlai.com/towards-a-systems-level-approach-to-fair-ml-with-sarah-m-brown</link>
      <description>Today we’re joined by Sarah Brown, an Assistant Professor of Computer Science at the University of Rhode Island.
 In our conversation with Sarah, whose research focuses on Fairness in AI, we discuss why a “systems-level” approach is necessary when thinking about ethical and fairness issues in models and algorithms. We also explore Wiggum: a fairness forensics tool, which explores bias and allows for regular auditing of data, as well as her ongoing collaboration with a social psychologist to explore how people perceive ethics and fairness.
 Finally, we talk through the role of tools in assessing fairness and bias, and the importance of understanding the decisions the tools are making.
 The complete show notes can be found at twimlai.com/go/456.</description>
      <pubDate>Mon, 15 Feb 2021 21:26:54 -0000</pubDate>
      <itunes:title>Towards a Systems-Level Approach to Fair ML with Sarah M. Brown</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>456</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3dd62192-ee98-11eb-9502-f3021cbc75d6/image/TWIML_COVER_800x800_SB2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Sarah Brown, an Assistant Professor of Computer Science at the University of Rhode Island. In our conversation with Sarah, whose research focuses on Fairness in AI, we discuss why a “systems-level” approach is necessary...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Sarah Brown, an Assistant Professor of Computer Science at the University of Rhode Island.
 In our conversation with Sarah, whose research focuses on Fairness in AI, we discuss why a “systems-level” approach is necessary when thinking about ethical and fairness issues in models and algorithms. We also explore Wiggum: a fairness forensics tool, which explores bias and allows for regular auditing of data, as well as her ongoing collaboration with a social psychologist to explore how people perceive ethics and fairness.
 Finally, we talk through the role of tools in assessing fairness and bias, and the importance of understanding the decisions the tools are making.
 The complete show notes can be found at twimlai.com/go/456.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Sarah Brown, an Assistant Professor of Computer Science at the University of Rhode Island.</p> <p>In our conversation with Sarah, whose research focuses on Fairness in AI, we discuss why a “systems-level” approach is necessary when thinking about ethical and fairness issues in models and algorithms. We also explore Wiggum: a fairness forensics tool, which explores bias and allows for regular auditing of data, as well as her ongoing collaboration with a social psychologist to explore how people perceive ethics and fairness.</p> <p>Finally, we talk through the role of tools in assessing fairness and bias, and the importance of understanding the decisions the tools are making.</p> <p>The complete show notes can be found at <a href="https://twimlai.com/go/456">twimlai.com/go/456</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2253</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fc6f27a9-dfe4-4858-beae-5a6bb821f9b9]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4578679649.mp3?updated=1629244873"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI for Digital Health Innovation with Andrew Trister - #455</title>
      <link>https://twimlai.com/ai-for-digital-health-innovation-with-andrew-trister</link>
      <description>Today we’re joined by Andrew Trister, Deputy Director for Digital Health Innovation at the Bill &amp; Melinda Gates Foundation. 
 In our conversation with Andrew, we explore some of the AI use cases at the foundation, with the goal of bringing “community-based” healthcare to underserved populations in the global south. We focus on the COVID-19 response, improving the accuracy of malaria testing with a Bayesian framework, and a few other use cases, as well as challenges like scaling these systems and building out infrastructure so that communities can begin to support themselves. 
 We also touch on Andrew’s previous work at Apple, where he helped develop what is now known as ResearchKit, their ML-for-health tools that are now seen in Apple devices like phones and watches.
 The complete show notes for this episode can be found at https://twimlai.com/go/455.
      <pubDate>Thu, 11 Feb 2021 18:38:29 -0000</pubDate>
      <itunes:title>AI for Digital Health Innovation with Andrew Trister</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>455</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3df92a20-ee98-11eb-9502-fb1d973e45b7/image/TWIML_COVER_800x800_AT3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Andrew Trister, Deputy Director for Digital Health Innovation at the Bill &amp; Melinda Gates Foundation.  In our conversation with Andrew, we explore some of the AI use cases at the foundation, with the goal of bringing...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Andrew Trister, Deputy Director for Digital Health Innovation at the Bill &amp; Melinda Gates Foundation. 
 In our conversation with Andrew, we explore some of the AI use cases at the foundation, with the goal of bringing “community-based” healthcare to underserved populations in the global south. We focus on the COVID-19 response, improving the accuracy of malaria testing with a Bayesian framework, and a few other use cases, as well as challenges like scaling these systems and building out infrastructure so that communities can begin to support themselves. 
 We also touch on Andrew’s previous work at Apple, where he helped develop what is now known as ResearchKit, their ML-for-health tools that are now seen in Apple devices like phones and watches.
 The complete show notes for this episode can be found at https://twimlai.com/go/455.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Andrew Trister, Deputy Director for Digital Health Innovation at the Bill &amp; Melinda Gates Foundation. </p> <p>In our conversation with Andrew, we explore some of the AI use cases at the foundation, with the goal of bringing “community-based” healthcare to underserved populations in the global south. We focus on the COVID-19 response, improving the accuracy of malaria testing with a Bayesian framework, and a few other use cases, as well as challenges like scaling these systems and building out infrastructure so that communities can begin to support themselves. </p> <p>We also touch on Andrew’s previous work at Apple, where he helped develop what is now known as ResearchKit, their ML-for-health tools that are now seen in Apple devices like phones and watches.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/455">https://twimlai.com/go/455</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2515</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[20fc6375-9804-4d73-a700-ecb99ab581df]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2866340721.mp3?updated=1629217028"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>System Design for Autonomous Vehicles with Drago Anguelov - #454</title>
      <link>https://twimlai.com/system-design-for-autonomous-vehicles-with-drago-anguelov</link>
      <description>Today we’re joined by Drago Anguelov, Distinguished Scientist and Head of Research at Waymo. 
 In our conversation, we explore the state of the autonomous vehicles space broadly and at Waymo, including how AV has improved in the last few years, their focus on level 4 driving, and Drago’s thoughts on the direction of the industry going forward. Drago breaks down their core ML use cases, Perception, Prediction, Planning, and Simulation, and how their work has led to a fully autonomous vehicle being deployed in Phoenix. 
 We also discuss the socioeconomic and environmental impact of self-driving cars, a few research papers submitted to NeurIPS 2020, and whether the sophistication of AV systems will lend itself to the development of tomorrow’s enterprise machine learning systems.
 The complete show notes for this episode can be found at twimlai.com/go/454. </description>
      <pubDate>Mon, 08 Feb 2021 21:20:56 -0000</pubDate>
      <itunes:title>System Design for Autonomous Vehicles with Drago Anguelov</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>454</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3e233e50-ee98-11eb-9502-1b18e86863a0/image/TWIML_COVER_800x800_DA.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Drago Anguelov, Distinguished Scientist and Head of Research at Waymo.  In our conversation, we explore the state of the autonomous vehicles space broadly and at Waymo, including how AV has improved in the last few years,...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Drago Anguelov, Distinguished Scientist and Head of Research at Waymo. 
 In our conversation, we explore the state of the autonomous vehicles space broadly and at Waymo, including how AV has improved in the last few years, their focus on level 4 driving, and Drago’s thoughts on the direction of the industry going forward. Drago breaks down their core ML use cases, Perception, Prediction, Planning, and Simulation, and how their work has led to a fully autonomous vehicle being deployed in Phoenix. 
 We also discuss the socioeconomic and environmental impact of self-driving cars, a few research papers submitted to NeurIPS 2020, and whether the sophistication of AV systems will lend itself to the development of tomorrow’s enterprise machine learning systems.
 The complete show notes for this episode can be found at twimlai.com/go/454. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Drago Anguelov, Distinguished Scientist and Head of Research at Waymo. </p> <p>In our conversation, we explore the state of the autonomous vehicles space broadly and at Waymo, including how AV has improved in the last few years, their focus on level 4 driving, and Drago’s thoughts on the direction of the industry going forward. Drago breaks down their core ML use cases, Perception, Prediction, Planning, and Simulation, and how their work has led to a fully autonomous vehicle being deployed in Phoenix. </p> <p>We also discuss the socioeconomic and environmental impact of self-driving cars, a few research papers submitted to NeurIPS 2020, and whether the sophistication of AV systems will lend itself to the development of tomorrow’s enterprise machine learning systems.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/454">twimlai.com/go/454</a>. </p>]]>
      </content:encoded>
      <itunes:duration>3052</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a0e80f7e-4583-4e93-a315-4e7da0f4d744]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3205169571.mp3?updated=1629244905"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building, Adopting, and Maturing LinkedIn's Machine Learning Platform with Ya Xu - #453</title>
      <link>https://twimlai.com/building-adopting-and-maturing-linkedins-machine-learning-platform-with-ya-xu</link>
      <description>Today we’re joined by Ya Xu, head of Data Science at LinkedIn, and TWIMLcon: AI Platforms 2021 Keynote Speaker.
 We cover a ton of ground with Ya, starting with her experiences prior to becoming Head of DS, as one of the architects of the LinkedIn Platform. We discuss her “three phases” (building, adoption, and maturation) to keep in mind when building out a platform, and how to avoid “hero syndrome” early in the process.
 Finally, we dig into the various tools and platforms that give LinkedIn teams leverage, their organizational structure, as well as the emergence of differential privacy for security use cases and whether it's ready for prime time.
 The complete show notes for this episode can be found at https://twimlai.com/go/453. </description>
      <pubDate>Thu, 04 Feb 2021 22:41:29 -0000</pubDate>
      <itunes:title>Building, Adopting, and Maturing LinkedIn's Machine Learning Platform with Ya Xu</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>453</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3e417b36-ee98-11eb-9502-6bf89408d724/image/TWIML_COVER_800x800_YX_1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Ya Xu, head of Data Science at LinkedIn, and TWIMLcon: AI Platforms 2021 Keynote Speaker. We cover a ton of ground with Ya, starting with her experiences prior to becoming Head of DS, as one of the architects of the LinkedIn...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Ya Xu, head of Data Science at LinkedIn, and TWIMLcon: AI Platforms 2021 Keynote Speaker.
 We cover a ton of ground with Ya, starting with her experiences prior to becoming Head of DS, as one of the architects of the LinkedIn Platform. We discuss her “three phases” (building, adoption, and maturation) to keep in mind when building out a platform, and how to avoid “hero syndrome” early in the process.
 Finally, we dig into the various tools and platforms that give LinkedIn teams leverage, their organizational structure, as well as the emergence of differential privacy for security use cases and whether it's ready for prime time.
 The complete show notes for this episode can be found at https://twimlai.com/go/453. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Ya Xu, head of Data Science at LinkedIn, and TWIMLcon: AI Platforms 2021 Keynote Speaker.</p> <p>We cover a ton of ground with Ya, starting with her experiences prior to becoming Head of DS, as one of the architects of the LinkedIn Platform. We discuss her “three phases” (building, adoption, and maturation) to keep in mind when building out a platform, and how to avoid “hero syndrome” early in the process.</p> <p>Finally, we dig into the various tools and platforms that give LinkedIn teams leverage, their organizational structure, as well as the emergence of differential privacy for security use cases and whether it's ready for prime time.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/453">https://twimlai.com/go/453</a>. </p>]]>
      </content:encoded>
      <itunes:duration>2946</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[bb08fcd6-d8f6-4559-b874-54b0218f39bc]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3584885553.mp3?updated=1629244904"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Expressive Deep Learning with Magenta DDSP w/ Jesse Engel - #452</title>
      <link>https://twimlai.com/expressive-deep-learning-with-magenta-ddsp-w-jesse-engel</link>
      <description>Today we’re joined by Jesse Engel, Staff Research Scientist at Google, working on the Magenta Project. 
 In our conversation with Jesse, we explore the current landscape of creative AI, and the role Magenta plays in helping express creativity through ML and deep learning. We dig deep into their Differentiable Digital Signal Processing (DDSP) library, which “lets you combine the interpretable structure of classical DSP elements (such as filters, oscillators, reverberation, etc.) with the expressivity of deep learning.”
 Finally, Jesse walks us through some of the other projects that the Magenta team undertakes, including NLP and language modeling, and what he wants to see come out of the work that he and others are doing in creative AI research.
 The complete show notes for this episode can be found at twimlai.com/go/452. </description>
      <pubDate>Mon, 01 Feb 2021 21:22:31 -0000</pubDate>
      <itunes:title>Expressive Deep Learning with Magenta DDSP w/ Jesse Engel</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>452</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3e5f1330-ee98-11eb-9502-7b65240ca38a/image/TWIML_COVER_800x800_JE3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Jesse Engel, Staff Research Scientist at Google, working on the Magenta Project.  In our conversation with Jesse, we explore the current landscape of creative AI, and the role Magenta plays in helping express creativity...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Jesse Engel, Staff Research Scientist at Google, working on the Magenta Project. 
 In our conversation with Jesse, we explore the current landscape of creative AI, and the role Magenta plays in helping express creativity through ML and deep learning. We dig deep into their Differentiable Digital Signal Processing (DDSP) library, which “lets you combine the interpretable structure of classical DSP elements (such as filters, oscillators, reverberation, etc.) with the expressivity of deep learning.”
 Finally, Jesse walks us through some of the other projects that the Magenta team undertakes, including NLP and language modeling, and what he wants to see come out of the work that he and others are doing in creative AI research.
 The complete show notes for this episode can be found at twimlai.com/go/452. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Jesse Engel, Staff Research Scientist at Google, working on the Magenta Project. </p> <p>In our conversation with Jesse, we explore the current landscape of creative AI, and the role Magenta plays in helping express creativity through ML and deep learning. We dig deep into their Differentiable Digital Signal Processing (DDSP) library, which “lets you combine the interpretable structure of classical DSP elements (such as filters, oscillators, reverberation, etc.) with the expressivity of deep learning.”</p> <p>Finally, Jesse walks us through some of the other projects that the Magenta team undertakes, including NLP and language modeling, and what he wants to see come out of the work that he and others are doing in creative AI research.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/452">twimlai.com/go/452</a>. </p>]]>
      </content:encoded>
      <itunes:duration>2347</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[bf91b5c8-e029-48f1-be38-72e3453027c7]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2894820773.mp3?updated=1629244891"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Semantic Folding for Natural Language Understanding with Francisco Webber - #451</title>
      <link>https://twimlai.com/semantic-folding-for-natural-language-understanding-with-francisco-weber</link>
      <description>Today we’re joined by return guest Francisco Webber, CEO &amp; Co-founder of Cortical.io.
 Francisco was originally a guest over 4 years and 400 episodes ago, when we discussed his company Cortical.io and their unique approach to natural language processing. In this conversation, Francisco gives us an update on Cortical.io, including their applications and toolkit, which cover semantic extraction, classification, and search use cases. We also discuss GPT-3 and how it compares to semantic folding, the unreasonable amount of data needed to train these models, and the difference between the GPT approach and semantic modeling for language understanding.
 The complete show notes for this episode can be found at twimlai.com/go/451.</description>
      <pubDate>Fri, 29 Jan 2021 00:38:37 -0000</pubDate>
      <itunes:title>Semantic Folding for Natural Language Understanding with Francisco Webber</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>451</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3e8037b8-ee98-11eb-9502-07d9f948d9aa/image/TWIML_COVER_800x800_FW_1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by return guest Francisco Webber, CEO &amp; Co-founder of Cortical.io. Francisco was originally a guest over 4 years and 400 episodes ago, where we discussed his company Cortical.io, and their unique approach to natural language...</itunes:subtitle>
      <itunes:summary>Today we’re joined by return guest Francisco Webber, CEO &amp; Co-founder of Cortical.io.
 Francisco was originally a guest over 4 years and 400 episodes ago, when we discussed his company Cortical.io and their unique approach to natural language processing. In this conversation, Francisco gives us an update on Cortical.io, including their applications and toolkit, which cover semantic extraction, classification, and search use cases. We also discuss GPT-3 and how it compares to semantic folding, the unreasonable amount of data needed to train these models, and the difference between the GPT approach and semantic modeling for language understanding.
 The complete show notes for this episode can be found at twimlai.com/go/451.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by return guest Francisco Webber, CEO &amp; Co-founder of Cortical.io.</p> <p>Francisco was originally a guest over 4 years and 400 episodes ago, when we discussed his company Cortical.io and their unique approach to natural language processing. In this conversation, Francisco gives us an update on Cortical.io, including their applications and toolkit, which cover semantic extraction, classification, and search use cases. We also discuss GPT-3 and how it compares to semantic folding, the unreasonable amount of data needed to train these models, and the difference between the GPT approach and semantic modeling for language understanding.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/451">twimlai.com/go/451</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3317</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f6aa04b5-8ca7-45c5-8281-e4f8b3dfe7b9]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6247466426.mp3?updated=1629244922"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Future of Autonomous Systems with Gurdeep Pall - #450</title>
      <link>https://twimlai.com/the-future-of-autonomous-systems-with-gurdeep-pall</link>
      <description>Today we’re joined by Gurdeep Pall, Corporate Vice President at Microsoft.
 Gurdeep, whom we had the pleasure of speaking with on his 31st anniversary at the company, has had a hand in creating quite a few influential projects, including Skype for Business (and Teams), and was part of the first team that shipped Wi-Fi as part of a general-purpose operating system.
 In our conversation with Gurdeep, we discuss Microsoft’s acquisition of Bonsai and how they fit in the toolchain for creating brains for autonomous systems with “machine teaching,” and other practical applications of machine teaching in autonomous systems. We also explore the challenges of simulation, and how they’ve evolved to make the problems that the physical world brings more tenable. Finally, Gurdeep shares concrete use cases for autonomous systems, how to get the best ROI on those investments, and, of course, what’s next in the very broad space of autonomous systems.
 The complete show notes for this episode can be found at twimlai.com/go/450.</description>
      <pubDate>Mon, 25 Jan 2021 06:39:11 -0000</pubDate>
      <itunes:title>The Future of Autonomous Systems with Gurdeep Pall</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>450</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3ea25c80-ee98-11eb-9502-5376afb60e12/image/TWIML_COVER_800x800_GP_1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Gurdeep Pall, Corporate Vice President at Microsoft. Gurdeep, who we had the pleasure of speaking with on his 31st anniversary at the company, has had a hand in creating quite a few influential projects, including Skype for...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Gurdeep Pall, Corporate Vice President at Microsoft.
 Gurdeep, whom we had the pleasure of speaking with on his 31st anniversary at the company, has had a hand in creating quite a few influential projects, including Skype for Business (and Teams), and was part of the first team that shipped Wi-Fi as part of a general-purpose operating system.
 In our conversation with Gurdeep, we discuss Microsoft’s acquisition of Bonsai and how they fit in the toolchain for creating brains for autonomous systems with “machine teaching,” and other practical applications of machine teaching in autonomous systems. We also explore the challenges of simulation, and how they’ve evolved to make the problems that the physical world brings more tenable. Finally, Gurdeep shares concrete use cases for autonomous systems, how to get the best ROI on those investments, and, of course, what’s next in the very broad space of autonomous systems.
 The complete show notes for this episode can be found at twimlai.com/go/450.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Gurdeep Pall, Corporate Vice President at Microsoft.</p> <p>Gurdeep, whom we had the pleasure of speaking with on his 31st anniversary at the company, has had a hand in creating quite a few influential projects, including Skype for Business (and Teams), and was part of the first team that shipped Wi-Fi as part of a general-purpose operating system.</p> <p>In our conversation with Gurdeep, we discuss Microsoft’s acquisition of Bonsai and how they fit in the toolchain for creating brains for autonomous systems with “machine teaching,” and other practical applications of machine teaching in autonomous systems. We also explore the challenges of simulation, and how they’ve evolved to make the problems that the physical world brings more tenable. Finally, Gurdeep shares concrete use cases for autonomous systems, how to get the best ROI on those investments, and, of course, what’s next in the very broad space of autonomous systems.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/450">twimlai.com/go/450</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3197</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[7db9c821-f8ce-48a7-9961-6e378b9d4726]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2873721997.mp3?updated=1629244912"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI for Ecology and Ecosystem Preservation with Bryan Carstens - #449</title>
      <link>https://twimlai.com/ai-for-ecology-and-ecosystem-preservation</link>
      <description>Today we’re joined by Bryan Carstens, a professor in the Department of Evolution, Ecology, and Organismal Biology &amp; Head of the Tetrapod Division in the Museum of Biological Diversity at The Ohio State University.
 In our conversation with Bryan, who comes from a traditional biology background, we cover a ton of ground, including a foundational layer of understanding for the vast known unknowns in species and biodiversity, and how he came to apply machine learning to his lab’s research.
 We explore a few of his lab’s projects, including applying ML to genetic data to understand the geographic and environmental structure of DNA, what factors keep machine learning from being used more frequently in biology, and what’s next for his group.
 The complete show notes for this episode can be found at twimlai.com/go/449.</description>
      <pubDate>Thu, 21 Jan 2021 22:40:49 -0000</pubDate>
      <itunes:title>AI for Ecology and Ecosystem Preservation with Bryan Carstens</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>449</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3ec158ba-ee98-11eb-9502-cfff24050d5e/image/TWIML_COVER_800x800_BC3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Bryan Carstens, a professor in the Department of Evolution, Ecology, and Organismal Biology &amp; Head of the Tetrapod Division in the Museum of Biological Diversity at The Ohio State University. In our conversation with Bryan,...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Bryan Carstens, a professor in the Department of Evolution, Ecology, and Organismal Biology &amp; Head of the Tetrapod Division in the Museum of Biological Diversity at The Ohio State University.
 In our conversation with Bryan, who comes from a traditional biology background, we cover a ton of ground, including a foundational layer of understanding for the vast known unknowns in species and biodiversity, and how he came to apply machine learning to his lab’s research.
 We explore a few of his lab’s projects, including applying ML to genetic data to understand the geographic and environmental structure of DNA, what factors keep machine learning from being used more frequently in biology, and what’s next for his group.
 The complete show notes for this episode can be found at twimlai.com/go/449.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Bryan Carstens, a professor in the Department of Evolution, Ecology, and Organismal Biology &amp; Head of the Tetrapod Division in the Museum of Biological Diversity at The Ohio State University.</p> <p>In our conversation with Bryan, who comes from a traditional biology background, we cover a ton of ground, including a foundational layer of understanding for the vast known unknowns in species and biodiversity, and how he came to apply machine learning to his lab’s research.</p> <p>We explore a few of his lab’s projects, including applying ML to genetic data to understand the geographic and environmental structure of DNA, what factors keep machine learning from being used more frequently in biology, and what’s next for his group.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/449">twimlai.com/go/449</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2149</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[23aae1d7-8fde-468e-bf40-e35d4613f093]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5156982675.mp3?updated=1629244862"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Off-Line, Off-Policy RL for Real-World Decision Making at Facebook - #448</title>
      <link>https://twimlai.com/off-line-off-policy-rl-for-real-world-decision-making-at-facebook</link>
      <description>Today we’re joined by Jason Gauci, a Software Engineering Manager at Facebook AI.
 In our conversation with Jason, we explore their Reinforcement Learning platform, Re-Agent (Horizon). We discuss the role of decision making and game theory in the platform and the types of decisions they’re using Re-Agent to make, from ranking and recommendations to their eCommerce marketplace.
 Jason also walks us through the differences between online/offline and on/off policy model training, and where Re-Agent sits in this spectrum. Finally, we discuss the concept of counterfactual causality, and how they ensure safety in the results of their models.
 The complete show notes for this episode can be found at twimlai.com/go/448.</description>
      <pubDate>Mon, 18 Jan 2021 23:16:51 -0000</pubDate>
      <itunes:title>Off-Line, Off-Policy RL for Real-World Decision Making at Facebook</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>448</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3ee3b0ae-ee98-11eb-9502-f3754da4e33d/image/TWIML_COVER_800x800_JG2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Jason Gauci, a Software Engineering Manager at Facebook AI. In our conversation with Jason, we explore their Reinforcement Learning platform, Re-Agent (Horizon). We discuss the role of decision making and game theory in the...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Jason Gauci, a Software Engineering Manager at Facebook AI.
 In our conversation with Jason, we explore their Reinforcement Learning platform, Re-Agent (Horizon). We discuss the role of decision making and game theory in the platform and the types of decisions they’re using Re-Agent to make, from ranking and recommendations to their eCommerce marketplace.
 Jason also walks us through the differences between online/offline and on/off policy model training, and where Re-Agent sits in this spectrum. Finally, we discuss the concept of counterfactual causality, and how they ensure safety in the results of their models.
 The complete show notes for this episode can be found at twimlai.com/go/448.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Jason Gauci, a Software Engineering Manager at Facebook AI.</p> <p>In our conversation with Jason, we explore their Reinforcement Learning platform, Re-Agent (Horizon). We discuss the role of decision making and game theory in the platform and the types of decisions they’re using Re-Agent to make, from ranking and recommendations to their eCommerce marketplace.</p> <p>Jason also walks us through the differences between online/offline and on/off policy model training, and where Re-Agent sits in this spectrum. Finally, we discuss the concept of counterfactual causality, and how they ensure safety in the results of their models.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/448">twimlai.com/go/448</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3699</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[8fc14027-af15-4da5-a340-82e446e6c45e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7904022278.mp3?updated=1629244949"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>A Future of Work for the Invisible Workers in A.I. with Saiph Savage - #447</title>
      <link>https://twimlai.com/a-future-of-work-for-the-invisible-workers-in-a-i-with-saiph-savage</link>
      <description>Today we’re joined by Saiph Savage, a Visiting professor at the Human-Computer Interaction Institute at CMU, director of the HCI Lab at WVU, and co-director of the Civic Innovation Lab at UNAM.
 We caught up with Saiph during NeurIPS, where she delivered an insightful invited talk, “A Future of Work for the Invisible Workers in A.I.” In our conversation with Saiph, we gain a better understanding of the “invisible workers,” the people doing the work of labeling for machine learning and AI systems, and some of the issues that arise with these jobs, including a lack of economic empowerment and emotional trauma.
 We discuss ways that we can empower these workers, and push the companies that are employing these workers to do the same. Finally, we discuss Saiph’s participatory design work with rural workers in the global south.
 The complete show notes for this episode can be found at twimlai.com/go/447.</description>
      <pubDate>Thu, 14 Jan 2021 22:24:33 -0000</pubDate>
      <itunes:title>A Future of Work for the Invisible Workers in A.I. with Saiph Savage</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>447</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3f07b1ac-ee98-11eb-9502-97bfe9074b32/image/TWIML_COVER_800x800_SS_3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Saiph Savage, a Visiting professor at the Human-Computer Interaction Institute at CMU, director of the HCI Lab at WVU, and co-director of the Civic Innovation Lab at UNAM. We caught up with Saiph during NeurIPS where she...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Saiph Savage, a Visiting professor at the Human-Computer Interaction Institute at CMU, director of the HCI Lab at WVU, and co-director of the Civic Innovation Lab at UNAM.
 We caught up with Saiph during NeurIPS, where she delivered an insightful invited talk, “A Future of Work for the Invisible Workers in A.I.” In our conversation with Saiph, we gain a better understanding of the “invisible workers,” the people doing the work of labeling for machine learning and AI systems, and some of the issues that arise with these jobs, including a lack of economic empowerment and emotional trauma.
 We discuss ways that we can empower these workers, and push the companies that are employing these workers to do the same. Finally, we discuss Saiph’s participatory design work with rural workers in the global south.
 The complete show notes for this episode can be found at twimlai.com/go/447.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Saiph Savage, a Visiting professor at the Human-Computer Interaction Institute at CMU, director of the HCI Lab at WVU, and co-director of the Civic Innovation Lab at UNAM.</p> <p>We caught up with Saiph during NeurIPS, where she delivered an insightful invited talk, “A Future of Work for the Invisible Workers in A.I.” In our conversation with Saiph, we gain a better understanding of the “invisible workers,” the people doing the work of labeling for machine learning and AI systems, and some of the issues that arise with these jobs, including a lack of economic empowerment and emotional trauma.</p> <p>We discuss ways that we can empower these workers, and push the companies that are employing these workers to do the same. Finally, we discuss Saiph’s participatory design work with rural workers in the global south.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/447">twimlai.com/go/447</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2299</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[94dac081-f97c-4b39-ade6-806ce7e09481]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4128204212.mp3?updated=1629244839"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Graph Machine Learning with Michael Bronstein - #446</title>
      <link>https://twimlai.com/trends-in-graph-machine-learning-with-michael-bronstein</link>
      <description>Today we’re back with the final episode of AI Rewind joined by Michael Bronstein, a professor at Imperial College London and the Head of Graph Machine Learning at Twitter.
 In our conversation with Michael, we touch on his thoughts about the year in Machine Learning overall, including GPT-3 and Implicit Neural Representations, but spend a major chunk of time on the sub-field of Graph Machine Learning. 
 We talk through the application of Graph ML across domains like physics and bioinformatics, and the tools to look out for. Finally, we discuss what Michael thinks is in store for 2021, including Graph ML applied to molecule discovery and non-human communication translation.</description>
      <pubDate>Mon, 11 Jan 2021 22:35:35 -0000</pubDate>
      <itunes:title>Trends in Graph Machine Learning with Michael Bronstein</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>446</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3f2ad952-ee98-11eb-9502-d36af7312aaf/image/TWIML_COVER_800x800_MB_Rewind.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re back with the final episode of AI Rewind joined by Michael Bronstein, a professor at Imperial College London and the Head of Graph Machine Learning at Twitter. In our conversation with Michael, we touch on his thoughts about the year in...</itunes:subtitle>
      <itunes:summary>Today we’re back with the final episode of AI Rewind joined by Michael Bronstein, a professor at Imperial College London and the Head of Graph Machine Learning at Twitter.
 In our conversation with Michael, we touch on his thoughts about the year in Machine Learning overall, including GPT-3 and Implicit Neural Representations, but spend a major chunk of time on the sub-field of Graph Machine Learning. 
 We talk through the application of Graph ML across domains like physics and bioinformatics, and the tools to look out for. Finally, we discuss what Michael thinks is in store for 2021, including Graph ML applied to molecule discovery and non-human communication translation.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re back with the final episode of AI Rewind joined by Michael Bronstein, a professor at Imperial College London and the Head of Graph Machine Learning at Twitter.</p> <p>In our conversation with Michael, we touch on his thoughts about the year in Machine Learning overall, including GPT-3 and Implicit Neural Representations, but spend a major chunk of time on the sub-field of Graph Machine Learning. </p> <p>We talk through the application of Graph ML across domains like physics and bioinformatics, and the tools to look out for. Finally, we discuss what Michael thinks is in store for 2021, including Graph ML applied to molecule discovery and non-human communication translation.</p>]]>
      </content:encoded>
      <itunes:duration>4467</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b2015aff-b225-4c8f-acd0-cac497538e62]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7767263592.mp3?updated=1629244974"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Natural Language Processing with Sameer Singh - #445</title>
      <link>https://twimlai.com/trends-in-natural-language-processing-with-sameer-singh</link>
      <description>Today we continue the 2020 AI Rewind series, joined by friend of the show Sameer Singh, an Assistant Professor in the Department of Computer Science at UC Irvine. 
 We last spoke with Sameer at our Natural Language Processing office hours back at TWIMLfest, and he was the perfect person to help us break down 2020 in NLP. Sameer tackles the review in four main categories: Massive Language Modeling, Fundamental Problems with Language Models, Practical Vulnerabilities with Language Models, and Evaluation. 
 We also explore the impact of GPT-3 and Transformer models, the intersection of vision and language models, the injection of causal thinking and modeling into language models, and much more.
 The complete show notes for this episode can be found at twimlai.com/go/445.</description>
      <pubDate>Thu, 07 Jan 2021 22:10:05 -0000</pubDate>
      <itunes:title>Trends in Natural Language Processing with Sameer Singh</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>445</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3f4fd3ba-ee98-11eb-9502-97ad7bae862c/image/TWIML_COVER_800x800_SS_Rewind.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue the 2020 AI Rewind series, joined by friend of the show Sameer Singh, an Assistant Professor in the Department of Computer Science at UC Irvine.  We last spoke with Sameer at our Natural Language Processing office hours back at...</itunes:subtitle>
      <itunes:summary>Today we continue the 2020 AI Rewind series, joined by friend of the show Sameer Singh, an Assistant Professor in the Department of Computer Science at UC Irvine. 
 We last spoke with Sameer at our Natural Language Processing office hours back at TWIMLfest, and he was the perfect person to help us break down 2020 in NLP. Sameer tackles the review in four main categories: Massive Language Modeling, Fundamental Problems with Language Models, Practical Vulnerabilities with Language Models, and Evaluation. 
 We also explore the impact of GPT-3 and Transformer models, the intersection of vision and language models, the injection of causal thinking and modeling into language models, and much more.
 The complete show notes for this episode can be found at twimlai.com/go/445.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue the 2020 AI Rewind series, joined by friend of the show Sameer Singh, an Assistant Professor in the Department of Computer Science at UC Irvine. </p> <p>We last spoke with Sameer at our Natural Language Processing office hours back at TWIMLfest, and he was the perfect person to help us break down 2020 in NLP. Sameer tackles the review in four main categories: Massive Language Modeling, Fundamental Problems with Language Models, Practical Vulnerabilities with Language Models, and Evaluation. </p> <p>We also explore the impact of GPT-3 and Transformer models, the intersection of vision and language models, the injection of causal thinking and modeling into language models, and much more.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/445">twimlai.com/go/445</a>.</p>]]>
      </content:encoded>
      <itunes:duration>4915</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[afac2518-dfad-4a1d-8951-3aad83dd20c1]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6599587193.mp3?updated=1629245045"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Computer Vision with Pavan Turaga - #444</title>
      <link>https://twimlai.com/trends-in-computer-vision-with-pavan-turaga</link>
      <description>AI Rewind continues today as we’re joined by Pavan Turaga, Associate Professor in both the Departments of Arts, Media, and Engineering &amp; Electrical Engineering, and the Interim Director of the School of Arts, Media, and Engineering at Arizona State University.
 Pavan, who joined us back in June to talk through his work from CVPR ‘20, Invariance, Geometry and Deep Neural Networks, is back to walk us through the trends he’s seen in Computer Vision over the last year. We explore the revival of physics-based thinking about scenes, differentiable rendering, the best papers, and where the field is going in the near future.
 We want to hear from you! Send your thoughts on the year that was 2020 below in the comments, or via Twitter at @samcharrington or @twimlai.
 The complete show notes for this episode can be found at twimlai.com/go/444.</description>
      <pubDate>Mon, 04 Jan 2021 22:33:23 -0000</pubDate>
      <itunes:title>Trends in Computer Vision with Pavan Turaga</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>444</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3f727eec-ee98-11eb-9502-6337dc57cae8/image/TWIML_COVER_800x800_PT_Rewind.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>AI Rewind continues today as we’re joined by Pavan Turaga, Associate Professor in both the Departments of Arts, Media, and Engineering &amp; Electrical Engineering, and the Interim Director of the School of Arts, Media, and Engineering at Arizona...</itunes:subtitle>
      <itunes:summary>AI Rewind continues today as we’re joined by Pavan Turaga, Associate Professor in both the Departments of Arts, Media, and Engineering &amp; Electrical Engineering, and the Interim Director of the School of Arts, Media, and Engineering at Arizona State University.
 Pavan, who joined us back in June to talk through his work from CVPR ‘20, Invariance, Geometry and Deep Neural Networks, is back to walk us through the trends he’s seen in Computer Vision over the last year. We explore the revival of physics-based thinking about scenes, differentiable rendering, the best papers, and where the field is going in the near future.
 We want to hear from you! Send your thoughts on the year that was 2020 below in the comments, or via Twitter at @samcharrington or @twimlai.
 The complete show notes for this episode can be found at twimlai.com/go/444.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>AI Rewind continues today as we’re joined by Pavan Turaga, Associate Professor in both the Departments of Arts, Media, and Engineering &amp; Electrical Engineering, and the Interim Director of the School of Arts, Media, and Engineering at Arizona State University.</p> <p>Pavan, who joined us back in June to talk through his work from CVPR ‘20, Invariance, Geometry and Deep Neural Networks, is back to walk us through the trends he’s seen in Computer Vision over the last year. We explore the revival of physics-based thinking about scenes, differentiable rendering, the best papers, and where the field is going in the near future.</p> <p>We want to hear from you! Send your thoughts on the year that was 2020 below in the comments, or via Twitter at <a href="https://twitter.com/samcharrington">@samcharrington</a> or <a href="https://twitter.com/twimlai">@twimlai</a>.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/444">twimlai.com/go/444</a>.</p>]]>
      </content:encoded>
      <itunes:duration>4159</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b5626b2a-29cc-409b-a9ed-fd4c72676357]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6105670531.mp3?updated=1629244940"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Reinforcement Learning with Pablo Samuel Castro - #443</title>
      <link>https://twimlai.com/trends-in-reinforcement-learning-with-pablo-samuel-castro</link>
      <description>Today we kick off our annual AI Rewind series joined by friend of the show Pablo Samuel Castro, a Staff Research Software Developer at Google Brain.
 Pablo joined us earlier this year for a discussion about Music &amp; AI, and his Geometric Perspective on Reinforcement Learning, as well as our RL office hours during the inaugural TWIMLfest. In today’s conversation, we explore some of the latest and greatest RL advancements coming out of the major conferences this year, broken down into a few major themes: Metrics/Representations, Understanding and Evaluating Deep Reinforcement Learning, and RL in the Real World.
 This was a very fun conversation, and we encourage you to check out all the great papers and other resources available on the show notes page.</description>
      <pubDate>Wed, 30 Dec 2020 18:51:31 -0000</pubDate>
      <itunes:title>Trends in Reinforcement Learning with Pablo Samuel Castro</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>443</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3f96dc24-ee98-11eb-9502-cfa7a365d9dd/image/TWIML_COVER_800x800_PC_Rewind.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we kick off our annual AI Rewind series joined by friend of the show Pablo Samuel Castro, a Staff Research Software Developer at Google Brain. Pablo joined us earlier this year for a discussion about Music &amp; AI, and his Geometric Perspective...</itunes:subtitle>
      <itunes:summary>Today we kick off our annual AI Rewind series joined by friend of the show Pablo Samuel Castro, a Staff Research Software Developer at Google Brain.
 Pablo joined us earlier this year for a discussion about Music &amp; AI, and his Geometric Perspective on Reinforcement Learning, as well as our RL office hours during the inaugural TWIMLfest. In today’s conversation, we explore some of the latest and greatest RL advancements coming out of the major conferences this year, broken down into a few major themes: Metrics/Representations, Understanding and Evaluating Deep Reinforcement Learning, and RL in the Real World.
 This was a very fun conversation, and we encourage you to check out all the great papers and other resources available on the show notes page.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we kick off our annual AI Rewind series joined by friend of the show Pablo Samuel Castro, a Staff Research Software Developer at Google Brain.</p> <p>Pablo joined us earlier this year for a discussion about Music &amp; AI, and his Geometric Perspective on Reinforcement Learning, as well as our RL office hours during the inaugural TWIMLfest. In today’s conversation, we explore some of the latest and greatest RL advancements coming out of the major conferences this year, broken down into a few major themes: Metrics/Representations, Understanding and Evaluating Deep Reinforcement Learning, and RL in the Real World.</p> <p>This was a very fun conversation, and we encourage you to check out all the great papers and other resources available on the <a href="https://twimlai.com/go/443">show notes page</a>.</p>]]>
      </content:encoded>
      <itunes:duration>5214</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a0d76110-61bf-4659-b1d3-eb70c95a5fe7]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1670542751.mp3?updated=1629245002"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>MOReL: Model-Based Offline Reinforcement Learning with Aravind Rajeswaran - #442</title>
      <link>https://twimlai.com/morel-model-based-offline-reinforcement-learning-with-aravind-rajeswaran</link>
      <description>Today we close out our NeurIPS series joined by Aravind Rajeswaran, a PhD Student in machine learning and robotics at the University of Washington.
 At NeurIPS, Aravind presented his paper MOReL: Model-Based Offline Reinforcement Learning. In our conversation, we explore model-based reinforcement learning, and whether models are a “prerequisite” to achieve something analogous to transfer learning. We also dig into MOReL and the recent progress in offline reinforcement learning, the differences between developing MOReL models and traditional RL models, and the theoretical results they’re seeing from this research.
 The complete show notes for this episode can be found at twimlai.com/go/442.</description>
      <pubDate>Mon, 28 Dec 2020 21:19:48 -0000</pubDate>
      <itunes:title>MOReL: Model-Based Offline Reinforcement Learning with Aravind Rajeswaran</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>442</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3fbfff46-ee98-11eb-9502-2fd081280a8c/image/TWIML_COVER_800x800_AR.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we close out our NeurIPS series joined by Aravind Rajeswaran, a PhD Student in machine learning and robotics at the University of Washington. At NeurIPS, Aravind presented his paper MOReL: Model-Based Offline Reinforcement Learning. In our...</itunes:subtitle>
      <itunes:summary>Today we close out our NeurIPS series joined by Aravind Rajeswaran, a PhD Student in machine learning and robotics at the University of Washington.
 At NeurIPS, Aravind presented his paper MOReL: Model-Based Offline Reinforcement Learning. In our conversation, we explore model-based reinforcement learning, and whether models are a “prerequisite” to achieve something analogous to transfer learning. We also dig into MOReL and the recent progress in offline reinforcement learning, the differences between developing MOReL models and traditional RL models, and the theoretical results they’re seeing from this research.
 The complete show notes for this episode can be found at twimlai.com/go/442.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we close out our NeurIPS series joined by Aravind Rajeswaran, a PhD Student in machine learning and robotics at the University of Washington.</p> <p>At NeurIPS, Aravind presented his paper MOReL: Model-Based Offline Reinforcement Learning. In our conversation, we explore model-based reinforcement learning, and whether models are a “prerequisite” to achieve something analogous to transfer learning. We also dig into MOReL and the recent progress in offline reinforcement learning, the differences between developing MOReL models and traditional RL models, and the theoretical results they’re seeing from this research.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/442">twimlai.com/go/442</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2281</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[62afda48-a8ea-4ec7-8ca4-77c7edb52dc1]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7928636684.mp3?updated=1629244778"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Machine Learning as a Software Engineering Enterprise with Charles Isbell - #441</title>
      <link>https://twimlai.com/machine-learning-as-a-software-engineering-enterprise-with-charles-isbell</link>
      <description>As we continue our NeurIPS 2020 series, we’re joined by friend-of-the-show Charles Isbell, Dean, John P. Imlay, Jr. Chair, and professor at the Georgia Tech College of Computing.
 Charles gave an Invited Talk at this year’s conference, You Can’t Escape Hyperparameters and Latent Variables: Machine Learning as a Software Engineering Enterprise. In our conversation, we explore the success of the Georgia Tech Online Masters program in CS, which now has over 11k students enrolled, and the importance of making that education accessible to as many people as possible. We spend quite a bit of time speaking about the impact machine learning is beginning to have on the world, and how we should move beyond thinking of ourselves as compiler hackers and begin to see the possibilities and opportunities that have been ignored.
 We also touch on the fallout from Timnit Gebru being “resignated” and the importance of having diverse voices and different perspectives “in the room,” and what the future holds for machine learning as a discipline.
 The complete show notes for this episode can be found at twimlai.com/go/441. </description>
      <pubDate>Wed, 23 Dec 2020 22:03:50 -0000</pubDate>
      <itunes:title>Machine Learning as a Software Engineering Enterprise with Charles Isbell</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>441</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/3fe621c6-ee98-11eb-9502-63df2cc40b74/image/TWIML_COVER_800x800_CI.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>As we continue our NeurIPS 2020 series, we’re joined by friend-of-the-show Charles Isbell, Dean, John P. Imlay, Jr. Chair, and professor at the Georgia Tech College of Computing. Charles gave an Invited Talk at this year’s conference,...</itunes:subtitle>
      <itunes:summary>As we continue our NeurIPS 2020 series, we’re joined by friend-of-the-show Charles Isbell, Dean, John P. Imlay, Jr. Chair, and professor at the Georgia Tech College of Computing.
 Charles gave an Invited Talk at this year’s conference, You Can’t Escape Hyperparameters and Latent Variables: Machine Learning as a Software Engineering Enterprise. In our conversation, we explore the success of the Georgia Tech Online Masters program in CS, which now has over 11k students enrolled, and the importance of making that education accessible to as many people as possible. We spend quite a bit of time speaking about the impact machine learning is beginning to have on the world, and how we should move beyond thinking of ourselves as compiler hackers and begin to see the possibilities and opportunities that have been ignored.
 We also touch on the fallout from Timnit Gebru being “resignated” and the importance of having diverse voices and different perspectives “in the room,” and what the future holds for machine learning as a discipline.
 The complete show notes for this episode can be found at twimlai.com/go/441. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>As we continue our NeurIPS 2020 series, we’re joined by friend-of-the-show Charles Isbell, Dean, John P. Imlay, Jr. Chair, and professor at the Georgia Tech College of Computing.</p> <p>Charles gave an Invited Talk at this year’s conference, You Can’t Escape Hyperparameters and Latent Variables: Machine Learning as a Software Engineering Enterprise. In our conversation, we explore the success of the Georgia Tech Online Masters program in CS, which now has over 11k students enrolled, and the importance of making that education accessible to as many people as possible. We spend quite a bit of time speaking about the impact machine learning is beginning to have on the world, and how we should move beyond thinking of ourselves as compiler hackers and begin to see the possibilities and opportunities that have been ignored.</p> <p>We also touch on the fallout from Timnit Gebru being “resignated” and the importance of having diverse voices and different perspectives “in the room,” and what the future holds for machine learning as a discipline.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/441">twimlai.com/go/441</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2782</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[68277daa-cee2-4506-9a98-ac5915c20a03]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9453371471.mp3?updated=1629244800"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Natural Graph Networks with Taco Cohen - #440</title>
      <link>https://twimlai.com/natural-graph-networks-with-taco-cohen</link>
      <description>Today we kick off our NeurIPS 2020 series joined by Taco Cohen, a Machine Learning Researcher at Qualcomm Technologies.
 In our conversation with Taco, we discuss his current research in equivariant networks and video compression using generative models, as well as his paper “Natural Graph Networks,” which explores the concept of “naturality,” a generalization of equivariance, which suggests that weaker constraints will allow for a “wider class of architectures.”
 We also discuss some of Taco’s recent research on neural compression and a very interesting visual demo for equivariant CNNs that Taco and the Qualcomm team released during the conference.
 The complete show notes for this episode can be found at twimlai.com/go/440.</description>
      <pubDate>Mon, 21 Dec 2020 20:02:24 -0000</pubDate>
      <itunes:title>Natural Graph Networks with Taco Cohen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>440</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/40063786-ee98-11eb-9502-b3e39cd5ada3/image/TWIML_COVER_800x800_TC.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we kick off our NeurIPS 2020 series joined by Taco Cohen, a Machine Learning Researcher at Qualcomm Technologies. In our conversation with Taco, we discuss his current research in equivariant networks and video compression using generative...</itunes:subtitle>
      <itunes:summary>Today we kick off our NeurIPS 2020 series joined by Taco Cohen, a Machine Learning Researcher at Qualcomm Technologies.
 In our conversation with Taco, we discuss his current research in equivariant networks and video compression using generative models, as well as his paper “Natural Graph Networks,” which explores the concept of “naturality,” a generalization of equivariance, which suggests that weaker constraints will allow for a “wider class of architectures.”
 We also discuss some of Taco’s recent research on neural compression and a very interesting visual demo for equivariant CNNs that Taco and the Qualcomm team released during the conference.
 The complete show notes for this episode can be found at twimlai.com/go/440.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we kick off our NeurIPS 2020 series joined by Taco Cohen, a Machine Learning Researcher at Qualcomm Technologies.</p> <p>In our conversation with Taco, we discuss his current research in equivariant networks and video compression using generative models, as well as his paper “Natural Graph Networks,” which explores the concept of “naturality,” a generalization of equivariance, which suggests that weaker constraints will allow for a “wider class of architectures.”</p> <p>We also discuss some of Taco’s recent research on neural compression and a very interesting visual demo for equivariant CNNs that Taco and the Qualcomm team released during the conference.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/440">twimlai.com/go/440</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3503</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[7d1a2a4f-7e30-4930-bb92-35e1b5b254d5]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9835995828.mp3?updated=1629244889"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Productionizing Time-Series Workloads at Siemens Energy with Edgar Bahilo Rodriguez - #439</title>
      <link>https://twimlai.com/productionizing-time-series-workloads-at-siemens-energy-with-edgar-bahilo-rodriguez</link>
      <description>Today we close out our re:Invent series joined by Edgar Bahilo Rodriguez, Lead Data Scientist in the industrial applications division of Siemens Energy.
 Edgar spoke at this year's re:Invent conference about Productionizing R Workloads, and the resurrection of R for machine learning and productionization. In our conversation with Edgar, we explore the fundamentals of building a strong machine learning infrastructure, and how they’re breaking down applications and using mixed technologies to build models.
 We also discuss their industrial applications, including wind, power production management, managing systems intent on decreasing the environmental impact of pre-existing installations, and their extensive use of time-series forecasting across these use cases.
 The complete show notes can be found at twimlai.com/go/439.</description>
      <pubDate>Fri, 18 Dec 2020 20:13:52 -0000</pubDate>
      <itunes:title>Productionizing Time-Series Workloads at Siemens Energy with Edgar Bahilo Rodriguez</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>439</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/402e3f1a-ee98-11eb-9502-b3cb101f6900/image/TWIML_COVER_800x800_EBR.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we close out our re:Invent series joined by Edgar Bahilo Rodriguez, Lead Data Scientist in the industrial applications division of Siemens Energy. Edgar spoke at this year's re:Invent conference about Productionizing R Workloads, and the...</itunes:subtitle>
      <itunes:summary>Today we close out our re:Invent series joined by Edgar Bahilo Rodriguez, Lead Data Scientist in the industrial applications division of Siemens Energy.
 Edgar spoke at this year's re:Invent conference about Productionizing R Workloads, and the resurrection of R for machine learning and productionization. In our conversation with Edgar, we explore the fundamentals of building a strong machine learning infrastructure, and how they’re breaking down applications and using mixed technologies to build models.
 We also discuss their industrial applications, including wind, power production management, managing systems intent on decreasing the environmental impact of pre-existing installations, and their extensive use of time-series forecasting across these use cases.
 The complete show notes can be found at twimlai.com/go/439.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we close out our re:Invent series joined by Edgar Bahilo Rodriguez, Lead Data Scientist in the industrial applications division of Siemens Energy.</p> <p>Edgar spoke at this year's re:Invent conference about Productionizing R Workloads, and the resurrection of R for machine learning and productionization. In our conversation with Edgar, we explore the fundamentals of building a strong machine learning infrastructure, and how they’re breaking down applications and using mixed technologies to build models.</p> <p>We also discuss their industrial applications, including wind, power production management, managing systems intent on decreasing the environmental impact of pre-existing installations, and their extensive use of time-series forecasting across these use cases.</p> <p>The complete show notes can be found at <a href="https://twimlai.com/go/439">twimlai.com/go/439</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2486</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c9fcad13-90bc-4ea3-8afe-799ca6a2ab2d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9814617094.mp3?updated=1629217030"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>ML Feature Store at Intuit with Srivathsan Canchi - #438</title>
      <link>https://twimlai.com/ml-feature-store-at-intuit-with-srivathsan-canchi</link>
      <description>Today we continue our re:Invent series with Srivathsan Canchi, Head of Engineering for the Machine Learning Platform team at Intuit. 
 As we teased earlier this week, one of the major announcements coming from AWS at re:Invent was the release of the SageMaker Feature Store. To our pleasant surprise, we came to learn that our friends at Intuit are the original architects of this offering and partnered with AWS to productize it at a much broader scale. In our conversation with Srivathsan, we explore the focus areas supported by the Intuit machine learning platform across various teams, including QuickBooks, Mint, TurboTax, and Credit Karma, and his thoughts on why companies should be investing in feature stores.
 We also discuss why the concept of “feature store” has seemingly exploded in the last year, and how you know when your organization is ready to deploy one. Finally, we dig into the specifics of the feature store, including the popularity of GraphQL and why they chose to include it in their pipelines, the similarities (and differences) between the two versions of the store, and much more!
 The complete show notes for this episode can be found at twimlai.com/go/438.</description>
      <pubDate>Wed, 16 Dec 2020 20:14:07 -0000</pubDate>
      <itunes:title>ML Feature Store at Intuit with Srivathsan Canchi</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>438</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/40529fb8-ee98-11eb-9502-fb3c1fb6aed0/image/TWIML_COVER_800x800_SC.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue our re:Invent series with Srivathsan Canchi, Head of Engineering for the Machine Learning Platform team at Intuit.  As we teased earlier this week, one of the major announcements coming from AWS at re:Invent was the release of...</itunes:subtitle>
      <itunes:summary>Today we continue our re:Invent series with Srivathsan Canchi, Head of Engineering for the Machine Learning Platform team at Intuit. 
 As we teased earlier this week, one of the major announcements coming from AWS at re:Invent was the release of the SageMaker Feature Store. To our pleasant surprise, we came to learn that our friends at Intuit are the original architects of this offering and partnered with AWS to productize it at a much broader scale. In our conversation with Srivathsan, we explore the focus areas supported by the Intuit machine learning platform across various teams, including QuickBooks, Mint, TurboTax, and Credit Karma, and his thoughts on why companies should be investing in feature stores.
 We also discuss why the concept of “feature store” has seemingly exploded in the last year, and how you know when your organization is ready to deploy one. Finally, we dig into the specifics of the feature store, including the popularity of GraphQL and why they chose to include it in their pipelines, the similarities (and differences) between the two versions of the store, and much more!
 The complete show notes for this episode can be found at twimlai.com/go/438.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we continue our re:Invent series with Srivathsan Canchi, Head of Engineering for the Machine Learning Platform team at Intuit.</p> <p>As we teased earlier this week, one of the major announcements coming from AWS at re:Invent was the release of the SageMaker Feature Store. To our pleasant surprise, we came to learn that our friends at Intuit are the original architects of this offering and partnered with AWS to productize it at a much broader scale. In our conversation with Srivathsan, we explore the focus areas supported by the Intuit machine learning platform across various teams, including QuickBooks, Mint, TurboTax, and Credit Karma, and his thoughts on why companies should be investing in feature stores.</p> <p>We also discuss why the concept of “feature store” has seemingly exploded in the last year, and how you know when your organization is ready to deploy one. Finally, we dig into the specifics of the feature store, including the popularity of GraphQL and why they chose to include it in their pipelines, the similarities (and differences) between the two versions of the store, and much more!</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/438">twimlai.com/go/438</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2463</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9f1c0d09-d9a2-4a18-a4f6-1c813072dadb]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9698580121.mp3?updated=1629217023"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>re:Invent Roundup 2020 with Swami Sivasubramanian - #437</title>
      <link>https://twimlai.com/reinvent-roundup-2020-with-swami-sivasubramanian</link>
      <description>Today we’re kicking off our annual re:Invent series joined by Swami Sivasubramanian, VP of Artificial Intelligence at AWS.
 During re:Invent last week, Amazon made a ton of announcements on the machine learning front, including quite a few advancements to SageMaker. In this roundup conversation, we discuss the motivation for hosting the first-ever machine learning keynote at the conference, a bunch of details surrounding tools like Pipelines for workflow management, Clarify for bias detection, and JumpStart for easy-to-use algorithms and notebooks, and many more.
 We also discuss the emphasis placed on DevOps and MLOps tools in these announcements, and how the tools are all interconnected. Finally, we briefly touch on the announcement of the AWS feature store, but be sure to check back later this week for a more in-depth discussion on that particular release!
 The complete show notes for this episode can be found at twimlai.com/go/437.</description>
      <pubDate>Mon, 14 Dec 2020 20:41:46 -0000</pubDate>
      <itunes:title>re:Invent Roundup 2020 with Swami Sivasubramanian</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>437</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/40851498-ee98-11eb-9502-a3a794d99ae1/image/TWIML_COVER_800x800_SS_2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re kicking off our annual re:Invent series joined by Swami Sivasubramanian, VP of Artificial Intelligence at AWS. During re:Invent last week, Amazon made a ton of announcements on the machine learning front, including quite a few...</itunes:subtitle>
      <itunes:summary>Today we’re kicking off our annual re:Invent series joined by Swami Sivasubramanian, VP of Artificial Intelligence at AWS.
 During re:Invent last week, Amazon made a ton of announcements on the machine learning front, including quite a few advancements to SageMaker. In this roundup conversation, we discuss the motivation for hosting the first-ever machine learning keynote at the conference, a bunch of details surrounding tools like Pipelines for workflow management, Clarify for bias detection, and JumpStart for easy-to-use algorithms and notebooks, and many more.
 We also discuss the emphasis placed on DevOps and MLOps tools in these announcements, and how the tools are all interconnected. Finally, we briefly touch on the announcement of the AWS feature store, but be sure to check back later this week for a more in-depth discussion on that particular release!
 The complete show notes for this episode can be found at twimlai.com/go/437.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re kicking off our annual re:Invent series joined by Swami Sivasubramanian, VP of Artificial Intelligence at AWS.</p> <p>During re:Invent last week, Amazon made a ton of announcements on the machine learning front, including quite a few advancements to SageMaker. In this roundup conversation, we discuss the motivation for hosting the first-ever machine learning keynote at the conference, a bunch of details surrounding tools like Pipelines for workflow management, Clarify for bias detection, and JumpStart for easy-to-use algorithms and notebooks, and many more.</p> <p>We also discuss the emphasis placed on DevOps and MLOps tools in these announcements, and how the tools are all interconnected. Finally, we briefly touch on the announcement of the AWS feature store, but be sure to check back later this week for a more in-depth discussion on that particular release!</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/437">twimlai.com/go/437</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2924</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9f455e9e-ef7c-4f6c-8460-24d88b265df5]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8720896309.mp3?updated=1629217051"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Predictive Disease Risk Modeling at 23andMe with Subarna Sinha - #436</title>
      <link>https://twimlai.com/predictive-disease-risk-modeling-at-23andme-with-subarna-sinha</link>
      <description>Today we’re joined by Subarna Sinha, Machine Learning Engineering Leader at 23andMe.
 23andMe handles a massive amount of genomic data every year from its core ancestry business but also uses that data for disease prediction, which is the core use case we discuss in our conversation.
 Subarna talks us through an initial use case, the evaluation of polygenic scores, and how that led them to build an ML pipeline and platform. We talk through the tools and tech stack used to operationalize their platform, the use of synthetic data, the internal pushback that accompanied these changes, and what’s next for her team and the platform.
 The complete show notes for this episode can be found at twimlai.com/go/436.</description>
      <pubDate>Fri, 11 Dec 2020 21:35:16 -0000</pubDate>
      <itunes:title>Predictive Disease Risk Modeling at 23andMe with Subarna Sinha</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>436</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/40a18a24-ee98-11eb-9502-fba29371e819/image/TWIML_COVER_800x800_SS_1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Subarna Sinha, Machine Learning Engineering Leader at 23andMe. 23andMe handles a massive amount of genomic data every year from its core ancestry business but also uses that data for disease prediction, which is the core use...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Subarna Sinha, Machine Learning Engineering Leader at 23andMe.
 23andMe handles a massive amount of genomic data every year from its core ancestry business but also uses that data for disease prediction, which is the core use case we discuss in our conversation.
 Subarna talks us through an initial use case, the evaluation of polygenic scores, and how that led them to build an ML pipeline and platform. We talk through the tools and tech stack used to operationalize their platform, the use of synthetic data, the internal pushback that accompanied these changes, and what’s next for her team and the platform.
 The complete show notes for this episode can be found at twimlai.com/go/436.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Subarna Sinha, Machine Learning Engineering Leader at 23andMe.</p> <p>23andMe handles a massive amount of genomic data every year from its core ancestry business but also uses that data for disease prediction, which is the core use case we discuss in our conversation.</p> <p>Subarna talks us through an initial use case, the evaluation of polygenic scores, and how that led them to build an ML pipeline and platform. We talk through the tools and tech stack used to operationalize their platform, the use of synthetic data, the internal pushback that accompanied these changes, and what’s next for her team and the platform.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/436">twimlai.com/go/436</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2384</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[60d2004e-4905-495d-be7d-ab23829dfa71]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1580949074.mp3?updated=1629216998"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scaling Video AI at RTL with Daan Odijk - #435</title>
      <link>https://twimlai.com/scaling-video-ai-at-rtl-with-daan-odijk</link>
      <description>Today we’re joined by Daan Odijk, Data Science Manager at RTL.
 In our conversation with Daan, we explore the RTL MLOps journey, and their need to put platform infrastructure in place for ad optimization and forecasting, personalization, and content understanding use cases. Daan walks us through some of the challenges on both the modeling and engineering sides of building the platform, as well as the inherent challenges of video applications.
 Finally, we discuss the current state of their platform, the benefits they’ve seen from having this infrastructure in place, and why building a custom platform was worth the investment.
 The complete show notes for this episode can be found at twimlai.com/go/435. </description>
      <pubDate>Wed, 09 Dec 2020 19:25:58 -0000</pubDate>
      <itunes:title>Scaling Video AI at RTL with Daan Odijk</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>435</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/40c5ccea-ee98-11eb-9502-672b8a6b9375/image/TWIML_COVER_800x800_DO.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Daan Odijk, Data Science Manager at RTL. In our conversation with Daan, we explore the RTL MLOps journey, and their need to put platform infrastructure in place for ad optimization and forecasting, personalization, and content...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Daan Odijk, Data Science Manager at RTL.
 In our conversation with Daan, we explore the RTL MLOps journey, and their need to put platform infrastructure in place for ad optimization and forecasting, personalization, and content understanding use cases. Daan walks us through some of the challenges on both the modeling and engineering sides of building the platform, as well as the inherent challenges of video applications.
 Finally, we discuss the current state of their platform, the benefits they’ve seen from having this infrastructure in place, and why building a custom platform was worth the investment.
 The complete show notes for this episode can be found at twimlai.com/go/435. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Daan Odijk, Data Science Manager at RTL.</p> <p>In our conversation with Daan, we explore the RTL MLOps journey, and their need to put platform infrastructure in place for ad optimization and forecasting, personalization, and content understanding use cases. Daan walks us through some of the challenges on both the modeling and engineering sides of building the platform, as well as the inherent challenges of video applications.</p> <p>Finally, we discuss the current state of their platform, the benefits they’ve seen from having this infrastructure in place, and why building a custom platform was worth the investment.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/435">twimlai.com/go/435</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2428</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2bba9a35-c9c0-447e-82bd-b11f6cda62fe]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7256281127.mp3?updated=1629217009"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Benchmarking ML with MLCommons w/ Peter Mattson - #434</title>
      <link>https://twimlai.com/benchmarking-ml-with-mlperf-w-peter-mattson</link>
      <description>Today we’re joined by Peter Mattson, General Chair at MLPerf, a Staff Engineer at Google, and President of MLCommons. 
 In our conversation with Peter, we discuss MLCommons and MLPerf, the former an open engineering group with the goal of accelerating machine learning innovation, and the latter a set of standardized machine learning benchmarks used to measure things like model training speed and inference throughput.
 We explore the target user for the MLPerf benchmarks, the need for benchmarks in the ethics, bias, and fairness space, and how they’re approaching this through the "People’s Speech" datasets. We also walk through the MLCommons best practices for getting a model into production, why it's so difficult, and how MLCube can make the process easier for researchers and developers.
 The complete show notes page for this episode can be found at twimlai.com/go/434.</description>
      <pubDate>Mon, 07 Dec 2020 20:40:58 -0000</pubDate>
      <itunes:title>Benchmarking ML with MLCommons w/ Peter Mattson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>434</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/40f27bc8-ee98-11eb-9502-b39ff4229768/image/TWIML_COVER_800x800_PM-2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Peter Mattson, General Chair at MLPerf, a Staff Engineer at Google, and President of MLCommons.  In our conversation with Peter, we discuss MLCommons and MLPerf, the former an open engineering group with the goal of...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Peter Mattson, General Chair at MLPerf, a Staff Engineer at Google, and President of MLCommons. 
 In our conversation with Peter, we discuss MLCommons and MLPerf, the former an open engineering group with the goal of accelerating machine learning innovation, and the latter a set of standardized machine learning benchmarks used to measure things like model training speed and inference throughput.
 We explore the target user for the MLPerf benchmarks, the need for benchmarks in the ethics, bias, and fairness space, and how they’re approaching this through the "People’s Speech" datasets. We also walk through the MLCommons best practices for getting a model into production, why it's so difficult, and how MLCube can make the process easier for researchers and developers.
 The complete show notes page for this episode can be found at twimlai.com/go/434.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Peter Mattson, General Chair at MLPerf, a Staff Engineer at Google, and President of MLCommons.</p> <p>In our conversation with Peter, we discuss MLCommons and MLPerf, the former an open engineering group with the goal of accelerating machine learning innovation, and the latter a set of standardized machine learning benchmarks used to measure things like model training speed and inference throughput.</p> <p>We explore the target user for the MLPerf benchmarks, the need for benchmarks in the ethics, bias, and fairness space, and how they’re approaching this through the "People’s Speech" datasets. We also walk through the MLCommons best practices for getting a model into production, why it's so difficult, and how MLCube can make the process easier for researchers and developers.</p> <p>The complete show notes page for this episode can be found at <a href="https://twimlai.com/go/434">twimlai.com/go/434</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2764</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0b471706-7f77-40c3-9631-565a949f1423]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2487103671.mp3?updated=1629217035"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Learning for NLP: From the Trenches with Charlene Chambliss - #433</title>
      <link>https://twimlai.com/deep-learning-for-nlp-from-the-trenches-with-charlene-chambliss</link>
      <description>Today we’re joined by Charlene Chambliss, Machine Learning Engineer at Primer AI. 
 Charlene, who we also had the pleasure of hosting at NLP Office Hours during TWIMLfest, is back to share some of the work she’s been doing with NLP. In our conversation, we explore her experiences working with newer NLP models and tools like BERT and Hugging Face, as well as what she’s learned along the way with word embeddings, labeling tasks, debugging, and more. We also focus on a few of her projects, like her popular multilingual BERT project and a COVID-19 classifier.
 Finally, Charlene shares her experience getting into data science and machine learning from a non-technical background, what the transition was like, and tips for people looking to make a similar shift.</description>
      <pubDate>Thu, 03 Dec 2020 20:43:43 -0000</pubDate>
      <itunes:title>Deep Learning for NLP: From the Trenches with Charlene Chambliss</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>433</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/41102de4-ee98-11eb-9502-5b009e1cb8c4/image/TWIML_COVER_800x800_CC.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Charlene Chambliss, Machine Learning Engineer at Primer AI.  Charlene, who we also had the pleasure of hosting at NLP Office Hours during TWIMLfest, is back to share some of the work she’s been doing with NLP. In our...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Charlene Chambliss, Machine Learning Engineer at Primer AI. 
 Charlene, who we also had the pleasure of hosting at NLP Office Hours during TWIMLfest, is back to share some of the work she’s been doing with NLP. In our conversation, we explore her experiences working with newer NLP models and tools like BERT and Hugging Face, as well as what she’s learned along the way with word embeddings, labeling tasks, debugging, and more. We also focus on a few of her projects, like her popular multilingual BERT project and a COVID-19 classifier.
 Finally, Charlene shares her experience getting into data science and machine learning from a non-technical background, what the transition was like, and tips for people looking to make a similar shift.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Charlene Chambliss, Machine Learning Engineer at Primer AI.</p> <p>Charlene, who we also had the pleasure of hosting at NLP Office Hours during TWIMLfest, is back to share some of the work she’s been doing with NLP. In our conversation, we explore her experiences working with newer NLP models and tools like BERT and Hugging Face, as well as what she’s learned along the way with word embeddings, labeling tasks, debugging, and more. We also focus on a few of her projects, like her popular multilingual BERT project and a COVID-19 classifier.</p> <p>Finally, Charlene shares her experience getting into data science and machine learning from a non-technical background, what the transition was like, and tips for people looking to make a similar shift.</p>]]>
      </content:encoded>
      <itunes:duration>2743</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cbb22af1-6487-46f9-9c64-6d690ffddf24]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4373868675.mp3?updated=1629244815"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Feature Stores for Accelerating AI Development - #432</title>
      <link>https://twimlai.com/feature-stores-for-accelerating-ai-development</link>
      <description>In this special episode of the podcast, we're joined by Kevin Stumpf, Co-Founder and CTO of Tecton, Willem Pienaar, an engineering lead at Gojek and founder of the Feast Project, and Maxime Beauchemin, Founder &amp; CEO of Preset, for a discussion on Feature Stores for Accelerating AI Development.
 In this panel discussion, Sam and our guests explore how organizations can increase value and decrease time-to-market for machine learning using feature stores, MLOps, and open source. We also discuss the main data challenges of AI/ML, and the role of the feature store in solving those challenges.
 The complete show notes for this episode can be found at twimlai.com/go/432.</description>
      <pubDate>Mon, 30 Nov 2020 22:40:21 -0000</pubDate>
      <itunes:title>Feature Stores for Accelerating AI Development</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>432</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/415154f4-ee98-11eb-9502-3799a9d5acf6/image/TWIML_COVER_800x800_SC-MB-WP-KS.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this special episode of the podcast, we're joined by Kevin Stumpf, Co-Founder and CTO of Tecton, Willem Pienaar, an engineering lead at Gojek and founder of the Feast Project, and Maxime Beauchemin, Founder &amp; CEO of Preset, for a discussion on...</itunes:subtitle>
      <itunes:summary>In this special episode of the podcast, we're joined by Kevin Stumpf, Co-Founder and CTO of Tecton, Willem Pienaar, an engineering lead at Gojek and founder of the Feast Project, and Maxime Beauchemin, Founder &amp; CEO of Preset, for a discussion on Feature Stores for Accelerating AI Development.
 In this panel discussion, Sam and our guests explore how organizations can increase value and decrease time-to-market for machine learning using feature stores, MLOps, and open source. We also discuss the main data challenges of AI/ML, and the role of the feature store in solving those challenges.
 The complete show notes for this episode can be found at twimlai.com/go/432.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this special episode of the podcast, we're joined by Kevin Stumpf, Co-Founder and CTO of Tecton, Willem Pienaar, an engineering lead at Gojek and founder of the Feast Project, and Maxime Beauchemin, Founder &amp; CEO of Preset, for a discussion on Feature Stores for Accelerating AI Development.</p> <p>In this panel discussion, Sam and our guests explore how organizations can increase value and decrease time-to-market for machine learning using feature stores, MLOps, and open source. We also discuss the main data challenges of AI/ML, and the role of the feature store in solving those challenges.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/432">twimlai.com/go/432</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3376</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[015b8020-47ef-4059-9165-00d3f7f5923a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5354607679.mp3?updated=1629217072"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>An Exploration of Coded Bias with Shalini Kantayya, Deb Raji and Meredith Broussard - #431</title>
      <link>https://twimlai.com/an-exploration-of-coded-bias-with-shalini-kantayya-deb-raji-and-meredith-broussard</link>
      <description>In this special edition of the podcast, we're joined by Shalini Kantayya, the director of Coded Bias, and Deb Raji and Meredith Broussard, who both contributed to the film.
 In this panel discussion, Sam and our guests explore the societal implications of the biases embedded within AI algorithms. The conversation covers examples of AI systems with disparate impact across industries and communities, what can be done to mitigate this disparity, and opportunities to get involved.
 Our panelists Shalini, Meredith, and Deb each share insight into their experience working on and researching bias in AI systems and the oppressive and dehumanizing impact they can have on people in the real world.
 The complete show notes for this episode can be found at twimlai.com/go/431.</description>
      <pubDate>Fri, 27 Nov 2020 21:41:23 -0000</pubDate>
      <itunes:title>An Exploration of Coded Bias with Shalini Kantayya, Deb Raji and Meredith Broussard</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>431</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/41783808-ee98-11eb-9502-b76e04515c5d/image/Copy_of_2020_TWIMLfest_Banners.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this special edition of the podcast, we're joined by Shalini Kantayya, the director of Coded Bias, and Deb Raji and Meredith Broussard, who both contributed to the film. In this panel discussion, Sam and our guests explored the societal...</itunes:subtitle>
      <itunes:summary>In this special edition of the podcast, we're joined by Shalini Kantayya, the director of Coded Bias, and Deb Raji and Meredith Broussard, who both contributed to the film.
 In this panel discussion, Sam and our guests explore the societal implications of the biases embedded within AI algorithms. The conversation covers examples of AI systems with disparate impact across industries and communities, what can be done to mitigate this disparity, and opportunities to get involved.
 Our panelists Shalini, Meredith, and Deb each share insight into their experience working on and researching bias in AI systems, and the oppressive and dehumanizing impact these systems can have on people in the real world.
 The complete show notes for this episode can be found at twimlai.com/go/431</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this special edition of the podcast, we're joined by Shalini Kantayya, the director of Coded Bias, and Deb Raji and Meredith Broussard, who both contributed to the film.</p> <p>In this panel discussion, Sam and our guests explore the societal implications of the biases embedded within AI algorithms. The conversation covers examples of AI systems with disparate impact across industries and communities, what can be done to mitigate this disparity, and opportunities to get involved.</p> <p>Our panelists Shalini, Meredith, and Deb each share insight into their experience working on and researching bias in AI systems, and the oppressive and dehumanizing impact these systems can have on people in the real world.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/431">twimlai.com/go/431</a></p>]]>
      </content:encoded>
      <itunes:duration>5088</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1f91163a-2a5c-4f53-9893-70f395a765da]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3288309548.mp3?updated=1629217120"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Common Sense as an Algorithmic Framework with Dileep George - #430</title>
      <link>https://twimlai.com/common-sense-as-an-algorithmic-framework-with-dileep-george</link>
      <description>Today we’re joined by Dileep George, Founder and CTO of Vicarious.
 Dileep, who was also a co-founder of Numenta, works at the intersection of AI research and neuroscience, and famously pioneered hierarchical temporal memory. In our conversation, we explore the importance of mimicking the brain when looking to achieve artificial general intelligence, the nuance of “language understanding,” and how the tasks that fall underneath it are interconnected, with or without language.
 We also discuss his work with Recursive Cortical Networks, Schema Networks, and what’s next on the path towards AGI!</description>
      <pubDate>Mon, 23 Nov 2020 21:18:54 -0000</pubDate>
      <itunes:title>Common Sense as an Algorithmic Framework with Dileep George - #430</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>430</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/419e0632-ee98-11eb-9502-e782fa8bc782/image/TWIML_COVER_800x800_DG.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Dileep George, Founder and CTO of Vicarious. Dileep, who was also a co-founder of Numenta, works at the intersection of AI research and neuroscience, and famously pioneered hierarchical temporal memory. In our...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Dileep George, Founder and CTO of Vicarious.
 Dileep, who was also a co-founder of Numenta, works at the intersection of AI research and neuroscience, and famously pioneered hierarchical temporal memory. In our conversation, we explore the importance of mimicking the brain when looking to achieve artificial general intelligence, the nuance of “language understanding,” and how the tasks that fall underneath it are interconnected, with or without language.
 We also discuss his work with Recursive Cortical Networks, Schema Networks, and what’s next on the path towards AGI!</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Dileep George, Founder and CTO of Vicarious.</p> <p>Dileep, who was also a co-founder of Numenta, works at the intersection of AI research and neuroscience, and famously pioneered hierarchical temporal memory. In our conversation, we explore the importance of mimicking the brain when looking to achieve artificial general intelligence, the nuance of “language understanding,” and how the tasks that fall underneath it are interconnected, with or without language.</p> <p>We also discuss his work with Recursive Cortical Networks, Schema Networks, and what’s next on the path towards AGI!</p>]]>
      </content:encoded>
      <itunes:duration>2872</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[df3bab69-ca74-4d93-8b72-38c490c3e02f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9309754283.mp3?updated=1629217023"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scaling Enterprise ML in 2020: Still Hard! with Sushil Thomas - #429</title>
      <link>https://twimlai.com/scaling-enterprise-ml-in-2020-still-hard-with-sushil-thomas</link>
      <description>Today we’re joined by Sushil Thomas, VP of Engineering for Machine Learning at Cloudera.
 Over the summer, I had the pleasure of hosting Sushil and a handful of business leaders across industries at the Cloudera Virtual Roundtable. In this conversation with Sushil, we recap the roundtable, exploring some of the topics discussed and insights gained from those conversations. Sushil gives us a look at how COVID-19 has impacted business throughout the year, and how the pandemic is shaping enterprise decision making moving forward.
 We also discuss some of the key trends he’s seeing as organizations try to scale their machine learning and AI efforts, including understanding best practices, and learning how to hybridize the engineering side of ML with the scientific exploration of the tasks. Finally, we explore whether organizational models like hub vs. centralized are still organization-specific or whether that’s changed in recent years, as well as how to attract and retain good ML talent with giant companies like Google and Microsoft looming large.
 The complete show notes for this episode can be found at https://twimlai.com/go/429.</description>
      <pubDate>Thu, 19 Nov 2020 21:21:42 -0000</pubDate>
      <itunes:title>Scaling Enterprise ML in 2020: Still Hard! with Sushil Thomas</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>429</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/41b71488-ee98-11eb-9502-4735213f7d86/image/TWIML_COVER_800x800_ST.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Sushil Thomas, VP of Engineering for Machine Learning at Cloudera. Over the summer, I had the pleasure of hosting Sushil and a handful of business leaders across industries at the Cloudera Virtual Roundtable. In this...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Sushil Thomas, VP of Engineering for Machine Learning at Cloudera.
 Over the summer, I had the pleasure of hosting Sushil and a handful of business leaders across industries at the Cloudera Virtual Roundtable. In this conversation with Sushil, we recap the roundtable, exploring some of the topics discussed and insights gained from those conversations. Sushil gives us a look at how COVID-19 has impacted business throughout the year, and how the pandemic is shaping enterprise decision making moving forward.
 We also discuss some of the key trends he’s seeing as organizations try to scale their machine learning and AI efforts, including understanding best practices, and learning how to hybridize the engineering side of ML with the scientific exploration of the tasks. Finally, we explore whether organizational models like hub vs. centralized are still organization-specific or whether that’s changed in recent years, as well as how to attract and retain good ML talent with giant companies like Google and Microsoft looming large.
 The complete show notes for this episode can be found at https://twimlai.com/go/429.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Sushil Thomas, VP of Engineering for Machine Learning at Cloudera.</p> <p>Over the summer, I had the pleasure of hosting Sushil and a handful of business leaders across industries at the Cloudera Virtual Roundtable. In this conversation with Sushil, we recap the roundtable, exploring some of the topics discussed and insights gained from those conversations. Sushil gives us a look at how COVID-19 has impacted business throughout the year, and how the pandemic is shaping enterprise decision making moving forward.</p> <p>We also discuss some of the key trends he’s seeing as organizations try to scale their machine learning and AI efforts, including understanding best practices, and learning how to hybridize the engineering side of ML with the scientific exploration of the tasks. Finally, we explore whether organizational models like hub vs. centralized are still organization-specific or whether that’s changed in recent years, as well as how to attract and retain good ML talent with giant companies like Google and Microsoft looming large.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/429">twimlai.com/go/429</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2779</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[50e22c35-3857-46eb-8300-eac2b1decbf4]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7468389701.mp3?updated=1629216980"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Enabling Clinical Automation: From Research to Deployment with Devin Singh - #428</title>
      <link>https://twimlai.com/enabling-clinical-automation-from-research-to-deployment-with-devin-singh</link>
      <description>Today we’re joined by Devin Singh, a Physician Lead for Clinical Artificial Intelligence &amp; Machine Learning in Pediatric Emergency Medicine at the Hospital for Sick Children (SickKids) in Toronto, and Founder and CEO of HeroAI.
 In our conversation with Devin, we discuss some of the interesting ways that Devin is deploying machine learning within the SickKids hospital, and the current structure of academic research, including how research and publications are currently incentivized, how few of those research projects actually make it to deployment, and how Devin is working to flip that system on its head.
 We also talk about his work at Hero AI, where he is commercializing and deploying his academic research to build out infrastructure and deploy AI solutions within hospitals, creating an automated pipeline with patients, caregivers, and EHS companies. Finally, we discuss Devin's thoughts on how he’d approach bias mitigation in these systems, and the importance of having proper stakeholder engagement and using design methodology when building ML systems.
 The complete show notes for this episode can be found at twimlai.com/go/428.</description>
      <pubDate>Mon, 16 Nov 2020 22:20:29 -0000</pubDate>
      <itunes:title>Enabling Clinical Automation: From Research to Deployment with Devin Singh</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>428</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/41dc2aa2-ee98-11eb-9502-ebe31208b146/image/TWIML_COVER_800x800_DS2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Devin Singh, a Physician Lead for Clinical Artificial Intelligence &amp; Machine Learning in Pediatric Emergency Medicine at the Hospital for Sick Children (SickKids) in Toronto, and Founder and CEO of HeroAI. In our...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Devin Singh, a Physician Lead for Clinical Artificial Intelligence &amp; Machine Learning in Pediatric Emergency Medicine at the Hospital for Sick Children (SickKids) in Toronto, and Founder and CEO of HeroAI.
 In our conversation with Devin, we discuss some of the interesting ways that Devin is deploying machine learning within the SickKids hospital, and the current structure of academic research, including how research and publications are currently incentivized, how few of those research projects actually make it to deployment, and how Devin is working to flip that system on its head.
 We also talk about his work at Hero AI, where he is commercializing and deploying his academic research to build out infrastructure and deploy AI solutions within hospitals, creating an automated pipeline with patients, caregivers, and EHS companies. Finally, we discuss Devin's thoughts on how he’d approach bias mitigation in these systems, and the importance of having proper stakeholder engagement and using design methodology when building ML systems.
 The complete show notes for this episode can be found at twimlai.com/go/428.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Devin Singh, a Physician Lead for Clinical Artificial Intelligence &amp; Machine Learning in Pediatric Emergency Medicine at the Hospital for Sick Children (SickKids) in Toronto, and Founder and CEO of HeroAI.</p> <p>In our conversation with Devin, we discuss some of the interesting ways that Devin is deploying machine learning within the SickKids hospital, and the current structure of academic research, including how research and publications are currently incentivized, how few of those research projects actually make it to deployment, and how Devin is working to flip that system on its head.</p> <p>We also talk about his work at Hero AI, where he is commercializing and deploying his academic research to build out infrastructure and deploy AI solutions within hospitals, creating an automated pipeline with patients, caregivers, and EHS companies. Finally, we discuss Devin's thoughts on how he’d approach bias mitigation in these systems, and the importance of having proper stakeholder engagement and using design methodology when building ML systems.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/428">twimlai.com/go/428</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2617</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[213b83b2-70a1-41ca-bf0a-3194dbe439bd]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5722887477.mp3?updated=1629244786"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Pixels to Concepts with Backpropagation w/ Roland Memisevic - #427</title>
      <link>https://twimlai.com/pixels-to-concepts-with-backpropagation-w-roland-memisevic</link>
      <description>Today we’re joined by Roland Memisevic, return podcast guest and Co-Founder &amp; CEO of Twenty Billion Neurons. 
 We last spoke to Roland in 2018, and earlier this year TwentyBN made a sharp pivot to a surprising use case: a companion app called Fitness Ally, an interactive, personalized fitness coach on your phone.
 In our conversation with Roland, we explore the progress TwentyBN has made on their goal of training deep neural networks to understand physical movement and exercise. We also discuss how they’ve taken their research on understanding video context and awareness and applied it in their app, including how recent advancements have allowed them to deploy their neural net locally while preserving privacy, and Roland’s thoughts on the enormous opportunity that lies in the merging of language and video processing.
 The complete show notes for this episode can be found at twimlai.com/go/427.</description>
      <pubDate>Thu, 12 Nov 2020 18:29:58 -0000</pubDate>
      <itunes:title>Pixels to Concepts with Backpropagation w/ Roland Memisevic</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>427</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4214f062-ee98-11eb-9502-9f66422e6e60/image/TWIML_COVER_800x800_RM2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Roland Memisevic, return podcast guest and Co-Founder &amp; CEO of Twenty Billion Neurons.  We last spoke to Roland in 2018, and just earlier this year TwentyBN made a sharp pivot to a surprising use case, a companion app...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Roland Memisevic, return podcast guest and Co-Founder &amp; CEO of Twenty Billion Neurons. 
 We last spoke to Roland in 2018, and earlier this year TwentyBN made a sharp pivot to a surprising use case: a companion app called Fitness Ally, an interactive, personalized fitness coach on your phone.
 In our conversation with Roland, we explore the progress TwentyBN has made on their goal of training deep neural networks to understand physical movement and exercise. We also discuss how they’ve taken their research on understanding video context and awareness and applied it in their app, including how recent advancements have allowed them to deploy their neural net locally while preserving privacy, and Roland’s thoughts on the enormous opportunity that lies in the merging of language and video processing.
 The complete show notes for this episode can be found at twimlai.com/go/427.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Roland Memisevic, return podcast guest and Co-Founder &amp; CEO of Twenty Billion Neurons.</p> <p>We last spoke to Roland in 2018, and earlier this year TwentyBN made a sharp pivot to a surprising use case: a companion app called Fitness Ally, an interactive, personalized fitness coach on your phone.</p> <p>In our conversation with Roland, we explore the progress TwentyBN has made on their goal of training deep neural networks to understand physical movement and exercise. We also discuss how they’ve taken their research on understanding video context and awareness and applied it in their app, including how recent advancements have allowed them to deploy their neural net locally while preserving privacy, and Roland’s thoughts on the enormous opportunity that lies in the merging of language and video processing.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/427">twimlai.com/go/427</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2093</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1b42d9c0-0d87-4c31-bc22-ca7c4cf9a3af]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4496142783.mp3?updated=1629216943"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Fighting Global Health Disparities with AI w/ Jon Wang - #426</title>
      <link>https://twimlai.com/fighting-global-health-disparities-with-ai-w-jon-wang</link>
      <description>Today we’re joined by Jon Wang, a medical student at UCSF, and former Gates Scholar and AI researcher at the Bill and Melinda Gates Foundation.
 In our conversation with Jon, we explore a few of the different ways he’s attacking various public health issues, including improving the electronic health records system through automating clinical order sets, and exploring how the lack of literature and AI talent in the non-profit and healthcare spaces, along with bad data, has further marginalized undersupported communities.
 We also discuss his work at the Gates Foundation, which included understanding how AI can be helpful in lower-resource and lower-income countries, building digital infrastructure, and much more.
 The complete show notes for this episode can be found at twimlai.com/go/426.
  </description>
      <pubDate>Mon, 09 Nov 2020 19:19:42 -0000</pubDate>
      <itunes:title>Fighting Global Health Disparities with AI w/ Jon Wang</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>426</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/422db070-ee98-11eb-9502-03751590512d/image/TWIML_COVER_800x800_JW.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Jon Wang, a medical student at UCSF, and former Gates Scholar and AI researcher at the Bill and Melinda Gates Foundation. In our conversation with Jon, we explore a few of the different ways he’s attacking various public...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Jon Wang, a medical student at UCSF, and former Gates Scholar and AI researcher at the Bill and Melinda Gates Foundation.
 In our conversation with Jon, we explore a few of the different ways he’s attacking various public health issues, including improving the electronic health records system through automating clinical order sets, and exploring how the lack of literature and AI talent in the non-profit and healthcare spaces, along with bad data, has further marginalized undersupported communities.
 We also discuss his work at the Gates Foundation, which included understanding how AI can be helpful in lower-resource and lower-income countries, building digital infrastructure, and much more.
 The complete show notes for this episode can be found at twimlai.com/go/426.
  </itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Jon Wang, a medical student at UCSF, and former Gates Scholar and AI researcher at the Bill and Melinda Gates Foundation.</p> <p>In our conversation with Jon, we explore a few of the different ways he’s attacking various public health issues, including improving the electronic health records system through automating clinical order sets, and exploring how the lack of literature and AI talent in the non-profit and healthcare spaces, along with bad data, has further marginalized undersupported communities.</p> <p>We also discuss his work at the Gates Foundation, which included understanding how AI can be helpful in lower-resource and lower-income countries, building digital infrastructure, and much more.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/426">twimlai.com/go/426</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2149</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[eacdb690-6e57-4a09-815b-33599d3be4ac]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2414767410.mp3?updated=1629216959"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Accessibility and Computer Vision - #425</title>
      <link>https://twimlai.com/accessibility-and-computer-vision</link>
      <description>Digital imagery is pervasive today. More than a billion images per day are produced and uploaded to social media sites, with many more embedded within websites, apps, digital documents, and eBooks. Engaging with digital imagery has become fundamental to participating in contemporary society, including education, the professions, e-commerce, civics, entertainment, and social interactions.
 However, most digital images remain inaccessible to the 39 million people worldwide who are blind. AI and computer vision technologies hold the potential to increase image accessibility for people who are blind, through technologies like automated image descriptions.
 The speakers share their perspectives as people who are both technology experts and are blind, providing insight into future directions for the field of computer vision for describing images and videos for people who are blind.
 To check out the video of this panel, visit twimlai.com/twimlfest/sessions/accessibility-and-computer-vision.
 The complete show notes for this episode can be found at twimlai.com/go/425</description>
      <pubDate>Thu, 05 Nov 2020 22:46:38 -0000</pubDate>
      <itunes:title>Accessibility and Computer Vision</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>425</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/424e8296-ee98-11eb-9502-1b5a79f02ada/image/TWIML_COVER_800x800_TWIMLfest.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Digital imagery is pervasive today. More than a billion images per day are produced and uploaded to social media sites, with many more embedded within websites, apps, digital documents, and eBooks. Engaging with digital imagery has become fundamental...</itunes:subtitle>
      <itunes:summary>Digital imagery is pervasive today. More than a billion images per day are produced and uploaded to social media sites, with many more embedded within websites, apps, digital documents, and eBooks. Engaging with digital imagery has become fundamental to participating in contemporary society, including education, the professions, e-commerce, civics, entertainment, and social interactions.
 However, most digital images remain inaccessible to the 39 million people worldwide who are blind. AI and computer vision technologies hold the potential to increase image accessibility for people who are blind, through technologies like automated image descriptions.
 The speakers share their perspectives as people who are both technology experts and are blind, providing insight into future directions for the field of computer vision for describing images and videos for people who are blind.
 To check out the video of this panel, visit twimlai.com/twimlfest/sessions/accessibility-and-computer-vision.
 The complete show notes for this episode can be found at twimlai.com/go/425</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Digital imagery is pervasive today. More than a billion images per day are produced and uploaded to social media sites, with many more embedded within websites, apps, digital documents, and eBooks. Engaging with digital imagery has become fundamental to participating in contemporary society, including education, the professions, e-commerce, civics, entertainment, and social interactions.</p> <p>However, most digital images remain inaccessible to the 39 million people worldwide who are blind. AI and computer vision technologies hold the potential to increase image accessibility for people who are blind, through technologies like automated image descriptions.</p> <p>The speakers share their perspectives as people who are both technology experts and are blind, providing insight into future directions for the field of computer vision for describing images and videos for people who are blind.</p> <p>To check out the video of this panel, visit <a href="https://twimlai.com/twimlfest/sessions/accessibility-and-computer-vision">here</a>!</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/425">twimlai.com/go/425</a></p>]]>
      </content:encoded>
      <itunes:duration>3651</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4be55680-7a96-47c4-aea2-4b8f21937561]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4801697750.mp3?updated=1629217050"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>NLP for Equity Investing with Frank Zhao - #424</title>
      <link>https://twimlai.com/go/424</link>
      <description>Today we’re joined by Frank Zhao, Senior Director of Quantamental Research at S&amp;P Global Market Intelligence.
 In our conversation with Frank, we explore how he came to work at the intersection of ML and finance, and how he navigates the relationship between data science and domain expertise. We also discuss the rise of data science in the investment management space, examining the largely under-explored technique of using unstructured data to gain insights into equity investing, and the edge it can provide for investors.
 Finally, Frank gives us a look at how he uses natural language processing with textual data of earnings call transcripts and walks us through the entire pipeline.
 The complete show notes for this episode can be found at twimlai.com/go/424.</description>
      <pubDate>Mon, 02 Nov 2020 17:00:00 -0000</pubDate>
      <itunes:title>NLP for Equity Investing with Frank Zhao</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>424</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/426d423a-ee98-11eb-9502-b34196dae367/image/TWIML_COVER_800x800_FZ.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Frank Zhao, Senior Director of Quantamental Research at S&amp;P Global Market Intelligence. In our conversation with Frank, we explore how he came to work at the intersection of ML and finance, and how he navigates the...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Frank Zhao, Senior Director of Quantamental Research at S&amp;P Global Market Intelligence.
 In our conversation with Frank, we explore how he came to work at the intersection of ML and finance, and how he navigates the relationship between data science and domain expertise. We also discuss the rise of data science in the investment management space, examining the largely under-explored technique of using unstructured data to gain insights into equity investing, and the edge it can provide for investors.
 Finally, Frank gives us a look at how he uses natural language processing with textual data of earnings call transcripts and walks us through the entire pipeline.
 The complete show notes for this episode can be found at twimlai.com/go/424.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Frank Zhao, Senior Director of Quantamental Research at S&amp;P Global Market Intelligence.</p> <p>In our conversation with Frank, we explore how he came to work at the intersection of ML and finance, and how he navigates the relationship between data science and domain expertise. We also discuss the rise of data science in the investment management space, examining the largely under-explored technique of using unstructured data to gain insights into equity investing, and the edge it can provide for investors.</p> <p>Finally, Frank gives us a look at how he uses natural language processing with textual data of earnings call transcripts and walks us through the entire pipeline.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/424">twimlai.com/go/424</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2660</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cf4cf30d-5ef4-409c-bd92-ea20f2eb573d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3908507448.mp3?updated=1629244858"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Future of Education and AI with Salman Khan - #423</title>
      <link>https://twimlai.com/the-future-of-education-and-ai-with-sal-khan</link>
      <description>In the final #TWIMLfest Keynote Interview, we’re joined by Salman Khan, Founder of Khan Academy.
 In our conversation with Sal, we explore the amazing origin story of the academy, and how coronavirus is shaping the future of education and remote and distance learning, for better and for worse. We also explore Sal’s perspective on machine learning and AI being used broadly in education, the potential of injecting a platform like Khan Academy with ML and AI for course recommendations, and if they’re planning on implementing these features in the future.
 Finally, Sal shares some great stories about the impact of community and opportunity, and what advice he has for learners within the TWIML community and beyond!
 The complete show notes for this episode can be found at twimlai.com/go/423.</description>
      <pubDate>Wed, 28 Oct 2020 05:47:56 -0000</pubDate>
      <itunes:title>The Future of Education and AI with Salman Khan</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>423</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4292fdd6-ee98-11eb-9502-1b3ba2dd57f1/image/TWIML_COVER_800x800_SK2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In the final #TWIMLfest Keynote Interview, we’re joined by Salman Khan, Founder of Khan Academy. In our conversation with Sal, we explore the amazing origin story of the academy, and how coronavirus is shaping the future of education and remote and...</itunes:subtitle>
      <itunes:summary>In the final #TWIMLfest Keynote Interview, we’re joined by Salman Khan, Founder of Khan Academy.
 In our conversation with Sal, we explore the amazing origin story of the academy, and how coronavirus is shaping the future of education and remote and distance learning, for better and for worse. We also explore Sal’s perspective on machine learning and AI being used broadly in education, the potential of injecting a platform like Khan Academy with ML and AI for course recommendations, and if they’re planning on implementing these features in the future.
 Finally, Sal shares some great stories about the impact of community and opportunity, and what advice he has for learners within the TWIML community and beyond!
 The complete show notes for this episode can be found at twimlai.com/go/423.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In the final #TWIMLfest Keynote Interview, we’re joined by Salman Khan, Founder of Khan Academy.</p> <p>In our conversation with Sal, we explore the amazing origin story of the academy, and how coronavirus is shaping the future of education and remote and distance learning, for better and for worse. We also explore Sal’s perspective on machine learning and AI being used broadly in education, the potential of injecting a platform like Khan Academy with ML and AI for course recommendations, and if they’re planning on implementing these features in the future.</p> <p>Finally, Sal shares some great stories about the impact of community and opportunity, and what advice he has for learners within the TWIML community and beyond!</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/423">twimlai.com/go/423</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2825</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[3cb2eb1b-a305-4abe-ac26-eae78fa140bb]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2700188121.mp3?updated=1629217041"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Why AI Innovation and Social Impact Go Hand in Hand with Milind Tambe - #422</title>
      <link>https://twimlai.com/why-ai-innovation-and-social-impact-go-hand-in-hand-with-milind-tambe</link>
      <description>In this special #TWIMLfest Keynote episode, we’re joined by Milind Tambe, Director of AI for Social Good at Google Research India, and Director of the Center for Research in Computation and Society (CRCS) at Harvard University.
 In our conversation, we explore Milind’s various research interests, most of which fall under the umbrella of AI for Social Impact, including his work in public health, both stateside and abroad, his conservation work in South Asia and Africa, and his thoughts on the ways that those interested in social impact can get involved. 
 The complete show notes for this episode can be found at twimlai.com/go/422.</description>
      <pubDate>Fri, 23 Oct 2020 05:36:58 -0000</pubDate>
      <itunes:title>Why AI Innovation and Social Impact Go Hand in Hand with Milind Tambe</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>422</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/42b5908a-ee98-11eb-9502-ab4883438cfd/image/TWIML_COVER_800x800_MT2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this special #TWIMLfest Keynote episode, we’re joined by Milind Tambe, Director of AI for Social Good at Google Research India, and Director of the Center for Research in Computation and Society (CRCS) at Harvard University. In our conversation,...</itunes:subtitle>
      <itunes:summary>In this special #TWIMLfest Keynote episode, we’re joined by Milind Tambe, Director of AI for Social Good at Google Research India, and Director of the Center for Research in Computation and Society (CRCS) at Harvard University.
 In our conversation, we explore Milind’s various research interests, most of which fall under the umbrella of AI for Social Impact, including his work in public health, both stateside and abroad, his conservation work in South Asia and Africa, and his thoughts on the ways that those interested in social impact can get involved. 
 The complete show notes for this episode can be found at twimlai.com/go/422.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this special #TWIMLfest Keynote episode, we’re joined by Milind Tambe, Director of AI for Social Good at Google Research India, and Director of the Center for Research in Computation and Society (CRCS) at Harvard University.</p> <p>In our conversation, we explore Milind’s various research interests, most of which fall under the umbrella of AI for Social Impact, including his work in public health, both stateside and abroad, his conservation work in South Asia and Africa, and his thoughts on the ways that those interested in social impact can get involved.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/422">twimlai.com/go/422</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2132</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[7fab87a1-cabe-4dbe-80a9-0239a416fbd1]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6853115048.mp3?updated=1629244752"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>What's Next for Fast.ai? w/ Jeremy Howard - #421</title>
      <link>https://twimlai.com/whats-next-for-fast-ai-w-jeremy-howard</link>
      <description>In this special #TWIMLfest episode of the podcast, we’re joined by Jeremy Howard, Founder of Fast.ai.
 In our conversation with Jeremy, we discuss his career path, including his journey through the consulting world and how those experiences led him down the path to ML education, his thoughts on the current state of the machine learning adoption cycle, and if we’re at maximum capacity for deep learning use and capability.
 Of course, we dig into the newest version of the fast.ai framework and course, the reception of Jeremy’s book ‘Deep Learning for Coders with Fastai and PyTorch: AI Applications Without a PhD,’ and what’s missing from the machine learning education landscape. If you’ve missed our previous conversations with Jeremy, I encourage you to check them out here and here.
 The complete show notes for this episode can be found at https://twimlai.com/go/421.</description>
      <pubDate>Wed, 21 Oct 2020 18:55:06 -0000</pubDate>
      <itunes:title>What's Next for Fast.ai? w/ Jeremy Howard</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>421</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/42d570e4-ee98-11eb-9502-ff354f2e8832/image/TWIML_COVER_800x800_JH2_1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this special #TWIMLfest episode of the podcast, we’re joined by Jeremy Howard, Founder of Fast.ai. In our conversation with Jeremy, we discuss his career path, including his journey through the consulting world and how those experiences led him...</itunes:subtitle>
      <itunes:summary>In this special #TWIMLfest episode of the podcast, we’re joined by Jeremy Howard, Founder of Fast.ai.
 In our conversation with Jeremy, we discuss his career path, including his journey through the consulting world and how those experiences led him down the path to ML education, his thoughts on the current state of the machine learning adoption cycle, and if we’re at maximum capacity for deep learning use and capability.
 Of course, we dig into the newest version of the fast.ai framework and course, the reception of Jeremy’s book ‘Deep Learning for Coders with Fastai and PyTorch: AI Applications Without a PhD,’ and what’s missing from the machine learning education landscape. If you’ve missed our previous conversations with Jeremy, I encourage you to check them out here and here.
 The complete show notes for this episode can be found at https://twimlai.com/go/421.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this special #TWIMLfest episode of the podcast, we’re joined by Jeremy Howard, Founder of Fast.ai.</p> <p>In our conversation with Jeremy, we discuss his career path, including his journey through the consulting world and how those experiences led him down the path to ML education, his thoughts on the current state of the machine learning adoption cycle, and if we’re at maximum capacity for deep learning use and capability.</p> <p>Of course, we dig into the newest version of the fast.ai framework and course, the reception of Jeremy’s book ‘Deep Learning for Coders with Fastai and PyTorch: AI Applications Without a PhD,’ and what’s missing from the machine learning education landscape. If you’ve missed our previous conversations with Jeremy, I encourage you to check them out here and here.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/421">twimlai.com/go/421</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3679</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6647766f-de76-4b71-be2c-ff726c9258b4]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8319131211.mp3?updated=1629244811"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Feature Stores for MLOps with Mike del Balso - #420</title>
      <link>https://twimlai.com/feature-stores-for-mlops-with-mike-del-balso</link>
      <description>Today we’re joined by Mike del Balso, co-Founder and CEO of Tecton. 
 Mike, who you might remember from our last conversation on the podcast, was a foundational member of the Uber team that created their ML platform, Michelangelo. Since his departure from the company in 2018, he has been busy building up Tecton and their enterprise feature store. 
 In our conversation, Mike walks us through why he chose to focus on the feature store aspects of the machine learning platform, the journey, personal and otherwise, to operationalizing machine learning, and the capabilities that more mature platform teams tend to look for or need to build. We also explore the differences between standalone components and feature stores, whether organizations are taking their existing databases and building feature stores with them, and what a dynamic, always-available feature store looks like in deployment. 
 Finally, we explore what sets Tecton apart from other vendors in this space, including enterprise cloud providers who are throwing their hat in the ring.
 The complete show notes for this episode can be found at twimlai.com/go/420.
 Thanks to our friends at Tecton for sponsoring this episode of the podcast! Find out more about what they're up to at tecton.ai.</description>
      <pubDate>Mon, 19 Oct 2020 15:02:14 -0000</pubDate>
      <itunes:title>Feature Stores for MLOps with Mike del Balso</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>420</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/42f61024-ee98-11eb-9502-7b76ef08e12d/image/TWIML_COVER_800x800_MDB.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Mike del Balso, co-Founder and CEO of Tecton. Mike, who you might remember from our last conversation on the podcast, was a foundational member of the Uber team that created their ML platform, Michelangelo. Since his departure...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Mike del Balso, co-Founder and CEO of Tecton. 
 Mike, who you might remember from our last conversation on the podcast, was a foundational member of the Uber team that created their ML platform, Michelangelo. Since his departure from the company in 2018, he has been busy building up Tecton and their enterprise feature store. 
 In our conversation, Mike walks us through why he chose to focus on the feature store aspects of the machine learning platform, the journey, personal and otherwise, to operationalizing machine learning, and the capabilities that more mature platform teams tend to look for or need to build. We also explore the differences between standalone components and feature stores, whether organizations are taking their existing databases and building feature stores with them, and what a dynamic, always-available feature store looks like in deployment. 
 Finally, we explore what sets Tecton apart from other vendors in this space, including enterprise cloud providers who are throwing their hat in the ring.
 The complete show notes for this episode can be found at twimlai.com/go/420.
 Thanks to our friends at Tecton for sponsoring this episode of the podcast! Find out more about what they're up to at tecton.ai.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Mike del Balso, co-Founder and CEO of Tecton.</p> <p>Mike, who you might remember from our last conversation on the <a href="https://twimlai.com/twiml-talk-115-scaling-machine-learning-uber-mike-del-balso/">podcast</a>, was a foundational member of the Uber team that created their ML platform, Michelangelo. Since his departure from the company in 2018, he has been busy building up Tecton and their enterprise feature store.</p> <p>In our conversation, Mike walks us through why he chose to focus on the feature store aspects of the machine learning platform, the journey, personal and otherwise, to operationalizing machine learning, and the capabilities that more mature platform teams tend to look for or need to build. We also explore the differences between standalone components and feature stores, whether organizations are taking their existing databases and building feature stores with them, and what a dynamic, always-available feature store looks like in deployment.</p> <p>Finally, we explore what sets Tecton apart from other vendors in this space, including enterprise cloud providers who are throwing their hat in the ring.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/420">twimlai.com/go/420</a>.</p> <p>Thanks to our friends at Tecton for sponsoring this episode of the podcast! Find out more about what they're up to at <a href="https://tecton.ai/?utm_source=podcast_ep_420&amp;utm_medium=Libsyn&amp;utm_campaign=tecton_ai_10192020">tecton.ai</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2729</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9d277112-c887-4bc5-9eea-fa7a22933eff]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1897682374.mp3?updated=1629244855"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Exploring Causality and Community with Suzana Ilić - #419</title>
      <link>https://twimlai.com/exploring-causality-and-community-with-suzana-ilic</link>
      <description>In this special #TWIMLfest episode, we’re joined by Suzana Ilić, a computational linguist at Causaly and founder of Machine Learning Tokyo (MLT).
 Suzana joined us as a keynote speaker to discuss the origins of the MLT community, but we cover a lot of ground in this conversation. We briefly discuss Suzana’s work at Causaly, touching on her experiences transitioning from linguist and domain expert to working with causal modeling, balancing her role as both product manager and leader of the development team for their causality extraction module, and the unique ways that she thinks about UI in relation to their product.
 We also spend quite a bit of time exploring MLT, including how they’ve achieved exponential growth within the community over the past few years and when Suzana knew MLT was moving beyond just a personal endeavor, her experiences publishing papers at major ML conferences as an independent organization, and what inspires her within the broader ML/AI community. And of course, we answer quite a few great questions from our live audience!</description>
      <pubDate>Fri, 16 Oct 2020 08:00:00 -0000</pubDate>
      <itunes:title>Exploring Causality and Community with Suzana Ilić</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>419</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4312cd36-ee98-11eb-9502-e764ccf41bce/image/TWIML_COVER_800x800_SI.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this special #TWIMLfest episode, we’re joined by Suzana Ilić, a computational linguist at Causaly and founder of Machine Learning Tokyo (MLT). Suzana joined us as a keynote speaker to discuss the origins of the MLT community, but we cover a lot...</itunes:subtitle>
      <itunes:summary>In this special #TWIMLfest episode, we’re joined by Suzana Ilić, a computational linguist at Causaly and founder of Machine Learning Tokyo (MLT).
 Suzana joined us as a keynote speaker to discuss the origins of the MLT community, but we cover a lot of ground in this conversation. We briefly discuss Suzana’s work at Causaly, touching on her experiences transitioning from linguist and domain expert to working with causal modeling, balancing her role as both product manager and leader of the development team for their causality extraction module, and the unique ways that she thinks about UI in relation to their product.
 We also spend quite a bit of time exploring MLT, including how they’ve achieved exponential growth within the community over the past few years and when Suzana knew MLT was moving beyond just a personal endeavor, her experiences publishing papers at major ML conferences as an independent organization, and what inspires her within the broader ML/AI community. And of course, we answer quite a few great questions from our live audience!</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this special #TWIMLfest episode, we’re joined by Suzana Ilić, a computational linguist at Causaly and founder of Machine Learning Tokyo (MLT).</p> <p>Suzana joined us as a keynote speaker to discuss the origins of the MLT community, but we cover a lot of ground in this conversation. We briefly discuss Suzana’s work at Causaly, touching on her experiences transitioning from linguist and domain expert to working with causal modeling, balancing her role as both product manager and leader of the development team for their causality extraction module, and the unique ways that she thinks about UI in relation to their product.</p> <p>We also spend quite a bit of time exploring MLT, including how they’ve achieved exponential growth within the community over the past few years and when Suzana knew MLT was moving beyond just a personal endeavor, her experiences publishing papers at major ML conferences as an independent organization, and what inspires her within the broader ML/AI community. And of course, we answer quite a few great questions from our live audience!</p>]]>
      </content:encoded>
      <itunes:duration>3248</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b3eb11e2-537e-48ef-857b-a4207f49ea0f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5340517758.mp3?updated=1629244800"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Decolonizing AI with Shakir Mohamed - #418</title>
      <link>https://twimlai.com/decolonizing-ai-with-shakir-mohamed</link>
      <description>In this special #TWIMLfest edition of the podcast, we’re joined by Shakir Mohamed, a Senior Research Scientist at DeepMind.
 Shakir is also a leader of Deep Learning Indaba, a non-profit organization whose mission is to strengthen African machine learning and artificial intelligence. In our conversation with Shakir, we discuss his recent paper ‘Decolonial AI’ and the distinction between decolonizing AI and ethical AI, while also exploring the origin of the Indaba, the phases of community, and much more.
 The complete show notes for this episode can be found at twimlai.com/go/418.</description>
      <pubDate>Wed, 14 Oct 2020 04:59:31 -0000</pubDate>
      <itunes:title>Decolonizing AI with Shakir Mohamed</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>418</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/43396716-ee98-11eb-9502-57590b36b4d0/image/TWIML_COVER_800x800_SM2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this special #TWIMLfest edition of the podcast, we’re joined by Shakir Mohamed, a Senior Research Scientist at DeepMind. Shakir is also a leader of Deep Learning Indaba, a non-profit organization whose mission is to strengthen African machine...</itunes:subtitle>
      <itunes:summary>In this special #TWIMLfest edition of the podcast, we’re joined by Shakir Mohamed, a Senior Research Scientist at DeepMind.
 Shakir is also a leader of Deep Learning Indaba, a non-profit organization whose mission is to strengthen African machine learning and artificial intelligence. In our conversation with Shakir, we discuss his recent paper ‘Decolonial AI’ and the distinction between decolonizing AI and ethical AI, while also exploring the origin of the Indaba, the phases of community, and much more.
 The complete show notes for this episode can be found at twimlai.com/go/418.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this special #TWIMLfest edition of the podcast, we’re joined by Shakir Mohamed, a Senior Research Scientist at DeepMind.</p> <p>Shakir is also a leader of Deep Learning Indaba, a non-profit organization whose mission is to strengthen African machine learning and artificial intelligence. In our conversation with Shakir, we discuss his recent paper ‘Decolonial AI’ and the distinction between decolonizing AI and ethical AI, while also exploring the origin of the Indaba, the phases of community, and much more.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/418">twimlai.com/go/418</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3243</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[16c9b0c3-5f79-4c3b-ba5a-f9703557ad7d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6015912320.mp3?updated=1629244835"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Spatial Analysis for Real-Time Video Processing with Adina Trufinescu</title>
      <link>https://twimlai.com/spatial-analysis-for-real-time-video-processing-with-adina-trufinescu</link>
      <description>Today we’re joined by Adina Trufinescu, Principal Program Manager at Microsoft, to discuss some of the computer vision updates announced at Ignite 2020. 
 We focus on the technical innovations that went into their recently announced spatial analysis software, and the software’s use cases including the movement of people within spaces, distance measurements (social distancing), and more. 
 We also discuss the ‘responsible AI guidelines’ put in place to curb bad actors potentially using this software for surveillance, what techniques are being used to do object detection and image classification, and the challenges to productizing this research. 
 The complete show notes for this episode can be found at twimlai.com/go/417.</description>
      <pubDate>Thu, 08 Oct 2020 18:06:50 -0000</pubDate>
      <itunes:title>Spatial Analysis for Real-Time Video Processing with Adina Trufinescu</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>417</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/435b9854-ee98-11eb-9502-fb7e14500dfc/image/TWIML_COVER_800x800_AT2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Adina Trufinescu, Principal Program Manager at Microsoft, to discuss some of the computer vision updates announced at Ignite 2020.  We focus on the technical innovations that went into their recently announced spatial...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Adina Trufinescu, Principal Program Manager at Microsoft, to discuss some of the computer vision updates announced at Ignite 2020. 
 We focus on the technical innovations that went into their recently announced spatial analysis software, and the software’s use cases including the movement of people within spaces, distance measurements (social distancing), and more. 
 We also discuss the ‘responsible AI guidelines’ put in place to curb bad actors potentially using this software for surveillance, what techniques are being used to do object detection and image classification, and the challenges to productizing this research. 
 The complete show notes for this episode can be found at twimlai.com/go/417.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Adina Trufinescu, Principal Program Manager at Microsoft, to discuss some of the computer vision updates announced at Ignite 2020.</p> <p>We focus on the technical innovations that went into their recently announced spatial analysis software, and the software’s use cases including the movement of people within spaces, distance measurements (social distancing), and more.</p> <p>We also discuss the ‘responsible AI guidelines’ put in place to curb bad actors potentially using this software for surveillance, what techniques are being used to do object detection and image classification, and the challenges to productizing this research.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/417">twimlai.com/go/417</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2381</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ec70d9cc-3412-4c60-9345-04594113bf8f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2738554769.mp3?updated=1629244755"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>How Deep Learning has Revolutionized OCR with Cha Zhang - #416</title>
      <link>https://twimlai.com/how-deep-learning-has-revolutionized-ocr-with-cha-zhang</link>
      <description>Today we’re joined by Cha Zhang, a Partner Engineering Manager at Microsoft Cloud &amp; AI. 
 Cha’s work at MSFT is focused on exploring ways that new technologies can be applied to optical character recognition, or OCR, pushing the boundaries of what has been seen as an otherwise ‘solved’ problem. In our conversation with Cha, we explore some of the traditional challenges of doing OCR in the wild, and the ways in which deep learning algorithms are being applied to transform these solutions. 
 We also discuss the difficulties of using an end-to-end pipeline for OCR work, whether there is a semi-supervised framing that could be used for OCR, the role of techniques like neural architecture search, how advances in NLP could influence progress on OCR problems, and much more. 
 The complete show notes for this episode can be found at twimlai.com/go/416.</description>
      <pubDate>Mon, 05 Oct 2020 16:02:09 -0000</pubDate>
      <itunes:title>How Deep Learning has Revolutionized OCR with Cha Zhang</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>416</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4381a378-ee98-11eb-9502-afd9f6c86e91/image/TWIML_COVER_800x800_CZ.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Cha Zhang, a Partner Engineering Manager at Microsoft Cloud &amp; AI.  Cha’s work at MSFT is focused on exploring ways that new technologies can be applied to optical character recognition, or OCR, pushing the boundaries...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Cha Zhang, a Partner Engineering Manager at Microsoft Cloud &amp; AI. 
 Cha’s work at MSFT is focused on exploring ways that new technologies can be applied to optical character recognition, or OCR, pushing the boundaries of what has been seen as an otherwise ‘solved’ problem. In our conversation with Cha, we explore some of the traditional challenges of doing OCR in the wild, and the ways in which deep learning algorithms are being applied to transform these solutions. 
 We also discuss the difficulties of using an end-to-end pipeline for OCR work, whether there is a semi-supervised framing that could be used for OCR, the role of techniques like neural architecture search, how advances in NLP could influence progress on OCR problems, and much more. 
 The complete show notes for this episode can be found at twimlai.com/go/416.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Cha Zhang, a Partner Engineering Manager at Microsoft Cloud &amp; AI. </p> <p>Cha’s work at MSFT is focused on exploring ways that new technologies can be applied to optical character recognition, or OCR, pushing the boundaries of what has been seen as an otherwise ‘solved’ problem. In our conversation with Cha, we explore some of the traditional challenges of doing OCR in the wild, and the ways in which deep learning algorithms are being applied to transform these solutions. </p> <p>We also discuss the difficulties of using an end-to-end pipeline for OCR work, whether there is a semi-supervised framing that could be used for OCR, the role of techniques like neural architecture search, how advances in NLP could influence progress on OCR problems, and much more. </p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/416">twimlai.com/go/416</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3451</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[db5c7ede-d6b6-443f-aada-aa862f8b5879]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6917853024.mp3?updated=1629244851"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Machine Learning for Food Delivery at Global Scale - #415</title>
      <link>https://twimlai.com/machine-learning-for-food-delivery-at-global-scale</link>
      <description>In this special edition of the show, we discuss the various ways in which machine learning plays a role in helping businesses overcome their challenges in the food delivery space.  A few weeks ago Sam had the opportunity to moderate a panel at the Prosus AI Marketplace virtual event with Sandor Caetano of iFood, Dale Vaz of Swiggy, Nicolas Guenon of Delivery Hero, and Euro Beinat of Prosus.  In this conversation, panelists describe the application of machine learning to a variety of business use cases, including how they deliver recommendations, the unique ways they handle the logistics of deliveries, and fraud and abuse prevention.  The complete show notes for this episode can be found at twimlai.com/go/415.</description>
      <pubDate>Fri, 02 Oct 2020 18:40:07 -0000</pubDate>
      <itunes:title>Machine Learning for Food Delivery at Global Scale</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>415</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/43a167bc-ee98-11eb-9502-0bfef1e1b6bc/image/TWIML_COVER_800x800_SC-DZ-NG-EB-SC.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this special edition of the show, we discuss the various ways in which machine learning plays a role in helping businesses overcome their challenges in the food delivery space.  A few weeks ago Sam had the opportunity to moderate a panel at...</itunes:subtitle>
      <itunes:summary>In this special edition of the show, we discuss the various ways in which machine learning plays a role in helping businesses overcome their challenges in the food delivery space.  A few weeks ago Sam had the opportunity to moderate a panel at the Prosus AI Marketplace virtual event with Sandor Caetano of iFood, Dale Vaz of Swiggy, Nicolas Guenon of Delivery Hero, and Euro Beinat of Prosus.  In this conversation, panelists describe the application of machine learning to a variety of business use cases, including how they deliver recommendations, the unique ways they handle the logistics of deliveries, and fraud and abuse prevention.  The complete show notes for this episode can be found at twimlai.com/go/415.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this special edition of the show, we discuss the various ways in which machine learning plays a role in helping businesses overcome their challenges in the food delivery space. </p> <p>A few weeks ago Sam had the opportunity to moderate a panel at the Prosus AI Marketplace virtual event with Sandor Caetano of iFood, Dale Vaz of Swiggy, Nicolas Guenon of Delivery Hero, and Euro Beinat of Prosus. </p> <p>In this conversation, panelists describe the application of machine learning to a variety of business use cases, including how they deliver recommendations, the unique ways they handle the logistics of deliveries, and fraud and abuse prevention. </p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/415">twimlai.com/go/415</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3469</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[92941e93-0514-492e-a6b1-4062b34a7b4a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4505688374.mp3?updated=1629244799"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Open Source at Qualcomm AI Research with Jeff Gehlhaar and Zahra Koochak - #414</title>
      <link>https://twimlai.com/open-source-at-qualcomm-ai-research-with-jeff-gehlhaar-and-zahra-koochak</link>
      <description>Today we're joined by Jeff Gehlhaar, VP of Technology at Qualcomm, and Zahra Koochak, Staff Machine Learning Engineer at Qualcomm AI Research. 
 If you haven’t had a chance to listen to our first interview with Jeff, I encourage you to check it out here! In this conversation, we catch up with Jeff and Zahra to get an update on what the company has been up to since our last conversation, including the Snapdragon 865 chipset and Hexagon Neural Network Direct. 
 We also discuss open-source projects like the AI efficiency toolkit and Tensor Virtual Machine compiler, and how these projects fit in the broader Qualcomm ecosystem. Finally, we talk through their vision for on-device federated learning. 
 The complete show notes for this episode can be found at twimlai.com/go/414.</description>
      <pubDate>Wed, 30 Sep 2020 13:29:26 -0000</pubDate>
      <itunes:title>Open Source at Qualcomm AI Research with Jeff Gehlhaar and Zahra Koochak</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>414</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/43c5a370-ee98-11eb-9502-1b7ed17246d1/image/TWIML_COVER_800x800_JG-ZK.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we're joined by Jeff Gehlhaar, VP of Technology at Qualcomm, and Zahra Koochak, Staff Machine Learning Engineer at Qualcomm AI Research.  If you haven’t had a chance to listen to our first interview with Jeff, I encourage you to check it...</itunes:subtitle>
      <itunes:summary>Today we're joined by Jeff Gehlhaar, VP of Technology at Qualcomm, and Zahra Koochak, Staff Machine Learning Engineer at Qualcomm AI Research. 
 If you haven’t had a chance to listen to our first interview with Jeff, I encourage you to check it out here! In this conversation, we catch up with Jeff and Zahra to get an update on what the company has been up to since our last conversation, including the Snapdragon 865 chipset and Hexagon Neural Network Direct. 
 We also discuss open-source projects like the AI efficiency toolkit and Tensor Virtual Machine compiler, and how these projects fit in the broader Qualcomm ecosystem. Finally, we talk through their vision for on-device federated learning. 
 The complete show notes for this episode can be found at twimlai.com/go/414.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we're joined by Jeff Gehlhaar, VP of Technology at Qualcomm, and Zahra Koochak, Staff Machine Learning Engineer at Qualcomm AI Research. </p> <p>If you haven’t had a chance to listen to our first interview with Jeff, I encourage you to check it out <a href="https://twimlai.com/twiml-talk-280-spiking-neural-nets-and-ml-as-a-systems-challenge-with-jeff-gehlhaar/">here</a>! In this conversation, we catch up with Jeff and Zahra to get an update on what the company has been up to since our last conversation, including the Snapdragon 865 chipset and Hexagon Neural Network Direct. </p> <p>We also discuss open-source projects like the AI efficiency toolkit and Tensor Virtual Machine compiler, and how these projects fit in the broader Qualcomm ecosystem. Finally, we talk through their vision for on-device federated learning. </p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/414">twimlai.com/go/414</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2533</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4a98574f-0f94-4e82-929d-06aa0980b637]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1827706889.mp3?updated=1629216954"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Visualizing Climate Impact with GANs w/ Sasha Luccioni - #413</title>
      <link>https://twimlai.com/visualizing-climate-impact-with-gans-w-sasha-luccioni</link>
      <description>Today we’re joined by Sasha Luccioni, a Postdoctoral Researcher at the MILA Institute, and moderator of our upcoming TWIMLfest Panel, ‘Machine Learning in the Fight Against Climate Change.’ 
 We were first introduced to Sasha’s work through her paper on ‘Visualizing The Consequences Of Climate Change Using Cycle-consistent Adversarial Networks’, and we’re excited to pick her brain about the ways ML is currently being leveraged to help the environment. In our conversation, we explore the use of GANs to visualize the consequences of climate change, the evolution of different approaches she used, and the challenges of training GANs using an end-to-end pipeline.
 Finally, we talk through Sasha’s goals for the aforementioned panel, which is scheduled for Friday, October 23rd at 1 pm PT. Register for all of the great TWIMLfest sessions at twimlfest.com!
 The complete show notes for this episode can be found at twimlai.com/go/413.</description>
      <pubDate>Mon, 28 Sep 2020 20:57:21 -0000</pubDate>
      <itunes:title>Visualizing Climate Impact with GANs w/ Sasha Luccioni</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>413</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/43e80730-ee98-11eb-9502-e7591bfe9038/image/TWIML_COVER_800x800_SL3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Sasha Luccioni, a Postdoctoral Researcher at the MILA Institute, and moderator of our upcoming TWIMLfest Panel, ‘Machine Learning in the Fight Against Climate Change.’  We were first introduced to Sasha’s work...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Sasha Luccioni, a Postdoctoral Researcher at the MILA Institute, and moderator of our upcoming TWIMLfest Panel, ‘Machine Learning in the Fight Against Climate Change.’ 
 We were first introduced to Sasha’s work through her paper on ‘Visualizing The Consequences Of Climate Change Using Cycle-consistent Adversarial Networks’, and we’re excited to pick her brain about the ways ML is currently being leveraged to help the environment. In our conversation, we explore the use of GANs to visualize the consequences of climate change, the evolution of different approaches she used, and the challenges of training GANs using an end-to-end pipeline.
 Finally, we talk through Sasha’s goals for the aforementioned panel, which is scheduled for Friday, October 23rd at 1 pm PT. Register for all of the great TWIMLfest sessions at twimlfest.com!
 The complete show notes for this episode can be found at twimlai.com/go/413.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Sasha Luccioni, a Postdoctoral Researcher at the MILA Institute, and moderator of our upcoming TWIMLfest Panel, ‘<em>Machine Learning in the Fight Against Climate Change.</em>’ </p> <p>We were first introduced to Sasha’s work through her paper on ‘<em>Visualizing The Consequences Of Climate Change Using Cycle-consistent Adversarial Networks’</em>, and we’re excited to pick her brain about the ways ML is currently being leveraged to help the environment. In our conversation, we explore the use of GANs to visualize the consequences of climate change, the evolution of different approaches she used, and the challenges of training GANs using an end-to-end pipeline.</p> <p>Finally, we talk through Sasha’s goals for the aforementioned panel, which is scheduled for Friday, October 23rd at 1 pm PT. Register for all of the great TWIMLfest sessions at <a href="https://twimlfest.com">twimlfest.com</a>!</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/413">twimlai.com/go/413</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2492</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b825cd57-16d9-428a-89d8-637b21ea27ac]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4221537317.mp3?updated=1629216942"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>ML-Powered Language Learning at Duolingo with Burr Settles - #412</title>
      <link>https://listen.twimlai.com/412</link>
      <description>Today we’re joined by Burr Settles, Research Director at Duolingo. Most would acknowledge that one of the most effective ways to learn is one on one with a tutor, and Duolingo’s main goal is to replicate that at scale.
 In our conversation with Burr, we dig into how the business model has changed over time, the properties that make a good tutor, and how those features translate to the AI tutor they’ve built. We also discuss the Duolingo English Test, and the challenges they’ve faced with maintaining the platform while adding languages and courses.
 Check out the complete show notes for this episode at twimlai.com/go/412.</description>
      <pubDate>Thu, 24 Sep 2020 17:59:40 -0000</pubDate>
      <itunes:title>ML-Powered Language Learning at Duolingo with Burr Settles</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>412</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4413fcdc-ee98-11eb-9502-c7b657dc28b1/image/TWIML_COVER_800x800_BS2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Burr Settles, Research Director at Duolingo. Most would acknowledge that one of the most effective ways to learn is one on one with a tutor, and Duolingo’s main goal is to replicate that at scale. In our conversation with...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Burr Settles, Research Director at Duolingo. Most would acknowledge that one of the most effective ways to learn is one on one with a tutor, and Duolingo’s main goal is to replicate that at scale.
 In our conversation with Burr, we dig into how the business model has changed over time, the properties that make a good tutor, and how those features translate to the AI tutor they’ve built. We also discuss the Duolingo English Test, and the challenges they’ve faced with maintaining the platform while adding languages and courses.
 Check out the complete show notes for this episode at twimlai.com/go/412.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Burr Settles, Research Director at Duolingo. Most would acknowledge that one of the most effective ways to learn is one on one with a tutor, and Duolingo’s main goal is to replicate that at scale.</p> <p>In our conversation with Burr, we dig into how the business model has changed over time, the properties that make a good tutor, and how those features translate to the AI tutor they’ve built. We also discuss the Duolingo English Test, and the challenges they’ve faced with maintaining the platform while adding languages and courses.</p> <p>Check out the complete show notes for this episode at <a href="https://twimlai.com/go/412">twimlai.com/go/412</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3304</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6a5102db-07ac-4900-b585-78261cad1c02]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1730070552.mp3?updated=1629244818"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Bridging The Gap Between Machine Learning and the Life Sciences with Artur Yakimovich - #411</title>
      <link>http://listen.twimlai.com/411</link>
      <description>Today we’re joined by Artur Yakimovich, Co-Founder at Artificial Intelligence for Life Sciences and a visiting scientist in the Lab for Molecular Cell Biology at University College London. In our conversation with Artur, we explore the gulf that exists between life science researchers and the tools and applications used by computer scientists. 
 While Artur’s background is in viral chemistry, he has since transitioned to a career in computational biology to “see where chemistry stopped, and biology started.” We discuss his work in that middle ground, looking at quite a few of his recent projects applying deep learning and advanced neural networks like capsule networks to his research problems. 
 Finally, we discuss his efforts building the Artificial Intelligence for Life Sciences community, a non-profit organization he founded to bring scientists from different fields together to share ideas and solve interdisciplinary problems. 
 Check out the complete show notes at twimlai.com/go/411.</description>
      <pubDate>Mon, 21 Sep 2020 18:43:40 -0000</pubDate>
      <itunes:title>Bridging The Gap Between Machine Learning and the Life Sciences with Artur Yakimovich</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>411</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/443b5160-ee98-11eb-9502-a7e60462f689/image/TWIML_COVER_800x800_AY.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Artur Yakimovich, Co-Founder at Artificial Intelligence for Life Sciences and a visiting scientist in the Lab for Molecular Cell Biology at University College London. In our conversation with Artur, we explore the gulf that...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Artur Yakimovich, Co-Founder at Artificial Intelligence for Life Sciences and a visiting scientist in the Lab for Molecular Cell Biology at University College London. In our conversation with Artur, we explore the gulf that exists between life science researchers and the tools and applications used by computer scientists. 
 While Artur’s background is in viral chemistry, he has since transitioned to a career in computational biology to “see where chemistry stopped, and biology started.” We discuss his work in that middle ground, looking at quite a few of his recent projects applying deep learning and advanced neural networks like capsule networks to his research problems. 
 Finally, we discuss his efforts building the Artificial Intelligence for Life Sciences community, a non-profit organization he founded to bring scientists from different fields together to share ideas and solve interdisciplinary problems. 
 Check out the complete show notes at twimlai.com/go/411.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Artur Yakimovich, Co-Founder at Artificial Intelligence for Life Sciences and a visiting scientist in the Lab for Molecular Cell Biology at University College London. In our conversation with Artur, we explore the gulf that exists between life science researchers and the tools and applications used by computer scientists. </p> <p>While Artur’s background is in viral chemistry, he has since transitioned to a career in computational biology to “see where chemistry stopped, and biology started.” We discuss his work in that middle ground, looking at quite a few of his recent projects applying deep learning and advanced neural networks like capsule networks to his research problems. </p> <p>Finally, we discuss his efforts building the Artificial Intelligence for Life Sciences community, a non-profit organization he founded to bring scientists from different fields together to share ideas and solve interdisciplinary problems. </p> <p>Check out the complete show notes at <a href="https://twimlai.com/go/411">twimlai.com/go/411</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2425</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b8aae3df-e4c5-40ed-a4f9-714d079069ef]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6100067219.mp3?updated=1629244750"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Understanding Cultural Style Trends with Computer Vision w/ Kavita Bala - #410</title>
      <link>http://listen.twimlai.com/410</link>
      <description>Today we’re joined by Kavita Bala, the Dean of Computing and Information Science at Cornell University. 
 Kavita, whose research explores the overlap of computer vision and computer graphics, joined us to discuss a few of her projects, including GrokStyle, a startup that was recently acquired by Facebook and is currently being deployed across their Marketplace features. We also talk about StreetStyle/GeoStyle, projects focused on using social media data to find style clusters across the globe. 
 Kavita shares her thoughts on the privacy and security implications, progress with integrating privacy-preserving techniques into vision projects like the ones she works on, and what’s next for Kavita’s research.
 The complete show notes for this episode can be found at twimlai.com/go/410.</description>
      <pubDate>Thu, 17 Sep 2020 18:33:55 -0000</pubDate>
      <itunes:title>Understanding Cultural Style Trends with Computer Vision w/ Kavita Bala</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>410</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4476759c-ee98-11eb-9502-bb69e0dab141/image/TWIML_COVER_800x800_KB3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Kavita Bala, the Dean of Computing and Information Science at Cornell University.  Kavita, whose research explores the overlap of computer vision and computer graphics, joined us to discuss a few of her projects, including...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Kavita Bala, the Dean of Computing and Information Science at Cornell University. 
 Kavita, whose research explores the overlap of computer vision and computer graphics, joined us to discuss a few of her projects, including GrokStyle, a startup that was recently acquired by Facebook and is currently being deployed across their Marketplace features. We also talk about StreetStyle/GeoStyle, projects focused on using social media data to find style clusters across the globe. 
 Kavita shares her thoughts on the privacy and security implications, progress with integrating privacy-preserving techniques into vision projects like the ones she works on, and what’s next for Kavita’s research.
 The complete show notes for this episode can be found at twimlai.com/go/410.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Kavita Bala, the Dean of Computing and Information Science at Cornell University. </p> <p>Kavita, whose research explores the overlap of computer vision and computer graphics, joined us to discuss a few of her projects, including GrokStyle, a startup that was recently acquired by Facebook and is currently being deployed across their Marketplace features. We also talk about StreetStyle/GeoStyle, projects focused on using social media data to find style clusters across the globe. </p> <p>Kavita shares her thoughts on the privacy and security implications, progress with integrating privacy-preserving techniques into vision projects like the ones she works on, and what’s next for Kavita’s research.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/410">twimlai.com/go/410</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2289</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cb8c29b0-f12a-4586-978d-8c665af7a3ce]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8195649954.mp3?updated=1629244750"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>That's a VIBE: ML for Human Pose and Shape Estimation with Nikos Athanasiou, Muhammed Kocabas, Michael Black - #409</title>
      <link>http://listen.twimlai.com/409</link>
      <description>Today we’re joined by Nikos Athanasiou, Muhammed Kocabas, Ph.D. students, and Michael Black, Director of the Max Planck Institute for Intelligent Systems. 
 We caught up with the group to explore their paper VIBE: Video Inference for Human Body Pose and Shape Estimation, which they submitted to CVPR 2020. In our conversation, we explore the problem that they’re trying to solve through an adversarial learning framework, the datasets (AMASS) that they’re building upon, the core elements that separate this work from its predecessors in this area of research, and the results they’ve seen through their experiments and testing.
  The complete show notes for this episode can be found at https://twimlai.com/go/409.
 Register for TWIMLfest today!</description>
      <pubDate>Mon, 14 Sep 2020 20:37:40 -0000</pubDate>
      <itunes:title>That's a VIBE: ML for Human Pose and Shape Estimation with Nikos Athanasiou, Muhammed Kocabas, Michael Black</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>409</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/449f827a-ee98-11eb-9502-83c5c5210525/image/TWIML_COVER_800x800_NA_MK_MB_B.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Nikos Athanasiou, Muhammed Kocabas, Ph.D. students, and Michael Black, Director of the Max Planck Institute for Intelligent Systems.  We caught up with the group to explore their paper VIBE: Video Inference for Human Body...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Nikos Athanasiou, Muhammed Kocabas, Ph.D. students, and Michael Black, Director of the Max Planck Institute for Intelligent Systems. 
 We caught up with the group to explore their paper VIBE: Video Inference for Human Body Pose and Shape Estimation, which they submitted to CVPR 2020. In our conversation, we explore the problem that they’re trying to solve through an adversarial learning framework, the datasets (AMASS) that they’re building upon, the core elements that separate this work from its predecessors in this area of research, and the results they’ve seen through their experiments and testing.
  The complete show notes for this episode can be found at https://twimlai.com/go/409.
 Register for TWIMLfest today!</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Nikos Athanasiou, Muhammed Kocabas, Ph.D. students, and Michael Black, Director of the Max Planck Institute for Intelligent Systems. </p> <p>We caught up with the group to explore their paper <em>VIBE: Video Inference for Human Body Pose and Shape Estimation,</em> which they submitted to CVPR 2020. In our conversation, we explore the problem that they’re trying to solve through an adversarial learning framework, the datasets (AMASS) that they’re building upon, the core elements that separate this work from its predecessors in this area of research, and the results they’ve seen through their experiments and testing.</p> <p> The complete show notes for this episode can be found at <a href="https://twimlai.com/go/409">https://twimlai.com/go/409</a>.</p> <p>Register for <a href="https://twimlfest.com">TWIMLfest</a> today!</p>]]>
      </content:encoded>
      <itunes:duration>2599</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[84d92313-1be6-42f0-8f69-58d7581aef91]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2248457543.mp3?updated=1629244756"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>3D Deep Learning with PyTorch 3D w/ Georgia Gkioxari - #408</title>
      <link>https://twimlai.com/pytorch-3d-deep-learning-for-3d-data-with-georgia-gkioxari</link>
      <description>Today we’re joined by Georgia Gkioxari, a research scientist at Facebook AI Research. 
 Georgia was hand-picked by the TWIML community to discuss her work on the recently released open-source library PyTorch3D. In our conversation, Georgia describes her experiences as a computer vision researcher prior to the 2012 deep learning explosion, and how the entire landscape has changed since then. 
 Georgia walks us through the user experience of PyTorch3D, while also detailing who the target audience is, why the library is useful, and how it fits in the broad goal of giving computers better means of perception. Finally, Georgia gives us a look at what it’s like to be a co-chair for CVPR 2021 and the challenges with updating the peer review process for the larger academic conferences. 
 The complete show notes for this episode can be found at twimlai.com/go/408.</description>
      <pubDate>Thu, 10 Sep 2020 17:50:11 -0000</pubDate>
      <itunes:title>3D Deep Learning with PyTorch 3D w/ Georgia Gkioxari</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>408</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/44c778b6-ee98-11eb-9502-8384ec453af5/image/TWIML_COVER_800x800_GG.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Georgia Gkioxari, a research scientist at Facebook AI Research.  Georgia was hand-picked by the TWIML community to discuss her work on the recently released open-source library PyTorch3D. In our conversation, Georgia...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Georgia Gkioxari, a research scientist at Facebook AI Research. 
 Georgia was hand-picked by the TWIML community to discuss her work on the recently released open-source library PyTorch3D. In our conversation, Georgia describes her experiences as a computer vision researcher prior to the 2012 deep learning explosion, and how the entire landscape has changed since then. 
 Georgia walks us through the user experience of PyTorch3D, while also detailing who the target audience is, why the library is useful, and how it fits in the broad goal of giving computers better means of perception. Finally, Georgia gives us a look at what it’s like to be a co-chair for CVPR 2021 and the challenges with updating the peer review process for the larger academic conferences. 
 The complete show notes for this episode can be found at twimlai.com/go/408.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Georgia Gkioxari, a research scientist at Facebook AI Research. </p> <p>Georgia was hand-picked by the TWIML community to discuss her work on the recently released open-source library PyTorch3D. In our conversation, Georgia describes her experiences as a computer vision researcher prior to the 2012 deep learning explosion, and how the entire landscape has changed since then. </p> <p>Georgia walks us through the user experience of PyTorch3D, while also detailing who the target audience is, why the library is useful, and how it fits in the broad goal of giving computers better means of perception. Finally, Georgia gives us a look at what it’s like to be a co-chair for CVPR 2021 and the challenges with updating the peer review process for the larger academic conferences. </p> <p>The complete show notes for this episode can be found at twimlai.com/go/408.</p>]]>
      </content:encoded>
      <itunes:duration>2116</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4bc8576a-f2dc-4914-96cd-f95352a5da93]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7221776189.mp3?updated=1629216927"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>What are the Implications of Algorithmic Thinking? with Michael I. Jordan - #407</title>
      <link>https://twimlai.com/go/407</link>
      <description>Today we’re joined by the legendary Michael I. Jordan, Distinguished Professor in the Departments of EECS and Statistics at UC Berkeley. 
 Michael was gracious enough to connect us all the way from Italy after being named IEEE’s 2020 John von Neumann Medal recipient. In our conversation with Michael, we explore his career path, and how influences from other fields like philosophy shaped it. 
 We spend quite a bit of time discussing his current exploration into the intersection of economics and AI, and how machine learning systems could be used to create value and empowerment across many industries through “markets.” We also touch on the potential of “interacting learning systems” at scale, the valuation of data, the commoditization of human knowledge into computational systems, and much, much more.
 The complete show notes for this episode can be found at twimlai.com/go/407.</description>
      <pubDate>Mon, 07 Sep 2020 11:43:29 -0000</pubDate>
      <itunes:title>What are the Implications of Algorithmic Thinking? with Michael I. Jordan</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>407</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/44f4651a-ee98-11eb-9502-530ffb971fe9/image/TWIML_COVER_800x800_MIJ.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by the legendary Michael I. Jordan, Distinguished Professor in the Departments of EECS and Statistics at UC Berkeley. Michael was gracious enough to connect us all the way from Italy after being named IEEE’s 2020 John von Neumann Medal recipient. In our...</itunes:subtitle>
      <itunes:summary>Today we’re joined by the legendary Michael I. Jordan, Distinguished Professor in the Departments of EECS and Statistics at UC Berkeley. 
 Michael was gracious enough to connect us all the way from Italy after being named IEEE’s 2020 John von Neumann Medal recipient. In our conversation with Michael, we explore his career path, and how influences from other fields like philosophy shaped it. 
 We spend quite a bit of time discussing his current exploration into the intersection of economics and AI, and how machine learning systems could be used to create value and empowerment across many industries through “markets.” We also touch on the potential of “interacting learning systems” at scale, the valuation of data, the commoditization of human knowledge into computational systems, and much, much more.
 The complete show notes for this episode can be found at twimlai.com/go/407.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by the legendary Michael I. Jordan, Distinguished Professor in the Departments of EECS and Statistics at UC Berkeley. </p> <p>Michael was gracious enough to connect us all the way from Italy after being named <a href="https://eecs.berkeley.edu/news/2019/12/michael-jordan-wins-2020-ieee-john-von-neumann-medal">IEEE’s 2020 John von Neumann Medal</a> recipient. In our conversation with Michael, we explore his career path, and how influences from other fields like philosophy shaped it. </p> <p>We spend quite a bit of time discussing his current exploration into the intersection of economics and AI, and how machine learning systems could be used to create value and empowerment across many industries through “markets.” We also touch on the potential of “interacting learning systems” at scale, the valuation of data, the commoditization of human knowledge into computational systems, and much, much more.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/407">twimlai.com/go/407</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3393</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[132cd2ab-938b-4c85-ab42-2b96309a8d7a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6422249205.mp3?updated=1629244839"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Beyond Accuracy: Behavioral Testing of NLP Models with Sameer Singh - #406</title>
      <link>https://twimlai.com/beyond-accuracy-behavioral-testing-of-nlp-models-with-sameer-singh</link>
      <description>Today we’re joined by Sameer Singh, an assistant professor in the department of computer science at UC Irvine. 
 Sameer’s work centers on large-scale and interpretable machine learning applied to information extraction and natural language processing. We caught up with Sameer right after he was awarded the best paper award at ACL 2020 for his work on Beyond Accuracy: Behavioral Testing of NLP Models with CheckList.
 In our conversation, we explore CheckList, the task-agnostic methodology for testing NLP models introduced in the paper. We also discuss how well we understand the cause of pitfalls or failure modes in deep learning models, Sameer’s thoughts on embodied AI, and his work on the now famous LIME paper, which he co-authored alongside Carlos Guestrin. 
 The complete show notes for this episode can be found at twimlai.com/go/406.</description>
      <pubDate>Thu, 03 Sep 2020 19:10:48 -0000</pubDate>
      <itunes:title>Beyond Accuracy: Behavioral Testing of NLP Models with Sameer Singh</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>406</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/451d1c8a-ee98-11eb-9502-53a9da1b66df/image/TWIML_COVER_800x800_SS.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Sameer Singh, an assistant professor in the department of computer science at UC Irvine.  Sameer’s work centers on large-scale and interpretable machine learning applied to information extraction and natural language...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Sameer Singh, an assistant professor in the department of computer science at UC Irvine. 
 Sameer’s work centers on large-scale and interpretable machine learning applied to information extraction and natural language processing. We caught up with Sameer right after he was awarded the best paper award at ACL 2020 for his work on Beyond Accuracy: Behavioral Testing of NLP Models with CheckList.
 In our conversation, we explore CheckList, the task-agnostic methodology for testing NLP models introduced in the paper. We also discuss how well we understand the cause of pitfalls or failure modes in deep learning models, Sameer’s thoughts on embodied AI, and his work on the now famous LIME paper, which he co-authored alongside Carlos Guestrin. 
 The complete show notes for this episode can be found at twimlai.com/go/406.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Sameer Singh, an assistant professor in the department of computer science at UC Irvine. </p> <p>Sameer’s work centers on large-scale and interpretable machine learning applied to information extraction and natural language processing. We caught up with Sameer right after he was awarded the best paper award at ACL 2020 for his work on <em>Beyond Accuracy: Behavioral Testing of NLP Models with CheckList.</em></p> <p>In our conversation, we explore CheckList, the task-agnostic methodology for testing NLP models introduced in the paper. We also discuss how well we understand the cause of pitfalls or failure modes in deep learning models, Sameer’s thoughts on embodied AI, and his work on the now famous LIME paper, which he co-authored alongside Carlos Guestrin. </p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/406">twimlai.com/go/406</a>.</p>]]>
      </content:encoded>
      <itunes:duration>2497</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[dca12a3b-b029-4284-9edc-8ff10c8f6825]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2076318216.mp3?updated=1629244759"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>How Machine Learning Powers On-Demand Logistics at Doordash with Gary Ren - #405</title>
      <link>https://twimlai.com/how-machine-learning-powers-on-demand-logistics-at-doordash-with-gary-ren</link>
      <description>Today we’re joined by Gary Ren, a machine learning engineer for the logistics team at DoorDash. 
 In our conversation, we explore how machine learning powers the entire logistics ecosystem. We discuss the stages of their “marketplace,” and how using ML for optimized route planning and matching affects consumers, dashers, and merchants. We also talk through how they use traditional mathematics, classical machine learning, potential use cases for reinforcement learning frameworks, and challenges to implementing these explorations.  
 The complete show notes for this episode can be found at twimlai.com/go/405!
 Check out our upcoming event at twimlai.com/twimlfest</description>
      <pubDate>Mon, 31 Aug 2020 20:27:27 -0000</pubDate>
      <itunes:title>How Machine Learning Powers On-Demand Logistics at Doordash with Gary Ren</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>405</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4540d698-ee98-11eb-9502-874921426d9a/image/TWIML_COVER_800x800_GR.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Gary Ren, a machine learning engineer for the logistics team at DoorDash.  In our conversation, we explore how machine learning powers the entire logistics ecosystem. We discuss the stages of their “marketplace,” and...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Gary Ren, a machine learning engineer for the logistics team at DoorDash. 
 In our conversation, we explore how machine learning powers the entire logistics ecosystem. We discuss the stages of their “marketplace,” and how using ML for optimized route planning and matching affects consumers, dashers, and merchants. We also talk through how they use traditional mathematics, classical machine learning, potential use cases for reinforcement learning frameworks, and challenges to implementing these explorations.  
 The complete show notes for this episode can be found at twimlai.com/go/405!
 Check out our upcoming event at twimlai.com/twimlfest</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Gary Ren, a machine learning engineer for the logistics team at DoorDash. </p> <p>In our conversation, we explore how machine learning powers the entire logistics ecosystem. We discuss the stages of their “marketplace,” and how using ML for optimized route planning and matching affects consumers, dashers, and merchants. We also talk through how they use traditional mathematics, classical machine learning, potential use cases for reinforcement learning frameworks, and challenges to implementing these explorations.  </p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/405">twimlai.com/go/405</a>!</p> <p>Check out our upcoming event at twimlai.com/twimlfest</p>]]>
      </content:encoded>
      <itunes:duration>2595</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[21208693-8f5d-4cd2-a042-2fdb585920ff]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7000786274.mp3?updated=1629244765"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Machine Learning as a Software Engineering Discipline with Dillon Erb - #404</title>
      <link>https://twimlai.com/machine-learning-as-a-software-engineering-discipline-with-dillon-erb</link>
      <description>Today we’re joined by Dillon Erb, Co-founder &amp; CEO of Paperspace.
 We’ve followed Paperspace from their origins offering GPU-enabled compute resources to data scientists and machine learning developers, to the release of their Jupyter-based Gradient service. Our conversation with Dillon centered on the challenges that organizations face building and scaling repeatable machine learning workflows, and how they’ve done this in their own platform by applying time-tested software engineering practices. 
 We also discuss the importance of reproducibility in production machine learning pipelines, how the processes and tools of software engineering map to the machine learning workflow, and technical issues that ML teams run into when trying to scale the ML workflow.
 The complete show notes for this episode can be found at twimlai.com/go/404.</description>
      <pubDate>Thu, 27 Aug 2020 19:23:44 -0000</pubDate>
      <itunes:title>Machine Learning as a Software Engineering Discipline with Dillon Erb</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>404</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4565cb38-ee98-11eb-9502-afd3376094e5/image/TWIML_COVER_800x800_DE.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Dillon Erb, Co-founder &amp; CEO of Paperspace. We’ve followed Paperspace from their origins offering GPU-enabled compute resources to data scientists and machine learning developers, to the release of their Jupyter-based...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Dillon Erb, Co-founder &amp; CEO of Paperspace.
 We’ve followed Paperspace from their origins offering GPU-enabled compute resources to data scientists and machine learning developers, to the release of their Jupyter-based Gradient service. Our conversation with Dillon centered on the challenges that organizations face building and scaling repeatable machine learning workflows, and how they’ve done this in their own platform by applying time-tested software engineering practices. 
 We also discuss the importance of reproducibility in production machine learning pipelines, how the processes and tools of software engineering map to the machine learning workflow, and technical issues that ML teams run into when trying to scale the ML workflow.
 The complete show notes for this episode can be found at twimlai.com/go/404.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Dillon Erb, Co-founder &amp; CEO of Paperspace.</p> <p>We’ve followed Paperspace from their origins offering GPU-enabled compute resources to data scientists and machine learning developers, to the release of their Jupyter-based Gradient service. Our conversation with Dillon centered on the challenges that organizations face building and scaling repeatable machine learning workflows, and how they’ve done this in their own platform by applying time-tested software engineering practices. </p> <p>We also discuss the importance of reproducibility in production machine learning pipelines, how the processes and tools of software engineering map to the machine learning workflow, and technical issues that ML teams run into when trying to scale the ML workflow.</p> <p>The complete show notes for this episode can be found at twimlai.com/go/404.</p>]]>
      </content:encoded>
      <itunes:duration>2673</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[da8dbe6e-2e86-4ca1-8a7d-3cabd8bfc760]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9780847616.mp3?updated=1629244763"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI and the Responsible Data Economy with Dawn Song - #403</title>
      <link>https://twimlai.com/ai-and-the-responsible-data-economy-with-dawn-song</link>
      <description>Today we’re joined by Professor of Computer Science at UC Berkeley, Dawn Song. Dawn’s research is centered at the intersection of AI, deep learning, security, and privacy. She’s currently focused on bringing these disciplines together with her startup, Oasis Labs. 
 In our conversation, we explore their goals of building a ‘platform for a responsible data economy,’ which would combine techniques like differential privacy, blockchain, and homomorphic encryption. The platform would give consumers more control of their data, and enable businesses to better utilize data in a privacy-preserving and responsible way. 
 We also discuss how to privatize and anonymize data in language models like GPT-3, real-world examples of adversarial attacks and how to train against them, her work on program synthesis to get towards AGI, and her work on privatizing coronavirus contact tracing data.
 The complete show notes for this episode can be found at twimlai.com/go/403.</description>
      <pubDate>Mon, 24 Aug 2020 20:02:06 -0000</pubDate>
      <itunes:title>AI and the Responsible Data Economy with Dawn Song</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>403</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/458e0558-ee98-11eb-9502-bf7b8cef27f4/image/TWIML_COVER_800x800_DS.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Professor of Computer Science at UC Berkeley, Dawn Song. Dawn’s research is centered at the intersection of AI, deep learning, security, and privacy. She’s currently focused on bringing these disciplines together with her...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Professor of Computer Science at UC Berkeley, Dawn Song. Dawn’s research is centered at the intersection of AI, deep learning, security, and privacy. She’s currently focused on bringing these disciplines together with her startup, Oasis Labs. 
 In our conversation, we explore their goals of building a ‘platform for a responsible data economy,’ which would combine techniques like differential privacy, blockchain, and homomorphic encryption. The platform would give consumers more control of their data, and enable businesses to better utilize data in a privacy-preserving and responsible way. 
 We also discuss how to privatize and anonymize data in language models like GPT-3, real-world examples of adversarial attacks and how to train against them, her work on program synthesis to get towards AGI, and her work on privatizing coronavirus contact tracing data.
 The complete show notes for this episode can be found at twimlai.com/go/403.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Professor of Computer Science at UC Berkeley, Dawn Song. Dawn’s research is centered at the intersection of AI, deep learning, security, and privacy. She’s currently focused on bringing these disciplines together with her startup, Oasis Labs. </p> <p>In our conversation, we explore their goals of building a ‘platform for a responsible data economy,’ which would combine techniques like differential privacy, blockchain, and homomorphic encryption. The platform would give consumers more control of their data, and enable businesses to better utilize data in a privacy-preserving and responsible way. </p> <p>We also discuss how to privatize and anonymize data in language models like GPT-3, real-world examples of adversarial attacks and how to train against them, her work on program synthesis to get towards AGI, and her work on privatizing coronavirus contact tracing data.</p> <p>The complete show notes for this episode can be found at <a href="https://twimlai.com/go/403">twimlai.com/go/403</a>.</p>]]>
      </content:encoded>
      <itunes:duration>3204</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cd1ca804-4fd7-4fb4-92b8-d5a556c08cab]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5749012157.mp3?updated=1629244802"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Relational, Object-Centric Agents for Completing Simulated Household Tasks with Wilka Carvalho - #402</title>
      <link>https://twimlai.com/relational-object-centric-agents-for-completing-simulated-household-tasks-with-wilka-carvalho</link>
      <description>Today we’re joined by Wilka Carvalho, a PhD student at the University of Michigan, Ann Arbor. In our conversation, we focus on his paper ‘ROMA: A Relational, Object-Model Learning Agent for Sample-Efficient Reinforcement Learning.’ In the paper, Wilka explores the challenge of object interaction tasks, focusing on everyday, in-home functions. We discuss how he’s addressing the challenge of ‘object-interaction’ tasks and the biggest obstacles he’s run into along the way.</description>
      <pubDate>Thu, 20 Aug 2020 17:52:49 -0000</pubDate>
      <itunes:title>Relational, Object-Centric Agents for Completing Simulated Household Tasks with Wilka Carvalho</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>402</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/45da8860-ee98-11eb-9502-77bed8d79b51/image/TWIML_COVER_800x800_WC.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Wilka Carvalho, a PhD student at the University of Michigan, Ann Arbor. We first met Wilka at the Black in AI workshop at last year’s NeurIPS conference, and finally got a chance to catch up about his latest research, ‘ROMA: A Relational, Object-Model Learning Agent for Sample-Efficient Reinforcement Learning.’...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Wilka Carvalho, a PhD student at the University of Michigan, Ann Arbor. In our conversation, we focus on his paper ‘ROMA: A Relational, Object-Model Learning Agent for Sample-Efficient Reinforcement Learning.’ In the paper, Wilka explores the challenge of object interaction tasks, focusing on everyday, in-home functions. We discuss how he’s addressing the challenge of ‘object-interaction’ tasks and the biggest obstacles he’s run into along the way.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Wilka Carvalho, a PhD student at the University of Michigan, Ann Arbor. In our conversation, we focus on his paper ‘ROMA: A Relational, Object-Model Learning Agent for Sample-Efficient Reinforcement Learning.’ In the paper, Wilka explores the challenge of object interaction tasks, focusing on everyday, in-home functions. We discuss how he’s addressing the challenge of ‘object-interaction’ tasks and the biggest obstacles he’s run into along the way.]]>
      </content:encoded>
      <itunes:duration>2481</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[3e0d522e-5b54-4fdb-8931-80c7117b34d0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4221759076.mp3?updated=1629216926"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Model Explainability Forum - #401</title>
      <link>https://twimlai.com/model-explainability-forum</link>
      <description>Today we bring you the latest Discussion Series: The Model Explainability Forum. Our group of experts and researchers explore the current state of explainability and discuss the key emerging ideas shaping the field. Each guest shares their unique perspective and contributions to thinking about model explainability in a practical way. We explore concepts like stakeholder-driven explainability, adversarial attacks on explainability methods, counterfactual explanations, legal and policy implications, and more.</description>
      <pubDate>Mon, 17 Aug 2020 19:28:01 -0000</pubDate>
      <itunes:title>Model Explainability Forum</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>401</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/46042e04-ee98-11eb-9502-3f7dde77784e/image/2020_Model_Explainability_Forum_-_Square_1.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re bringing you the latest TWIML Discussion Series panel on Model Explainability. The use of machine learning in business, government, and other settings that require users to understand the model’s predictions has exploded in recent...</itunes:subtitle>
      <itunes:summary>Today we bring you the latest Discussion Series: The Model Explainability Forum. Our group of experts and researchers explore the current state of explainability and discuss the key emerging ideas shaping the field. Each guest shares their unique perspective and contributions to thinking about model explainability in a practical way. We explore concepts like stakeholder-driven explainability, adversarial attacks on explainability methods, counterfactual explanations, legal and policy implications, and more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we bring you the latest Discussion Series: The Model Explainability Forum. Our group of experts and researchers explore the current state of explainability and discuss the key emerging ideas shaping the field. Each guest shares their unique perspective and contributions to thinking about model explainability in a practical way. We explore concepts like stakeholder-driven explainability, adversarial attacks on explainability methods, counterfactual explanations, legal and policy implications, and more.]]>
      </content:encoded>
      <itunes:duration>5222</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[7f5f81f8-89a1-4743-9df3-5905f20e9322]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6156138685.mp3?updated=1629217107"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>What NLP Tells Us About COVID-19 and Mental Health with Johannes Eichstaedt - #400</title>
      <link>https://twimlai.com/what-nlp-tells-us-about-covid-19-and-mental-health-with-johannes-eichstaedt</link>
      <description>Today we’re joined by Johannes Eichstaedt, an Assistant Professor of Psychology at Stanford University. In our conversation, we explore how Johannes applies his physics background to a career as a computational social scientist, and some of the major patterns in the data that emerged over the first few months of lockdown, including mental health, social norms, and political patterns. We also explore how Johannes built the process, and the techniques he’s using to collect, sift through, and understand the data.</description>
      <pubDate>Thu, 13 Aug 2020 15:31:37 -0000</pubDate>
      <itunes:title>What NLP Tells Us About COVID-19 and Mental Health with Johannes Eichstaedt</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>400</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/462751d6-ee98-11eb-9502-8b287fa773c1/image/TWIML_COVER_800x800_JE2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Johannes Eichstaedt, an Assistant Professor of Psychology at Stanford University.  Johannes joined us at the outset of the coronavirus pandemic to discuss his use of Facebook and Twitter data to measure the psychological...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Johannes Eichstaedt, an Assistant Professor of Psychology at Stanford University. In our conversation, we explore how Johannes applies his physics background to a career as a computational social scientist, and some of the major patterns in the data that emerged over the first few months of lockdown, including mental health, social norms, and political patterns. We also explore how Johannes built the process, and the techniques he’s using to collect, sift through, and understand the data.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Johannes Eichstaedt, an Assistant Professor of Psychology at Stanford University. In our conversation, we explore how Johannes applies his physics background to a career as a computational social scientist, and some of the major patterns in the data that emerged over the first few months of lockdown, including mental health, social norms, and political patterns. We also explore how Johannes built the process, and the techniques he’s using to collect, sift through, and understand the data.]]>
      </content:encoded>
      <itunes:duration>3524</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[48d08bc4-7807-4a7e-ab08-808a0dec443c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8422742767.mp3?updated=1627362769"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Human-AI Collaboration for Creativity with Devi Parikh - #399</title>
      <link>https://twimlai.com/twiml-talk-399-human-ai-collaboration-for-creativity-with-devi-parikh</link>
      <description>Today we’re joined by Devi Parikh, Associate Professor at the School of Interactive Computing at Georgia Tech, and research scientist at Facebook AI Research (FAIR). In our conversation, we touch on Devi’s definition of creativity, explore multiple ways that AI could impact the creative process for artists, and help humans become more creative. We investigate tools like casual creator for preference prediction, neuro-symbolic generative art, and visual journaling.</description>
      <pubDate>Mon, 10 Aug 2020 19:24:54 -0000</pubDate>
      <itunes:title>Human-AI Collaboration for Creativity with Devi Parikh</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>399</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4644b550-ee98-11eb-9502-57b835e1bd27/image/TWIML_COVER_800x800_DP.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Devi Parikh, Associate Professor at the School of Interactive Computing at Georgia Tech, and research scientist at Facebook AI Research (FAIR).  While Devi’s work is more broadly focused on computer vision applications,...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Devi Parikh, Associate Professor at the School of Interactive Computing at Georgia Tech, and research scientist at Facebook AI Research (FAIR). In our conversation, we touch on Devi’s definition of creativity, explore multiple ways that AI could impact the creative process for artists, and help humans become more creative. We investigate tools like casual creator for preference prediction, neuro-symbolic generative art, and visual journaling.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Devi Parikh, Associate Professor at the School of Interactive Computing at Georgia Tech, and research scientist at Facebook AI Research (FAIR). In our conversation, we touch on Devi’s definition of creativity, explore multiple ways that AI could impact the creative process for artists, and help humans become more creative. We investigate tools like casual creator for preference prediction, neuro-symbolic generative art, and visual journaling.]]>
      </content:encoded>
      <itunes:duration>2672</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5c81b676-d3c1-4507-8943-52d70cacb5aa]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9137995685.mp3?updated=1629216930"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Neural Augmentation for Wireless Communication with Max Welling - #398</title>
      <link>https://twimlai.com/twiml-talk-398-neural-augmentation-for-wireless-communication-with-max-welling</link>
      <description>Today we’re joined by Max Welling, Vice President of Technologies at Qualcomm Netherlands, and Professor at the University of Amsterdam. In our conversation, we explore Max’s work in neural augmentation, and how it’s being deployed. We also discuss his work with federated learning and incorporating the technology on devices to give users more control over the privacy of their personal data. Max also shares his thoughts on quantum mechanics and the future of quantum neural networks for chip design.</description>
      <pubDate>Thu, 06 Aug 2020 19:12:09 -0000</pubDate>
      <itunes:title>Neural Augmentation for Wireless Communication with Max Welling</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>398</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/466b3fe0-ee98-11eb-9502-c30f7985f451/image/TWIML_COVER_800x800_MW.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Max Welling, Vice President of Technologies at Qualcomm Netherlands, and Professor at the University of Amsterdam. In case you missed it, Max joined us last year to discuss his work on   - the 2nd most popular episode of...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Max Welling, Vice President of Technologies at Qualcomm Netherlands, and Professor at the University of Amsterdam. In our conversation, we explore Max’s work in neural augmentation, and how it’s being deployed. We also discuss his work with federated learning and incorporating the technology on devices to give users more control over the privacy of their personal data. Max also shares his thoughts on quantum mechanics and the future of quantum neural networks for chip design.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Max Welling, Vice President of Technologies at Qualcomm Netherlands, and Professor at the University of Amsterdam. In our conversation, we explore Max’s work in neural augmentation, and how it’s being deployed. We also discuss his work with federated learning and incorporating the technology on devices to give users more control over the privacy of their personal data. Max also shares his thoughts on quantum mechanics and the future of quantum neural networks for chip design.]]>
      </content:encoded>
      <itunes:duration>2928</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fc79bc0c-f69a-47b4-b11b-1ffa6ecfc54a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4543370484.mp3?updated=1629216931"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Quantum Machine Learning: The Next Frontier? with Iordanis Kerenidis - #397</title>
      <link>https://twimlai.com/twiml-talk-397-quantum-machine-learning-the-next-frontier-with-iordanis-kerenidis</link>
      <description>Today we're joined by Iordanis Kerenidis, Research Director CNRS Paris and Head of Quantum Algorithms at QC Ware. 

Iordanis was an ICML main conference Keynote speaker on the topic of Quantum ML, and we focus our conversation on his presentation, exploring the prospects and challenges of quantum machine learning, as well as the field’s history, evolution, and future. We’ll also discuss the foundations of quantum computing, and some of the challenges to consider for breaking into the field.</description>
      <pubDate>Tue, 04 Aug 2020 17:09:42 -0000</pubDate>
      <itunes:title>Quantum Machine Learning: The Next Frontier? with Iordanis Kerenidis</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>397</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/468b4f42-ee98-11eb-9502-1be656ba98f3/image/TWIML_COVER_800x800_IK.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we conclude our 2020 ICML coverage joined by Iordanis Kerenidis, Research Director at Centre National de la Recherche Scientifique (CNRS) in Paris, and Head of Quantum Algorithms at QC Ware. Iordanis’ research centers around quantum algorithms...</itunes:subtitle>
      <itunes:summary>Today we're joined by Iordanis Kerenidis, Research Director CNRS Paris and Head of Quantum Algorithms at QC Ware. 

Iordanis was an ICML main conference Keynote speaker on the topic of Quantum ML, and we focus our conversation on his presentation, exploring the prospects and challenges of quantum machine learning, as well as the field’s history, evolution, and future. We’ll also discuss the foundations of quantum computing, and some of the challenges to consider for breaking into the field.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we're joined by Iordanis Kerenidis, Research Director CNRS Paris and Head of Quantum Algorithms at QC Ware. 

Iordanis was an ICML main conference Keynote speaker on the topic of Quantum ML, and we focus our conversation on his presentation, exploring the prospects and challenges of quantum machine learning, as well as the field’s history, evolution, and future. We’ll also discuss the foundations of quantum computing, and some of the challenges to consider for breaking into the field.]]>
      </content:encoded>
      <itunes:duration>3626</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[543d645a-0f95-4df8-8b3e-47dc556cf2e7]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8472259437.mp3?updated=1629217038"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>ML and Epidemiology with Elaine Nsoesie - #396</title>
      <link>https://twimlai.com/twiml-talk-396-ml-and-epidemiology-with-elaine-nsoesie</link>
      <description>Today we continue our ICML series with Elaine Nsoesie, assistant professor at Boston University. In our conversation, we discuss the different ways that machine learning applications can be used to address global health issues, including infectious disease surveillance, and tracking search data for changes in health behavior in African countries. We also discuss COVID-19 epidemiology and the importance of recognizing how the disease is affecting people of different races and economic backgrounds.</description>
      <pubDate>Thu, 30 Jul 2020 18:44:10 -0000</pubDate>
      <itunes:title>ML and Epidemiology with Elaine Nsoesie</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>396</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/46ae4e66-ee98-11eb-9502-939604e05a50/image/TWIML_COVER_800x800_EN.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue our ICML series with Elaine Nsoesie, assistant professor at Boston University.  Elaine presented a keynote talk at the ML for Global Health workshop at ICML 2020, where she shared her research centered around data-driven...</itunes:subtitle>
      <itunes:summary>Today we continue our ICML series with Elaine Nsoesie, assistant professor at Boston University. In our conversation, we discuss the different ways that machine learning applications can be used to address global health issues, including infectious disease surveillance, and tracking search data for changes in health behavior in African countries. We also discuss COVID-19 epidemiology and the importance of recognizing how the disease is affecting people of different races and economic backgrounds.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we continue our ICML series with Elaine Nsoesie, assistant professor at Boston University. In our conversation, we discuss the different ways that machine learning applications can be used to address global health issues, including infectious disease surveillance, and tracking search data for changes in health behavior in African countries. We also discuss COVID-19 epidemiology and the importance of recognizing how the disease is affecting people of different races and economic backgrounds.
]]>
      </content:encoded>
      <itunes:duration>2819</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[8aec7ddd-df9c-4a6f-9175-dad014fc7650]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5177178974.mp3?updated=1629216929"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Language (Technology) Is Power: Exploring the Inherent Complexity of NLP Systems with Hal Daumé III - #395</title>
      <link>https://twimlai.com/twiml-talk-395-language-technology-is-power-exploring-the-inherent-complexity-of-nlp-systems-with-hal-daume-iii</link>
      <description>Today we’re joined by Hal Daume III, professor at the University of Maryland and Co-Chair of the 2020 ICML Conference. We had the pleasure of catching up with Hal ahead of this year's ICML to discuss his research at the intersection of bias, fairness, NLP, and the effects language has on machine learning models, exploring language in two categories as they appear in machine learning models and systems: (1) How we use language to interact with the world, and (2) how we “do” language.</description>
      <pubDate>Mon, 27 Jul 2020 21:06:07 -0000</pubDate>
      <itunes:title>Language (Technology) Is Power: Exploring the Inherent Complexity of NLP Systems with Hal Daumé III</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>395</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/46cfc94c-ee98-11eb-9502-e7597505a045/image/TWIML_COVER_800x800_HD.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Hal Daume III, professor at the University of Maryland, Senior Principal Researcher at Microsoft Research, and Co-Chair of the 2020 ICML Conference.  We had the pleasure of catching up with Hal ahead of this year's ICML to...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Hal Daume III, professor at the University of Maryland and Co-Chair of the 2020 ICML Conference. We had the pleasure of catching up with Hal ahead of this year's ICML to discuss his research at the intersection of bias, fairness, NLP, and the effects language has on machine learning models, exploring language in two categories as they appear in machine learning models and systems: (1) How we use language to interact with the world, and (2) how we “do” language.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Hal Daume III, professor at the University of Maryland and Co-Chair of the 2020 ICML Conference. We had the pleasure of catching up with Hal ahead of this year's ICML to discuss his research at the intersection of bias, fairness, NLP, and the effects language has on machine learning models, exploring language in two categories as they appear in machine learning models and systems: (1) How we use language to interact with the world, and (2) how we “do” language.]]>
      </content:encoded>
      <itunes:duration>3758</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[68e45104-3f2a-416e-a67c-fad91d1dcc94]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7539689898.mp3?updated=1629244856"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Graph ML Research at Twitter with Michael Bronstein - #394</title>
      <link>https://twimlai.com/twiml-talk-394-graph-ml-research-at-twitter-with-michael-bronstein</link>
      <description>Today we’re excited to be joined by return guest Michael Bronstein, Head of Graph Machine Learning at Twitter. In our conversation, we discuss the evolution of the graph machine learning space, his new role at Twitter, and some of the research challenges he’s faced, including scalability and working with dynamic graphs. Michael also dives into his work on differential graph modules for graph CNNs, and the various applications of this work.</description>
      <pubDate>Thu, 23 Jul 2020 19:11:20 -0000</pubDate>
      <itunes:title>Graph ML Research at Twitter with Michael Bronstein</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>394</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/46eebed8-ee98-11eb-9502-6b39177077fa/image/TWIML_COVER_800x800_MB.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re excited to be joined by return guest Michael Bronstein, Professor at Imperial College London, and Head of Graph Machine Learning at Twitter. We last spoke with Michael at NeurIPS in 2017 about .  Since then, his research focus has...</itunes:subtitle>
      <itunes:summary>Today we’re excited to be joined by return guest Michael Bronstein, Head of Graph Machine Learning at Twitter. In our conversation, we discuss the evolution of the graph machine learning space, his new role at Twitter, and some of the research challenges he’s faced, including scalability and working with dynamic graphs. Michael also dives into his work on differential graph modules for graph CNNs, and the various applications of this work.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re excited to be joined by return guest Michael Bronstein, Head of Graph Machine Learning at Twitter. In our conversation, we discuss the evolution of the graph machine learning space, his new role at Twitter, and some of the research challenges he’s faced, including scalability and working with dynamic graphs. Michael also dives into his work on differential graph modules for graph CNNs, and the various applications of this work.]]>
      </content:encoded>
      <itunes:duration>3320</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[78bd34ff-f651-4970-841d-59ddfec065d6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4651892930.mp3?updated=1629244827"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Panel: The Great ML Language (Un)Debate! - #393</title>
      <link>https://twimlai.com/the-great-ml-language-un-debate/</link>
      <description>Today we’re excited to bring ‘The Great ML Language (Un)Debate’ to the podcast! In the latest edition of our series of live discussions, we brought together experts and enthusiasts to discuss both popular and emerging programming languages for machine learning, along with the strengths, weaknesses, and approaches offered by Clojure, JavaScript, Julia, Probabilistic Programming, Python, R, Scala, and Swift. We round out the session with an audience Q&amp;A (58:28).</description>
      <pubDate>Mon, 20 Jul 2020 18:15:33 -0000</pubDate>
      <itunes:title>Panel: The Great ML Language (Un)Debate!</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>393</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4713c32c-ee98-11eb-9502-3751832540a9/image/2020_The_Great_ML_Language_Un-Debate_1.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re excited to bring ‘The Great ML Language (Un)Debate’ to the podcast! In the latest edition of our series of live discussions, we brought together experts and enthusiasts representing an array of both popular and emerging programming...</itunes:subtitle>
      <itunes:summary>Today we’re excited to bring ‘The Great ML Language (Un)Debate’ to the podcast! In the latest edition of our series of live discussions, we brought together experts and enthusiasts to discuss both popular and emerging programming languages for machine learning, along with the strengths, weaknesses, and approaches offered by Clojure, JavaScript, Julia, Probabilistic Programming, Python, R, Scala, and Swift. We round out the session with an audience Q&amp;A (58:28).</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re excited to bring ‘The Great ML Language (Un)Debate’ to the podcast! In the latest edition of our series of live discussions, we brought together experts and enthusiasts to discuss both popular and emerging programming languages for machine learning, along with the strengths, weaknesses, and approaches offered by Clojure, JavaScript, Julia, Probabilistic Programming, Python, R, Scala, and Swift. We round out the session with an audience Q&amp;A (58:28).]]>
      </content:encoded>
      <itunes:duration>5643</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b7830598-bb98-44fd-8e41-a4b5e1229cc1]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4234169208.mp3?updated=1629217105"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>What the Data Tells Us About COVID-19 with Eric Topol - #392</title>
      <link>https://twimlai.com/twiml-talk-392-what-the-data-tells-us-about-covid-19-with-eric-topol</link>
      <description>Today we’re joined by Eric Topol, Director &amp; Founder of the Scripps Research Translational Institute, and author of the book Deep Medicine. We caught up with Eric to talk through what we’ve learned about the coronavirus since its emergence, and the role of tech in understanding and preventing the spread of the disease. We also explore the broader opportunity for medical applications of AI, the promise of personalized medicine, and how techniques like federated learning can offer more privacy in healthcare.</description>
      <pubDate>Thu, 16 Jul 2020 18:12:40 -0000</pubDate>
      <itunes:title>What the Data Tells Us About COVID-19 with Eric Topol</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>392</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/47399b9c-ee98-11eb-9502-33918c0c1a4d/image/TWIML_COVER_800x800_ET.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Eric Topol, Director &amp; Founder of the Scripps Research Translational Institute, and author of the book Deep Medicine.  Eric is also one of the most trusted voices on the COVID-19 pandemic, giving those that follow his...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Eric Topol, Director &amp; Founder of the Scripps Research Translational Institute, and author of the book Deep Medicine. We caught up with Eric to talk through what we’ve learned about the coronavirus since its emergence, and the role of tech in understanding and preventing the spread of the disease. We also explore the broader opportunity for medical applications of AI, the promise of personalized medicine, and how techniques like federated learning can offer more privacy in healthcare.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Eric Topol, Director &amp; Founder of the Scripps Research Translational Institute, and author of the book Deep Medicine. We caught up with Eric to talk through what we’ve learned about the coronavirus since its emergence, and the role of tech in understanding and preventing the spread of the disease. We also explore the broader opportunity for medical applications of AI, the promise of personalized medicine, and how techniques like federated learning can offer more privacy in healthcare.]]>
      </content:encoded>
      <itunes:duration>2553</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c79efdd3-5e92-4899-b2bb-b05c90d7d18c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1767505343.mp3?updated=1629216927"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Case for Hardware-ML Model Co-design with Diana Marculescu - #391</title>
      <link>https://twimlai.com/twiml-talk-391-the-case-for-hardware-ml-model-co-designwith-diana-marculescu</link>
      <description>Today we’re joined by Diana Marculescu, Professor of Electrical and Computer Engineering at UT Austin. 

We caught up with Diana to discuss her work on hardware-aware machine learning. In particular, we explore her keynote, “Putting the “Machine” Back in Machine Learning: The Case for Hardware-ML Model Co-design” from CVPR 2020. We explore how her research group is focusing on making models more efficient so that they run better on current hardware systems, and how they plan on achieving true co-design.</description>
      <pubDate>Mon, 13 Jul 2020 20:03:18 -0000</pubDate>
      <itunes:title>The Case for Hardware-ML Model Co-design with Diana Marculescu</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>391</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/475c11fe-ee98-11eb-9502-5b4c58d1a471/image/TWIML_COVER_800x800_DM.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Diana Marculescu, Department Chair and Professor of Electrical and Computer Engineering at University of Texas at Austin.  We caught up with Diana to discuss her work on hardware-aware machine learning. In particular, we...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Diana Marculescu, Professor of Electrical and Computer Engineering at UT Austin. 

We caught up with Diana to discuss her work on hardware-aware machine learning. In particular, we explore her keynote, “Putting the “Machine” Back in Machine Learning: The Case for Hardware-ML Model Co-design” from CVPR 2020. We explore how her research group is focusing on making models more efficient so that they run better on current hardware systems, and how they plan on achieving true co-design.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Diana Marculescu, Professor of Electrical and Computer Engineering at UT Austin. 

We caught up with Diana to discuss her work on hardware-aware machine learning. In particular, we explore her keynote, “Putting the “Machine” Back in Machine Learning: The Case for Hardware-ML Model Co-design” from CVPR 2020. We explore how her research group is focusing on making models more efficient so that they run better on current hardware systems, and how they plan on achieving true co-design.]]>
      </content:encoded>
      <itunes:duration>2748</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d9c962f6-700b-4a9b-b8e5-69f09cf5102d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2128554218.mp3?updated=1629244763"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Computer Vision for Remote AR with Flora Tasse - #390</title>
      <link>https://twimlai.com/twiml-talk-390-computer-vision-for-remote-ar-with-flora-tasse</link>
      <description>Today we conclude our CVPR coverage joined by Flora Tasse, Head of Computer Vision &amp; AI Research at Streem. Flora, a keynote speaker at the AR/VR workshop, walks us through some of the interesting use cases at the intersection of AI, CV, and AR technologies, her current work and the origin of her company Selerio, which was eventually acquired by Streem, the difficulties associated with building 3D mesh environments, extracting metadata from those environments, the challenges of pose estimation and more.</description>
      <pubDate>Thu, 09 Jul 2020 18:34:44 -0000</pubDate>
      <itunes:title>Computer Vision for Remote AR with Flora Tasse</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>390</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/477f94e4-ee98-11eb-9502-cb73778ceabe/image/TWIML_COVER_800x800_FT.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we conclude our CVPR coverage joined by Flora Tasse, Head of Computer Vision &amp; AI Research at Streem.  Flora, a keynote speaker at the AR/VR workshop at CVPR, walks us through some of the interesting use cases at the intersection of AI,...</itunes:subtitle>
      <itunes:summary>Today we conclude our CVPR coverage joined by Flora Tasse, Head of Computer Vision &amp; AI Research at Streem. Flora, a keynote speaker at the AR/VR workshop, walks us through some of the interesting use cases at the intersection of AI, CV, and AR technologies, her current work and the origin of her company Selerio, which was eventually acquired by Streem, the difficulties associated with building 3D mesh environments, extracting metadata from those environments, the challenges of pose estimation and more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we conclude our CVPR coverage joined by Flora Tasse, Head of Computer Vision &amp; AI Research at Streem. Flora, a keynote speaker at the AR/VR workshop, walks us through some of the interesting use cases at the intersection of AI, CV, and AR technologies, her current work and the origin of her company Selerio, which was eventually acquired by Streem, the difficulties associated with building 3D mesh environments, extracting metadata from those environments, the challenges of pose estimation and more.]]>
      </content:encoded>
      <itunes:duration>2459</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cef24017-ca87-4b37-98f9-cda4c507b0e2]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2340611886.mp3?updated=1629244760"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Learning for Automatic Basketball Video Production with Julian Quiroga - #389</title>
      <link>https://twimlai.com/twiml-talk-389-deep-learning-for-automatic-basketball-video-production-with-julian-quiroga</link>
      <description>Today we're joined by Julian Quiroga, a Computer Vision Team Lead at Genius Sports, to discuss his recent paper “As Seen on TV: Automatic Basketball Video Production using Gaussian-based Actionness and Game States Recognition.” We explore camera setups and angles, detection and localization of figures on the court (players, refs, and of course, the ball), and the role that deep learning plays in the process. We also break down how this work applies to different sports, and the ways that he is looking to improve it.</description>
      <pubDate>Mon, 06 Jul 2020 18:03:13 -0000</pubDate>
      <itunes:title>Deep Learning for Automatic Basketball Video Production with Julian Quiroga</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>389</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/479c82b6-ee98-11eb-9502-e3c56ddbdbc6/image/TWIML_COVER_800x800_JQ_2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we return to our coverage of the 2020 CVPR conference with a conversation with Julian Quiroga, a Computer Vision Team Lead at Genius Sports. Julian presented his recent paper “As Seen on TV: Automatic Basketball Video Production using Gaussian-based Actionness and Game States Recognition” at the CVSports workshop. We jump right into the paper,...</itunes:subtitle>
      <itunes:summary>Today we're joined by Julian Quiroga, a Computer Vision Team Lead at Genius Sports, to discuss his recent paper “As Seen on TV: Automatic Basketball Video Production using Gaussian-based Actionness and Game States Recognition.” We explore camera setups and angles, detection and localization of figures on the court (players, refs, and of course, the ball), and the role that deep learning plays in the process. We also break down how this work applies to different sports, and the ways that he is looking to improve it.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we're joined by Julian Quiroga, a Computer Vision Team Lead at Genius Sports, to discuss his recent paper “As Seen on TV: Automatic Basketball Video Production using Gaussian-based Actionness and Game States Recognition.” We explore camera setups and angles, detection and localization of figures on the court (players, refs, and of course, the ball), and the role that deep learning plays in the process. We also break down how this work applies to different sports, and the ways that he is looking to improve it.]]>
      </content:encoded>
      <itunes:duration>2507</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[55a502a6-f14a-49db-9c00-a7d5f28ad2cb]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9547526381.mp3?updated=1629244758"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>How External Auditing is Changing the Facial Recognition Landscape with Deb Raji - #388</title>
      <link>https://twimlai.com/twiml-talk-388-how-external-auditing-is-changing-the-facial-recognition-landscape-with-deb-raji</link>
      <description>Today we’re taking a break from our CVPR coverage to bring you this interview with Deb Raji, a Technology Fellow at the AI Now Institute.

Recently there have been quite a few major news stories in the AI community, including the self-imposed moratorium on facial recognition tech from Amazon, IBM and Microsoft. In our conversation with Deb, we dig into these stories, discussing the origins of Deb’s work on the Gender Shades project, the harms of facial recognition, and much more.</description>
      <pubDate>Thu, 02 Jul 2020 18:38:22 -0000</pubDate>
      <itunes:title>How External Auditing is Changing the Facial Recognition Landscape with Deb Raji</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>388</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/47c43bd0-ee98-11eb-9502-3f9902759e26/image/TWIML_COVER_800x800_DR.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re taking a break from our CVPR coverage to bring you this interview with Deb Raji, a Technology Fellow at the AI Now Institute at New York University.  Over the past week or two, there have been quite a few major news stories in the...</itunes:subtitle>
      <itunes:summary>Today we’re taking a break from our CVPR coverage to bring you this interview with Deb Raji, a Technology Fellow at the AI Now Institute.

Recently there have been quite a few major news stories in the AI community, including the self-imposed moratorium on facial recognition tech from Amazon, IBM and Microsoft. In our conversation with Deb, we dig into these stories, discussing the origins of Deb’s work on the Gender Shades project, the harms of facial recognition, and much more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re taking a break from our CVPR coverage to bring you this interview with Deb Raji, a Technology Fellow at the AI Now Institute.

Recently there have been quite a few major news stories in the AI community, including the self-imposed moratorium on facial recognition tech from Amazon, IBM and Microsoft. In our conversation with Deb, we dig into these stories, discussing the origins of Deb’s work on the Gender Shades project, the harms of facial recognition, and much more.]]>
      </content:encoded>
      <itunes:duration>4850</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d5544226-6a00-47d3-b5c8-9fcc5d88db77]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2610905352.mp3?updated=1629244905"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI for High-Stakes Decision Making with Hima Lakkaraju - #387</title>
      <link>https://twimlai.com/twiml-talk-387-ai-for-high-stakes-decision-making-with-hima-lakkaraju</link>
      <description>Today we’re joined by Hima Lakkaraju, an Assistant Professor at Harvard University. At CVPR, Hima was a keynote speaker at the Fair, Data-Efficient and Trusted Computer Vision Workshop, where she spoke on Understanding the Perils of Black Box Explanations. Hima talks us through her presentation, which focuses on the unreliability of explainability techniques that center perturbations, such as LIME or SHAP, as well as how attacks on these models can be carried out, and what they look like.</description>
      <pubDate>Mon, 29 Jun 2020 19:44:24 -0000</pubDate>
      <itunes:title>AI for High-Stakes Decision Making with Hima Lakkaraju</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>387</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/47e84af2-ee98-11eb-9502-7fa087ea2d59/image/TWIML_COVER_800x800_HL.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Hima Lakkaraju, an Assistant Professor at Harvard University with appointments in both the Business School and Department of Computer Science.  At CVPR, Hima was a keynote speaker at the Fair, Data-Efficient and Trusted...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Hima Lakkaraju, an Assistant Professor at Harvard University. At CVPR, Hima was a keynote speaker at the Fair, Data-Efficient and Trusted Computer Vision Workshop, where she spoke on Understanding the Perils of Black Box Explanations. Hima talks us through her presentation, which focuses on the unreliability of explainability techniques that center perturbations, such as LIME or SHAP, as well as how attacks on these models can be carried out, and what they look like.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Hima Lakkaraju, an Assistant Professor at Harvard University. At CVPR, Hima was a keynote speaker at the Fair, Data-Efficient and Trusted Computer Vision Workshop, where she spoke on Understanding the Perils of Black Box Explanations. Hima talks us through her presentation, which focuses on the unreliability of explainability techniques that center perturbations, such as LIME or SHAP, as well as how attacks on these models can be carried out, and what they look like.]]>
      </content:encoded>
      <itunes:duration>2718</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e3e05293-ceed-4946-9133-d1c00caa261c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7731939757.mp3?updated=1629244765"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Invariance, Geometry and Deep Neural Networks with Pavan Turaga - #386</title>
      <link>https://twimlai.com/twiml-talk-386-invariance-geometry-and-deep-neural-networks-with-pavan-turaga</link>
      <description>We continue our CVPR coverage with today’s guest, Pavan Turaga, Associate Professor at Arizona State University. Pavan gave a keynote presentation at the Differential Geometry in CV and ML Workshop, speaking on Revisiting Invariants with Geometry and Deep Learning. We go in-depth on Pavan’s research on integrating physics-based principles into computer vision. We also discuss the context of the term “invariant,” and Pavan contextualizes this work in relation to Hinton’s similar Capsule Network research.</description>
      <pubDate>Thu, 25 Jun 2020 17:08:44 -0000</pubDate>
      <itunes:title>Invariance, Geometry and Deep Neural Networks with Pavan Turaga</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>386</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4811d30e-ee98-11eb-9502-67636b0fb2e7/image/TWIML_COVER_800x800_PT.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>We continue our CVPR coverage with today’s guest, Pavan Turaga, Associate Professor at Arizona State University, with dual appointments as the Director of the Geometric Media Lab, and Interim Director of the School of Arts, Media, and Engineering....</itunes:subtitle>
      <itunes:summary>We continue our CVPR coverage with today’s guest, Pavan Turaga, Associate Professor at Arizona State University. Pavan gave a keynote presentation at the Differential Geometry in CV and ML Workshop, speaking on Revisiting Invariants with Geometry and Deep Learning. We go in-depth on Pavan’s research on integrating physics-based principles into computer vision. We also discuss the context of the term “invariant,” and Pavan contextualizes this work in relation to Hinton’s similar Capsule Network research.</itunes:summary>
      <content:encoded>
        <![CDATA[We continue our CVPR coverage with today’s guest, Pavan Turaga, Associate Professor at Arizona State University. Pavan gave a keynote presentation at the Differential Geometry in CV and ML Workshop, speaking on Revisiting Invariants with Geometry and Deep Learning. We go in-depth on Pavan’s research on integrating physics-based principles into computer vision. We also discuss the context of the term “invariant,” and Pavan contextualizes this work in relation to Hinton’s similar Capsule Network research.]]>
      </content:encoded>
      <itunes:duration>2760</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cb478b8f-c050-48c5-b86b-2ca3d5f83e8c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6460572941.mp3?updated=1629244782"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Channel Gating for Cheaper and More Accurate Neural Nets with Babak Ehteshami Bejnordi - #385</title>
      <link>https://twimlai.com/twiml-talk-385-channel-gating-for-cheaper-and-more-accurate-neural-nets-with-babak-ehteshami-bejnordi</link>
      <description>Today we’re joined by Babak Ehteshami Bejnordi, a Research Scientist at Qualcomm.

Babak is currently focused on conditional computation, which is the main driver for today’s conversation. We dig into a few papers in great detail including one from this year’s CVPR conference, Conditional Channel Gated Networks for Task-Aware Continual Learning, covering how gates are used to drive efficiency and accuracy, while decreasing model size, how this research manifests into actual products, and more!</description>
      <pubDate>Mon, 22 Jun 2020 20:19:02 -0000</pubDate>
      <itunes:title>Channel Gating for Cheaper and More Accurate Neural Nets with Babak Ehteshami Bejnordi</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>385</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4832e0bc-ee98-11eb-9502-8b318e3d9add/image/TWIML_COVER_800x800_BEB.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Babak Ehteshami Bejnordi, a Research Scientist at Qualcomm. Babak works closely with former guest Max Welling and is currently focused on conditional computation, which is the main driver for today’s conversation. We dig into...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Babak Ehteshami Bejnordi, a Research Scientist at Qualcomm.

Babak is currently focused on conditional computation, which is the main driver for today’s conversation. We dig into a few papers in great detail including one from this year’s CVPR conference, Conditional Channel Gated Networks for Task-Aware Continual Learning, covering how gates are used to drive efficiency and accuracy, while decreasing model size, how this research manifests into actual products, and more!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Babak Ehteshami Bejnordi, a Research Scientist at Qualcomm.

Babak is currently focused on conditional computation, which is the main driver for today’s conversation. We dig into a few papers in great detail including one from this year’s CVPR conference, Conditional Channel Gated Networks for Task-Aware Continual Learning, covering how gates are used to drive efficiency and accuracy, while decreasing model size, how this research manifests into actual products, and more! 
]]>
      </content:encoded>
      <itunes:duration>3318</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e8086bb7-757c-469c-ae93-2b05261541d8]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6232594150.mp3?updated=1629244831"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Machine Learning Commerce at Square with Marsal Gavalda - #384</title>
      <link>https://twimlai.com/twiml-talk-384-machine-learning-commerce-at-square-with-marsal-gavalda</link>
      <description>Today we’re joined by Marsal Gavalda, head of machine learning for the Commerce platform at Square, where he manages the development of machine learning for various tools and platforms, including marketing, appointments, and above all, risk management. 

We explore how they manage their vast portfolio of projects, and how having an ML and technology focus at the outset of the company has contributed to their success, tips and best practices for internal democratization of ML, and much more.</description>
      <pubDate>Thu, 18 Jun 2020 18:17:41 -0000</pubDate>
      <itunes:title>Building an ML-Forward Commerce Platform at Square with Marsal Gavalda</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>384</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/48603dd2-ee98-11eb-9502-17cd0d9c8b0f/image/TWIML_COVER_800x800_MG.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Marsal Gavalda, head of machine learning for the Commerce platform at Square.  Marsal, who hails from Barcelona, Catalonia, kicks off our conversation by indulging Sam in their shared love for language, which is what put...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Marsal Gavalda, head of machine learning for the Commerce platform at Square, where he manages the development of machine learning for various tools and platforms, including marketing, appointments, and above all, risk management. 

We explore how they manage their vast portfolio of projects, and how having an ML and technology focus at the outset of the company has contributed to their success, tips and best practices for internal democratization of ML, and much more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Marsal Gavalda, head of machine learning for the Commerce platform at Square, where he manages the development of machine learning for various tools and platforms, including marketing, appointments, and above all, risk management. 

We explore how they manage their vast portfolio of projects, and how having an ML and technology focus at the outset of the company has contributed to their success, tips and best practices for internal democratization of ML, and much more.]]>
      </content:encoded>
      <itunes:duration>3091</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e64565dd-337b-43cc-803f-6475845d1b39]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8705819693.mp3?updated=1629244810"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Cell Exploration with ML at the Allen Institute w/ Jianxu Chen - #383</title>
      <link>https://twimlai.com/twiml-talk-383-cell-exploration-with-ml-at-the-allen-institute-w-jianxu-chen</link>
      <description>Today we’re joined by Jianxu Chen, a scientist at the Allen Institute for Cell Science. 

At the latest GTC conference, Jianxu presented his work on the Allen Cell Explorer Toolkit, an open-source project that allows users to do 3D segmentation of intracellular structures in fluorescence microscope images at high resolutions, making the images more accessible for data analysis. We discuss three of the major components of the toolkit: the cell image analyzer, the image generator, and the image visualizer.</description>
      <pubDate>Mon, 15 Jun 2020 20:41:27 -0000</pubDate>
      <itunes:title>Cell Exploration with ML at the Allen Institute w/ Jianxu Chen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>383</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/487d52be-ee98-11eb-9502-73b6593b8b44/image/TWIML_COVER_800x800_JC.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Jianxu Chen, a scientist in the Assay Development group at the Allen Institute for Cell Science.  At the latest GTC conference, Jianxu presented his work on the Allen Cell Explorer Toolkit, an open-source project that...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Jianxu Chen, a scientist at the Allen Institute for Cell Science. 

At the latest GTC conference, Jianxu presented his work on the Allen Cell Explorer Toolkit, an open-source project that allows users to do 3D segmentation of intracellular structures in fluorescence microscope images at high resolutions, making the images more accessible for data analysis. We discuss three of the major components of the toolkit: the cell image analyzer, the image generator, and the image visualizer.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Jianxu Chen, a scientist at the Allen Institute for Cell Science. 

At the latest GTC conference, Jianxu presented his work on the Allen Cell Explorer Toolkit, an open-source project that allows users to do 3D segmentation of intracellular structures in fluorescence microscope images at high resolutions, making the images more accessible for data analysis. We discuss three of the major components of the toolkit: the cell image analyzer, the image generator, and the image visualizer.]]>
      </content:encoded>
      <itunes:duration>2656</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[8d8114dc-1e8a-44d8-b7da-668cde99f907]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2322325633.mp3?updated=1629244782"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Neural Arithmetic Units &amp; Experiences as an Independent ML Researcher with Andreas Madsen - #382</title>
      <link>https://twimlai.com/twiml-talk-382-neural-arithmetic-units-experiences-as-an-independent-ml-researcher-with-andreas-madsen</link>
      <description>Today we’re joined by Andreas Madsen, an independent researcher based in Denmark. While we caught up with Andreas to discuss his ICLR spotlight paper, “Neural Arithmetic Units,” we also spend time exploring his experience as an independent researcher, discussing the difficulties of working with limited resources, the importance of finding peers to collaborate with, and tempering expectations of getting papers accepted to conferences -- something that might take a few tries to get right.</description>
      <pubDate>Thu, 11 Jun 2020 19:12:27 -0000</pubDate>
      <itunes:title>Neural Arithmetic Units &amp; Experiences as an Independent ML Researcher with Andreas Madsen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>382</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/489f93a6-ee98-11eb-9502-6f6805a54ee8/image/TWIML_COVER_800x800_AM3.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Andreas Madsen, an independent researcher based in Denmark whose research focuses on developing interpretable machine learning models.  While we caught up with Andreas to discuss his ICLR spotlight paper, “Neural...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Andreas Madsen, an independent researcher based in Denmark. While we caught up with Andreas to discuss his ICLR spotlight paper, “Neural Arithmetic Units,” we also spend time exploring his experience as an independent researcher, discussing the difficulties of working with limited resources, the importance of finding peers to collaborate with, and tempering expectations of getting papers accepted to conferences -- something that might take a few tries to get right.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Andreas Madsen, an independent researcher based in Denmark. While we caught up with Andreas to discuss his ICLR spotlight paper, “Neural Arithmetic Units,” we also spend time exploring his experience as an independent researcher, discussing the difficulties of working with limited resources, the importance of finding peers to collaborate with, and tempering expectations of getting papers accepted to conferences -- something that might take a few tries to get right.]]>
      </content:encoded>
      <itunes:duration>1908</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5b4862ba-28f4-4f9f-88c7-e2eec81904e0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4650279675.mp3?updated=1629244745"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>2020: A Critical Inflection Point for Responsible AI with Rumman Chowdhury - #381</title>
      <link>https://twimlai.com/twiml-talk-381-2020-a-critical-inflection-point-for-responsible-ai-with-rumman-chowdhury</link>
      <description>Today we’re joined by Rumman Chowdhury, Managing Director and Global Lead of Responsible AI at Accenture. In our conversation with Rumman, we explored questions like: 

• Why is now such a critical inflection point in the application of responsible AI?
• How should engineers and practitioners think about AI ethics and responsible AI?
• Why is AI ethics inherently personal and how can you define your own personal approach?
• Is the implementation of AI governance necessarily authoritarian?</description>
      <pubDate>Mon, 08 Jun 2020 19:52:00 -0000</pubDate>
      <itunes:title>2020: A Critical Inflection Point for Responsible AI with Rumman Chowdhury</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>381</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/48c4308a-ee98-11eb-9502-4bfe2c9953d0/image/TWIML_COVER_800x800_RC_1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Rumman Chowdhury, Managing Director and Global Lead of Responsible Artificial Intelligence at Accenture. In our conversation with Rumman, we explored questions like:   Why is now such a critical inflection point in the...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Rumman Chowdhury, Managing Director and Global Lead of Responsible AI at Accenture. In our conversation with Rumman, we explored questions like: 

• Why is now such a critical inflection point in the application of responsible AI?
• How should engineers and practitioners think about AI ethics and responsible AI?
• Why is AI ethics inherently personal and how can you define your own personal approach?
• Is the implementation of AI governance necessarily authoritarian?</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Rumman Chowdhury, Managing Director and Global Lead of Responsible AI at Accenture. In our conversation with Rumman, we explored questions like: 

• Why is now such a critical inflection point in the application of responsible AI?
• How should engineers and practitioners think about AI ethics and responsible AI?
• Why is AI ethics inherently personal and how can you define your own personal approach?
• Is the implementation of AI governance necessarily authoritarian?]]>
      </content:encoded>
      <itunes:duration>3699</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[78bac2c3-15de-4008-b7ca-2f1581c7edfb]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4994267772.mp3?updated=1629244862"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Panel: Advancing Your Data Science Career During the Pandemic - #380</title>
      <link>https://twimlai.com/twiml-talk-380-panel-advancing-your-data-science-career-during-the-pandemic</link>
      <description>Today we’re joined by Ana Maria Echeverri, Caroline Chavier, Hilary Mason, and Jacqueline Nolis, our guests for the recent Advancing Your Data Science Career During the Pandemic panel.

In this conversation, we explore ways that Data Scientists and ML/AI practitioners can continue to advance their careers despite current challenges. Our panelists provide concrete tips, advice, and direction for those just starting out, those affected by layoffs, and those just wanting to move forward in their careers.</description>
      <pubDate>Thu, 04 Jun 2020 20:02:37 -0000</pubDate>
      <itunes:title>Panel: Advancing Your Data Science Career During the Pandemic</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>380</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/48e8d566-ee98-11eb-9502-875812a5a322/image/3.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Ana Maria Echeverri, Caroline Chavier, Hilary Mason, and Jacqueline Nolis, our guests for the recent Advancing Your Data Science Career During the Pandemic panel. In this conversation, we explore ways that Data Scientists and...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Ana Maria Echeverri, Caroline Chavier, Hilary Mason, and Jacqueline Nolis, our guests for the recent Advancing Your Data Science Career During the Pandemic panel.

In this conversation, we explore ways that Data Scientists and ML/AI practitioners can continue to advance their careers despite current challenges. Our panelists provide concrete tips, advice, and direction for those just starting out, those affected by layoffs, and those just wanting to move forward in their careers.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Ana Maria Echeverri, Caroline Chavier, Hilary Mason, and Jacqueline Nolis, our guests for the recent Advancing Your Data Science Career During the Pandemic panel.

In this conversation, we explore ways that Data Scientists and ML/AI practitioners can continue to advance their careers despite current challenges. Our panelists provide concrete tips, advice, and direction for those just starting out, those affected by layoffs, and those just wanting to move forward in their careers.]]>
      </content:encoded>
      <itunes:duration>4041</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4106f540-ad09-4bd5-8531-f550a1587d50]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9392107820.mp3?updated=1635370901"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>On George Floyd, Empathy, and the Road Ahead</title>
      <link>https://twimlai.com/blacklivesmatter</link>
      <description>Visit twimlai.com/blacklivesmatter for resources to support organizations pushing for social equity like Black Lives Matter, and groups offering relief for those jailed for exercising their rights to peaceful protest. </description>
      <pubDate>Tue, 02 Jun 2020 01:43:07 -0000</pubDate>
      <itunes:title>On George Floyd, Empathy, and the Road Ahead</itunes:title>
      <itunes:episodeType>bonus</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/490f6a3c-ee98-11eb-9502-03e5a0479466/image/TWIML_AI_Podcast_Official_Cover_Art_1400px.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Visit twimlai.com/blacklivesmatter for resources to support organizations pushing for social equity like Black Lives Matter, and groups offering relief for those jailed for exercising their rights to peaceful protest. </itunes:subtitle>
      <itunes:summary>Visit twimlai.com/blacklivesmatter for resources to support organizations pushing for social equity like Black Lives Matter, and groups offering relief for those jailed for exercising their rights to peaceful protest. </itunes:summary>
      <content:encoded>
        <![CDATA[<p>Visit twimlai.com/blacklivesmatter for resources to support organizations pushing for social equity like Black Lives Matter, and groups offering relief for those jailed for exercising their rights to peaceful protest. </p>]]>
      </content:encoded>
      <itunes:duration>379</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d8f5a086-7065-43f1-abff-a997a2f8c265]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2224918382.mp3?updated=1627362775"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Engineering a Less Artificial Intelligence with Andreas Tolias - #379</title>
      <link>https://twimlai.com/twiml-talk-379-engineering-a-less-artificial-intelligence-with-andreas-tolias</link>
      <description>Today we’re joined by Andreas Tolias, Professor of Neuroscience at Baylor College of Medicine.

We caught up with Andreas to discuss his recent perspective piece, “Engineering a Less Artificial Intelligence,” which explores the shortcomings of state-of-the-art learning algorithms in comparison to the brain. The paper also offers several ideas about how neuroscience can lead the quest for better inductive biases by providing useful constraints on representations and network architecture.</description>
      <pubDate>Thu, 28 May 2020 16:29:20 -0000</pubDate>
      <itunes:title>Engineering a Less Artificial Intelligence with Andreas Tolias</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>379</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/49379606-ee98-11eb-9502-67437d8f36cd/image/TWIML_COVER_800x800_AT.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Andreas Tolias, Professor of Neuroscience at Baylor College of Medicine and Principal Investigator of the Neuroscience-Inspired Networks for Artificial Intelligence organization. We caught up with Andreas to discuss his recent...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Andreas Tolias, Professor of Neuroscience at Baylor College of Medicine.

We caught up with Andreas to discuss his recent perspective piece, “Engineering a Less Artificial Intelligence,” which explores the shortcomings of state-of-the-art learning algorithms in comparison to the brain. The paper also offers several ideas about how neuroscience can lead the quest for better inductive biases by providing useful constraints on representations and network architecture.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Andreas Tolias, Professor of Neuroscience at Baylor College of Medicine.

We caught up with Andreas to discuss his recent perspective piece, “Engineering a Less Artificial Intelligence,” which explores the shortcomings of state-of-the-art learning algorithms in comparison to the brain. The paper also offers several ideas about how neuroscience can lead the quest for better inductive biases by providing useful constraints on representations and network architecture.]]>
      </content:encoded>
      <itunes:duration>2781</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ecf0e705-b43d-4587-9570-abf2c739bd49]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4110815043.mp3?updated=1629244782"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Rethinking Model Size: Train Large, Then Compress with Joseph Gonzalez - #378</title>
      <link>https://twimlai.com/twiml-talk-378-rethinking-model-size-train-large-then-compress-with-joseph-gonzalez</link>
      <description>Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley. 

In our conversation, we explore Joseph’s paper “Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers,” which looks at compute-efficient training strategies for Transformer models. We discuss the two main problems being addressed: 1) How can we rapidly iterate on variations in architecture? And 2) If we make models bigger, does that actually improve efficiency?</description>
      <pubDate>Mon, 25 May 2020 13:59:00 -0000</pubDate>
      <itunes:title>Rethinking Model Size: Train Large, Then Compress with Joseph Gonzalez</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>378</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4959d2d4-ee98-11eb-9502-5714fad81317/image/TWIML_COVER_800x800_JG.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley.  Our main focus in the conversation is Joseph’s paper “Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley. 

In our conversation, we explore Joseph’s paper “Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers,” which looks at compute-efficient training strategies for Transformer models. We discuss the two main problems being addressed: 1) How can we rapidly iterate on variations in architecture? And 2) If we make models bigger, does that actually improve efficiency?</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Joseph Gonzalez, Assistant Professor in the EECS department at UC Berkeley. 

In our conversation, we explore Joseph’s paper “Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers,” which looks at compute-efficient training strategies for Transformer models. We discuss the two main problems being addressed: 1) How can we rapidly iterate on variations in architecture? And 2) If we make models bigger, does that actually improve efficiency?]]>
      </content:encoded>
      <itunes:duration>3126</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[07de1ef7-31e9-494b-80a5-dc4fd891f37a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6143641630.mp3?updated=1629244891"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Physics of Data with Alpha Lee - #377</title>
      <link>https://twimlai.com/twiml-talk-377-the-physics-of-data-with-alpha-lee</link>
<description>Today we’re joined by Alpha Lee, Winton Advanced Fellow in the Department of Physics at the University of Cambridge. Our conversation centers around Alpha’s research, which can be broken down into three main categories: data-driven drug discovery, material discovery, and physical analysis of machine learning. We discuss the similarities and differences between drug discovery and material science, his startup PostEra, which offers medicinal chemistry as a service powered by machine learning, and much more.</description>
      <pubDate>Thu, 21 May 2020 18:10:30 -0000</pubDate>
      <itunes:title>The Physics of Data with Alpha Lee</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>377</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/49776d9e-ee98-11eb-9502-d3fc18894dd6/image/TWIML_COVER_800x800_AL.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Alpha Lee, Winton Advanced Fellow in the Department of Physics at the University of Cambridge, and Co-Founder of the startup PostEra. Our conversation centers around Alpha’s research which can be broken down into three main...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Alpha Lee, Winton Advanced Fellow in the Department of Physics at the University of Cambridge. Our conversation centers around Alpha’s research, which can be broken down into three main categories: data-driven drug discovery, material discovery, and physical analysis of machine learning. We discuss the similarities and differences between drug discovery and material science, his startup PostEra, which offers medicinal chemistry as a service powered by machine learning, and much more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Alpha Lee, Winton Advanced Fellow in the Department of Physics at the University of Cambridge. Our conversation centers around Alpha’s research, which can be broken down into three main categories: data-driven drug discovery, material discovery, and physical analysis of machine learning. We discuss the similarities and differences between drug discovery and material science, his startup PostEra, which offers medicinal chemistry as a service powered by machine learning, and much more.]]>
      </content:encoded>
      <itunes:duration>2039</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2d0a82a1-189f-4020-a852-918cf93d62ee]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8035621851.mp3?updated=1629244746"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Is Linguistics Missing from NLP Research? w/ Emily M. Bender - #376 &#129436;</title>
      <link>https://twimlai.com/twiml-talk-376-is-linguistics-missing-from-nlp-research-w-emily-m-bender</link>
      <description>Today we’re joined by Emily M. Bender, Professor of Linguistics at the University of Washington. 

Our discussion covers a lot of ground, but centers on the question, "Is Linguistics Missing from NLP Research?" We explore whether we would be making more progress, on more solid foundations, if more linguists were involved in NLP research, or whether the progress we're making (e.g. with deep learning models like Transformers) is just fine.</description>
      <pubDate>Mon, 18 May 2020 15:19:21 -0000</pubDate>
      <itunes:title>Is Linguistics Missing from NLP Research? w/ Emily M. Bender &#129436;</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>376</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/499fe92c-ee98-11eb-9502-cb7d5763ac2c/image/TWIML_COVER_800x800_EB2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Emily M. Bender, Professor of Linguistics at the University of Washington.  Our discussion covers a lot of ground, but centers on the question, "Is Linguistics Missing from NLP Research?" We explore if we would be making...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Emily M. Bender, Professor of Linguistics at the University of Washington. 

Our discussion covers a lot of ground, but centers on the question, "Is Linguistics Missing from NLP Research?" We explore whether we would be making more progress, on more solid foundations, if more linguists were involved in NLP research, or whether the progress we're making (e.g. with deep learning models like Transformers) is just fine.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Emily M. Bender, Professor of Linguistics at the University of Washington. 

Our discussion covers a lot of ground, but centers on the question, "Is Linguistics Missing from NLP Research?" We explore whether we would be making more progress, on more solid foundations, if more linguists were involved in NLP research, or whether the progress we're making (e.g. with deep learning models like Transformers) is just fine.]]>
      </content:encoded>
      <itunes:duration>3153</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f8ff823f-c6e7-4a08-8dca-dce85647041a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6824881631.mp3?updated=1629244781"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks with Nataniel Ruiz - #375</title>
      <link>https://twimlai.com/twiml-talk-375-disrupting-deepfakes-adversarial-attacks-against-conditional-image-translation-networks-with-nataniel-ruiz</link>
      <description>Today we’re joined by Nataniel Ruiz, a PhD Student at Boston University. 

We caught up with Nataniel to discuss his paper “Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems.” In our conversation, we discuss the concept of this work, as well as some of the challenging parts of implementing this work, potential scenarios in which this could be deployed, and the broader contributions that went into this work.</description>
      <pubDate>Thu, 14 May 2020 15:49:36 -0000</pubDate>
      <itunes:title>Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks with Nataniel Ruiz</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>375</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/49bdaa48-ee98-11eb-9502-dfd1c13d44a1/image/TWIML_COVER_800x800_NR.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Nataniel Ruiz, a PhD Student in the Image &amp; Video Computing group at Boston University.  We caught up with Nataniel to discuss his paper “Disrupting DeepFakes: Adversarial Attacks Against Conditional Image...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Nataniel Ruiz, a PhD Student at Boston University. 

We caught up with Nataniel to discuss his paper “Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems.” In our conversation, we discuss the concept of this work, as well as some of the challenging parts of implementing this work, potential scenarios in which this could be deployed, and the broader contributions that went into this work.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Nataniel Ruiz, a PhD Student at Boston University. 

We caught up with Nataniel to discuss his paper “Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems.” In our conversation, we discuss the concept of this work, as well as some of the challenging parts of implementing this work, potential scenarios in which this could be deployed, and the broader contributions that went into this work.]]>
      </content:encoded>
      <itunes:duration>2552</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0c961165-c7ce-4693-82ee-4a3c2ab4e773]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3102485483.mp3?updated=1629244757"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Understanding the COVID-19 Data Quality Problem with Sherri Rose - #374</title>
      <link>https://twimlai.com/twiml-talk-374-understanding-the-covid-19-data-quality-problem-with-sherri-rose</link>
      <description>Today we’re joined by Sherri Rose, Associate Professor at Harvard Medical School. We cover a lot of ground in our conversation, including the intersection of her research with the current COVID-19 pandemic, the importance of quality in datasets and rigor when publishing papers, and the pitfalls of using causal inference.

We also touch on Sherri’s work in algorithmic fairness, the shift she’s seen in fairness conferences covering these issues in relation to healthcare research, and a few recent papers.</description>
      <pubDate>Mon, 11 May 2020 18:26:42 -0000</pubDate>
      <itunes:title>Understanding the COVID-19 Data Quality Problem with Sherri Rose</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>374</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4a037abe-ee98-11eb-9502-b72988399a87/image/TWIML_COVER_800x800_SR.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Sherri Rose, Associate Professor at Harvard Medical School.  Sherri’s research centers around developing and integrating statistical machine learning approaches to improve human health. We cover a lot of ground in our...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Sherri Rose, Associate Professor at Harvard Medical School. We cover a lot of ground in our conversation, including the intersection of her research with the current COVID-19 pandemic, the importance of quality in datasets and rigor when publishing papers, and the pitfalls of using causal inference.

We also touch on Sherri’s work in algorithmic fairness, the shift she’s seen in fairness conferences covering these issues in relation to healthcare research, and a few recent papers.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Sherri Rose, Associate Professor at Harvard Medical School. We cover a lot of ground in our conversation, including the intersection of her research with the current COVID-19 pandemic, the importance of quality in datasets and rigor when publishing papers, and the pitfalls of using causal inference.

We also touch on Sherri’s work in algorithmic fairness, the shift she’s seen in fairness conferences covering these issues in relation to healthcare research, and a few recent papers.]]>
      </content:encoded>
      <itunes:duration>2657</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2c815544-cf0e-4be0-a1b2-4141b506fc50]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2545630024.mp3?updated=1629244761"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Whys and Hows of Managing Machine Learning Artifacts with Lukas Biewald - #373</title>
      <link>https://twimlai.com/twiml-talk-373-the-whys-and-hows-of-managing-machine-learning-artifacts-with-lukas-biewald</link>
      <description>Today we’re joined by Lukas Biewald, founder and CEO of Weights &amp; Biases, to discuss their new tool Artifacts, an end-to-end pipeline tracker. In our conversation, we explore Artifacts’ place in the broader machine learning tooling ecosystem through the lens of our eBook “The definitive guide to ML Platforms” and how it fits with the W&amp;B model management platform. We also discuss what exactly “Artifacts” are, what the tool is tracking, and take a look at the onboarding process for users.</description>
      <pubDate>Thu, 07 May 2020 14:35:05 -0000</pubDate>
      <itunes:title>The Whys and Hows of Managing Machine Learning Artifacts with Lukas Biewald</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>373</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4a361276-ee98-11eb-9502-bf533888f76e/image/TWIML_COVER_800x800_LB.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Lukas Biewald, founder and CEO of Weights &amp; Biases, to discuss their new tool Artifacts, an end-to-end pipeline tracker. You might remember Lukas from his original interview with us towards the end of last year, for more...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Lukas Biewald, founder and CEO of Weights &amp; Biases, to discuss their new tool Artifacts, an end-to-end pipeline tracker. In our conversation, we explore Artifacts’ place in the broader machine learning tooling ecosystem through the lens of our eBook “The definitive guide to ML Platforms” and how it fits with the W&amp;B model management platform. We also discuss what exactly “Artifacts” are, what the tool is tracking, and take a look at the onboarding process for users.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Lukas Biewald, founder and CEO of Weights &amp; Biases, to discuss their new tool Artifacts, an end-to-end pipeline tracker. In our conversation, we explore Artifacts’ place in the broader machine learning tooling ecosystem through the lens of our eBook “The definitive guide to ML Platforms” and how it fits with the W&amp;B model management platform. We also discuss what exactly “Artifacts” are, what the tool is tracking, and take a look at the onboarding process for users.]]>
      </content:encoded>
      <itunes:duration>3289</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6d46ccf6-0aae-46be-83f9-4229d3f432bc]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2206160224.mp3?updated=1629244825"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Language Modeling and Protein Generation at Salesforce with Richard Socher - #372</title>
      <link>https://twimlai.com/twiml-talk-372-language-modeling-and-protein-generation-at-salesforce-with-richard-socher</link>
      <description>Today we’re joined by Richard Socher, Chief Scientist and Executive VP at Salesforce. Richard and his team have published quite a few great projects lately, including CTRL: A Conditional Transformer Language Model for Controllable Generation, and ProGen, an AI Protein Generator, both of which we cover in-depth in this conversation. We also explore the balancing act between research investments, product requirements, and other priorities at a large product-focused company like Salesforce.</description>
      <pubDate>Mon, 04 May 2020 19:10:44 -0000</pubDate>
      <itunes:title>Language Modeling and Protein Generation at Salesforce with Richard Socher</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>372</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4a5cec20-ee98-11eb-9502-8b1a215c8e04/image/TWIML_COVER_800x800_RS.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Richard Socher, Chief Scientist and Executive VP at Salesforce. Richard, who has been at the forefront of Salesforce’s AI Research since they acquired his startup Metamind in 2016, and his team have been publishing a ton of...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Richard Socher, Chief Scientist and Executive VP at Salesforce. Richard and his team have published quite a few great projects lately, including CTRL: A Conditional Transformer Language Model for Controllable Generation, and ProGen, an AI Protein Generator, both of which we cover in-depth in this conversation. We also explore the balancing act between research investments, product requirements, and other priorities at a large product-focused company like Salesforce.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Richard Socher, Chief Scientist and Executive VP at Salesforce. Richard and his team have published quite a few great projects lately, including CTRL: A Conditional Transformer Language Model for Controllable Generation, and ProGen, an AI Protein Generator, both of which we cover in-depth in this conversation. We also explore the balancing act between research investments, product requirements, and other priorities at a large product-focused company like Salesforce.]]>
      </content:encoded>
      <itunes:duration>2526</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[93ba957b-8a50-466f-9922-4df7b4e9d44f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7771852150.mp3?updated=1629244759"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Research at JPMorgan Chase with Manuela Veloso - #371</title>
      <link>https://twimlai.com/twiml-talk-371-ai-research-at-jp-morgan-chase-with-manuela-veloso</link>
      <description>Today we’re joined by Manuela Veloso, Head of AI Research at JPMorgan Chase. Since moving from CMU to JPMorgan Chase, Manuela and her team established a set of seven lofty research goals. In this conversation we focus on the first three: building AI systems to eradicate financial crime, safely liberate data, and perfect client experience. We also explore Manuela’s background, including her time at CMU in the ’80s, or as she describes it, the “mecca of AI,” and her founding role with RoboCup.</description>
      <pubDate>Thu, 30 Apr 2020 16:21:31 -0000</pubDate>
      <itunes:title>AI Research at JPMorgan Chase with Manuela Veloso</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>371</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4a8499c8-ee98-11eb-9502-13ed0d8c8aed/image/TWIML_COVER_800x800_MV.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Manuela Veloso, Head of AI Research at JPMorgan Chase and Professor at Carnegie Mellon University. Since moving from CMU to JPMorgan Chase, Manuela and her team established a set of seven lofty research goals. In this...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Manuela Veloso, Head of AI Research at JPMorgan Chase. Since moving from CMU to JPMorgan Chase, Manuela and her team established a set of seven lofty research goals. In this conversation we focus on the first three: building AI systems to eradicate financial crime, safely liberate data, and perfect client experience. We also explore Manuela’s background, including her time at CMU in the ’80s, or as she describes it, the “mecca of AI,” and her founding role with RoboCup.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Manuela Veloso, Head of AI Research at JPMorgan Chase. Since moving from CMU to JPMorgan Chase, Manuela and her team established a set of seven lofty research goals. In this conversation we focus on the first three: building AI systems to eradicate financial crime, safely liberate data, and perfect client experience. We also explore Manuela’s background, including her time at CMU in the ’80s, or as she describes it, the “mecca of AI,” and her founding role with RoboCup.]]>
      </content:encoded>
      <itunes:duration>2792</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1e979595-a6fd-43c8-ace5-9192d265a07b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1953844259.mp3?updated=1629244764"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Panel: Responsible Data Science in the Fight Against COVID-19 - #370</title>
      <link>https://twimlai.com/rdscovid</link>
      <description>In this discussion, we explore how data scientists and ML/AI practitioners can responsibly contribute to the fight against coronavirus and COVID-19. Our four expert panelists, Rex Douglass, Rob Munro, Lea Shanley, and Gigi Yuen-Reed, shared a ton of valuable insight on the best ways to get involved.

We've gathered all the resources our panelists discussed during the conversation; you can find them at twimlai.com/talk/370.</description>
      <pubDate>Wed, 29 Apr 2020 19:26:10 -0000</pubDate>
      <itunes:title>Panel: Responsible Data Science in the Fight Against COVID-19</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>370</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4aae5cf4-ee98-11eb-9502-9b8128674cd0/image/Responsible_Data_Science_in_the_Fight__Against_COVID-19_-_Promo_800x800_1.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Since the beginning of the coronavirus pandemic, we’ve seen an outpouring of interest on the part of data scientists and AI practitioners wanting to make a contribution. At the same time, some of the resulting efforts have been criticized for...</itunes:subtitle>
      <itunes:summary>In this discussion, we explore how data scientists and ML/AI practitioners can responsibly contribute to the fight against coronavirus and COVID-19. Our four expert panelists, Rex Douglass, Rob Munro, Lea Shanley, and Gigi Yuen-Reed, shared a ton of valuable insight on the best ways to get involved.

We've gathered all the resources our panelists discussed during the conversation; you can find them at twimlai.com/talk/370.</itunes:summary>
      <content:encoded>
        <![CDATA[In this discussion, we explore how data scientists and ML/AI practitioners can responsibly contribute to the fight against coronavirus and COVID-19. Our four expert panelists, Rex Douglass, Rob Munro, Lea Shanley, and Gigi Yuen-Reed, shared a ton of valuable insight on the best ways to get involved.

We've gathered all the resources our panelists discussed during the conversation; you can find them at twimlai.com/talk/370.]]>
      </content:encoded>
      <itunes:duration>3484</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[99627fc5-e0aa-44d3-a3e0-de15447908e8]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5573551753.mp3?updated=1629216933"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Adversarial Examples Are Not Bugs, They Are Features with Aleksander Madry - #369</title>
      <link>https://twimlai.com/twiml-talk-369-adversarial-examples-are-not-bugs-they-are-features-with-aleksander-madry</link>
      <description>Today we’re joined by Aleksander Madry, Faculty in the MIT EECS Department, to discuss his paper “Adversarial Examples Are Not Bugs, They Are Features.” In our conversation, we talk through what we expect these systems to do versus what they’re actually doing, whether we’re able to characterize these patterns and what makes them compelling, and whether the insights from the paper will help inform opinions on either side of the deep learning debate.</description>
      <pubDate>Mon, 27 Apr 2020 13:18:57 -0000</pubDate>
      <itunes:title>Adversarial Examples Are Not Bugs, They Are Features with Aleksander Madry</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>369</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4ad2a33e-ee98-11eb-9502-07079caf4901/image/TWIML_COVER_800x800_AM2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Aleksander Madry, Faculty in the MIT EECS Department, a member of CSAIL and of the Theory of Computation group. Aleksander, whose work is more on the theoretical side of machine learning research, walks us through his paper...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Aleksander Madry, Faculty in the MIT EECS Department, to discuss his paper “Adversarial Examples Are Not Bugs, They Are Features.” In our conversation, we talk through what we expect these systems to do versus what they’re actually doing, whether we’re able to characterize these patterns and what makes them compelling, and whether the insights from the paper will help inform opinions on either side of the deep learning debate.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Aleksander Madry, Faculty in the MIT EECS Department, to discuss his paper “Adversarial Examples Are Not Bugs, They Are Features.” In our conversation, we talk through what we expect these systems to do versus what they’re actually doing, whether we’re able to characterize these patterns and what makes them compelling, and whether the insights from the paper will help inform opinions on either side of the deep learning debate.]]>
      </content:encoded>
      <itunes:duration>2461</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[54594d73-0482-4030-bb41-553cb50a51b2]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3491846149.mp3?updated=1629244759"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI for Social Good: Why "Good" isn't Enough with Ben Green - #368</title>
      <link>https://twimlai.com/twiml-talk-368-good-isnt-good-enough-with-ben-green</link>
      <description>Today we’re joined by Ben Green, PhD Candidate at Harvard and Research Fellow at the AI Now Institute at NYU. 

Ben’s research focuses on the social and policy impacts of data science, particularly algorithmic fairness and the criminal justice system. We discuss his paper “‘Good’ Isn’t Good Enough,” which explores the two things he feels are missing from data science and machine learning research: a grounded definition of what “good” actually means, and a “theory of change.”</description>
      <pubDate>Thu, 23 Apr 2020 12:58:56 -0000</pubDate>
      <itunes:title>AI for Social Good: Why "Good" isn't Enough with Ben Green</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>368</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4af89580-ee98-11eb-9502-cf0bf513539b/image/TWIML_COVER_800x800_BG1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Ben Green, PhD Candidate at Harvard, Affiliate at the Berkman Klein Center for Internet &amp; Society at Harvard, Research Fellow at the AI Now Institute at NYU.  Ben’s research is focused on social and policy impacts of...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Ben Green, PhD Candidate at Harvard and Research Fellow at the AI Now Institute at NYU. 

Ben’s research focuses on the social and policy impacts of data science, particularly algorithmic fairness and the criminal justice system. We discuss his paper “‘Good’ Isn’t Good Enough,” which explores the two things he feels are missing from data science and machine learning research: a grounded definition of what “good” actually means, and a “theory of change.”</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Ben Green, PhD Candidate at Harvard and Research Fellow at the AI Now Institute at NYU. 

Ben’s research focuses on the social and policy impacts of data science, particularly algorithmic fairness and the criminal justice system. We discuss his paper “‘Good’ Isn’t Good Enough,” which explores the two things he feels are missing from data science and machine learning research: a grounded definition of what “good” actually means, and a “theory of change.”]]>
      </content:encoded>
      <itunes:duration>2499</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5b65b80c-5d59-4e91-ba1e-70ff0eb2659c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3362746667.mp3?updated=1629244768"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Evolution of Evolutionary AI with Risto Miikkulainen - #367</title>
      <link>https://twimlai.com/twiml-talk-367-the-evolution-of-evolutionary-ai-with-risto-miikkulainen</link>
      <description>Today we’re joined by Risto Miikkulainen, Associate VP of Evolutionary AI at Cognizant AI. Risto joined us back on episode #47 to discuss evolutionary algorithms, and today we get an update on the latest developments in the field. In our conversation, we discuss use cases for evolutionary AI and the latest approaches to deploying evolutionary models. We also explore his paper “Better Future through AI: Avoiding Pitfalls and Guiding AI Towards its Full Potential,” which digs into the historical evolution of AI.</description>
      <pubDate>Mon, 20 Apr 2020 12:58:17 -0000</pubDate>
      <itunes:title>The Evolution of Evolutionary AI with Risto Miikkulainen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>367</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4b196fd0-ee98-11eb-9502-b329e85bf10f/image/TWIML_COVER_800x800_RM.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Risto Miikkulainen, Associate VP of Evolutionary AI at Cognizant AI, and Professor of Computer Science at UT Austin. Risto joined us back on episode #47 to discuss evolutionary algorithms, and today we do an update of sorts on what is...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Risto Miikkulainen, Associate VP of Evolutionary AI at Cognizant AI. Risto joined us back on episode #47 to discuss evolutionary algorithms, and today we get an update on the latest developments in the field. In our conversation, we discuss use cases for evolutionary AI and the latest approaches to deploying evolutionary models. We also explore his paper “Better Future through AI: Avoiding Pitfalls and Guiding AI Towards its Full Potential,” which digs into the historical evolution of AI.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Risto Miikkulainen, Associate VP of Evolutionary AI at Cognizant AI. Risto joined us back on episode #47 to discuss evolutionary algorithms, and today we get an update on the latest developments in the field. In our conversation, we discuss use cases for evolutionary AI and the latest approaches to deploying evolutionary models. We also explore his paper “Better Future through AI: Avoiding Pitfalls and Guiding AI Towards its Full Potential,” which digs into the historical evolution of AI.]]>
      </content:encoded>
      <itunes:duration>2277</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[38861b37-fb58-4e59-aacc-f261a20022b4]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6015357273.mp3?updated=1629244747"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Neural Architecture Search and Google’s New AutoML Zero with Quoc Le - #366</title>
      <link>https://twimlai.com/twiml-talk-366-neural-architecture-search-and-googles-new-automl-zero-with-quoc-le</link>
      <description>Today we’re super excited to share our recent conversation with Quoc Le, a research scientist at Google. Quoc joins us to discuss his work on Google’s AutoML Zero, semi-supervised learning, and the development of Meena, the multi-turn conversational chatbot. 

This was a really fun conversation, so much so that we decided to release the video! On April 16th at 12 pm PT, Quoc and Sam will premiere the video version of this interview on YouTube and answer your questions in the chat. We’ll see you there!</description>
      <pubDate>Thu, 16 Apr 2020 05:00:00 -0000</pubDate>
      <itunes:title>Neural Architecture Search and Google’s New AutoML Zero with Quoc Le</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>366</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4b3ec0b4-ee98-11eb-9502-d79a4f230f36/image/TWIML_COVER_800x800_QL.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re super excited to share our recent conversation with Quoc Le, a research scientist at Google, on the Brain team. Quoc has been very busy recently with his work on Google’s AutoML Zero, which details significant advances in automated...</itunes:subtitle>
      <itunes:summary>Today we’re super excited to share our recent conversation with Quoc Le, a research scientist at Google. Quoc joins us to discuss his work on Google’s AutoML Zero, semi-supervised learning, and the development of Meena, the multi-turn conversational chatbot. 

This was a really fun conversation, so much so that we decided to release the video! On April 16th at 12 pm PT, Quoc and Sam will premiere the video version of this interview on YouTube and answer your questions in the chat. We’ll see you there!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re super excited to share our recent conversation with Quoc Le, a research scientist at Google. Quoc joins us to discuss his work on Google’s AutoML Zero, semi-supervised learning, and the development of Meena, the multi-turn conversational chatbot. 

This was a really fun conversation, so much so that we decided to release the video! On April 16th at 12 pm PT, Quoc and Sam will premiere the video version of this interview on YouTube and answer your questions in the chat. We’ll see you there!]]>
      </content:encoded>
      <itunes:duration>3253</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b4183bf4-c8b6-43bd-836f-a4c162c652b0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7598954815.mp3?updated=1629244798"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Automating Electronic Circuit Design with Deep RL w/ Karim Beguir - #365</title>
      <link>https://twimlai.com/twiml-talk-365-automating-electronic-circuit-design-with-deep-rl-w-karim-beguir</link>
      <description>Today we’re joined by return guest Karim Beguir, Co-Founder and CEO of InstaDeep. In our conversation, we chat with Karim about InstaDeep’s new offering, DeepPCB, an end-to-end platform for automated circuit board design. We discuss challenges and problems with some of the original iterations of auto-routers, how Karim defines circuit board “complexity,” the differences between using reinforcement learning for games and for this use case, and their spotlight paper from NeurIPS.</description>
      <pubDate>Mon, 13 Apr 2020 14:23:00 -0000</pubDate>
      <itunes:title>Automating Electronic Circuit Design with Deep RL w/ Karim Beguir</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>365</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4b69e758-ee98-11eb-9502-1794b41820a6/image/TWIML_COVER_800x800_KB2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by return guest Karim Beguir, Co-Founder and CEO of InstaDeep. We originally spoke with Karim about InstaDeep’s work back on episode 302, check that episode out for a full brief of Karim’s background. In today’s...</itunes:subtitle>
      <itunes:summary>Today we’re joined by return guest Karim Beguir, Co-Founder and CEO of InstaDeep. In our conversation, we chat with Karim about InstaDeep’s new offering, DeepPCB, an end-to-end platform for automated circuit board design. We discuss challenges and problems with some of the original iterations of auto-routers, how Karim defines circuit board “complexity,” the differences between using reinforcement learning for games and for this use case, and their spotlight paper from NeurIPS.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by return guest Karim Beguir, Co-Founder and CEO of InstaDeep. In our conversation, we chat with Karim about InstaDeep’s new offering, DeepPCB, an end-to-end platform for automated circuit board design. We discuss challenges and problems with some of the original iterations of auto-routers, how Karim defines circuit board “complexity,” the differences between using reinforcement learning for games and for this use case, and their spotlight paper from NeurIPS.</p>]]>
      </content:encoded>
      <itunes:duration>2104</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[918002f8-42c5-470c-8674-5c3ac781fb07]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3022781937.mp3?updated=1629244747"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Neural Ordinary Differential Equations with David Duvenaud - #364</title>
      <link>https://twimlai.com/twiml-talk-364-neural-ordinary-differential-equations-with-david-duvenaud</link>
      <description>Today we’re joined by David Duvenaud, Assistant Professor at the University of Toronto, to discuss his research on Neural Ordinary Differential Equations, a type of continuous-depth neural network. In our conversation, we talk through a few of David’s papers on the subject. We discuss the problem that David is trying to solve with this research, the potential that ODEs have to replace “the backbone” of the neural networks that are trained today, and David’s approach to engineering.</description>
      <pubDate>Thu, 09 Apr 2020 01:47:00 -0000</pubDate>
      <itunes:title>Neural Ordinary Differential Equations with David Duvenaud</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>364</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4b942e3c-ee98-11eb-9502-3f8e92336e19/image/TWIML_COVER_800x800_DD.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by David Duvenaud, Assistant Professor at the University of Toronto. David, who joined us back in January ’18, is back to talk about the various papers that have come out of his lab over the last year and change,...</itunes:subtitle>
      <itunes:summary>Today we’re joined by David Duvenaud, Assistant Professor at the University of Toronto, to discuss his research on Neural Ordinary Differential Equations, a type of continuous-depth neural network. In our conversation, we talk through a few of David’s papers on the subject. We discuss the problem that David is trying to solve with this research, the potential that ODEs have to replace “the backbone” of the neural networks that are trained today, and David’s approach to engineering.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by David Duvenaud, Assistant Professor at the University of Toronto, to discuss his research on Neural Ordinary Differential Equations, a type of continuous-depth neural network. In our conversation, we talk through a few of David’s papers on the subject. We discuss the problem that David is trying to solve with this research, the potential that ODEs have to replace “the backbone” of the neural networks that are trained today, and David’s approach to engineering.</p>]]>
      </content:encoded>
      <itunes:duration>2962</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[65750c00-9871-43bd-ba86-cae5fb14d31f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1693728183.mp3?updated=1629226629"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Measure and Mismeasure of Fairness with Sharad Goel - #363</title>
      <link>https://twimlai.com/twiml-talk-363-the-measure-and-mismeasure-of-fairness-with-sharad-goel</link>
      <description>Today we’re joined by Sharad Goel, Assistant Professor at Stanford. Sharad, who also has appointments in the computer science, sociology, and law departments, has spent recent years focused on applying ML to understanding and improving public policy. In our conversation, we discuss Sharad’s extensive work on discriminatory policing, and The Stanford Open Policing Project. We also dig into Sharad’s paper “The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning.”</description>
      <pubDate>Mon, 06 Apr 2020 04:00:00 -0000</pubDate>
      <itunes:title>The Measure and Mismeasure of Fairness with Sharad Goel</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>363</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4bb75a10-ee98-11eb-9502-d399a7eb686b/image/TWIML_COVER_800x800_SG.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Sharad Goel, Assistant Professor in the management science &amp; engineering department at Stanford. Sharad, who also has appointments in the computer science, sociology, and law departments, has spent the recent years focused...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Sharad Goel, Assistant Professor at Stanford. Sharad, who also has appointments in the computer science, sociology, and law departments, has spent recent years focused on applying ML to understanding and improving public policy. In our conversation, we discuss Sharad’s extensive work on discriminatory policing, and The Stanford Open Policing Project. We also dig into Sharad’s paper “The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning.”</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Sharad Goel, Assistant Professor at Stanford. Sharad, who also has appointments in the computer science, sociology, and law departments, has spent recent years focused on applying ML to understanding and improving public policy. In our conversation, we discuss Sharad’s extensive work on discriminatory policing, and The Stanford Open Policing Project. We also dig into Sharad’s paper “The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning.”</p>]]>
      </content:encoded>
      <itunes:duration>2909</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f25fa069-019e-4a07-8a8b-6a97e3b06f1c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6623867771.mp3?updated=1629244872"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Simulating the Future of Traffic with RL w/ Cathy Wu - #362</title>
      <link>https://twimlai.com/twiml-talk-362-simulating-the-future-of-traffic-with-rl-w-cathy-wu</link>
      <description>Today we’re joined by Cathy Wu, Assistant Professor at MIT. We had the pleasure of catching up with Cathy to discuss her work applying RL to mixed autonomy traffic, specifically, understanding the potential impact autonomous vehicles would have on various mixed-autonomy scenarios. To better understand this, Cathy built multiple RL simulations, including track, intersection, and merge scenarios. We talk through how each scenario is set up, how human drivers are modeled, the results, and much more.</description>
      <pubDate>Thu, 02 Apr 2020 05:13:26 -0000</pubDate>
      <itunes:title>Simulating the Future of Traffic with RL w/ Cathy Wu</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>362</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4bd8d230-ee98-11eb-9502-2739ff4b57d0/image/TWIML_COVER_800x800_CW.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Cathy Wu, Gilbert W. Winslow Career Development Assistant Professor in the department of Civil and Environmental Engineering at MIT. We had the pleasure of catching up with Cathy at NeurIPS to discuss her talk “Mixed Autonomy...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Cathy Wu, Assistant Professor at MIT. We had the pleasure of catching up with Cathy to discuss her work applying RL to mixed autonomy traffic, specifically, understanding the potential impact autonomous vehicles would have on various mixed-autonomy scenarios. To better understand this, Cathy built multiple RL simulations, including track, intersection, and merge scenarios. We talk through how each scenario is set up, how human drivers are modeled, the results, and much more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Cathy Wu, Assistant Professor at MIT. We had the pleasure of catching up with Cathy to discuss her work applying RL to mixed autonomy traffic, specifically, understanding the potential impact autonomous vehicles would have on various mixed-autonomy scenarios. To better understand this, Cathy built multiple RL simulations, including track, intersection, and merge scenarios. We talk through how each scenario is set up, how human drivers are modeled, the results, and much more.]]>
      </content:encoded>
      <itunes:duration>2112</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4b9a1116-e322-4135-8a63-3777a605acff]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1969393309.mp3?updated=1629244746"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Consciousness and COVID-19 with Yoshua Bengio - #361</title>
      <link>https://twimlai.com/twiml-talk-361-consciousness-and-covid-19-with-yoshua-bengio</link>
      <description>Today we’re joined by Yoshua Bengio, one of the most cited computer scientists in the world, Professor at the University of Montreal, and the Founder and Scientific Director of MILA. We caught up with Yoshua to explore his work on consciousness, including how Yoshua defines consciousness and his paper “The Consciousness Prior,” as well as his current endeavors building a COVID-19 tracing application and using ML to propose experimental candidate drugs.</description>
      <pubDate>Mon, 30 Mar 2020 05:00:00 -0000</pubDate>
      <itunes:title>Consciousness and COVID-19 with Yoshua Bengio</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>361</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4bf4300c-ee98-11eb-9502-3f9414c34de3/image/TWIML_COVER_800x800_YB.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by one of the most cited computer scientists in the world, if not the most cited: Yoshua Bengio. Yoshua is a Professor in the Department of Computer Science and Operations Research at the University of Montreal and the Founder and Scientific...</itunes:subtitle>
      <itunes:summary>Today we’re joined by one of the most cited computer scientists in the world, if not the most cited: Yoshua Bengio, Professor at the University of Montreal and the Founder and Scientific Director of MILA. We caught up with Yoshua to explore his work on consciousness, including how Yoshua defines consciousness, his paper “The Consciousness Prior,” as well as his current endeavor in building a COVID-19 tracing application, and the use of ML to propose experimental candidate drugs.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by one of the most cited computer scientists in the world, if not the most cited: Yoshua Bengio, Professor at the University of Montreal and the Founder and Scientific Director of MILA. We caught up with Yoshua to explore his work on consciousness, including how Yoshua defines consciousness, his paper “The Consciousness Prior,” as well as his current endeavor in building a COVID-19 tracing application, and the use of ML to propose experimental candidate drugs.</p>]]>
      </content:encoded>
      <itunes:duration>2944</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e48b97a6-9b73-45b8-a5e3-bced5a544f67]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7643639208.mp3?updated=1629244868"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Geometry-Aware Neural Rendering with Josh Tobin - #360</title>
      <link>https://twimlai.com/twiml-talk-360-geometry-aware-neural-rendering-with-josh-tobin</link>
      <description>Today we’re joined by Josh Tobin, Co-Organizer of the machine learning training program Full Stack Deep Learning. We had the pleasure of sitting down with Josh prior to his presentation of his paper Geometry-Aware Neural Rendering at NeurIPS. Josh's goal is to develop implicit scene understanding, building upon DeepMind's neural scene representation and rendering work. We discuss challenges, the various datasets used to train his model, the similarities between VAE training and his process, and more.</description>
      <pubDate>Thu, 26 Mar 2020 05:00:00 -0000</pubDate>
      <itunes:title>Geometry-Aware Neural Rendering with Josh Tobin</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>360</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4c376836-ee98-11eb-9502-5f06f0048ed2/image/TWIML_COVER_800x800_JT.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Josh Tobin, Co-Organizer of the machine learning training program Full Stack Deep Learning, and more recently, the founder of a stealth startup. We had the pleasure of sitting down with Josh prior to his presentation of his...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Josh Tobin, Co-Organizer of the machine learning training program Full Stack Deep Learning. We had the pleasure of sitting down with Josh prior to his presentation of his paper Geometry-Aware Neural Rendering at NeurIPS. Josh's goal is to develop implicit scene understanding, building upon DeepMind's neural scene representation and rendering work. We discuss challenges, the various datasets used to train his model, the similarities between VAE training and his process, and more.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Josh Tobin, Co-Organizer of the machine learning training program Full Stack Deep Learning. We had the pleasure of sitting down with Josh prior to his presentation of his paper Geometry-Aware Neural Rendering at NeurIPS. Josh's goal is to develop implicit scene understanding, building upon DeepMind's neural scene representation and rendering work. We discuss challenges, the various datasets used to train his model, the similarities between VAE training and his process, and more.</p>]]>
      </content:encoded>
      <itunes:duration>1613</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fd918a22-619d-4cb1-b82e-911a8c7bc6b7]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2983706106.mp3?updated=1629244737"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Third Wave of Robotic Learning with Ken Goldberg - #359</title>
      <link>https://twimlai.com/twiml-talk-359-the-third-wave-of-robotic-learning-with-ken-goldberg</link>
      <description>Today we’re joined by Ken Goldberg, professor of engineering at UC Berkeley, focused on robotic learning. In our conversation with Ken, we chat about some of the challenges that arise when working on robotic grasping, including uncertainty in perception, control, and physics. We also discuss his view on the role of physics in robotic learning, and his thoughts on potential robot use cases, from the use of robots in assisting in telemedicine, agriculture, and even robotic Covid-19 testing.</description>
      <pubDate>Mon, 23 Mar 2020 02:47:00 -0000</pubDate>
      <itunes:title>The Third Wave of Robotic Learning with Ken Goldberg</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>359</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4c5c1398-ee98-11eb-9502-ef51e414db8f/image/TWIML_COVER_800x800_KG.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Ken Goldberg, professor of engineering and William S. Floyd Jr. distinguished chair in engineering at UC Berkeley. Ken, who is also an accomplished artist, and collaborator on projects such as DexNet and The Telegarden, has...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Ken Goldberg, professor of engineering at UC Berkeley, focused on robotic learning. In our conversation with Ken, we chat about some of the challenges that arise when working on robotic grasping, including uncertainty in perception, control, and physics. We also discuss his view on the role of physics in robotic learning, and his thoughts on potential robot use cases, from the use of robots in assisting in telemedicine, agriculture, and even robotic Covid-19 testing.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Ken Goldberg, professor of engineering at UC Berkeley, focused on robotic learning. In our conversation with Ken, we chat about some of the challenges that arise when working on robotic grasping, including uncertainty in perception, control, and physics. We also discuss his view on the role of physics in robotic learning, and his thoughts on potential robot use cases, from the use of robots in assisting in telemedicine, agriculture, and even robotic Covid-19 testing.</p>]]>
      </content:encoded>
      <itunes:duration>3695</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[354c65d9-3725-4b6e-8823-a2d98ab48b08]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4189481762.mp3?updated=1629244843"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Learning Visiolinguistic Representations with ViLBERT w/ Stefan Lee - #358</title>
      <link>https://twimlai.com/twiml-talk-358-learning-visiolinguistic-representations-with-vilbert-w-stefan-lee</link>
      <description>Today we’re joined by Stefan Lee, an assistant professor at Oregon State University. In our conversation, we focus on his paper ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. We discuss the development and training process for this model, the adaptation of the training process to incorporate additional visual information to BERT models, where this research leads from the perspective of integration between visual and language tasks.</description>
      <pubDate>Wed, 18 Mar 2020 21:04:00 -0000</pubDate>
      <itunes:title>Learning Visiolinguistic Representations with ViLBERT w/ Stefan Lee</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>358</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4c7bbe3c-ee98-11eb-9502-6713e51bdbc9/image/TWIML_COVER_800x800_SL2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Stefan Lee, assistant professor at the school of electrical engineering and computer science at Oregon State University. Stefan, who we sat down with at NeurIPS this past winter, is focused on the development of agents that can...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Stefan Lee, an assistant professor at Oregon State University. In our conversation, we focus on his paper ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. We discuss the development and training process for this model, the adaptation of the training process to incorporate additional visual information to BERT models, where this research leads from the perspective of integration between visual and language tasks.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Stefan Lee, an assistant professor at Oregon State University. In our conversation, we focus on his paper ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. We discuss the development and training process for this model, the adaptation of the training process to incorporate additional visual information to BERT models, where this research leads from the perspective of integration between visual and language tasks.</p>]]>
      </content:encoded>
      <itunes:duration>1653</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1c84eecd-7286-4d89-a683-ecf5bba260e0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4340996432.mp3?updated=1629244735"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Upside-Down Reinforcement Learning with Jürgen Schmidhuber - #357</title>
      <link>https://twimlai.com/twiml-talk-357-upside-down-reinforcement-learning-with-jurgen-schmidhuber</link>
      <description>Today we’re joined by Jürgen Schmidhuber, Co-Founder and Chief Scientist of NNAISENSE, the Scientific Director at IDSIA, as well as a Professor of AI at USI and SUPSI in Switzerland. Jürgen’s lab is well known for creating the Long Short-Term Memory (LSTM) network, and in this conversation, we discuss some of the recent research coming out of his lab, namely Upside-Down Reinforcement Learning.</description>
      <pubDate>Mon, 16 Mar 2020 07:24:00 -0000</pubDate>
      <itunes:title>Upside-Down Reinforcement Learning with Jürgen Schmidhuber</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>357</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4ca53a8c-ee98-11eb-9502-531f521ebbcf/image/TWIML_COVER_800x800_JS.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Jürgen Schmidhuber, Co-Founder and Chief Scientist of NNAISENSE, the Scientific Director at IDSIA, as well as a Professor of AI at USI and SUPSI in Switzerland. Jürgen’s lab is well known for creating the Long Short-Term...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Jürgen Schmidhuber, Co-Founder and Chief Scientist of NNAISENSE, the Scientific Director at IDSIA, as well as a Professor of AI at USI and SUPSI in Switzerland. Jürgen’s lab is well known for creating the Long Short-Term Memory (LSTM) network, and in this conversation, we discuss some of the recent research coming out of his lab, namely Upside-Down Reinforcement Learning.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Jürgen Schmidhuber, Co-Founder and Chief Scientist of NNAISENSE, the Scientific Director at IDSIA, as well as a Professor of AI at USI and SUPSI in Switzerland. Jürgen’s lab is well known for creating the Long Short-Term Memory (LSTM) network, and in this conversation, we discuss some of the recent research coming out of his lab, namely Upside-Down Reinforcement Learning.</p>]]>
      </content:encoded>
      <itunes:duration>2054</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[da1e2cd9-d75d-49db-8022-f9bea392e7aa]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2538819391.mp3?updated=1629244741"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>SLIDE: Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning with Beidi Chen - #356</title>
      <link>https://twimlai.com/twiml-talk-356-slide-smart-algorithms-over-hardware-acceleration-for-large-scale-deep-learning-with-beidi-chen</link>
      <description>Beidi Chen is part of the team that developed a cheaper, algorithmic, CPU alternative to state-of-the-art GPU machines. They presented their findings at NeurIPS 2019 and have since gained a lot of attention for their paper, SLIDE: In Defense of Smart Algorithms Over Hardware Acceleration for Large-Scale Deep Learning Systems. Beidi shares how the team took a new look at deep learning with the case of extreme classification by turning it into a search problem and using locality-sensitive hashing.</description>
      <pubDate>Thu, 12 Mar 2020 04:43:00 -0000</pubDate>
      <itunes:title>SLIDE: Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning with Beidi Chen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>356</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4cc9b57e-ee98-11eb-9502-eb260c5a5946/image/TWIML_COVER_800x800_BC2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we're joined by Beidi Chen, PhD student at Rice University. Beidi is part of the team that developed a cheaper, algorithmic, CPU alternative to state-of-the-art GPU machines. They presented their findings at NeurIPS 2019 and have since gained a...</itunes:subtitle>
      <itunes:summary>Beidi Chen is part of the team that developed a cheaper, algorithmic, CPU alternative to state-of-the-art GPU machines. They presented their findings at NeurIPS 2019 and have since gained a lot of attention for their paper, SLIDE: In Defense of Smart Algorithms Over Hardware Acceleration for Large-Scale Deep Learning Systems. Beidi shares how the team took a new look at deep learning with the case of extreme classification by turning it into a search problem and using locality-sensitive hashing.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Beidi Chen is part of the team that developed a cheaper, algorithmic, CPU alternative to state-of-the-art GPU machines. They presented their findings at NeurIPS 2019 and have since gained a lot of attention for their paper, SLIDE: In Defense of Smart Algorithms Over Hardware Acceleration for Large-Scale Deep Learning Systems. Beidi shares how the team took a new look at deep learning with the case of extreme classification by turning it into a search problem and using locality-sensitive hashing.</p>]]>
      </content:encoded>
      <itunes:duration>1919</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[66df59a4-09c8-42cb-a219-24c1a07709ee]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9941430601.mp3?updated=1629244743"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Advancements in Machine Learning with Sergey Levine - #355</title>
      <link>https://twimlai.com/twiml-talk-355-advancements-in-reinforcement-learning-with-sergey-levine</link>
      <description>Today we're joined by Sergey Levine, an Assistant Professor at UC Berkeley. We last heard from Sergey back in 2017, where we explored Deep Robotic Learning. Sergey and his lab’s recent efforts have been focused on contributing to a future where machines can be “out there in the real world, learning continuously through their own experience.” We caught up with Sergey at NeurIPS 2019, where Sergey and his team presented 12 different papers -- which means a lot of ground to cover!</description>
      <pubDate>Mon, 09 Mar 2020 20:16:00 -0000</pubDate>
      <itunes:title>Advancements in Machine Learning with Sergey Levine</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>355</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4ce58268-ee98-11eb-9502-277f5406bbfd/image/TWIML_COVER_800x800_SL.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we're joined by Sergey Levine, an Assistant Professor in the Department of Electrical Engineering and Computer Science at UC Berkeley. We last heard from Sergey back in 2017, where we explored Deep Robotic Learning. We caught up with Sergey at...</itunes:subtitle>
      <itunes:summary>Today we're joined by Sergey Levine, an Assistant Professor at UC Berkeley. We last heard from Sergey back in 2017, where we explored Deep Robotic Learning. Sergey and his lab’s recent efforts have been focused on contributing to a future where machines can be “out there in the real world, learning continuously through their own experience.” We caught up with Sergey at NeurIPS 2019, where Sergey and his team presented 12 different papers -- which means a lot of ground to cover!</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we're joined by Sergey Levine, an Assistant Professor at UC Berkeley. We last heard from Sergey back in 2017, where we explored Deep Robotic Learning. Sergey and his lab’s recent efforts have been focused on contributing to a future where machines can be “out there in the real world, learning continuously through their own experience.” We caught up with Sergey at NeurIPS 2019, where Sergey and his team presented 12 different papers -- which means a lot of ground to cover!</p>]]>
      </content:encoded>
      <itunes:duration>2588</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[826f7d98-368f-420f-80b8-9721343beb76]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9773594544.mp3?updated=1629244759"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Secrets of a Kaggle Grandmaster with David Odaibo - #354</title>
      <link>https://twimlai.com/twiml-talk-354-secrets-of-a-kaggle-grandmaster-with-david-odaibo</link>
      <description>Imagine spending years learning ML from the ground up, from its theoretical foundations, but still feeling like you didn’t really know how to apply it. That’s where David Odaibo found himself in 2015, after the second year of his PhD. David’s solution was Kaggle, a popular platform for data science competitions.

Fast forward four years, and David is now a Kaggle Grandmaster, the highest designation, with particular accomplishment in computer vision competitions, and co-founder and CTO of Analytical</description>
      <pubDate>Thu, 05 Mar 2020 21:16:03 -0000</pubDate>
      <itunes:title>Secrets of a Kaggle Grandmaster with David Odaibo</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>354</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4d0a887e-ee98-11eb-9502-af5624da268a/image/TWIML_COVER_800x800_OD.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Imagine spending years learning ML from the ground up, from its theoretical foundations, but still feeling like you didn’t really know how to apply it. That’s where David Odaibo found himself in 2015, after the second year of his PhD. David’s...</itunes:subtitle>
      <itunes:summary>Imagine spending years learning ML from the ground up, from its theoretical foundations, but still feeling like you didn’t really know how to apply it. That’s where David Odaibo found himself in 2015, after the second year of his PhD. David’s solution was Kaggle, a popular platform for data science competitions.

Fast forward four years, and David is now a Kaggle Grandmaster, the highest designation, with particular accomplishment in computer vision competitions, and co-founder and CTO of Analytical</itunes:summary>
      <content:encoded>
        <![CDATA[Imagine spending years learning ML from the ground up, from its theoretical foundations, but still feeling like you didn’t really know how to apply it. That’s where David Odaibo found himself in 2015, after the second year of his PhD. David’s solution was Kaggle, a popular platform for data science competitions.

Fast forward four years, and David is now a Kaggle Grandmaster, the highest designation, with particular accomplishment in computer vision competitions, and co-founder and CTO of Analytical]]>
      </content:encoded>
      <itunes:duration>2469</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6ec56bf8-2b02-4a27-b1d9-e5200bb9d64a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4080488248.mp3?updated=1629244749"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>NLP for Mapping Physics Research with Matteo Chinazzi - #353</title>
      <link>https://twimlai.com/twiml-talk-353-nlp-for-mapping-physics-research-with-matteo-chinazzi</link>
      <description>Predicting the future of science, particularly physics, is the task that Matteo Chinazzi, an associate research scientist at Northeastern University, focused on in his paper Mapping the Physics Research Space: a Machine Learning Approach. In addition to predicting the trajectory of physics research, Matteo is also active in the computational epidemiology field. His work in that area involves building simulators that can model the spread of diseases like Zika or the seasonal flu at a global scale.</description>
      <pubDate>Mon, 02 Mar 2020 23:21:00 -0000</pubDate>
      <itunes:title>NLP for Mapping Physics Research with Matteo Chinazzi</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>353</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4d2a14be-ee98-11eb-9502-a7b65038d1eb/image/TWIML_COVER_800x800_MC.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Predicting the future of science, particularly physics, is the task that Matteo Chinazzi, an associate research scientist at Northeastern University, focused on in his paper Mapping the Physics Research Space: a Machine Learning Approach. In addition...</itunes:subtitle>
      <itunes:summary>Predicting the future of science, particularly physics, is the task that Matteo Chinazzi, an associate research scientist at Northeastern University, focused on in his paper Mapping the Physics Research Space: a Machine Learning Approach. In addition to predicting the trajectory of physics research, Matteo is also active in the computational epidemiology field. His work in that area involves building simulators that can model the spread of diseases like Zika or the seasonal flu at a global scale.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Predicting the future of science, particularly physics, is the task that Matteo Chinazzi, an associate research scientist at Northeastern University, focused on in his paper Mapping the Physics Research Space: a Machine Learning Approach. In addition to predicting the trajectory of physics research, Matteo is also active in the computational epidemiology field. His work in that area involves building simulators that can model the spread of diseases like Zika or the seasonal flu at a global scale.</p>]]>
      </content:encoded>
      <itunes:duration>2108</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[3ff3c244-14e2-4de8-b931-e435f3a9d451]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3156110080.mp3?updated=1629244741"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Metric Elicitation and Robust Distributed Learning with Sanmi Koyejo - #352</title>
      <link>https://twimlai.com/twiml-talk-352-metric-elicitation-and-robust-distributed-learning-with-sanmi-koyejo</link>
      <description>The unfortunate reality is that many of the most commonly used machine learning metrics don't account for the complex trade-offs that come with real-world decision making. This is one of the challenges that Sanmi Koyejo, assistant professor at the University of Illinois, has dedicated his research to addressing. Sanmi applies his background in cognitive science, probabilistic modeling, and Bayesian inference to pursue his research, which focuses broadly on “adaptive and robust machine learning.”</description>
      <pubDate>Thu, 27 Feb 2020 16:38:00 -0000</pubDate>
      <itunes:title>Metric Elicitation and Robust Distributed Learning with Sanmi Koyejo</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>352</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4d53956e-ee98-11eb-9502-cf5b4baf391d/image/TWIML_COVER_800x800_SK.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The unfortunate reality is that many of the most commonly used machine learning metrics don't account for the complex trade-offs that come with real-world decision making. This is one of the challenges that today’s guest, Sanmi Koyejo has dedicated...</itunes:subtitle>
      <itunes:summary>The unfortunate reality is that many of the most commonly used machine learning metrics don't account for the complex trade-offs that come with real-world decision making. This is one of the challenges that Sanmi Koyejo, assistant professor at the University of Illinois, has dedicated his research to addressing. Sanmi applies his background in cognitive science, probabilistic modeling, and Bayesian inference to pursue his research, which focuses broadly on “adaptive and robust machine learning.”</itunes:summary>
      <content:encoded>
        <![CDATA[<p>The unfortunate reality is that many of the most commonly used machine learning metrics don't account for the complex trade-offs that come with real-world decision making. This is one of the challenges that Sanmi Koyejo, assistant professor at the University of Illinois, has dedicated his research to addressing. Sanmi applies his background in cognitive science, probabilistic modeling, and Bayesian inference to pursue his research, which focuses broadly on “adaptive and robust machine learning.”</p>]]>
      </content:encoded>
      <itunes:duration>3368</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e29ff6a2-8766-43b1-bf32-03767e6b0bc6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6373925987.mp3?updated=1629244819"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>High-Dimensional Robust Statistics with Ilias Diakonikolas - #351</title>
      <link>https://twimlai.com/twiml-talk-351-high-dimensional-robust-statistics-with-ilias-diakonikolas</link>
      <description>Today we’re joined by Ilias Diakonikolas, faculty in the CS department at the University of Wisconsin-Madison, and author of the paper Distribution-Independent PAC Learning of Halfspaces with Massart Noise, recipient of the NeurIPS 2019 Outstanding Paper award. The paper is regarded as the first progress made around distribution-independent learning with noise since the 80s. In our conversation, we explore robustness in ML, problems with corrupt data in high-dimensional settings, and of course, the paper.</description>
      <pubDate>Mon, 24 Feb 2020 21:14:00 -0000</pubDate>
      <itunes:title>High-Dimensional Robust Statistics with Ilias Diakonikolas</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>351</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4d7ce504-ee98-11eb-9502-33a4fdafda2f/image/TWIML_COVER_800x800_ID.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Ilias Diakonikolas, faculty in the CS department at the University of Wisconsin-Madison, and author of the paper Distribution-Independent PAC Learning of Halfspaces with Massart Noise, which was the recipient of the NeurIPS...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Ilias Diakonikolas, faculty in the CS department at the University of Wisconsin-Madison, and author of the paper Distribution-Independent PAC Learning of Halfspaces with Massart Noise, recipient of the NeurIPS 2019 Outstanding Paper award. The paper is regarded as the first progress on distribution-independent learning with noise since the 1980s. In our conversation, we explore robustness in ML, problems with corrupt data in high-dimensional settings, and of course, the paper.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Ilias Diakonikolas, faculty in the CS department at the University of Wisconsin-Madison, and author of the paper Distribution-Independent PAC Learning of Halfspaces with Massart Noise, recipient of the NeurIPS 2019 Outstanding Paper award. The paper is regarded as the first progress on distribution-independent learning with noise since the 1980s. In our conversation, we explore robustness in ML, problems with corrupt data in high-dimensional settings, and of course, the paper.</p>]]>
      </content:encoded>
      <itunes:duration>2165</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ab64f0a5-097b-4a22-ae28-b361e12aba3c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5996243600.mp3?updated=1629244744"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>How AI Predicted the Coronavirus Outbreak with Kamran Khan - #350</title>
      <link>https://twimlai.com/twiml-talk-350-how-ai-predicted-the-coronavirus-outbreak-with-kamran-khan</link>
      <description>Today we’re joined by Kamran Khan, founder &amp; CEO of BlueDot, and professor of medicine and public health at the University of Toronto. BlueDot has received a great deal of attention for being the first to publicly warn about the coronavirus that started in Wuhan. How did the company’s system of algorithms and data processing techniques help flag the potential dangers of the disease? In our conversation, Kamran talks us through how the technology works, its limits, and the motivation behind the work.</description>
      <pubDate>Wed, 19 Feb 2020 18:31:00 -0000</pubDate>
      <itunes:title>How AI Predicted the Coronavirus Outbreak with Kamran Khan</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>350</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4d9ef874-ee98-11eb-9502-830182830561/image/TWIML_COVER_800x800_KK.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Kamran Khan, founder &amp; CEO of BlueDot, and professor of medicine and public health at the University of Toronto. BlueDot, a digital health company with a focus on surveilling global infectious disease outbreaks, has been...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Kamran Khan, founder &amp; CEO of BlueDot, and professor of medicine and public health at the University of Toronto. BlueDot has received a great deal of attention for being the first to publicly warn about the coronavirus that started in Wuhan. How did the company’s system of algorithms and data processing techniques help flag the potential dangers of the disease? In our conversation, Kamran talks us through how the technology works, its limits, and the motivation behind the work.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Kamran Khan, founder &amp; CEO of BlueDot, and professor of medicine and public health at the University of Toronto. BlueDot has received a great deal of attention for being the first to publicly warn about the coronavirus that started in Wuhan. How did the company’s system of algorithms and data processing techniques help flag the potential dangers of the disease? In our conversation, Kamran talks us through how the technology works, its limits, and the motivation behind the work.</p>]]>
      </content:encoded>
      <itunes:duration>3062</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c9d9cf23-c5b1-4f56-871c-0f128a8c1123]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3041452865.mp3?updated=1629244770"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Turning Ideas into ML Powered Products with Emmanuel Ameisen - #349</title>
      <link>https://twimlai.com/twiml-talk-349-turning-ideas-into-ml-powered-products-with-emmanuel-ameisen</link>
      <description>Today we’re joined by Emmanuel Ameisen, machine learning engineer at Stripe, and author of the recently published book “Building Machine Learning Powered Applications: Going from Idea to Product.” In our conversation, we discuss structuring end-to-end machine learning projects, debugging and explainability in the context of models, the various types of models covered in the book, and the importance of post-deployment monitoring.</description>
      <pubDate>Mon, 17 Feb 2020 22:02:00 -0000</pubDate>
      <itunes:title>Turning Ideas into ML Powered Products with Emmanuel Ameisen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>349</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4dcde74c-ee98-11eb-9502-bbfcdce3e1b0/image/TWIML_COVER_800x800_EA_1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Emmanuel Ameisen, machine learning engineer at Stripe, and author of the recently published book “Building Machine Learning Powered Applications: Going from Idea to Product.” In our conversation, we discuss structuring...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Emmanuel Ameisen, machine learning engineer at Stripe, and author of the recently published book “Building Machine Learning Powered Applications: Going from Idea to Product.” In our conversation, we discuss structuring end-to-end machine learning projects, debugging and explainability in the context of models, the various types of models covered in the book, and the importance of post-deployment monitoring.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Emmanuel Ameisen, machine learning engineer at Stripe, and author of the recently published book “Building Machine Learning Powered Applications: Going from Idea to Product.” In our conversation, we discuss structuring end-to-end machine learning projects, debugging and explainability in the context of models, the various types of models covered in the book, and the importance of post-deployment monitoring.</p>]]>
      </content:encoded>
      <itunes:duration>2541</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[06db2e5d-78c2-48a6-87a4-d1c25b089005]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7773551214.mp3?updated=1629244757"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Algorithmic Injustices and Relational Ethics with Abeba Birhane - #348</title>
      <link>https://twimlai.com/twiml-talk-348-algorithmic-injustices-and-relational-ethics-with-abeba-birhane/</link>
      <description>Today we’re joined by Abeba Birhane, PhD Student at University College Dublin and author of the recent paper Algorithmic Injustices: Towards a Relational Ethics, which was the recipient of the Best Paper award at the 2019 Black in AI Workshop at NeurIPS. In our conversation, we break down the paper and the thought process around AI ethics, the “harm of categorization,” how ML generally doesn’t account for the ethics of various scenarios and how relational ethics could solve the issue, and much more.</description>
      <pubDate>Thu, 13 Feb 2020 20:53:00 -0000</pubDate>
      <itunes:title>Algorithmic Injustices and Relational Ethics with Abeba Birhane</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>348</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4e0c53b0-ee98-11eb-9502-1f2cc3a61dd9/image/TWIML_COVER_800x800_AB2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Abeba Birhane, PhD Student at University College Dublin and author of the recent paper Algorithmic Injustices: Towards a Relational Ethics. We caught up with Abeba, whose aforementioned paper was the recipient of the Best Paper award at the most recent Black in AI Workshop at...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Abeba Birhane, PhD Student at University College Dublin and author of the recent paper Algorithmic Injustices: Towards a Relational Ethics, which was the recipient of the Best Paper award at the 2019 Black in AI Workshop at NeurIPS. In our conversation, we break down the paper and the thought process around AI ethics, the “harm of categorization,” how ML generally doesn’t account for the ethics of various scenarios and how relational ethics could solve the issue, and much more.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Abeba Birhane, PhD Student at University College Dublin and author of the recent paper Algorithmic Injustices: Towards a Relational Ethics, which was the recipient of the Best Paper award at the 2019 Black in AI Workshop at NeurIPS. In our conversation, we break down the paper and the thought process around AI ethics, the “harm of categorization,” how ML generally doesn’t account for the ethics of various scenarios and how relational ethics could solve the issue, and much more.</p>]]>
      </content:encoded>
      <itunes:duration>2468</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[242f17cb-faa8-4f2d-9f28-ceaae23429ca]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9304011539.mp3?updated=1629244752"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI for Agriculture and Global Food Security with Nemo Semret - #347</title>
      <link>https://twimlai.com/twiml-talk-347-ai-for-global-food-security-with-nemo-semret</link>
      <description>Today we’re excited to kick off our annual Black in AI Series joined by Nemo Semret, CTO of Gro Intelligence. Gro provides an agricultural data platform dedicated to improving global food security, focused on applying AI at macro scale. In our conversation with Nemo, we discuss Gro’s approach to data acquisition, how they apply machine learning to various problems, and their approach to modeling.</description>
      <pubDate>Mon, 10 Feb 2020 20:29:00 -0000</pubDate>
      <itunes:title>AI for Agriculture and Global Food Security with Nemo Semret</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>347</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4e29c27e-ee98-11eb-9502-9b24c89294a4/image/TWIML_COVER_800x800_NS.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re excited to kick off our annual Black in AI Series joined by Nemo Semret, CTO of Gro Intelligence. Gro provides an agricultural data platform dedicated to improving global food security, focused on applying AI at macro scale. In our...</itunes:subtitle>
      <itunes:summary>Today we’re excited to kick off our annual Black in AI Series joined by Nemo Semret, CTO of Gro Intelligence. Gro provides an agricultural data platform dedicated to improving global food security, focused on applying AI at macro scale. In our conversation with Nemo, we discuss Gro’s approach to data acquisition, how they apply machine learning to various problems, and their approach to modeling.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re excited to kick off our annual Black in AI Series joined by Nemo Semret, CTO of Gro Intelligence. Gro provides an agricultural data platform dedicated to improving global food security, focused on applying AI at macro scale. In our conversation with Nemo, we discuss Gro’s approach to data acquisition, how they apply machine learning to various problems, and their approach to modeling.</p>]]>
      </content:encoded>
      <itunes:duration>3853</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[7879e947-44f7-482f-a33d-9eb629662d4d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1027811528.mp3?updated=1629244873"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Practical Differential Privacy at LinkedIn with Ryan Rogers - #346</title>
      <link>https://twimlai.com/twiml-talk-346-practical-differential-privacy-at-linkedin-with-ryan-rogers</link>
      <description>Today we’re joined by Ryan Rogers, Senior Software Engineer at LinkedIn, to discuss his paper “Practical Differentially Private Top-k Selection with Pay-what-you-get Composition.” In our conversation, we discuss how LinkedIn allows its data scientists to access aggregate user data for exploratory analytics while maintaining its users’ privacy through differential privacy, and the connection between a common algorithm for implementing differential privacy, the exponential mechanism, and Gumbel noise.</description>
      <pubDate>Fri, 07 Feb 2020 19:39:00 -0000</pubDate>
      <itunes:title>Practical Differential Privacy at LinkedIn with Ryan Rogers</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>346</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4e48253e-ee98-11eb-9502-8302c647966e/image/TWIML_COVER_800x800_RR.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Ryan Rogers, Senior Software Engineer at LinkedIn. We caught up with Ryan at NeurIPS, where he presented the paper “Practical Differentially Private Top-k Selection with Pay-what-you-get Composition” as a spotlight talk. In...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Ryan Rogers, Senior Software Engineer at LinkedIn, to discuss his paper “Practical Differentially Private Top-k Selection with Pay-what-you-get Composition.” In our conversation, we discuss how LinkedIn allows its data scientists to access aggregate user data for exploratory analytics while maintaining its users’ privacy through differential privacy, and the connection between a common algorithm for implementing differential privacy, the exponential mechanism, and Gumbel noise.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Ryan Rogers, Senior Software Engineer at LinkedIn, to discuss his paper “Practical Differentially Private Top-k Selection with Pay-what-you-get Composition.” In our conversation, we discuss how LinkedIn allows its data scientists to access aggregate user data for exploratory analytics while maintaining its users’ privacy through differential privacy, and the connection between a common algorithm for implementing differential privacy, the exponential mechanism, and Gumbel noise.</p>]]>
      </content:encoded>
      <itunes:duration>2023</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a40545a6-6336-4abc-a923-ff6b3d74dde4]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4589853428.mp3?updated=1629244743"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Networking Optimizations for Multi-Node Deep Learning on Kubernetes with Erez Cohen - #345</title>
      <link>https://twimlai.com/twiml-talk-345-networking-optimizations-for-multi-node-deep-learning-on-kubernetes-with-erez-cohen</link>
      <description>Today we conclude the KubeCon ‘19 series joined by Erez Cohen, VP of CloudX &amp; AI at Mellanox, who we caught up with before his talk “Networking Optimizations for Multi-Node Deep Learning on Kubernetes.” In our conversation, we discuss NVIDIA’s recent acquisition of Mellanox, the evolution of technologies like RDMA and GPU Direct, how Mellanox is enabling Kubernetes and other platforms to take advantage of the recent advancements in networking tech, and why we should care about networking in Deep Learning.</description>
      <pubDate>Wed, 05 Feb 2020 17:33:00 -0000</pubDate>
      <itunes:title>Networking Optimizations for Multi-Node Deep Learning on Kubernetes with Erez Cohen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>345</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4e666fc6-ee98-11eb-9502-f7cb74bf2ba3/image/TWIML_COVER_800x800_EC.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we conclude our KubeCon ‘19 Series joined by Erez Cohen, VP of CloudX &amp; AI at Mellanox. In our conversation, we discuss:  Erez’s talk “Networking Optimizations for Multi-Node Deep Learning on Kubernetes.” where he discusses problems...</itunes:subtitle>
      <itunes:summary>Today we conclude the KubeCon ‘19 series joined by Erez Cohen, VP of CloudX &amp; AI at Mellanox, who we caught up with before his talk “Networking Optimizations for Multi-Node Deep Learning on Kubernetes.” In our conversation, we discuss NVIDIA’s recent acquisition of Mellanox, the evolution of technologies like RDMA and GPU Direct, how Mellanox is enabling Kubernetes and other platforms to take advantage of the recent advancements in networking tech, and why we should care about networking in Deep Learning.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we conclude the KubeCon ‘19 series joined by Erez Cohen, VP of CloudX &amp; AI at Mellanox, who we caught up with before his talk “Networking Optimizations for Multi-Node Deep Learning on Kubernetes.” In our conversation, we discuss NVIDIA’s recent acquisition of Mellanox, the evolution of technologies like RDMA and GPU Direct, how Mellanox is enabling Kubernetes and other platforms to take advantage of the recent advancements in networking tech, and why we should care about networking in Deep Learning.</p>]]>
      </content:encoded>
      <itunes:duration>1891</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e1b60a43-9ff1-47ce-b80b-3988305c3a58]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5195527368.mp3?updated=1629244740"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Managing Research Needs at the University of Michigan using Kubernetes w/ Bob Killen - #344</title>
      <link>https://twimlai.com/twiml-talk-344-managing-research-needs-at-the-university-of-michigan-using-kubernetes-w-bob-killen</link>
      <description>Today we’re joined by Bob Killen, Research Cloud Administrator at the University of Michigan. In our conversation, we explore how Bob and his group at UM are deploying Kubernetes, the user experience, and how those users are taking advantage of distributed computing. We also discuss if ML/AI focused Kubernetes users should fear that the larger non-ML/AI user base will negatively impact their feature needs, where gaps currently exist in trying to support these ML/AI users’ workloads, and more!</description>
      <pubDate>Mon, 03 Feb 2020 16:38:25 -0000</pubDate>
      <itunes:title>Managing Research Needs at the University of Michigan using Kubernetes w/ Bob Killen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>344</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4e8738e6-ee98-11eb-9502-0b6d2a4c6f0a/image/TWIML_COVER_800x800_BK.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Bob Killen, Research Cloud Administrator at the University of Michigan. In our conversation, we discuss:  How his group is deploying Kubernetes at UM. The user experience of his broad user base, including those using KubeFlow...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Bob Killen, Research Cloud Administrator at the University of Michigan. In our conversation, we explore how Bob and his group at UM are deploying Kubernetes, the user experience, and how those users are taking advantage of distributed computing. We also discuss if ML/AI focused Kubernetes users should fear that the larger non-ML/AI user base will negatively impact their feature needs, where gaps currently exist in trying to support these ML/AI users’ workloads, and more!</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by Bob Killen, Research Cloud Administrator at the University of Michigan. In our conversation, we explore how Bob and his group at UM are deploying Kubernetes, the user experience, and how those users are taking advantage of distributed computing. We also discuss if ML/AI focused Kubernetes users should fear that the larger non-ML/AI user base will negatively impact their feature needs, where gaps currently exist in trying to support these ML/AI users’ workloads, and more!</p>]]>
      </content:encoded>
      <itunes:duration>1528</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f6414487-9daf-419a-939d-43057f3504a0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4792315297.mp3?updated=1629244734"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scalable and Maintainable Workflows at Lyft with Flyte w/ Haytham AbuelFutuh and Ketan Umare - #343</title>
      <link>https://twimlai.com/twiml-talk-343-scalable-and-maintainable-workflows-at-lyft-with-flyte-w-haytham-abuelfutuh-and-ketan-umare</link>
      <description>Today we kick off our KubeCon ‘19 series joined by Haytham AbuelFutuh and Ketan Umare, a pair of software engineers at Lyft. We caught up with Haytham and Ketan at KubeCon, where they were presenting their newly open-sourced, cloud-native ML and data processing platform, Flyte. We discuss what prompted Ketan to undertake this project and his experience building Flyte, the core value proposition, what type systems mean for the user experience, how it relates to Kubeflow and how Flyte is used across Lyft.</description>
      <pubDate>Thu, 30 Jan 2020 19:30:40 -0000</pubDate>
      <itunes:title>Scalable and Maintainable Workflows at Lyft with Flyte w/ Haytham AbuelFutuh and Ketan Umare</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>343</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4ea9ebb6-ee98-11eb-9502-872f16acd6a5/image/TWIML_COVER_800x800_HaKu.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we kick off our KubeCon ‘19 series joined by Haytham AbuelFutuh and Ketan Umare, a pair of software engineers at Lyft. In our conversation, we discuss:   Their newly open-sourced, cloud-native ML and data processing platform, Flyte. What...</itunes:subtitle>
      <itunes:summary>Today we kick off our KubeCon ‘19 series joined by Haytham AbuelFutuh and Ketan Umare, a pair of software engineers at Lyft. We caught up with Haytham and Ketan at KubeCon, where they were presenting their newly open-sourced, cloud-native ML and data processing platform, Flyte. We discuss what prompted Ketan to undertake this project and his experience building Flyte, the core value proposition, what type systems mean for the user experience, how it relates to Kubeflow and how Flyte is used across Lyft.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we kick off our KubeCon ‘19 series joined by Haytham AbuelFutuh and Ketan Umare, a pair of software engineers at Lyft. We caught up with Haytham and Ketan at KubeCon, where they were presenting their newly open-sourced, cloud-native ML and data processing platform, Flyte. We discuss what prompted Ketan to undertake this project and his experience building Flyte, the core value proposition, what type systems mean for the user experience, how it relates to Kubeflow and how Flyte is used across Lyft.</p>]]>
      </content:encoded>
      <itunes:duration>2722</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[09d7cfcf-7e23-411f-a3bd-8f935d5cf4b3]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3348741983.mp3?updated=1629244764"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Causality 101 with Robert Osazuwa Ness - #342</title>
      <link>https://twimlai.com/twiml-talk-342-causality-101-with-robert-ness</link>
      <description>Today Robert Osazuwa Ness, ML Research Engineer at Gamalon and Instructor at Northeastern University, joins us to discuss causality: what it means, how that meaning changes across domains and users, and our upcoming study group based around his new course sequence, “Causal Modeling in Machine Learning,” for which you can find details at twimlai.com/community.</description>
      <pubDate>Mon, 27 Jan 2020 20:30:27 -0000</pubDate>
      <itunes:title>Causality 101 with Robert Osazuwa Ness</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>342</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4ec712ae-ee98-11eb-9502-9b4cf9029fab/image/TWIML_COVER_800x800_RN.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re accompanied by Robert Osazuwa Ness, Machine Learning Research Engineer at ML Startup Gamalon and Instructor at Northeastern University. Robert, who we had the pleasure of meeting at the Black in AI Workshop at NeurIPS last month, joins...</itunes:subtitle>
      <itunes:summary>Today Robert Osazuwa Ness, ML Research Engineer at Gamalon and Instructor at Northeastern University, joins us to discuss causality: what it means, how that meaning changes across domains and users, and our upcoming study group based around his new course sequence, “Causal Modeling in Machine Learning,” for which you can find details at twimlai.com/community.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today Robert Osazuwa Ness, ML Research Engineer at Gamalon and Instructor at Northeastern University, joins us to discuss causality: what it means, how that meaning changes across domains and users, and our upcoming study group based around his new course sequence, “Causal Modeling in Machine Learning,” for which you can find details at twimlai.com/community.</p>]]>
      </content:encoded>
      <itunes:duration>2372</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5216fe66-a710-459f-b5a6-b03d7bbb18de]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6136802313.mp3?updated=1629244751"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>PaccMann^RL: Designing Anticancer Drugs with Reinforcement Learning w/ Jannis Born - #341</title>
      <link>https://twimlai.com/twiml-talk-341-paccmannrl-designing-anticancer-drugs-with-reinforcement-learning-with-jannis-born</link>
      <description>Today we’re joined by Jannis Born, Ph.D. student at ETH &amp; IBM Research Zurich, to discuss his “PaccMann^RL” research. Jannis details how his background in computational neuroscience applies to this research, how RL fits into the goal of anticancer drug discovery, the effect DL has had on his research, and of course, a step-by-step walkthrough of how the framework works to predict the sensitivity of cancer drugs on a cell and then discover new anticancer drugs.</description>
      <pubDate>Thu, 23 Jan 2020 17:06:00 -0000</pubDate>
      <itunes:title>PaccMann^RL: Designing Anticancer Drugs with Reinforcement Learning w/ Jannis Born</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>341</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4ee36ddc-ee98-11eb-9502-c39f4ea48bb6/image/TWIML_COVER_800x800_JB.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Jannis Born, Ph.D. student at ETH &amp; IBM Research Zurich. We caught up with Jannis a few weeks back at NeurIPS, to discuss:   His research paper “PaccMann&lt;sup&gt;RL&lt;/sup&gt;: Designing anticancer drugs from...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Jannis Born, Ph.D. student at ETH &amp; IBM Research Zurich, to discuss his “PaccMann^RL” research. Jannis details how his background in computational neuroscience applies to this research, how RL fits into the goal of anticancer drug discovery, the effect DL has had on his research, and of course, a step-by-step walkthrough of how the framework works to predict the sensitivity of cancer drugs on a cell and then discover new anticancer drugs.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Jannis Born, Ph.D. student at ETH &amp; IBM Research Zurich, to discuss his “PaccMann^RL” research. Jannis details how his background in computational neuroscience applies to this research, how RL fits into the goal of anticancer drug discovery, the effect DL has had on his research, and of course, a step-by-step walkthrough of how the framework works to predict the sensitivity of cancer drugs on a cell and then discover new anticancer drugs.]]>
      </content:encoded>
      <itunes:duration>2524</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b9c0529a-068f-4df7-97ca-6c7396703ac5]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6034661214.mp3?updated=1629244758"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Social Intelligence with Blaise Aguera y Arcas - #340</title>
      <link>https://twimlai.com/twiml-talk-340-social-intelligence-with-blaise-aguera-y-arcas</link>
      <description>Today we’re joined by Blaise Aguera y Arcas, a distinguished scientist at Google. We had the pleasure of catching up with Blaise at NeurIPS last month, where he was invited to speak on “Social Intelligence.” In our conversation, we discuss his role at Google, his team’s approach to machine learning, and of course his presentation, in which he touches on today’s ML landscape, the gap between AI and ML/DS, the difference between intelligent systems and true intelligence, and much more.</description>
      <pubDate>Mon, 20 Jan 2020 19:56:49 -0000</pubDate>
      <itunes:title>Social Intelligence with Blaise Aguera y Arcas</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>340</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4f03e5ee-ee98-11eb-9502-83dcc605b79b/image/TWIML_COVER_800x800_BAA.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Blaise Aguera y Arcas, a distinguished scientist at Google. We had the pleasure of catching up with Blaise at NeurIPS last month, where he was invited to speak on “Social Intelligence.” In our conversation, we discuss: ...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Blaise Aguera y Arcas, a distinguished scientist at Google. We had the pleasure of catching up with Blaise at NeurIPS last month, where he was invited to speak on “Social Intelligence.” In our conversation, we discuss his role at Google, his team’s approach to machine learning, and of course his presentation, in which he touches on today’s ML landscape, the gap between AI and ML/DS, the difference between intelligent systems and true intelligence, and much more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Blaise Aguera y Arcas, a distinguished scientist at Google. We had the pleasure of catching up with Blaise at NeurIPS last month, where he was invited to speak on “Social Intelligence.” In our conversation, we discuss his role at Google, his team’s approach to machine learning, and of course his presentation, in which he touches on today’s ML landscape, the gap between AI and ML/DS, the difference between intelligent systems and true intelligence, and much more. ]]>
      </content:encoded>
      <itunes:duration>2877</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ab386ce1-9992-46a3-ae9c-be266c6cbd18]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1077021232.mp3?updated=1629244762"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Music &amp; AI Plus a Geometric Perspective on Reinforcement Learning with Pablo Samuel Castro - #339</title>
      <link>https://twimlai.com/twiml-talk-339-music-ai-plus-a-geometric-perspective-on-reinforcement-learning</link>
      <description>Today we’re joined by Pablo Samuel Castro, Staff Research Software Developer at Google. We cover a lot of ground in our conversation, including his love for music, and how that has guided his work on the Lyric AI project, and a few of his papers including “A Geometric Perspective on Optimal Representations for Reinforcement Learning” and “Estimating Policy Functions in Payments Systems using Deep Reinforcement Learning.”</description>
      <pubDate>Thu, 16 Jan 2020 19:27:40 -0000</pubDate>
      <itunes:title>Music &amp; AI Plus a Geometric Perspective on Reinforcement Learning with Pablo Samuel Castro</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>339</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4f2ad456-ee98-11eb-9502-1b66849dc93a/image/TWIML_COVER_800x800_PC.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Pablo Samuel Castro, Staff Research Software Developer at Google. Pablo, whose research is mainly focused on reinforcement learning, and I caught up at NeurIPS last month. We cover a lot of ground in our conversation, including...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Pablo Samuel Castro, Staff Research Software Developer at Google. We cover a lot of ground in our conversation, including his love for music, and how that has guided his work on the Lyric AI project, and a few of his papers including “A Geometric Perspective on Optimal Representations for Reinforcement Learning” and “Estimating Policy Functions in Payments Systems using Deep Reinforcement Learning.”</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Pablo Samuel Castro, Staff Research Software Developer at Google. We cover a lot of ground in our conversation, including his love for music, and how that has guided his work on the Lyric AI project, and a few of his papers including “A Geometric Perspective on Optimal Representations for Reinforcement Learning” and “Estimating Policy Functions in Payments Systems using Deep Reinforcement Learning.” 
]]>
      </content:encoded>
      <itunes:duration>2685</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2a41556f-c06a-4325-805a-98d130931650]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3492476531.mp3?updated=1629244758"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Computer Vision with Amir Zamir - #338</title>
      <link>http://twimlai.com/twiml-talk-338-trends-in-computer-vision-with-amir-zamir</link>
      <description>Today we close out AI Rewind 2019 joined by Amir Zamir, who recently began his tenure as an Assistant Professor of Computer Science at the Swiss Federal Institute of Technology. Amir joined us back in 2018 to discuss his CVPR Best Paper winner, and in today’s conversation, we continue with the thread of Computer Vision. In our conversation, we discuss quite a few topics, including Vision-for-Robotics, the expansion of the field of 3D Vision, Self-Supervised Learning for CV Tasks, and much more!</description>
      <pubDate>Mon, 13 Jan 2020 23:10:19 -0000</pubDate>
      <itunes:title>Trends in Computer Vision with Amir Zamir</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>338</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4f489dd8-ee98-11eb-9502-c31b6a6a4315/image/TWIML_COVER_800x800_AZ.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we close out AI Rewind 2019 joined by Amir Zamir, who recently began his tenure as an Assistant Professor of Computer Science at the Swiss Federal Institute of Technology. Amir joined us back in 2018 to discuss his CVPR Best Paper winner, and in...</itunes:subtitle>
      <itunes:summary>Today we close out AI Rewind 2019 joined by Amir Zamir, who recently began his tenure as an Assistant Professor of Computer Science at the Swiss Federal Institute of Technology. Amir joined us back in 2018 to discuss his CVPR Best Paper winner, and in today’s conversation, we continue with the thread of Computer Vision. In our conversation, we discuss quite a few topics, including Vision-for-Robotics, the expansion of the field of 3D Vision, Self-Supervised Learning for CV Tasks, and much more!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we close out AI Rewind 2019 joined by Amir Zamir, who recently began his tenure as an Assistant Professor of Computer Science at the Swiss Federal Institute of Technology. Amir joined us back in 2018 to discuss his CVPR Best Paper winner, and in today’s conversation, we continue with the thread of Computer Vision. In our conversation, we discuss quite a few topics, including Vision-for-Robotics, the expansion of the field of 3D Vision, Self-Supervised Learning for CV Tasks, and much more! ]]>
      </content:encoded>
      <itunes:duration>5823</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[397d5d3d-8cfe-482c-a490-4b7bf4bfbd10]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1242760008.mp3?updated=1629244969"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Natural Language Processing with Nasrin Mostafazadeh - #337</title>
      <link>https://twimlai.com/twiml-talk-337-trends-in-natural-language-processing-with-nasrin-mostafazadeh</link>
      <description>Today we continue the AI Rewind 2019 joined by friend-of-the-show Nasrin Mostafazadeh, Senior AI Research Scientist at Elemental Cognition. We caught up with Nasrin to discuss the latest and greatest developments and trends in Natural Language Processing, including Interpretability, Ethics, and Bias in NLP, how large pre-trained models have transformed NLP research, and top tools and frameworks in the space.</description>
      <pubDate>Thu, 09 Jan 2020 22:33:10 -0000</pubDate>
      <itunes:title>Trends in Natural Language Processing with Nasrin Mostafazadeh</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>337</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4f6fa07c-ee98-11eb-9502-ffc1ca3a43a1/image/TWIML_COVER_800x800_NM_1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue the AI Rewind 2019 joined by friend-of-the-show Nasrin Mostafazadeh, Senior AI Research Scientist at Elemental Cognition. We caught up with Nasrin to discuss the latest and greatest developments and trends in Natural Language...</itunes:subtitle>
      <itunes:summary>Today we continue the AI Rewind 2019 joined by friend-of-the-show Nasrin Mostafazadeh, Senior AI Research Scientist at Elemental Cognition. We caught up with Nasrin to discuss the latest and greatest developments and trends in Natural Language Processing, including Interpretability, Ethics, and Bias in NLP, how large pre-trained models have transformed NLP research, and top tools and frameworks in the space.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we continue the AI Rewind 2019 joined by friend-of-the-show Nasrin Mostafazadeh, Senior AI Research Scientist at Elemental Cognition. We caught up with Nasrin to discuss the latest and greatest developments and trends in Natural Language Processing, including Interpretability, Ethics, and Bias in NLP, how large pre-trained models have transformed NLP research, and top tools and frameworks in the space.]]>
      </content:encoded>
      <itunes:duration>4356</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ba301a5d-7d1b-4082-9369-5ed712b96219]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1932718652.mp3?updated=1629244862"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Fairness and AI Ethics with Timnit Gebru - #336</title>
      <link>https://twimlai.com/twiml-talk-336-trends-in-fairness-and-ai-ethics-with-timnit-gebru</link>
      <description>Today we keep the 2019 AI Rewind series rolling with friend-of-the-show Timnit Gebru, a research scientist on the Ethical AI team at Google. A few weeks ago at NeurIPS, Timnit joined us to discuss the ethics and fairness landscape in 2019. In our conversation, we discuss diversification of NeurIPS, with groups like Black in AI, WiML and others taking huge steps forward, trends in the fairness community, quite a few papers, and much more.</description>
      <pubDate>Mon, 06 Jan 2020 20:02:14 -0000</pubDate>
      <itunes:title>Trends in Fairness and AI Ethics with Timnit Gebru</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>336</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4f90c770-ee98-11eb-9502-bf2f8d28994d/image/TWIML_COVER_800x800_TG.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we keep the 2019 AI Rewind series rolling with friend-of-the-show Timnit Gebru, a research scientist on the Ethical AI team at Google. A few weeks ago at NeurIPS, Timnit joined us to discuss the ethics and fairness landscape in 2019. In our...</itunes:subtitle>
      <itunes:summary>Today we keep the 2019 AI Rewind series rolling with friend-of-the-show Timnit Gebru, a research scientist on the Ethical AI team at Google. A few weeks ago at NeurIPS, Timnit joined us to discuss the ethics and fairness landscape in 2019. In our conversation, we discuss diversification of NeurIPS, with groups like Black in AI, WiML and others taking huge steps forward, trends in the fairness community, quite a few papers, and much more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we keep the 2019 AI Rewind series rolling with friend-of-the-show Timnit Gebru, a research scientist on the Ethical AI team at Google. A few weeks ago at NeurIPS, Timnit joined us to discuss the ethics and fairness landscape in 2019. In our conversation, we discuss diversification of NeurIPS, with groups like Black in AI, WiML and others taking huge steps forward, trends in the fairness community, quite a few papers, and much more.]]>
      </content:encoded>
      <itunes:duration>2984</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[8e8b3c28-e03d-4f04-af53-75e984287daa]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5192573946.mp3?updated=1627362787"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Reinforcement Learning with Chelsea Finn - #335</title>
      <link>https://twimlai.com/twiml-talk-335-trends-in-reinforcement-learning-with-chelsea-finn</link>
      <description>Today we continue to review the year that was 2019 via our AI Rewind series, and do so with friend of the show Chelsea Finn, Assistant Professor in the CS Department at Stanford University. Chelsea’s research focuses on Reinforcement Learning, so we couldn’t think of a better person to join us to discuss the topic. In this conversation, we cover topics like Model-based RL, solving hard exploration problems, along with RL libraries and environments that Chelsea thought moved the needle last year.</description>
      <pubDate>Thu, 02 Jan 2020 19:59:28 -0000</pubDate>
      <itunes:title>Trends in Reinforcement Learning with Chelsea Finn</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>335</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4fb6121e-ee98-11eb-9502-5716f448f5e9/image/TWIML_COVER_800x800_CF.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue to review the year that was 2019 via our AI Rewind series, and do so with friend of the show Chelsea Finn, Assistant Professor in the Computer Science Department at Stanford University. Chelsea’s research focuses on Reinforcement...</itunes:subtitle>
      <itunes:summary>Today we continue to review the year that was 2019 via our AI Rewind series, and do so with friend of the show Chelsea Finn, Assistant Professor in the CS Department at Stanford University. Chelsea’s research focuses on Reinforcement Learning, so we couldn’t think of a better person to join us to discuss the topic. In this conversation, we cover topics like Model-based RL, solving hard exploration problems, along with RL libraries and environments that Chelsea thought moved the needle last year.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we continue to review the year that was 2019 via our AI Rewind series, and do so with friend of the show Chelsea Finn, Assistant Professor in the CS Department at Stanford University. Chelsea’s research focuses on Reinforcement Learning, so we couldn’t think of a better person to join us to discuss the topic. In this conversation, we cover topics like Model-based RL, solving hard exploration problems, along with RL libraries and environments that Chelsea thought moved the needle last year.]]>
      </content:encoded>
      <itunes:duration>4085</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f4176c74-f0a6-4ff1-bc4f-14cc57acb84d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2179559979.mp3?updated=1629244862"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Machine Learning &amp; Deep Learning with Zack Lipton - #334</title>
      <link>https://twimlai.com/twiml-talk-334-trends-in-machine-learning-deep-learning-with-zack-lipton</link>
      <description>Today we kick off our 2019 AI Rewind Series joined by Zack Lipton, Professor at CMU.

You might remember Zack from our conversation earlier this year, “Fairwashing” and the Folly of ML Solutionism. In today's conversation, Zack recaps advancements across the vast fields of Machine Learning and Deep Learning, including trends, tools, research papers and more.

We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via Twitter @samcharrington or @twimlai.</description>
      <pubDate>Mon, 30 Dec 2019 19:23:14 -0000</pubDate>
      <itunes:title>Trends in Machine Learning &amp; Deep Learning with Zack Lipton</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>334</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4fd1bd02-ee98-11eb-9502-3b17c2e36603/image/TWIML_COVER_800x800_ZL.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we kick off our 2019 AI Rewind Series joined by Zack Lipton, a jointly appointed Professor in the Tepper School of Business and the Machine Learning Department at CMU. You might remember Zack from our conversation earlier this year,...</itunes:subtitle>
      <itunes:summary>Today we kick off our 2019 AI Rewind Series joined by Zack Lipton, Professor at CMU.

You might remember Zack from our conversation earlier this year, “Fairwashing” and the Folly of ML Solutionism. In today's conversation, Zack recaps advancements across the vast fields of Machine Learning and Deep Learning, including trends, tools, research papers and more.

We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via Twitter @samcharrington or @twimlai.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we kick off our 2019 AI Rewind Series joined by Zack Lipton, Professor at CMU.

You might remember Zack from our conversation earlier this year, “Fairwashing” and the Folly of ML Solutionism. In today's conversation, Zack recaps advancements across the vast fields of Machine Learning and Deep Learning, including trends, tools, research papers and more.

We want to hear from you! Send your thoughts on the year that was 2019 below in the comments, or via Twitter @samcharrington or @twimlai.]]>
      </content:encoded>
      <itunes:duration>4781</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[8af7ed59-9644-45e6-baf9-fcfda1c6b196]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3884854015.mp3?updated=1627362787"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>FaciesNet &amp; Machine Learning Applications in Energy with Mohamed Sidahmed - #333</title>
      <link>https://twimlai.com/twiml-talk-333-faciesnet-machine-learning-applications-in-energy-with-mohamed-sidahmed</link>
      <description>Today we close out our 2019 NeurIPS series with Mohamed Sidahmed, Machine Learning and Artificial Intelligence R&amp;D Manager at Shell. In our conversation, we discuss two papers Mohamed and his team submitted to the conference this year, Accelerating Least Squares Imaging Using Deep Learning Techniques, and FaciesNet: Machine Learning Applications for Facies Classification in Well Logs. The show notes for this episode can be found at twimlai.com/talk/333/, where you’ll find links to both of these papers!</description>
      <pubDate>Fri, 27 Dec 2019 20:08:21 -0000</pubDate>
      <itunes:title>FaciesNet &amp; Machine Learning Applications in Energy with Mohamed Sidahmed</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>333</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/4ff864d4-ee98-11eb-9502-f3cbe4567967/image/TWIML_COVER_800x800_MS.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we close out our 2019 NeurIPS series with Mohamed Sidahmed, Machine Learning and Artificial Intelligence R&amp;D Manager at Shell. In our conversation, we discuss:   The papers Mohamed and his team submitted to the conference this year, in...</itunes:subtitle>
      <itunes:summary>Today we close out our 2019 NeurIPS series with Mohamed Sidahmed, Machine Learning and Artificial Intelligence R&amp;D Manager at Shell. In our conversation, we discuss two papers Mohamed and his team submitted to the conference this year, Accelerating Least Squares Imaging Using Deep Learning Techniques, and FaciesNet: Machine Learning Applications for Facies Classification in Well Logs. The show notes for this episode can be found at twimlai.com/talk/333/, where you’ll find links to both of these papers!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we close out our 2019 NeurIPS series with Mohamed Sidahmed, Machine Learning and Artificial Intelligence R&amp;D Manager at Shell. In our conversation, we discuss two papers Mohamed and his team submitted to the conference this year, Accelerating Least Squares Imaging Using Deep Learning Techniques, and FaciesNet: Machine Learning Applications for Facies Classification in Well Logs. The show notes for this episode can be found at twimlai.com/talk/333/, where you’ll find links to both of these papers!]]>
      </content:encoded>
      <itunes:duration>2395</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[bf202e56-bb97-480a-a0c7-8e9e5f6f644a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9921232565.mp3?updated=1629244754"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Machine Learning: A New Approach to Drug Discovery with Daphne Koller - #332</title>
      <link>https://twimlai.com/twiml-talk-332-machine-learning-a-new-approach-to-drug-discovery-with-daphne-koller</link>
      <description>Today we’re joined by Daphne Koller, co-Founder and former co-CEO of Coursera and Founder and CEO of Insitro. In our conversation, we discuss the current landscape of pharmaceutical drugs and drug discovery, including the current pricing of drugs, and an overview of Insitro’s goal of using ML as a “compass” in drug discovery. We also explore how Insitro functions as a company, their focus on the biology of drug discovery and the landscape of ML techniques being used, Daphne’s thoughts on AutoML, and</description>
      <pubDate>Thu, 26 Dec 2019 18:41:47 -0000</pubDate>
      <itunes:title>Machine Learning: A New Approach to Drug Discovery with Daphne Koller</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>332</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/50180550-ee98-11eb-9502-5372aebeed0e/image/TWIML_COVER_800x800_DK.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue our 2019 NeurIPS coverage joined by Daphne Koller, co-Founder and former co-CEO of Coursera and Founder and CEO of Insitro. We caught up with Daphne to discuss:   Her background in machine learning, beginning in ‘93, and her...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Daphne Koller, co-Founder and former co-CEO of Coursera and Founder and CEO of Insitro. In our conversation, we discuss the current landscape of pharmaceutical drugs and drug discovery, including the current pricing of drugs, and an overview of Insitro’s goal of using ML as a “compass” in drug discovery. We also explore how Insitro functions as a company, their focus on the biology of drug discovery and the landscape of ML techniques being used, Daphne’s thoughts on AutoML, and</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Daphne Koller, co-Founder and former co-CEO of Coursera and Founder and CEO of Insitro. In our conversation, we discuss the current landscape of pharmaceutical drugs and drug discovery, including the current pricing of drugs, and an overview of Insitro’s goal of using ML as a “compass” in drug discovery. We also explore how Insitro functions as a company, their focus on the biology of drug discovery and the landscape of ML techniques being used, Daphne’s thoughts on AutoML, and ]]>
      </content:encoded>
      <itunes:duration>2589</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[3b15ffb4-cc9d-49d7-b4de-7106fc430a75]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4113206451.mp3?updated=1629244760"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Sensory Prediction Error Signals in the Neocortex with Blake Richards - #331</title>
      <link>https://twimlai.com/twiml-talk-331-sensory-prediction-error-signals-in-the-neocortex-with-blake-richards</link>
      <description>Today we continue our 2019 NeurIPS coverage, this time around joined by Blake Richards, Assistant Professor at McGill University and a Core Faculty Member at Mila. Blake was an invited speaker at the Neuro-AI Workshop, and presented his research on “Sensory Prediction Error Signals in the Neocortex.” In our conversation, we discuss a series of recent studies on two-photon calcium imaging. We talk predictive coding, hierarchical inference, and Blake’s recent work on memory systems for reinforcement learning.</description>
      <pubDate>Tue, 24 Dec 2019 18:55:44 -0000</pubDate>
      <itunes:title>Sensory Prediction Error Signals in the Neocortex with Blake Richards</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>331</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/504a9fb0-ee98-11eb-9502-9b128609680c/image/TWIML_COVER_800x800_BR.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue our 2019 NeurIPS coverage, this time around joined by Blake Richards, Assistant Professor at McGill University and a Core Faculty Member at Mila. In our conversation, we discuss:  His invited talk at the Neuro-AI Workshop “Sensory...</itunes:subtitle>
      <itunes:summary>Today we continue our 2019 NeurIPS coverage, this time around joined by Blake Richards, Assistant Professor at McGill University and a Core Faculty Member at Mila. Blake was an invited speaker at the Neuro-AI Workshop, and presented his research on “Sensory Prediction Error Signals in the Neocortex.” In our conversation, we discuss a series of recent studies on two-photon calcium imaging. We talk predictive coding, hierarchical inference, and Blake’s recent work on memory systems for reinforcement learning.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we continue our 2019 NeurIPS coverage, this time around joined by Blake Richards, Assistant Professor at McGill University and a Core Faculty Member at Mila. Blake was an invited speaker at the Neuro-AI Workshop, and presented his research on “Sensory Prediction Error Signals in the Neocortex.” In our conversation, we discuss a series of recent studies on two-photon calcium imaging. We talk predictive coding, hierarchical inference, and Blake’s recent work on memory systems for reinforcement learning.]]>
      </content:encoded>
      <itunes:duration>2429</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[7f6d7700-72f2-4bea-a9dd-6afc2f616940]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2255079505.mp3?updated=1629244750"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>How to Know with Celeste Kidd - #330</title>
      <link>https://twimlai.com/talk/330</link>
      <description>Today we’re joined by Celeste Kidd, Assistant Professor at UC Berkeley, to discuss her invited talk “How to Know” which details her lab’s research about the core cognitive systems people use to guide their learning about the world. We explore why people are curious about some things but not others, and how past experiences and existing knowledge shape future interests, why people believe what they believe, and how these beliefs are influenced, and how machine learning figures into the equation.</description>
      <pubDate>Mon, 23 Dec 2019 18:46:40 -0000</pubDate>
      <itunes:title>How to Know with Celeste Kidd</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>330</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/50687ca6-ee98-11eb-9502-aba6d1ed1a0c/image/TWIML_COVER_800x800_CK.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we begin our coverage of the 2019 NeurIPS conference with Celeste Kidd, Assistant Professor of Psychology at UC Berkeley. In our conversation, we discuss:  The research at the Kidd Lab, which is focused on understanding “how people come to...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Celeste Kidd, Assistant Professor at UC Berkeley, to discuss her invited talk “How to Know” which details her lab’s research about the core cognitive systems people use to guide their learning about the world. We explore why people are curious about some things but not others, and how past experiences and existing knowledge shape future interests, why people believe what they believe, and how these beliefs are influenced, and how machine learning figures into the equation.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Celeste Kidd, Assistant Professor at UC Berkeley, to discuss her invited talk “How to Know” which details her lab’s research about the core cognitive systems people use to guide their learning about the world. We explore why people are curious about some things but not others, and how past experiences and existing knowledge shape future interests, why people believe what they believe, and how these beliefs are influenced, and how machine learning figures into the equation.]]>
      </content:encoded>
      <itunes:duration>3209</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[faadd503-04c1-4785-952d-e5734fdb397e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8793289302.mp3?updated=1629244798"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Using Deep Learning to Predict Wildfires with Feng Yan - #329</title>
      <link>https://twimlai.com/twiml-talk-329-using-deep-learning-to-predict-wildfires-with-feng-yan</link>
      <description>Today we’re joined by Feng Yan, Assistant Professor at the University of Nevada, Reno to discuss ALERTWildfire, a camera-based network infrastructure that captures satellite imagery of wildfires. In our conversation, Feng details the development of the machine learning models and surrounding infrastructure. We also talk through problem formulation, challenges with using camera and satellite data in this use case, and how he has combined the use of IaaS and FaaS tools for cost-effectiveness and scalability.</description>
      <pubDate>Fri, 20 Dec 2019 22:17:04 -0000</pubDate>
      <itunes:title>Using Deep Learning to Predict Wildfires with Feng Yan</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>329</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/508f3332-ee98-11eb-9502-b34ac62be20f/image/TWIML_COVER_800x800_FY.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Feng Yan, Assistant Professor at the University of Nevada, Reno. In our conversation, we discuss:  ALERTWildfire, a camera-based network infrastructure that captures satellite imagery of wildfires.   The many purposes of...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Feng Yan, Assistant Professor at the University of Nevada, Reno to discuss ALERTWildfire, a camera-based network infrastructure that captures satellite imagery of wildfires. In our conversation, Feng details the development of the machine learning models and surrounding infrastructure. We also talk through problem formulation, challenges with using camera and satellite data in this use case, and how he has combined the use of IaaS and FaaS tools for cost-effectiveness and scalability.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Feng Yan, Assistant Professor at the University of Nevada, Reno to discuss ALERTWildfire, a camera-based network infrastructure that captures satellite imagery of wildfires. In our conversation, Feng details the development of the machine learning models and surrounding infrastructure. We also talk through problem formulation, challenges with using camera and satellite data in this use case, and how he has combined the use of IaaS and FaaS tools for cost-effectiveness and scalability.]]>
      </content:encoded>
      <itunes:duration>3072</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5fc44692-8e05-408e-8f21-79e5dcac8521]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4749219290.mp3?updated=1629244796"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Advancing Machine Learning at Capital One with Dave Castillo - #328</title>
      <link>https://twimlai.com/twiml-talk-328-advancing-machine-learning-at-capital-one-with-dave-castillo</link>
      <description>Today we’re joined by Dave Castillo, Managing VP for ML at Capital One and head of their Center for Machine Learning. In our conversation, we explore Capital One’s transition from “lab-based” ML to enterprise-wide adoption and support of ML, surprising ML use cases, their current platform ecosystem, their design vision in building this into a larger, all-encompassing platform, pain points in building this platform, and much more.</description>
      <pubDate>Thu, 19 Dec 2019 16:56:58 -0000</pubDate>
      <itunes:title>Advancing Machine Learning at Capital One with Dave Castillo</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>328</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/50b74b9c-ee98-11eb-9502-bf0761d0f1ee/image/TWIML_COVER_800x800_DC.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Dave Castillo, Managing Vice President for ML at Capital One and head of their Center for Machine Learning. We caught up with Dave at re:Invent to discuss the aforementioned Center for Machine Learning, and what has changed...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Dave Castillo, Managing VP for ML at Capital One and head of their Center for Machine Learning. In our conversation, we explore Capital One’s transition from “lab-based” ML to enterprise-wide adoption and support of ML, surprising ML use cases, their current platform ecosystem, their design vision in building this into a larger, all-encompassing platform, pain points in building this platform, and much more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Dave Castillo, Managing VP for ML at Capital One and head of their Center for Machine Learning. In our conversation, we explore Capital One’s transition from “lab-based” ML to enterprise-wide adoption and support of ML, surprising ML use cases, their current platform ecosystem, their design vision in building this into a larger, all-encompassing platform, pain points in building this platform, and much more. ]]>
      </content:encoded>
      <itunes:duration>2823</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4f7cc697-6411-4ec1-b6d1-24f086c0f87e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9666105278.mp3?updated=1629244775"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Helping Fish Farmers Feed the World with Deep Learning w/ Bryton Shang - #327</title>
      <link>https://twimlai.com/twiml-talk-327-helping-fish-farmers-feed-the-world-with-deep-learning-w-bryton-shang</link>
      <description>Today we’re joined by Bryton Shang, Founder &amp; CEO at Aquabyte, a company focused on the application of computer vision to various fish farming use cases. In our conversation, we discuss how Bryton identified the various problems associated with mass fish farming, the challenges of developing computer vision algorithms that can measure the height and weight of fish and assess issues like sea lice, and how they’re developing interesting new features such as facial recognition for fish!</description>
      <pubDate>Tue, 17 Dec 2019 17:00:07 -0000</pubDate>
      <itunes:title>Helping Fish Farmers Feed the World with Deep Learning w/ Bryton Shang</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>327</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/50d87984-ee98-11eb-9502-cbda5f3364d8/image/TWIML_COVER_800x800_BS.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Bryton Shang, Founder &amp; CEO at Aquabyte. We caught up with Bryton after his talk at re:Invent’s ML Summit to discuss:  Aquabyte, a company focused on the application of computer vision to fish farming. How Bryton identified...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Bryton Shang, Founder &amp; CEO at Aquabyte, a company focused on the application of computer vision to various fish farming use cases. In our conversation, we discuss how Bryton identified the various problems associated with mass fish farming, the challenges of developing computer vision algorithms that can measure the height and weight of fish and assess issues like sea lice, and how they’re developing interesting new features such as facial recognition for fish!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Bryton Shang, Founder &amp; CEO at Aquabyte, a company focused on the application of computer vision to various fish farming use cases. In our conversation, we discuss how Bryton identified the various problems associated with mass fish farming, the challenges of developing computer vision algorithms that can measure the height and weight of fish and assess issues like sea lice, and how they’re developing interesting new features such as facial recognition for fish!
]]>
      </content:encoded>
      <itunes:duration>2275</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[111dd697-27ac-4e67-bc5c-f191721dc6a0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5748797170.mp3?updated=1629244745"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Metaflow, a Human-Centric Framework for Data Science with Ville Tuulos - #326</title>
      <link>https://twimlai.com/talk/326</link>
      <description>Today we kick off our re:Invent 2019 series with Ville Tuulos, Machine Learning Infrastructure Manager at Netflix. At re:Invent, Netflix announced the open-sourcing of Metaflow, their “human-centric framework for data science.” In our conversation, we discuss all things Metaflow, including features, user experience, tooling, supported libraries, and much more. 

If you’re interested in checking out a Metaflow democast with Ville, reach out at twimlai.com/contact!</description>
      <pubDate>Fri, 13 Dec 2019 20:56:49 -0000</pubDate>
      <itunes:title>Metaflow, a Human-Centric Framework for Data Science with Ville Tuulos</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>326</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/511082a2-ee98-11eb-9502-9f2f7d5b949e/image/TWIML_COVER_800x800_VT.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we kick off our re:Invent 2019 series with Ville Tuulos, Machine Learning Infrastructure Manager at Netflix. At re:Invent, Netflix announced the open-sourcing of Metaflow, their “human-centric framework for data science.” In our...</itunes:subtitle>
      <itunes:summary>Today we kick off our re:Invent 2019 series with Ville Tuulos, Machine Learning Infrastructure Manager at Netflix. At re:Invent, Netflix announced the open-sourcing of Metaflow, their “human-centric framework for data science.” In our conversation, we discuss all things Metaflow, including features, user experience, tooling, supported libraries, and much more. 

If you’re interested in checking out a Metaflow democast with Ville, reach out at twimlai.com/contact!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we kick off our re:Invent 2019 series with Ville Tuulos, Machine Learning Infrastructure Manager at Netflix. At re:Invent, Netflix announced the open-sourcing of Metaflow, their “human-centric framework for data science.” In our conversation, we discuss all things Metaflow, including features, user experience, tooling, supported libraries, and much more. 

If you’re interested in checking out a Metaflow democast with Ville, reach out at twimlai.com/contact! ]]>
      </content:encoded>
      <itunes:duration>3368</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1c85fcf4-4945-4b3d-befb-fd6e2d8b0e05]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4835392173.mp3?updated=1629244839"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Single Headed Attention RNN: Stop Thinking With Your Head with Stephen Merity - #325</title>
      <link>https://twimlai.com/twiml-talk-325-single-headed-attention-rnn-stop-thinking-with-your-head-with-stephen-merity</link>
      <description>Today we’re joined by Stephen Merity, an independent researcher focused on NLP and Deep Learning. In our conversation, we discuss Stephen’s latest paper, Single Headed Attention RNN: Stop Thinking With Your Head, detailing his primary motivations behind the paper, the decision to use SHA-RNNs for this research, how he built and trained the model, his approach to benchmarking, and finally his goals for the research in the broader research community.</description>
      <pubDate>Thu, 12 Dec 2019 19:04:00 -0000</pubDate>
      <itunes:title>Single Headed Attention RNN: Stop Thinking With Your Head with Stephen Merity</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>325</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/513a2fa8-ee98-11eb-9502-4f8929f5c233/image/TWIML_COVER_800x800_SM.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Stephen Merity, startup founder and independent researcher, with a focus on NLP and Deep Learning. In our conversation, we discuss:  Stephen’s newest paper, Single Headed Attention RNN: Stop Thinking With Your Head. His...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Stephen Merity, an independent researcher focused on NLP and Deep Learning. In our conversation, we discuss Stephen’s latest paper, Single Headed Attention RNN: Stop Thinking With Your Head, detailing his primary motivations behind the paper, the decision to use SHA-RNNs for this research, how he built and trained the model, his approach to benchmarking, and finally his goals for the research in the broader research community.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Stephen Merity, an independent researcher focused on NLP and Deep Learning. In our conversation, we discuss Stephen’s latest paper, Single Headed Attention RNN: Stop Thinking With Your Head, detailing his primary motivations behind the paper, the decision to use SHA-RNNs for this research, how he built and trained the model, his approach to benchmarking, and finally his goals for the research in the broader research community.]]>
      </content:encoded>
      <itunes:duration>3543</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a5064ede-bbb2-49bb-8356-21691bfba529]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3242181689.mp3?updated=1635370717"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Automated Model Tuning with SigOpt - #324</title>
      <link>https://twimlai.com/twiml-talk-324-platform-optimization-with-sigopt</link>
      <description>In this TWIML Democast, we're joined by SigOpt Co-Founder and CEO Scott Clark. Scott details the SigOpt platform, and gives us a live demo!

This episode is best consumed by watching the corresponding video demo, which you can find at twimlai.com/talk/324. </description>
      <pubDate>Mon, 09 Dec 2019 20:43:21 -0000</pubDate>
      <itunes:title>Automated Model Tuning with SigOpt</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>324</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/516e4edc-ee98-11eb-9502-a3ceffa0a7c1/image/TWIML_Webinar_800x800_SC.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this TWIML Democast, we're joined by SigOpt Co-Founder and CEO Scott Clark. Scott details the SigOpt platform, and gives us a live demo! This episode is best consumed by watching the corresponding video demo, which you can find at twimlai.com/talk/324.</itunes:subtitle>
      <itunes:summary>In this TWIML Democast, we're joined by SigOpt Co-Founder and CEO Scott Clark. Scott details the SigOpt platform, and gives us a live demo!

This episode is best consumed by watching the corresponding video demo, which you can find at twimlai.com/talk/324. </itunes:summary>
      <content:encoded>
        <![CDATA[In this TWIML Democast, we're joined by SigOpt Co-Founder and CEO Scott Clark. Scott details the SigOpt platform, and gives us a live demo!

This episode is best consumed by watching the corresponding video demo, which you can find at twimlai.com/talk/324. ]]>
      </content:encoded>
      <itunes:duration>2773</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d881bf1b-7e05-404d-b2a5-c63d2495879c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6214594648.mp3?updated=1629244762"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Automated Machine Learning with Erez Barak - #323</title>
      <link>https://twimlai.com/twiml-talk-323-automated-machine-learning-with-erez-barak</link>
      <description>Today we’re joined by Erez Barak, Partner Group Manager of Azure ML at Microsoft. In our conversation, Erez gives us a full breakdown of his AutoML philosophy, and his take on the AutoML space, its role, and its importance. We also discuss the application of AutoML as a contributor to the end-to-end data science process, which Erez breaks down into 3 key areas: Featurization, Learner/Model Selection, and Tuning/Optimizing Hyperparameters. We also discuss post-deployment AutoML use cases, and much more!</description>
      <pubDate>Fri, 06 Dec 2019 16:32:25 -0000</pubDate>
      <itunes:title>Automated Machine Learning with Erez Barak</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>323</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5198d5f8-ee98-11eb-9502-9b7fa02831b5/image/TWIML_COVER_800x800_EB.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In the final episode of our Azure ML series, we’re joined by Erez Barak, Partner Group Manager of Azure ML at Microsoft. In our conversation, we discuss:  Erez’s AutoML philosophy, including how he defines “true AutoML” and his take on the...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Erez Barak, Partner Group Manager of Azure ML at Microsoft. In our conversation, Erez gives us a full breakdown of his AutoML philosophy, and his take on the AutoML space, its role, and its importance. We also discuss the application of AutoML as a contributor to the end-to-end data science process, which Erez breaks down into 3 key areas: Featurization, Learner/Model Selection, and Tuning/Optimizing Hyperparameters. We also discuss post-deployment AutoML use cases, and much more!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Erez Barak, Partner Group Manager of Azure ML at Microsoft. In our conversation, Erez gives us a full breakdown of his AutoML philosophy, and his take on the AutoML space, its role, and its importance. We also discuss the application of AutoML as a contributor to the end-to-end data science process, which Erez breaks down into 3 key areas: Featurization, Learner/Model Selection, and Tuning/Optimizing Hyperparameters. We also discuss post-deployment AutoML use cases, and much more!]]>
      </content:encoded>
      <itunes:duration>2565</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[648dd288-b138-4471-8bbc-ade0982797af]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2387798837.mp3?updated=1629244757"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Responsible AI in Practice with Sarah Bird - #322</title>
      <link>https://twimlai.com/twiml-talk-322-responsible-ai-in-practice-with-sarah-bird</link>
      <description>Today we continue our Azure ML at Microsoft Ignite series joined by Sarah Bird, Principal Program Manager at Microsoft. At Ignite, Microsoft released new tools focused on responsible machine learning, which fall under the umbrella of the Azure ML 'Machine Learning Interpretability Toolkit.’ In our conversation, Sarah walks us through this toolkit, detailing use cases and the user experience. We also discuss her work in differential privacy, and in the broader ML community, in particular, the MLSys conference.</description>
      <pubDate>Wed, 04 Dec 2019 16:10:39 -0000</pubDate>
      <itunes:title>Responsible AI in Practice with Sarah Bird</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>322</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/51bb462e-ee98-11eb-9502-fbea81670eb3/image/TWIML_COVER_800x800_SB.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue our Azure ML at Microsoft Ignite series joined by Sarah Bird, Principal Program Manager at Microsoft. In our conversation, we discuss:  Sarah’s work in machine learning systems, with a focus on bringing machine learning research...</itunes:subtitle>
      <itunes:summary>Today we continue our Azure ML at Microsoft Ignite series joined by Sarah Bird, Principal Program Manager at Microsoft. At Ignite, Microsoft released new tools focused on responsible machine learning, which fall under the umbrella of the Azure ML 'Machine Learning Interpretability Toolkit.’ In our conversation, Sarah walks us through this toolkit, detailing use cases and the user experience. We also discuss her work in differential privacy, and in the broader ML community, in particular, the MLSys conference.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we continue our Azure ML at Microsoft Ignite series joined by Sarah Bird, Principal Program Manager at Microsoft. At Ignite, Microsoft released new tools focused on responsible machine learning, which fall under the umbrella of the Azure ML 'Machine Learning Interpretability Toolkit.’ In our conversation, Sarah walks us through this toolkit, detailing use cases and the user experience. We also discuss her work in differential privacy, and in the broader ML community, in particular, the MLSys conference.]]>
      </content:encoded>
      <itunes:duration>2280</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f7c14f0c-ff6f-4d90-a15a-14e0bc9bacca]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7612018851.mp3?updated=1629244750"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Enterprise Readiness, MLOps and Lifecycle Management with Jordan Edwards - #321</title>
      <link>https://twimlai.com/twiml-talk-321-enterprise-readiness-mlops-and-lifecycle-management-with-jordan-edwards</link>
      <description>Today we’re joined by Jordan Edwards, Principal Program Manager for MLOps on Azure ML at Microsoft. In our conversation, Jordan details how Azure ML accelerates model lifecycle management with MLOps, which enables data scientists to collaborate with IT teams to increase the pace of model development and deployment. We discuss various problems associated with generalizing ML at scale at Microsoft, what exactly MLOps is, the “four phases” along the journey of customer implementation of MLOps, and much more.</description>
      <pubDate>Mon, 02 Dec 2019 16:24:31 -0000</pubDate>
      <itunes:title>Enterprise Readiness, MLOps and Lifecycle Management with Jordan Edwards</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>321</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/51e8fd1c-ee98-11eb-9502-afe10b638f4d/image/TWIML_COVER_800x800_JE.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Jordan Edwards, Principal Program Manager for MLOps on Azure ML at Microsoft. In our conversation, Jordan details:  How Azure ML accelerates model lifecycle management with MLOps, enabling data scientists to collaborate with IT...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Jordan Edwards, Principal Program Manager for MLOps on Azure ML at Microsoft. In our conversation, Jordan details how Azure ML accelerates model lifecycle management with MLOps, which enables data scientists to collaborate with IT teams to increase the pace of model development and deployment. We discuss various problems associated with generalizing ML at scale at Microsoft, what exactly MLOps is, the “four phases” along the journey of customer implementation of MLOps, and much more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Jordan Edwards, Principal Program Manager for MLOps on Azure ML at Microsoft. In our conversation, Jordan details how Azure ML accelerates model lifecycle management with MLOps, which enables data scientists to collaborate with IT teams to increase the pace of model development and deployment. We discuss various problems associated with generalizing ML at scale at Microsoft, what exactly MLOps is, the “four phases” along the journey of customer implementation of MLOps, and much more.]]>
      </content:encoded>
      <itunes:duration>2342</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1d9ce7e1-a7c9-4733-ae13-c29b460fffd3]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4110319185.mp3?updated=1629244753"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>DevOps for ML with Dotscience - #320</title>
      <link>https://twimlai.com/twiml-talk-320-reproducible-accouontable-collaborative-and-continuous-ml-with-dotscience</link>
      <description>Today we’re joined by Luke Marsden, Founder and CEO of Dotscience. Luke walks us through the Dotscience platform and their manifesto on DevOps for ML.

Thanks to Luke and Dotscience for their sponsorship of this Democast and their continued support of TWIML.  

Head to https://twimlai.com/democast/dotscience to watch the full democast!</description>
      <pubDate>Tue, 26 Nov 2019 00:44:04 -0000</pubDate>
      <itunes:title>DevOps for ML with Dotscience</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>320</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/521137be-ee98-11eb-9502-e3e544288ac9/image/TWIML_Webinar_800x800_LM1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Luke Marsden, Founder and CEO of Dotscience. Luke walks us through the Dotscience platform and their manifesto on DevOps for ML. Thanks to Luke and Dotscience for their sponsorship of this Democast and their continued support...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Luke Marsden, Founder and CEO of Dotscience. Luke walks us through the Dotscience platform and their manifesto on DevOps for ML.

Thanks to Luke and Dotscience for their sponsorship of this Democast and their continued support of TWIML.  

Head to https://twimlai.com/democast/dotscience to watch the full democast!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Luke Marsden, Founder and CEO of Dotscience. Luke walks us through the Dotscience platform and their manifesto on DevOps for ML.

Thanks to Luke and Dotscience for their sponsorship of this Democast and their continued support of TWIML.  

Head to https://twimlai.com/democast/dotscience to watch the full democast!]]>
      </content:encoded>
      <itunes:duration>2811</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b75583b2-0aa4-48a4-ad3d-36aae92c4c63]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7503310558.mp3?updated=1629244762"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building an Autonomous Knowledge Graph with Mike Tung - #319</title>
      <link>https://twimlai.com/twiml-talk-319-building-an-autonomous-knowledge-graph-with-mike-tung</link>
      <description>Today we’re joined by Mike Tung, Founder and CEO of Diffbot. In our conversation, we discuss Diffbot’s Knowledge Graph, including how it differs from more mainstream offerings like Google Search and Microsoft Bing. We also discuss the developer experience with the Knowledge Graph and other tools, like the Extraction API and Crawlbot, challenges like knowledge fusion, balancing being a research company that is also commercially viable, and how they approach their role in the research community.</description>
      <pubDate>Thu, 21 Nov 2019 20:27:15 -0000</pubDate>
      <itunes:title>Building an Autonomous Knowledge Graph with Mike Tung</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>319</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5235edfc-ee98-11eb-9502-4325f119e3c0/image/TWIML_COVER_800x800_MT.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Mike Tung, Founder and CEO of Diffbot. In our conversation, we discuss:   Their various tools, including their Knowledge Graph, Extraction API, and Crawlbot. How Knowledge Graph was inspired by ImageNet, how it was built,...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Mike Tung, Founder and CEO of Diffbot. In our conversation, we discuss Diffbot’s Knowledge Graph, including how it differs from more mainstream offerings like Google Search and Microsoft Bing. We also discuss the developer experience with the Knowledge Graph and other tools, like the Extraction API and Crawlbot, challenges like knowledge fusion, balancing being a research company that is also commercially viable, and how they approach their role in the research community.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Mike Tung, Founder and CEO of Diffbot. In our conversation, we discuss Diffbot’s Knowledge Graph, including how it differs from more mainstream offerings like Google Search and Microsoft Bing. We also discuss the developer experience with the Knowledge Graph and other tools, like the Extraction API and Crawlbot, challenges like knowledge fusion, balancing being a research company that is also commercially viable, and how they approach their role in the research community.
]]>
      </content:encoded>
      <itunes:duration>2646</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[335cc560-df77-4b81-891d-b1f5d29ebaad]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2583691248.mp3?updated=1629244759"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Next Generation of Self-Driving Engineers with Aaron Ma - Talk #318</title>
      <link>https://twimlai.com/twiml-talk-318-the-next-generation-of-self-driving-engineers-with-aaron-ma</link>
      <description>Today we’re joined by our youngest guest ever (by far), Aaron Ma, an 11-year-old middle school student and machine learning engineer in training. Aaron has completed over 80(!) Coursera courses and is the recipient of 3 Udacity Nanodegrees. In our conversation, we discuss Aaron’s research interests in reinforcement learning and self-driving cars, his journey from programmer to ML engineer, his experiences participating in Kaggle competitions, and how he balances his passion for ML with day-to-day life.</description>
      <pubDate>Mon, 18 Nov 2019 21:13:18 -0000</pubDate>
      <itunes:title>The Next Generation of Self-Driving Engineers with Aaron Ma</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>318</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/524d52bc-ee98-11eb-9502-cb5eb99370ef/image/TWIML_COVER_800x800_AM1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by our youngest guest ever (by far), Aaron Ma, an 11-year-old middle school student and machine learning engineer in training. Aaron has completed over 80(!) Coursera courses and is the recipient of 3 Udacity Nanodegrees. In our...</itunes:subtitle>
      <itunes:summary>Today we’re joined by our youngest guest ever (by far), Aaron Ma, an 11-year-old middle school student and machine learning engineer in training. Aaron has completed over 80(!) Coursera courses and is the recipient of 3 Udacity Nanodegrees. In our conversation, we discuss Aaron’s research interests in reinforcement learning and self-driving cars, his journey from programmer to ML engineer, his experiences participating in Kaggle competitions, and how he balances his passion for ML with day-to-day life.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by our youngest guest ever (by far), Aaron Ma, an 11-year-old middle school student and machine learning engineer in training. Aaron has completed over 80(!) Coursera courses and is the recipient of 3 Udacity Nanodegrees. In our conversation, we discuss Aaron’s research interests in reinforcement learning and self-driving cars, his journey from programmer to ML engineer, his experiences participating in Kaggle competitions, and how he balances his passion for ML with day-to-day life.]]>
      </content:encoded>
      <itunes:duration>2865</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d91f50f2-642b-42a9-9827-358b0130463b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3839635040.mp3?updated=1629244764"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Spiking Neural Networks: A Primer with Terrence Sejnowski - #317</title>
      <link>https://twimlai.com/twiml-talk-317-spiking-neural-networks-a-primer-with-dr-terrence-sejnowski</link>
      <description>On today’s episode, we’re joined by Terrence Sejnowski to discuss the ins and outs of spiking neural networks, including brain architecture, the relationship between neuroscience and machine learning, and ways to make neural networks more efficient through spiking. Terry also gives us some insight into the hardware used in this field, the major research problems currently being undertaken, and the future of spiking networks.</description>
      <pubDate>Thu, 14 Nov 2019 17:46:31 -0000</pubDate>
      <itunes:title>Spiking Neural Networks: A Primer with Dr. Terrence Sejnowski</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>317</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/52b78376-ee98-11eb-9502-830b311a00b3/image/TWIML_COVER_800x800_TS.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>On today’s episode, we’re joined by Terrence Sejnowski, Francis Crick Chair, head of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies and faculty member at UC San Diego. In our conversation with Terry, we...</itunes:subtitle>
      <itunes:summary>On today’s episode, we’re joined by Terrence Sejnowski to discuss the ins and outs of spiking neural networks, including brain architecture, the relationship between neuroscience and machine learning, and ways to make neural networks more efficient through spiking. Terry also gives us some insight into the hardware used in this field, the major research problems currently being undertaken, and the future of spiking networks.</itunes:summary>
      <content:encoded>
        <![CDATA[On today’s episode, we’re joined by Terrence Sejnowski to discuss the ins and outs of spiking neural networks, including brain architecture, the relationship between neuroscience and machine learning, and ways to make neural networks more efficient through spiking. Terry also gives us some insight into the hardware used in this field, the major research problems currently being undertaken, and the future of spiking networks.]]>
      </content:encoded>
      <itunes:duration>2976</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[116ef3a8-5e00-4324-a03d-104b78340d9b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3161264436.mp3?updated=1627362792"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Bridging the Patient-Physician Gap with ML and Expert Systems w/ Xavier Amatriain - #316</title>
      <link>https://twimlai.com/twiml-talk-316-bridging-the-patient-physician-gap-with-ml-and-expert-systems-w-xavier-amatriain</link>
      <description>Today we’re joined by return guest Xavier Amatriain, Co-founder and CTO of Curai, whose goal is to make healthcare accessible and scalable while bringing down costs. In our conversation, we touch on the shortcomings of traditional primary care, how Curai fills that gap, and some of the unique challenges his team faces in applying ML in the healthcare space. We also discuss the use of expert systems, how they train them, and how NLP projects like BERT and GPT-2 fit into what they’re building.</description>
      <pubDate>Mon, 11 Nov 2019 22:05:16 -0000</pubDate>
      <itunes:title>Bridging the Patient-Physician Gap with ML and Expert Systems w/ Xavier Amatriain</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>316</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/52e42dd6-ee98-11eb-9502-db8c193d4afe/image/TWIML_COVER_800x800_XA.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by return guest Xavier Amatriain, Co-founder and CTO of Curai. In our conversation, we discuss  Curai’s goal of providing the world’s best primary care to patients via their smartphone, and how ML &amp; AI will bring down...</itunes:subtitle>
      <itunes:summary>Today we’re joined by return guest Xavier Amatriain, Co-founder and CTO of Curai, whose goal is to make healthcare accessible and scalable while bringing down costs. In our conversation, we touch on the shortcomings of traditional primary care, how Curai fills that gap, and some of the unique challenges his team faces in applying ML in the healthcare space. We also discuss the use of expert systems, how they train them, and how NLP projects like BERT and GPT-2 fit into what they’re building.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by return guest Xavier Amatriain, Co-founder and CTO of Curai, whose goal is to make healthcare accessible and scalable while bringing down costs. In our conversation, we touch on the shortcomings of traditional primary care, how Curai fills that gap, and some of the unique challenges his team faces in applying ML in the healthcare space. We also discuss the use of expert systems, how they train them, and how NLP projects like BERT and GPT-2 fit into what they’re building.]]>
      </content:encoded>
      <itunes:duration>2342</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1d46a86bfbdb49e69b8aea203189e5de]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7117614828.mp3?updated=1629244749"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>What Does it Mean for a Machine to "Understand"? with Thomas Dietterich - #315</title>
      <link>https://twimlai.com/twiml-talk-315-what-does-it-mean-for-a-machine-to-understand-with-thomas-dietterich</link>
      <description>Today we have the pleasure of being joined by Tom Dietterich, Distinguished Professor Emeritus at Oregon State University. Tom recently wrote a blog post titled “What does it mean for a machine to ‘understand’?”, and in our conversation, he goes into great detail on his thoughts. We cover a lot of ground, including Tom’s position in the debate, his thoughts on the role of systems like deep learning in potentially getting us to AGI, the “hype engine” around AI advancements, and so much more.</description>
      <pubDate>Thu, 07 Nov 2019 19:50:53 -0000</pubDate>
      <itunes:title>What Does it Mean for a Machine to "Understand"? with Thomas Dietterich</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>315</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5310d37c-ee98-11eb-9502-93af8628a209/image/TWIML_COVER_800x800_TD.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Tom Dietterich, Distinguished Professor Emeritus at Oregon State University. We had the pleasure of discussing Tom’s recent blog post, “What does it mean for a machine to ‘understand’?”, in which he discusses:  Tom’s...</itunes:subtitle>
      <itunes:summary>Today we have the pleasure of being joined by Tom Dietterich, Distinguished Professor Emeritus at Oregon State University. Tom recently wrote a blog post titled “What does it mean for a machine to ‘understand’?”, and in our conversation, he goes into great detail on his thoughts. We cover a lot of ground, including Tom’s position in the debate, his thoughts on the role of systems like deep learning in potentially getting us to AGI, the “hype engine” around AI advancements, and so much more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we have the pleasure of being joined by Tom Dietterich, Distinguished Professor Emeritus at Oregon State University. Tom recently wrote a blog post titled “What does it mean for a machine to ‘understand’?”, and in our conversation, he goes into great detail on his thoughts. We cover a lot of ground, including Tom’s position in the debate, his thoughts on the role of systems like deep learning in potentially getting us to AGI, the “hype engine” around AI advancements, and so much more.]]>
      </content:encoded>
      <itunes:duration>2300</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a3514983e46f4534b401876337532c7a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9755862978.mp3?updated=1629244744"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scaling TensorFlow at LinkedIn with Jonathan Hung - #314</title>
      <link>https://twimlai.com/twiml-talk-314-scaling-tensorflow-at-linkedin-with-jonathan-hung</link>
      <description>Today we’re joined by Jonathan Hung, Sr. Software Engineer at LinkedIn. Jonathan gave a presentation at TensorFlow World last week titled “Scaling TensorFlow at LinkedIn.” In our conversation, we discuss their motivation for using TensorFlow on their pre-existing Hadoop cluster infrastructure; TonY, or TensorFlow on YARN, LinkedIn’s framework that natively runs deep learning jobs on Hadoop; its relationship with Pro-ML, LinkedIn’s internal AI platform; and their foray into using Kubernetes for research.</description>
      <pubDate>Mon, 04 Nov 2019 19:46:11 -0000</pubDate>
      <itunes:title>Scaling TensorFlow at LinkedIn with Jonathan Hung</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>314</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/53336036-ee98-11eb-9502-2b6ef09b32e3/image/TWIML_COVER_800x800_JH.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Jonathan Hung, Sr. Software Engineer at LinkedIn, who we caught up with at TensorFlow World last week. In our conversation, we discuss:   Jonathan’s presentation at the event focused on LinkedIn’s efforts scaling...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Jonathan Hung, Sr. Software Engineer at LinkedIn. Jonathan gave a presentation at TensorFlow World last week titled “Scaling TensorFlow at LinkedIn.” In our conversation, we discuss their motivation for using TensorFlow on their pre-existing Hadoop cluster infrastructure; TonY, or TensorFlow on YARN, LinkedIn’s framework that natively runs deep learning jobs on Hadoop; its relationship with Pro-ML, LinkedIn’s internal AI platform; and their foray into using Kubernetes for research.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Jonathan Hung, Sr. Software Engineer at LinkedIn. Jonathan gave a presentation at TensorFlow World last week titled “Scaling TensorFlow at LinkedIn.” In our conversation, we discuss their motivation for using TensorFlow on their pre-existing Hadoop cluster infrastructure; TonY, or TensorFlow on YARN, LinkedIn’s framework that natively runs deep learning jobs on Hadoop; its relationship with Pro-ML, LinkedIn’s internal AI platform; and their foray into using Kubernetes for research.]]>
      </content:encoded>
      <itunes:duration>2120</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f947bfcf7da9450cb03876654c505e38]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8176371241.mp3?updated=1629244748"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Machine Learning at GitHub with Omoju Miller - #313</title>
      <link>https://twimlai.com/twiml-talk-313-machine-learning-at-github-with-omoju-miller</link>
      <description>Today we’re joined by Omoju Miller, a Sr. machine learning engineer at GitHub. In our conversation, we discuss:

• Her dissertation, “Hiphopathy: A Socio-Curricular Study of Introductory Computer Science”
• Her work as an inaugural member of the GitHub machine learning team
• Her two presentations at TensorFlow World, “Why is machine learning seeing exponential growth in its communities” and “Automating your developer workflow on GitHub with TensorFlow”</description>
      <pubDate>Thu, 31 Oct 2019 19:43:46 -0000</pubDate>
      <itunes:title>Machine Learning at GitHub with Omoju Miller</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>313</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/53536a16-ee98-11eb-9502-f7bec7ebfb04/image/TWIML_COVER_800x800_OM.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Omoju Miller, a Sr. machine learning engineer at GitHub. In our conversation, we discuss:  Her dissertation, Hiphopathy, A Socio-Curricular Study of Introductory Computer Science,  Her work as an inaugural member of the...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Omoju Miller, a Sr. machine learning engineer at GitHub. In our conversation, we discuss:

• Her dissertation, “Hiphopathy: A Socio-Curricular Study of Introductory Computer Science”
• Her work as an inaugural member of the GitHub machine learning team
• Her two presentations at TensorFlow World, “Why is machine learning seeing exponential growth in its communities” and “Automating your developer workflow on GitHub with TensorFlow”</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Omoju Miller, a Sr. machine learning engineer at GitHub. In our conversation, we discuss:

• Her dissertation, “Hiphopathy: A Socio-Curricular Study of Introductory Computer Science”
• Her work as an inaugural member of the GitHub machine learning team
• Her two presentations at TensorFlow World, “Why is machine learning seeing exponential growth in its communities” and “Automating your developer workflow on GitHub with TensorFlow”
]]>
      </content:encoded>
      <itunes:duration>2624</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[13b4d54a56434887b20aa2b96f44835b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3668980060.mp3?updated=1627362793"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Using AI to Diagnose and Treat Neurological Disorders with Archana Venkataraman - #312</title>
      <link>https://twimlai.com/twiml-talk-312-using-ai-to-diagnose-and-treat-neurological-disorders-with-archana-venkataraman</link>
      <description>Today we’re joined by Archana Venkataraman, John C. Malone Assistant Professor of Electrical and Computer Engineering at Johns Hopkins University. Archana’s research at the Neural Systems Analysis Laboratory focuses on developing tools, frameworks, and algorithms to better understand and treat neurological and psychiatric disorders, including autism, epilepsy, and others. We explore her work applying machine learning to these problems, including biomarker discovery, disorder severity prediction, and more.</description>
      <pubDate>Mon, 28 Oct 2019 21:43:31 -0000</pubDate>
      <itunes:title>Using AI to Diagnose and Treat Neurological Disorders with Archana Venkataraman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>312</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/537637a8-ee98-11eb-9502-37fff6d8557a/image/TWIML_COVER_800x800_AV.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Archana Venkataraman, John C. Malone Assistant Professor of Electrical and Computer Engineering at Johns Hopkins University, and MIT 35 innovators under 35 recipient. Archana’s research at the Neural Systems Analysis...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Archana Venkataraman, John C. Malone Assistant Professor of Electrical and Computer Engineering at Johns Hopkins University. Archana’s research at the Neural Systems Analysis Laboratory focuses on developing tools, frameworks, and algorithms to better understand and treat neurological and psychiatric disorders, including autism, epilepsy, and others. We explore her work applying machine learning to these problems, including biomarker discovery, disorder severity prediction, and more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Archana Venkataraman, John C. Malone Assistant Professor of Electrical and Computer Engineering at Johns Hopkins University. Archana’s research at the Neural Systems Analysis Laboratory focuses on developing tools, frameworks, and algorithms to better understand and treat neurological and psychiatric disorders, including autism, epilepsy, and others. We explore her work applying machine learning to these problems, including biomarker discovery, disorder severity prediction, and more.]]>
      </content:encoded>
      <itunes:duration>2818</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cb4b79e34de44f89b41d52629ff43ae9]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1661617966.mp3?updated=1629244790"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Learning for Earthquake Aftershock Patterns with Phoebe DeVries &amp; Brendan Meade - #311</title>
      <link>https://twimlai.com/twiml-talk-311-deep-learning-for-earthquake-aftershock-patterns-with-phoebe-devries-brendan-meade</link>
      <description>Today we are joined by Phoebe DeVries, Postdoctoral Fellow in the Department of Earth and Planetary Sciences at Harvard, and Brendan Meade, Professor of Earth and Planetary Sciences at Harvard. Phoebe and Brendan’s work focuses on learning as much as possible about earthquakes before they happen, using measurements of how the earth’s surface moves to predict where future movement will occur, as described in their paper, “Deep learning of aftershock patterns following large earthquakes.”</description>
      <pubDate>Fri, 25 Oct 2019 17:35:36 -0000</pubDate>
      <itunes:title>Deep Learning for Earthquake Aftershock Patterns with Phoebe DeVries &amp; Brendan Meade</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>311</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5398f37e-ee98-11eb-9502-8ff8348d4805/image/TWIML_COVER_800x800_PDBM.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we are joined by Phoebe DeVries, Postdoctoral Fellow in the Department of Earth and Planetary Sciences at Harvard and assistant faculty at the University of Connecticut and Brendan Meade, Professor of Earth and Planetary Sciences and affiliate...</itunes:subtitle>
      <itunes:summary>Today we are joined by Phoebe DeVries, Postdoctoral Fellow in the Department of Earth and Planetary Sciences at Harvard, and Brendan Meade, Professor of Earth and Planetary Sciences at Harvard. Phoebe and Brendan’s work focuses on learning as much as possible about earthquakes before they happen, using measurements of how the earth’s surface moves to predict where future movement will occur, as described in their paper, “Deep learning of aftershock patterns following large earthquakes.”</itunes:summary>
      <content:encoded>
        <![CDATA[Today we are joined by Phoebe DeVries, Postdoctoral Fellow in the Department of Earth and Planetary Sciences at Harvard, and Brendan Meade, Professor of Earth and Planetary Sciences at Harvard. Phoebe and Brendan’s work focuses on learning as much as possible about earthquakes before they happen, using measurements of how the earth’s surface moves to predict where future movement will occur, as described in their paper, “Deep learning of aftershock patterns following large earthquakes.”
]]>
      </content:encoded>
      <itunes:duration>2160</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e707f8dd649844dc8ec53281e1e62964]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2793182593.mp3?updated=1629244746"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Live from TWIMLcon! Operationalizing Responsible AI - #310</title>
      <link>https://twimlai.com/twiml-talk-310-live-from-twimlcon-operationalizing-responsible-ai</link>
      <description>An often-overlooked topic garnered high praise at TWIMLcon this month: operationalizing responsible and ethical AI. The session paired this important topic with an impressive panel of speakers: Rachel Thomas, Director of the Center for Applied Data Ethics at the USF Data Institute; Guillaume Saint-Jacques, Head of Computational Science at LinkedIn; and Parinaz Sobahni, Director of Machine Learning at Georgian Partners; moderated by Khari Johnson, Senior AI Staff Writer at VentureBeat.</description>
      <pubDate>Tue, 22 Oct 2019 13:59:48 -0000</pubDate>
      <itunes:title>Live from TWIMLcon! Operationalizing Responsible AI</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>310</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/53bdee36-ee98-11eb-9502-db50a50b41b1/image/TWIML_Cover_800x800_Panel_4.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>An often-overlooked topic garnered high praise at TWIMLcon this month: operationalizing responsible and ethical AI. Taking on this important topic was an impressive panel of speakers, including Rachel Thomas, Director, Center for Applied...</itunes:subtitle>
      <itunes:summary>An often-overlooked topic garnered high praise at TWIMLcon this month: operationalizing responsible and ethical AI. Taking on this important topic was an impressive panel of speakers: Rachel Thomas, Director of the Center for Applied Data Ethics at the USF Data Institute; Guillaume Saint-Jacques, Head of Computational Science at LinkedIn; and Parinaz Sobahni, Director of Machine Learning at Georgian Partners, moderated by Khari Johnson, Senior AI Staff Writer at VentureBeat.</itunes:summary>
      <content:encoded>
        <![CDATA[An often-overlooked topic garnered high praise at TWIMLcon this month: operationalizing responsible and ethical AI. Taking on this important topic was an impressive panel of speakers: Rachel Thomas, Director of the Center for Applied Data Ethics at the USF Data Institute; Guillaume Saint-Jacques, Head of Computational Science at LinkedIn; and Parinaz Sobahni, Director of Machine Learning at Georgian Partners, moderated by Khari Johnson, Senior AI Staff Writer at VentureBeat.]]>
      </content:encoded>
      <itunes:duration>1840</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[76dbb920be0542888cc115ee9d8e5313]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2461520892.mp3?updated=1629244735"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Live from TWIMLcon! Scaling ML in the Traditional Enterprise - #309</title>
      <link>https://twimlai.com/twiml-talk-309-live-from-twimlcon-scaling-ml-in-the-traditional-enterprise</link>
      <description>Machine learning and AI are finding a place in the traditional enterprise - although the path to get there is different. In this episode, our panel analyzes the state and future of larger, more established brands. Hear from Amr Awadallah, Founder and Global CTO of Cloudera, Pallav Agrawal, Director of Data Science at Levi Strauss &amp; Co., and Jürgen Weichenberger, Data Science Senior Principal &amp; Global AI Lead at Accenture, moderated by Josh Bloom, Professor at UC Berkeley.</description>
      <pubDate>Fri, 18 Oct 2019 14:58:20 -0000</pubDate>
      <itunes:title>Live from TWIMLcon! Scaling ML in the Traditional Enterprise</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>309</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/53dc3062-ee98-11eb-9502-97990fa8657b/image/TWIML_Cover_800x800_Panel_2.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode from a stellar TWIMLcon panel, the state and future of larger, more established brands is analyzed and discussed. Hear from Amr Awadallah, Founder and Global CTO of Cloudera, Pallav Agrawal, Director of Data Science at Levi Strauss...</itunes:subtitle>
      <itunes:summary>Machine learning and AI are finding a place in the traditional enterprise - although the path to get there is different. In this episode, our panel analyzes the state and future of larger, more established brands. Hear from Amr Awadallah, Founder and Global CTO of Cloudera, Pallav Agrawal, Director of Data Science at Levi Strauss &amp; Co., and Jürgen Weichenberger, Data Science Senior Principal &amp; Global AI Lead at Accenture, moderated by Josh Bloom, Professor at UC Berkeley.</itunes:summary>
      <content:encoded>
        <![CDATA[Machine learning and AI are finding a place in the traditional enterprise - although the path to get there is different. In this episode, our panel analyzes the state and future of larger, more established brands. Hear from Amr Awadallah, Founder and Global CTO of Cloudera, Pallav Agrawal, Director of Data Science at Levi Strauss &amp; Co., and Jürgen Weichenberger, Data Science Senior Principal &amp; Global AI Lead at Accenture, moderated by Josh Bloom, Professor at UC Berkeley.]]>
      </content:encoded>
      <itunes:duration>2019</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1a267706b6f44d328fd46382501b9a8d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8325586788.mp3?updated=1627362794"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Live from TWIMLcon! Culture &amp; Organization for Effective ML at Scale (Panel) - #308</title>
      <link>https://twimlai.com/twiml-talk-308-live-from-twimlcon-culture-organization-for-effective-ml-at-scale-panel</link>
      <description>TWIMLcon brought together so many in the ML/AI community to discuss the unique challenges to building and scaling machine learning platforms. In this episode, hear about changing the way companies think about machine learning from a diverse set of panelists including Pardis Noorzad, Data Science Manager at Twitter, Eric Colson, Chief Algorithms Officer Emeritus at Stitch Fix, and Jennifer Prendki, Founder &amp; CEO at Alectio, moderated by Maribel Lopez, Founder &amp; Principal Analyst at Lopez Research.</description>
      <pubDate>Tue, 15 Oct 2019 18:51:40 -0000</pubDate>
      <itunes:title>Live from TWIMLcon! Culture &amp; Organization for Effective ML at Scale (Panel)</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>308</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/53fac752-ee98-11eb-9502-274a54cb1c20/image/TWIML_Cover_800x800_Panel_1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>TWIMLcon brought together so many in the ML/AI community to discuss the unique challenges to building and scaling machine learning platforms. In this episode, hear from a diverse set of panelists including: Pardis Noorzad, Data Science Manager at...</itunes:subtitle>
      <itunes:summary>TWIMLcon brought together so many in the ML/AI community to discuss the unique challenges to building and scaling machine learning platforms. In this episode, hear about changing the way companies think about machine learning from a diverse set of panelists including Pardis Noorzad, Data Science Manager at Twitter, Eric Colson, Chief Algorithms Officer Emeritus at Stitch Fix, and Jennifer Prendki, Founder &amp; CEO at Alectio, moderated by Maribel Lopez, Founder &amp; Principal Analyst at Lopez Research.</itunes:summary>
      <content:encoded>
        <![CDATA[TWIMLcon brought together so many in the ML/AI community to discuss the unique challenges to building and scaling machine learning platforms. In this episode, hear about changing the way companies think about machine learning from a diverse set of panelists including Pardis Noorzad, Data Science Manager at Twitter, Eric Colson, Chief Algorithms Officer Emeritus at Stitch Fix, and Jennifer Prendki, Founder &amp; CEO at Alectio, moderated by Maribel Lopez, Founder &amp; Principal Analyst at Lopez Research.
]]>
      </content:encoded>
      <itunes:duration>1659</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[3d5b95fb10064240a99f129995d0a749]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6963613610.mp3?updated=1629244730"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Live from TWIMLcon! Use-Case Driven ML Platforms with Franziska Bell - #307</title>
      <link>https://twimlai.com/twiml-talk-307-live-from-twimlcon-use-case-driven-ml-platforms-with-franziska-bell</link>
      <description>Today we're joined by Franziska Bell, Ph.D., the Director of Data Science Platforms at Uber, who sat down with Sam on stage at TWIMLcon last week. Fran provided a look into the cutting edge data science available company-wide at the push of a button. Since joining Uber, Fran has developed a portfolio of platforms, ranging from forecasting to conversational AI. Hear how use cases can strategically guide platform development, the evolving relationship between her team and Michelangelo (Uber’s ML Platform) and much more!</description>
      <pubDate>Thu, 10 Oct 2019 17:47:43 -0000</pubDate>
      <itunes:title>Live from TWIMLcon! Use-Case Driven ML Platforms with Franziska Bell</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>307</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/541cb07e-ee98-11eb-9502-870db1642627/image/TWIMLcon_800x800_FB.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Franziska Bell, Ph.D., is the Director of Data Science Platforms at Uber, and joined Sam on stage at TWIMLcon last week to discuss all things platform at Uber. With the goal of providing cutting edge data science company-wide at the push of a button,...</itunes:subtitle>
      <itunes:summary>Today we're joined by Franziska Bell, Ph.D., the Director of Data Science Platforms at Uber, who sat down with Sam on stage at TWIMLcon last week. Fran provided a look into the cutting edge data science available company-wide at the push of a button. Since joining Uber, Fran has developed a portfolio of platforms, ranging from forecasting to conversational AI. Hear how use cases can strategically guide platform development, the evolving relationship between her team and Michelangelo (Uber’s ML Platform) and much more!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we're joined by Franziska Bell, Ph.D., the Director of Data Science Platforms at Uber, who sat down with Sam on stage at TWIMLcon last week. Fran provided a look into the cutting edge data science available company-wide at the push of a button. Since joining Uber, Fran has developed a portfolio of platforms, ranging from forecasting to conversational AI. Hear how use cases can strategically guide platform development, the evolving relationship between her team and Michelangelo (Uber’s ML Platform) and much more!]]>
      </content:encoded>
      <itunes:duration>1936</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b856faee45654f4ab31ed1d3763ce1da]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4288040420.mp3?updated=1627362795"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Live from TWIMLcon! Operationalizing ML at Scale with Hussein Mehanna - #306</title>
      <link>https://twimlai.com/twiml-talk-306-live-from-twimlcon-operationalizing-ml-at-scale-with-hussein-mehanna</link>
      <description>The live interviews from TWIMLcon continue with Hussein Mehanna, Head of ML and AI at Cruise. From his start at Facebook to his current work at Cruise, Hussein has seen firsthand what it takes to scale and sustain machine learning programs. Hear him discuss the challenges (and joys) of working in the industry, his insight into analyzing scale when innovation is happening in parallel with development, his experiences at Facebook, Google, and Cruise, and his predictions for the future of ML platforms!</description>
      <pubDate>Tue, 08 Oct 2019 15:56:33 -0000</pubDate>
      <itunes:title>Live from TWIMLcon! Operationalizing ML at Scale with Hussein Mehanna</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>306</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/543e8474-ee98-11eb-9502-87036505fb05/image/TWIMLcon_800x800_HM.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The live interviews from TWIMLcon continue with Hussein Mehanna, Head of Machine Learning and Artificial Intelligence at Cruise. From his start at Facebook and then Google and now to Cruise, leading the trend of autonomous vehicles, Hussein has seen...</itunes:subtitle>
      <itunes:summary>The live interviews from TWIMLcon continue with Hussein Mehanna, Head of ML and AI at Cruise. From his start at Facebook to his current work at Cruise, Hussein has seen firsthand what it takes to scale and sustain machine learning programs. Hear him discuss the challenges (and joys) of working in the industry, his insight into analyzing scale when innovation is happening in parallel with development, his experiences at Facebook, Google, and Cruise, and his predictions for the future of ML platforms!</itunes:summary>
      <content:encoded>
        <![CDATA[The live interviews from TWIMLcon continue with Hussein Mehanna, Head of ML and AI at Cruise. From his start at Facebook to his current work at Cruise, Hussein has seen firsthand what it takes to scale and sustain machine learning programs. Hear him discuss the challenges (and joys) of working in the industry, his insight into analyzing scale when innovation is happening in parallel with development, his experiences at Facebook, Google, and Cruise, and his predictions for the future of ML platforms!
]]>
      </content:encoded>
      <itunes:duration>2022</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0208c1d906254e7bb23a00f599e2ddda]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8225652586.mp3?updated=1627362795"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Live from TWIMLcon! Encoding Company Culture in Applied AI Systems - #305</title>
      <link>https://twimlai.com/twiml-talk-305-live-from-twimlcon-encoding-company-culture-in-applied-ai-systems</link>
      <description>In this episode, Sam is joined by Deepak Agarwal, VP of Engineering at LinkedIn, who graced the stage at TWIMLcon: AI Platforms for a keynote interview. Deepak shares the impact that standardizing processes and tools has on a company’s culture and productivity levels, and best practices for increasing ML ROI. He also details the Pro-ML initiative for delivering machine learning systems at scale, specifically looking at aligning improvement of tooling and infrastructure with the pace of innovation, and more.</description>
      <pubDate>Fri, 04 Oct 2019 09:00:00 -0000</pubDate>
      <itunes:title>LIVE FROM TWIMLcon! Encoding Company Culture in Applied AI Systems</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>305</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5465117a-ee98-11eb-9502-03d515ab6f5d/image/TWIMLcon_800x800_DA.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, Sam is joined by Deepak Agarwal, VP of Engineering at LinkedIn, who graced the stage at TWIMLcon: AI Platforms for a keynote interview. Deepak shares the incredible impact that standardizing processes and tools...</itunes:subtitle>
      <itunes:summary>In this episode, Sam is joined by Deepak Agarwal, VP of Engineering at LinkedIn, who graced the stage at TWIMLcon: AI Platforms for a keynote interview. Deepak shares the impact that standardizing processes and tools has on a company’s culture and productivity levels, and best practices for increasing ML ROI. He also details the Pro-ML initiative for delivering machine learning systems at scale, specifically looking at aligning improvement of tooling and infrastructure with the pace of innovation, and more.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, Sam is joined by Deepak Agarwal, VP of Engineering at LinkedIn, who graced the stage at TWIMLcon: AI Platforms for a keynote interview. Deepak shares the impact that standardizing processes and tools has on a company’s culture and productivity levels, and best practices for increasing ML ROI. He also details the Pro-ML initiative for delivering machine learning systems at scale, specifically looking at aligning improvement of tooling and infrastructure with the pace of innovation, and more.]]>
      </content:encoded>
      <itunes:duration>1944</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[8efd2de732ff40afb3396592e917840b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2668468008.mp3?updated=1627362795"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Live from TWIMLcon! Overcoming the Barriers to Deep Learning in Production with Andrew Ng - #304</title>
      <link>https://twimlai.com/twiml-talk-304-live-from-twimlcon-overcoming-the-barriers-to-deep-learning-in-production-with-andrew-ng</link>
      <description>Earlier today, Andrew Ng joined us onstage at TWIMLcon. As the Founder and CEO of Landing AI and founding lead of Google Brain, Andrew is no stranger to what it takes for AI and machine learning to be successful. Hear about the work that Landing AI is doing to help organizations adopt modern AI, his experience in overcoming challenges for large companies, and how enterprises can get the most value for their ML investment, as well as addressing the ‘essential complexity’ of software engineering.</description>
      <pubDate>Tue, 01 Oct 2019 18:55:00 -0000</pubDate>
      <itunes:title>LIVE FROM TWIMLcon! Overcoming the Barriers to Deep Learning in Production with Andrew Ng</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>304</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/548ac208-ee98-11eb-9502-03c94c9216d5/image/TWIMLcon_800x800_AN.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Earlier today, Andrew Ng joined us onstage at TWIMLcon to share some of his immense knowledge. As the Founder and CEO of Landing AI, Co-Chairman and Co-Founder of Coursera, and founding lead of Google Brain, Andrew is no stranger to what it...</itunes:subtitle>
      <itunes:summary>Earlier today, Andrew Ng joined us onstage at TWIMLcon. As the Founder and CEO of Landing AI and founding lead of Google Brain, Andrew is no stranger to what it takes for AI and machine learning to be successful. Hear about the work that Landing AI is doing to help organizations adopt modern AI, his experience in overcoming challenges for large companies, and how enterprises can get the most value for their ML investment, as well as addressing the ‘essential complexity’ of software engineering.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Earlier today, Andrew Ng joined us onstage at TWIMLcon. As the Founder and CEO of Landing AI and founding lead of Google Brain, Andrew is no stranger to what it takes for AI and machine learning to be successful. Hear about the work that Landing AI is doing to help organizations adopt modern AI, his experience in overcoming challenges for large companies, and how enterprises can get the most value for their ML investment, as well as addressing the ‘essential complexity’ of software engineering.</p>]]>
      </content:encoded>
      <itunes:duration>2041</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f396bddc70cc4be2ba90127dc50cab42]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4138815001.mp3?updated=1628706222"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Future of Mixed-Autonomy Traffic with Alexandre Bayen - #303</title>
      <link>https://twimlai.com/twiml-talk-303-the-future-of-mixed-autonomy-traffic-with-alexandre-bayen</link>
      <description>Today we are joined by Alexandre Bayen, Director of the Institute for Transportation Studies and Professor at UC Berkeley. Alex's current research in mixed-autonomy traffic explores how the growing automation in self-driving vehicles can be used to improve mobility and the flow of traffic. At the AWS re:Invent conference last year, Alex presented on the future of mixed-autonomy traffic and the two major revolutions he predicts will take place in the next 10-15 years.</description>
      <pubDate>Fri, 27 Sep 2019 18:29:28 -0000</pubDate>
      <itunes:title>The Future of Mixed-Autonomy Traffic with Alexandre Bayen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>303</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/54ac7858-ee98-11eb-9502-2f569d08c329/image/TWIML_COVER_800x800_AB.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we are joined by Alexandre Bayen, Director of the Institute for Transportation Studies and Professor at UC Berkeley. In this episode, we discuss Alex’s background in machine learning, his current research in mixed-autonomy traffic, and the idea...</itunes:subtitle>
      <itunes:summary>Today we are joined by Alexandre Bayen, Director of the Institute for Transportation Studies and Professor at UC Berkeley. Alex's current research in mixed-autonomy traffic explores how the growing automation in self-driving vehicles can be used to improve mobility and the flow of traffic. At the AWS re:Invent conference last year, Alex presented on the future of mixed-autonomy traffic and the two major revolutions he predicts will take place in the next 10-15 years.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we are joined by Alexandre Bayen, Director of the Institute for Transportation Studies and Professor at UC Berkeley. Alex's current research in mixed-autonomy traffic explores how the growing automation in self-driving vehicles can be used to improve mobility and the flow of traffic. At the AWS re:Invent conference last year, Alex presented on the future of mixed-autonomy traffic and the two major revolutions he predicts will take place in the next 10-15 years.]]>
      </content:encoded>
      <itunes:duration>2642</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fef7363e36a74cb397b1f465b858ecc5]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1409893975.mp3?updated=1629244759"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Reinforcement Learning for Logistics at Instadeep with Karim Beguir - #302</title>
      <link>https://twimlai.com/twiml-talk-302-deep-reinforcement-learning-for-logistics-at-instadeep-with-karim-beguir</link>
      <description>Today we are joined by Karim Beguir, Co-Founder and CEO of InstaDeep, a company focusing on building advanced decision-making systems for the enterprise. In this episode, we focus on logistical problems that require decision-making in complex environments using deep learning and reinforcement learning. Karim explains the InstaDeep process and mindset, where they get their data sets, the efficiency of RL, heuristic vs learnability approaches and how explainability fits into the model.</description>
      <pubDate>Wed, 25 Sep 2019 12:54:54 -0000</pubDate>
      <itunes:title>Deep Reinforcement Learning for Logistics at Instadeep with Karim Beguir</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>302</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/54d57992-ee98-11eb-9502-6bbfe76216de/image/TWIML_COVER_800x800_KB.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we are joined by Karim Beguir, Co-Founder and CEO of InstaDeep, a company in Tunisia, Africa focusing on building advanced decision-making systems for the enterprise. In this episode, we discuss where his and InstaDeep’s journey began in...</itunes:subtitle>
      <itunes:summary>Today we are joined by Karim Beguir, Co-Founder and CEO of InstaDeep, a company focusing on building advanced decision-making systems for the enterprise. In this episode, we focus on logistical problems that require decision-making in complex environments using deep learning and reinforcement learning. Karim explains the InstaDeep process and mindset, where they get their data sets, the efficiency of RL, heuristic vs learnability approaches and how explainability fits into the model.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we are joined by Karim Beguir, Co-Founder and CEO of InstaDeep, a company focusing on building advanced decision-making systems for the enterprise. In this episode, we focus on logistical problems that require decision-making in complex environments using deep learning and reinforcement learning. Karim explains the InstaDeep process and mindset, where they get their data sets, the efficiency of RL, heuristic vs learnability approaches and how explainability fits into the model.
]]>
      </content:encoded>
      <itunes:duration>2633</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[331815c01d9c4911a2519917940bd7b5]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9302060155.mp3?updated=1629244753"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Learning with Structured Data w/ Mark Ryan - #301</title>
      <link>https://twimlai.com/twiml-talk-301-deep-learning-with-structured-data-w-mark-ryan</link>
      <description>Today we're joined by Mark Ryan, author of the upcoming book Deep Learning with Structured Data. Working on the support team at IBM Data and AI, he saw a lack of general structured data sets people could apply their models to. Using the streetcar network in Toronto, Mark gathered an open data set that started the research for his latest book. In this episode, Mark shares the benefits of applying deep learning to structured data, his experience with a range of data sets, and details of his new book.</description>
      <pubDate>Thu, 19 Sep 2019 01:43:40 -0000</pubDate>
      <itunes:title>Deep Learning with Structured Data w/ Mark Ryan</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>301</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/54fb7e94-ee98-11eb-9502-770fc34c92d6/image/TWIML_COVER_800x800_MR.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we're joined by Mark Ryan, author of Deep Learning with Structured Data, currently in the Manning Early Access Program (MEAP), due for publication in Spring 2020. While working on the Support team at IBM Data and AI, he saw that there was a lack...</itunes:subtitle>
      <itunes:summary>Today we're joined by Mark Ryan, author of the upcoming book Deep Learning with Structured Data. Working on the support team at IBM Data and AI, he saw a lack of general structured data sets people could apply their models to. Using the streetcar network in Toronto, Mark gathered an open data set that started the research for his latest book. In this episode, Mark shares the benefits of applying deep learning to structured data, his experience with a range of data sets, and details of his new book.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we're joined by Mark Ryan, author of the upcoming book Deep Learning with Structured Data. Working on the support team at IBM Data and AI, he saw a lack of general structured data sets people could apply their models to. Using the streetcar network in Toronto, Mark gathered an open data set that started the research for his latest book. In this episode, Mark shares the benefits of applying deep learning to structured data, his experience with a range of data sets, and details of his new book.]]>
      </content:encoded>
      <itunes:duration>2394</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[12a7dbbe044d41b496469364f506e59d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3552976428.mp3?updated=1629244749"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Time Series Clustering for Monitoring Fueling Infrastructure Performance with Kalai Ramea  - #300</title>
      <link>https://twimlai.com/twiml-talk-300-time-series-clustering-for-monitoring-fueling-infrastructure-performance-with-kalai-ramea</link>
      <description>Today we're joined by Kalai Ramea, Data Scientist at PARC, a Xerox Company. In this episode we discuss her journey buying a hydrogen car and the paper that followed assessing fueling stations. In her next paper, Kalai looked at fuel consumption at hydrogen stations and used temporal clustering to identify signatures of usage over time. With the number of fueling stations planned to increase dramatically in the future, establishing the reliability of their performance is crucial.</description>
      <pubDate>Wed, 18 Sep 2019 02:04:53 -0000</pubDate>
      <itunes:title>Time Series Clustering for Monitoring Fueling Infrastructure Performance with Kalai Ramea</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>300</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/551f8492-ee98-11eb-9502-275fec1ff832/image/TWIML_Cover_800x800_KR.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we're joined by Kalai Ramea, Data Scientist at PARC, a Xerox Company. With a background in transportation, energy efficiency, art, and machine learning, Kalai has been fortunate enough to follow her passions through her work. In this episode we...</itunes:subtitle>
      <itunes:summary>Today we're joined by Kalai Ramea, Data Scientist at PARC, a Xerox Company. In this episode we discuss her journey buying a hydrogen car and the paper that followed, assessing fueling stations. In her next paper, Kalai looked at fuel consumption at hydrogen stations and used temporal clustering to identify signatures of usage over time. With the number of fueling stations planned to increase dramatically in the future, establishing the reliability of their performance is crucial.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we're joined by Kalai Ramea, Data Scientist at PARC, a Xerox Company. In this episode we discuss her journey buying a hydrogen car and the paper that followed, assessing fueling stations. In her next paper, Kalai looked at fuel consumption at hydrogen stations and used temporal clustering to identify signatures of usage over time. With the number of fueling stations planned to increase dramatically in the future, establishing the reliability of their performance is crucial.
]]>
      </content:encoded>
      <itunes:duration>1806</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f7714d0f2321464c9b6598f7a372f40a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1950670754.mp3?updated=1627362797"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Swarm AI for Event Outcome Prediction with Gregg Willcox - TWIML Talk #299</title>
      <link>https://twimlai.com/twiml-talk-299-swarm-ai-for-event-outcome-prediction-with-gregg-willcox</link>
      <description>Today we're joined by Gregg Willcox, Director of Research and Development at Unanimous AI. ‘Swarm AI’ was inspired by the natural phenomenon called 'swarming', in which the collective intelligence of a group produces more accurate results than any individual alone. The game-like platform channels the convictions of individuals toward a consensus, while a behavioral neural network called ‘Conviction’, trained on people’s behavior, further amplifies the results.</description>
      <pubDate>Fri, 13 Sep 2019 16:58:09 -0000</pubDate>
      <itunes:title>Swarm AI for Event Outcome Prediction with Gregg Willcox</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>299</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5540ab0e-ee98-11eb-9502-53310bc3b8a5/image/TWIMLAI_Background_800x800_GreggWillcox.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we're joined by Gregg Willcox, Director of Research and Development at Unanimous AI. Inspired by the natural phenomenon called 'swarming', which uses the collective intelligence of a group to produce more accurate results than an individual...</itunes:subtitle>
      <itunes:summary>Today we're joined by Gregg Willcox, Director of Research and Development at Unanimous AI. ‘Swarm AI’ was inspired by the natural phenomenon called 'swarming', in which the collective intelligence of a group produces more accurate results than any individual alone. The game-like platform channels the convictions of individuals toward a consensus, while a behavioral neural network called ‘Conviction’, trained on people’s behavior, further amplifies the results.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we're joined by Gregg Willcox, Director of Research and Development at Unanimous AI. ‘Swarm AI’ was inspired by the natural phenomenon called 'swarming', in which the collective intelligence of a group produces more accurate results than any individual alone. The game-like platform channels the convictions of individuals toward a consensus, while a behavioral neural network called ‘Conviction’, trained on people’s behavior, further amplifies the results.
]]>
      </content:encoded>
      <itunes:duration>2483</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a28057d8e0074baf91d7d53f41b3bd66]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7531603267.mp3?updated=1629244752"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Rebooting AI: What's Missing, What's Next with Gary Marcus - TWIML Talk #298</title>
      <link>https://twimlai.com/twiml-talk-298-rebooting-ai-whats-missing-whats-next-with-gary-marcus</link>
      <description>Today we're joined by Gary Marcus, CEO and Founder at Robust.AI, well-known scientist, bestselling author, professor and entrepreneur. Hear Gary discuss his latest book, ‘Rebooting AI: Building Artificial Intelligence We Can Trust’, an extensive look into the current gaps, pitfalls and areas for improvement in the field of machine learning and AI. In this episode, Gary provides insight into what we should be talking and thinking about to make even greater (and safer) strides in AI.</description>
      <pubDate>Tue, 10 Sep 2019 14:21:35 -0000</pubDate>
      <itunes:title>Rebooting AI: What's Missing, What's Next with Gary Marcus</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>298</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5571cd38-ee98-11eb-9502-9b232752d0bb/image/TWIMLAI_Background_800x800_GaryMarcus.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we're joined by Gary Marcus, CEO and Founder at Robust.AI, former CEO and Founder of Geometric Intelligence (acquired by Uber) and well-known scientist, bestselling author, professor and entrepreneur. In this episode hear Gary discuss:  His...</itunes:subtitle>
      <itunes:summary>Today we're joined by Gary Marcus, CEO and Founder at Robust.AI, well-known scientist, bestselling author, professor and entrepreneur. Hear Gary discuss his latest book, ‘Rebooting AI: Building Artificial Intelligence We Can Trust’, an extensive look into the current gaps, pitfalls and areas for improvement in the field of machine learning and AI. In this episode, Gary provides insight into what we should be talking and thinking about to make even greater (and safer) strides in AI.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we're joined by Gary Marcus, CEO and Founder at Robust.AI, well-known scientist, bestselling author, professor and entrepreneur. Hear Gary discuss his latest book, ‘Rebooting AI: Building Artificial Intelligence We Can Trust’, an extensive look into the current gaps, pitfalls and areas for improvement in the field of machine learning and AI. In this episode, Gary provides insight into what we should be talking and thinking about to make even greater (and safer) strides in AI.
]]>
      </content:encoded>
      <itunes:duration>2850</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[81bc07abfe03492986eb22cea89ec365]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9063559946.mp3?updated=1629244756"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>DeepQB: Deep Learning to Quantify Quarterback Decision-Making with Brian Burke - TWIML Talk #297</title>
      <link>https://twimlai.com/twiml-talk-297-deepqb-deep-learning-to-quantify-quarterback-decision-making-with-brian-burke</link>
      <description>Today we're joined by Brian Burke, Analytics Specialist with the Stats &amp; Information Group at ESPN. A former Navy pilot and lifelong football fan, Brian saw a parallel between fighter pilots and quarterbacks in the quick decisions both roles make on a regular basis. In this episode, we discuss his paper: “DeepQB: Deep Learning with Player Tracking to Quantify Quarterback Decision-Making &amp; Performance”, what it means for football, and his excitement for machine learning in sports.</description>
      <pubDate>Thu, 05 Sep 2019 18:11:17 -0000</pubDate>
      <itunes:title>DeepQB: Deep Learning to Quantify Quarterback Decision-Making with Brian Burke</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>297</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/559cfe22-ee98-11eb-9502-e73fd62884bc/image/TWIMLAI_Background_800x800_BrianBurke.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we're joined by Brian Burke, Analytics Specialist with the Stats &amp; Information Group at ESPN. A former Navy pilot and lifelong football fan, Brian saw the correlation between fighter pilots and quarterbacks in the quick, pressure-filled...</itunes:subtitle>
      <itunes:summary>Today we're joined by Brian Burke, Analytics Specialist with the Stats &amp; Information Group at ESPN. A former Navy pilot and lifelong football fan, Brian saw a parallel between fighter pilots and quarterbacks in the quick decisions both roles make on a regular basis. In this episode, we discuss his paper: “DeepQB: Deep Learning with Player Tracking to Quantify Quarterback Decision-Making &amp; Performance”, what it means for football, and his excitement for machine learning in sports.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we're joined by Brian Burke, Analytics Specialist with the Stats &amp; Information Group at ESPN. A former Navy pilot and lifelong football fan, Brian saw a parallel between fighter pilots and quarterbacks in the quick decisions both roles make on a regular basis. In this episode, we discuss his paper: “DeepQB: Deep Learning with Player Tracking to Quantify Quarterback Decision-Making &amp; Performance”, what it means for football, and his excitement for machine learning in sports.
]]>
      </content:encoded>
      <itunes:duration>3049</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[bedc36561a40401c93fbb537f5e63432]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4571622826.mp3?updated=1629244765"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Measuring Performance Under Pressure Using ML with Lotte Bransen - TWIML Talk #296</title>
      <link>https://twimlai.com/twiml-talk-296-measuring-performance-under-pressure-using-ml-with-lotte-bransen</link>
      <description>Today we're joined by Lotte Bransen, a Scientific Researcher at SciSports. With a background in mathematics, econometrics, and soccer, Lotte has honed her research on analytics of the game and its players, using trained models to understand the impact of mental pressure on a player’s performance. In this episode, Lotte discusses her paper, ‘Choke or Shine? Quantifying Soccer Players' Abilities to Perform Under Mental Pressure’ and the implications of her research in the world of sports.</description>
      <pubDate>Tue, 03 Sep 2019 17:30:13 -0000</pubDate>
      <itunes:title>Measuring Performance Under Pressure Using ML with Lotte Bransen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>296</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/55c19818-ee98-11eb-9502-5368cdc77936/image/TWIMLAI_Background_800x800_LotteBransen.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we're joined by Lotte Bransen, Scientific Researcher at SciSports. With a background in mathematics, econometrics and soccer, Lotte has honed her research on analytics of the game and its players. More specifically, using trained models to...</itunes:subtitle>
      <itunes:summary>Today we're joined by Lotte Bransen, a Scientific Researcher at SciSports. With a background in mathematics, econometrics, and soccer, Lotte has honed her research on analytics of the game and its players, using trained models to understand the impact of mental pressure on a player’s performance. In this episode, Lotte discusses her paper, ‘Choke or Shine? Quantifying Soccer Players' Abilities to Perform Under Mental Pressure’ and the implications of her research in the world of sports.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we're joined by Lotte Bransen, a Scientific Researcher at SciSports. With a background in mathematics, econometrics, and soccer, Lotte has honed her research on analytics of the game and its players, using trained models to understand the impact of mental pressure on a player’s performance. In this episode, Lotte discusses her paper, ‘Choke or Shine? Quantifying Soccer Players' Abilities to Perform Under Mental Pressure’ and the implications of her research in the world of sports.
]]>
      </content:encoded>
      <itunes:duration>2080</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4e6930fa7cf34469addf4792ceb852a1]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7494549478.mp3?updated=1629244737"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Managing Deep Learning Experiments with Lukas Biewald - TWIML Talk #295</title>
      <link>https://twimlai.com/twiml-talk-295-managing-deep-learning-experiments-with-lukas-biewald</link>
      <description>Today we're joined by Lukas Biewald, CEO and Co-Founder of Weights &amp; Biases. Lukas founded the company after seeing a need for reproducibility in deep learning experiments. In this episode, we discuss his experiment tracking tool, how it works, the components that make it unique, and the collaborative culture that Lukas promotes. Listen in to how he got his start in deep learning and experiment tracking, the current Weights &amp; Biases success strategy, and what his team is working on today.</description>
      <pubDate>Thu, 29 Aug 2019 18:09:23 -0000</pubDate>
      <itunes:title>Managing Deep Learning Experiments with Lukas Biewald</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>295</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/55f10526-ee98-11eb-9502-8f8206245097/image/TWIMLAI_Background_800x800_LukasBiewald.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we're joined by Lukas Biewald, CEO and Co-Founder of Weights &amp; Biases. Lukas, previously CEO and Founder of Figure Eight (CrowdFlower), has a straightforward goal: provide researchers with SaaS that is easy to install, simple to operate, and...</itunes:subtitle>
      <itunes:summary>Today we're joined by Lukas Biewald, CEO and Co-Founder of Weights &amp; Biases. Lukas founded the company after seeing a need for reproducibility in deep learning experiments. In this episode, we discuss his experiment tracking tool, how it works, the components that make it unique, and the collaborative culture that Lukas promotes. Listen in to how he got his start in deep learning and experiment tracking, the current Weights &amp; Biases success strategy, and what his team is working on today.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we're joined by Lukas Biewald, CEO and Co-Founder of Weights &amp; Biases. Lukas founded the company after seeing a need for reproducibility in deep learning experiments. In this episode, we discuss his experiment tracking tool, how it works, the components that make it unique, and the collaborative culture that Lukas promotes. Listen in to how he got his start in deep learning and experiment tracking, the current Weights &amp; Biases success strategy, and what his team is working on today.
]]>
      </content:encoded>
      <itunes:duration>2537</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f3658ebacc4747c6b504892187a29e65]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9460720347.mp3?updated=1629244753"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Re-Architecting Data Science at iRobot with Angela Bassa - TWIML Talk #294</title>
      <link>https://twimlai.com/twiml-talk-294-re-architecting-data-science-at-irobot-with-angela-bassa</link>
      <description>Today we’re joined by Angela Bassa, Director of Data Science at iRobot. In our conversation, Angela and I discuss:

• iRobot's re-architecture, and a look at the evolution of iRobot.

• Where iRobot gets its data from and how they taxonomize data science.

• The platforms and processes that have been put into place to support delivering models in production.

• The role of DevOps in bringing these various platforms together, and much more!</description>
      <pubDate>Mon, 26 Aug 2019 18:54:24 -0000</pubDate>
      <itunes:title>Re-Architecting Data Science at iRobot with Angela Bassa</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>294</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/561c0d5c-ee98-11eb-9502-a38006b88044/image/TWIMLAI_Background_800x800_AngelaBassa.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Angela Bassa, Director of Data Science at iRobot. In our conversation, Angela and I discuss: • iRobot's re-architecture, and a look at the evolution of iRobot.  • Where iRobot gets its data from and how they taxonomize data...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Angela Bassa, Director of Data Science at iRobot. In our conversation, Angela and I discuss:

• iRobot's re-architecture, and a look at the evolution of iRobot.

• Where iRobot gets its data from and how they taxonomize data science.

• The platforms and processes that have been put into place to support delivering models in production.

• The role of DevOps in bringing these various platforms together, and much more!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Angela Bassa, Director of Data Science at iRobot. In our conversation, Angela and I discuss:

• iRobot's re-architecture, and a look at the evolution of iRobot.

• Where iRobot gets its data from and how they taxonomize data science.

• The platforms and processes that have been put into place to support delivering models in production.

• The role of DevOps in bringing these various platforms together, and much more!]]>
      </content:encoded>
      <itunes:duration>2934</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6d1f9be2b1e34813af123513aea65855]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7977406296.mp3?updated=1629244766"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Disentangled Representations &amp; Google Research Football with Olivier Bachem - TWIML Talk #293</title>
      <link>https://twimlai.com/twiml-talk-293-disentangled-representations-google-research-football-with-olivier-bachem</link>
      <description>Today we’re joined by Olivier Bachem, a research scientist at Google AI on the Brain team.

Olivier joins us to discuss his work on the Google Research Football project, the team's foray into building a novel reinforcement learning environment. Olivier and Sam discuss what makes this environment different than other available RL environments, such as OpenAI Gym and PyGame, what other techniques they explored while using this environment, and what’s on the horizon for their team and Football RLE.</description>
      <pubDate>Thu, 22 Aug 2019 17:00:45 -0000</pubDate>
      <itunes:title>Disentangled Representations &amp; Google Research Football with Olivier Bachem</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>293</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/564448bc-ee98-11eb-9502-837e72132d97/image/TWIMLAI_Background_800x800_OlivierBachem.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Olivier Bachem, a research scientist at Google AI on the Brain team. Initially, Olivier joined us to discuss his work on Google’s research football project, their foray into building a novel reinforcement learning...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Olivier Bachem, a research scientist at Google AI on the Brain team.

Olivier joins us to discuss his work on the Google Research Football project, the team's foray into building a novel reinforcement learning environment. Olivier and Sam discuss what makes this environment different than other available RL environments, such as OpenAI Gym and PyGame, what other techniques they explored while using this environment, and what’s on the horizon for their team and Football RLE.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Olivier Bachem, a research scientist at Google AI on the Brain team.

Olivier joins us to discuss his work on the Google Research Football project, the team's foray into building a novel reinforcement learning environment. Olivier and Sam discuss what makes this environment different than other available RL environments, such as OpenAI Gym and PyGame, what other techniques they explored while using this environment, and what’s on the horizon for their team and Football RLE.
]]>
      </content:encoded>
      <itunes:duration>2570</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[129fdce395b641d88013fb8134230ebf]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1170095484.mp3?updated=1629244757"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Neural Network Quantization and Compression with Tijmen Blankevoort - TWIML Talk #292</title>
      <link>https://twimlai.com/twiml-talk-292-neural-network-quantization-and-compression-with-tijmen-blankevoort</link>
      <description>Today we’re joined by Tijmen Blankevoort, a staff engineer at Qualcomm, who leads their compression and quantization research teams. In our conversation with Tijmen we discuss: 

• The ins and outs of compression and quantization of ML models, specifically NNs,

• How much models can actually be compressed, and the best way to achieve compression, 

• A look at a few recent papers, including “The Lottery Ticket Hypothesis.”</description>
      <pubDate>Mon, 19 Aug 2019 18:07:03 -0000</pubDate>
      <itunes:title>Neural Network Quantization and Compression with Tijmen Blankevoort</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>292</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5664b5e8-ee98-11eb-9502-f342aa558575/image/TWIMLAI_Background_800x800_TijmenBlankevoort.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Tijmen Blankevoort, a staff engineer at Qualcomm, who leads their compression and quantization research teams. Tijmen is also co-founder of ML startup Scyfer, along with Qualcomm colleague Max Welling, who we spoke with back on...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Tijmen Blankevoort, a staff engineer at Qualcomm, who leads their compression and quantization research teams. In our conversation with Tijmen we discuss: 

• The ins and outs of compression and quantization of ML models, specifically NNs,

• How much models can actually be compressed, and the best way to achieve compression, 

• A look at a few recent papers, including “The Lottery Ticket Hypothesis.”</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Tijmen Blankevoort, a staff engineer at Qualcomm, who leads their compression and quantization research teams. In our conversation with Tijmen we discuss: 

• The ins and outs of compression and quantization of ML models, specifically NNs,

• How much models can actually be compressed, and the best way to achieve compression, 

• A look at a few recent papers, including “The Lottery Ticket Hypothesis.”]]>
      </content:encoded>
      <itunes:duration>3017</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6d9dfa86122047bebc49abad3ae78bc8]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9516373983.mp3?updated=1629244768"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Identifying New Materials with NLP with Anubhav Jain - TWIML Talk #291</title>
      <link>https://twimlai.com/twiml-talk-291-identifying-new-materials-with-nlp-with-anubhav-jain</link>
      <description>Today we are joined by Anubhav Jain, Staff Scientist &amp; Chemist at Lawrence Berkeley National Lab. We discuss his latest paper, ‘Unsupervised word embeddings capture latent knowledge from materials science literature’. Anubhav explains the design of a system that takes the literature and uses natural language processing to conceptualize complex materials science concepts. He also discusses scientific literature mining and how the method can recommend materials for functional applications in the future.</description>
      <pubDate>Thu, 15 Aug 2019 18:58:01 -0000</pubDate>
      <itunes:title>Identifying New Materials with NLP with Anubhav Jain</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>291</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/568c0bd4-ee98-11eb-9502-e7f64a2eda95/image/TWIMLAI_Background_800x800_AnubhavJain.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we are joined by Anubhav Jain, Staff Scientist &amp; Chemist at Lawrence Berkeley National Lab. Anubhav leads the Hacker Materials Research Group, where his research focuses on applying computing to accelerate the process of finding new...</itunes:subtitle>
      <itunes:summary>Today we are joined by Anubhav Jain, Staff Scientist &amp; Chemist at Lawrence Berkeley National Lab. We discuss his latest paper, ‘Unsupervised word embeddings capture latent knowledge from materials science literature’. Anubhav explains the design of a system that takes the literature and uses natural language processing to conceptualize complex materials science concepts. He also discusses scientific literature mining and how the method can recommend materials for functional applications in the future.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we are joined by Anubhav Jain, Staff Scientist &amp; Chemist at Lawrence Berkeley National Lab. We discuss his latest paper, ‘Unsupervised word embeddings capture latent knowledge from materials science literature’. Anubhav explains the design of a system that takes the literature and uses natural language processing to conceptualize complex materials science concepts. He also discusses scientific literature mining and how the method can recommend materials for functional applications in the future.]]>
      </content:encoded>
      <itunes:duration>2397</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[eb8007f16c7842f492d130489347ca7c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9858985478.mp3?updated=1627362800"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Problem with Black Boxes with Cynthia Rudin - TWIML Talk #290</title>
      <link>https://twimlai.com/twiml-talk-290-the-problem-with-black-boxes-with-cynthia-rudin</link>
      <description>Today we are joined by Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University. In this episode we discuss her paper, ‘Please Stop Explaining Black Box Models for High Stakes Decisions’, and how interpretable models make for more comprehensible decisions, which is extremely important when dealing with human lives. Cynthia explains black box and interpretable models, their development, use cases, and her future plans in the field.</description>
      <pubDate>Wed, 14 Aug 2019 13:38:00 -0000</pubDate>
      <itunes:title>The Problem with Black Boxes with Cynthia Rudin</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>290</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/56b7080c-ee98-11eb-9502-73962d2b7e29/image/TWIMLAI_Background_800x800_CynthiaRudin_1.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>You asked, we listened! Today, by listener request, we are joined by Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University. Cynthia is passionate about machine learning and social...</itunes:subtitle>
      <itunes:summary>Today we are joined by Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University. In this episode we discuss her paper, ‘Please Stop Explaining Black Box Models for High Stakes Decisions’, and how interpretable models yield more comprehensible decisions, which is critical when human lives are at stake. Cynthia explains black box and interpretable models, their development, use cases, and her future plans in the field.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we are joined by Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science at Duke University. In this episode we discuss her paper, ‘Please Stop Explaining Black Box Models for High Stakes Decisions’, and how interpretable models yield more comprehensible decisions, which is critical when human lives are at stake. Cynthia explains black box and interpretable models, their development, use cases, and her future plans in the field.]]>
      </content:encoded>
      <itunes:duration>2909</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a3e7e94848874fba9666aa346a45f16f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1098781095.mp3?updated=1627362800"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Human-Robot Interaction and Empathy with Kate Darling - TWIML Talk #289</title>
      <link>https://twimlai.com/twiml-talk-289-human-robot-interaction-and-empathy-with-kate-darling</link>
      <description>Today we’re joined by Dr. Kate Darling, Research Specialist at the MIT Media Lab. Kate’s focus is on robot ethics, the social implications of how people treat robots, and the purposeful design of robots in our daily lives. We discuss measuring empathy, the impact of robot treatment on kids’ behavior, the correlation between animals and robots, and why 'effective' robots aren’t always humanoid. Kate combines a wealth of knowledge with an analytical mind that questions the why and how of human-robot interaction.</description>
      <pubDate>Thu, 08 Aug 2019 16:42:24 -0000</pubDate>
      <itunes:title>Human-Robot Interaction and Empathy with Kate Darling</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>289</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/56de303a-ee98-11eb-9502-cbdf0b4602ea/image/TWIMLAI_Background_800x800_KateDarling.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Dr. Kate Darling, Research Specialist at the MIT Media Lab. Kate’s focus is on robot ethics and interaction, namely the social implication of how people treat robots and the purposeful design of robots in our daily lives....</itunes:subtitle>
      <itunes:summary>Today we’re joined by Dr. Kate Darling, Research Specialist at the MIT Media Lab. Kate’s focus is on robot ethics, the social implications of how people treat robots, and the purposeful design of robots in our daily lives. We discuss measuring empathy, the impact of robot treatment on kids’ behavior, the correlation between animals and robots, and why 'effective' robots aren’t always humanoid. Kate combines a wealth of knowledge with an analytical mind that questions the why and how of human-robot interaction.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Dr. Kate Darling, Research Specialist at the MIT Media Lab. Kate’s focus is on robot ethics, the social implications of how people treat robots, and the purposeful design of robots in our daily lives. We discuss measuring empathy, the impact of robot treatment on kids’ behavior, the correlation between animals and robots, and why 'effective' robots aren’t always humanoid. Kate combines a wealth of knowledge with an analytical mind that questions the why and how of human-robot interaction.]]>
      </content:encoded>
      <itunes:duration>2637</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[eca0186dafeb48a5aec882606ac81beb]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9352676710.mp3?updated=1629244757"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Automated ML for RNA Design with Danny Stoll - TWIML Talk #288</title>
      <link>https://twimlai.com/twiml-talk-288-automated-ml-for-rna-design-with-danny-stoll</link>
      <description>Today we’re joined by Danny Stoll, Research Assistant at the University of Freiburg. Danny’s current research can be encapsulated in his latest paper, ‘Learning to Design RNA’. In this episode, Danny explains the design process through reverse engineering and how his team’s deep learning algorithm is applied to train and design sequences. We discuss transfer learning, multitask learning, ablation studies, hyperparameter optimization, and the difference between chemical- and statistics-based approaches.</description>
      <pubDate>Mon, 05 Aug 2019 17:31:43 -0000</pubDate>
      <itunes:title>Automated ML for RNA Design with Danny Stoll</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>288</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5703711a-ee98-11eb-9502-3bafa731e865/image/TWIMLAI_Background_800x800_-DannyStoll.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Danny Stoll, Research Assistant at the University of Freiburg. Since high school, Danny has been fascinated by Deep Learning which has grown into a desire to make machine learning available to anyone with interest. Danny’s...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Danny Stoll, Research Assistant at the University of Freiburg. Danny’s current research can be encapsulated in his latest paper, ‘Learning to Design RNA’. In this episode, Danny explains the design process through reverse engineering and how his team’s deep learning algorithm is applied to train and design sequences. We discuss transfer learning, multitask learning, ablation studies, hyperparameter optimization, and the difference between chemical- and statistics-based approaches.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Danny Stoll, Research Assistant at the University of Freiburg. Danny’s current research can be encapsulated in his latest paper, ‘Learning to Design RNA’. In this episode, Danny explains the design process through reverse engineering and how his team’s deep learning algorithm is applied to train and design sequences. We discuss transfer learning, multitask learning, ablation studies, hyperparameter optimization, and the difference between chemical- and statistics-based approaches.]]>
      </content:encoded>
      <itunes:duration>2237</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[3fb711138b7a4e3d8a247bdabce6d719]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3658642601.mp3?updated=1629244747"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Developing a brain atlas using deep learning with Theofanis Karayannis - TWIML Talk #287</title>
      <link>https://twimlai.com/twiml-talk-287-developing-a-brain-atlas-using-deep-learning</link>
      <description>Today we’re joined by Theofanis Karayannis, Assistant Professor at the Brain Research Institute of the University of Zurich. Theo’s research is focused on brain circuit development and uses deep learning methods to segment brain regions and then detect the connections around each region. He then examines the distribution of connections that underlie the everyday neurological decisions of both animals and humans. From the way images of the brain are collected to genetic trackability, this episode has it all.</description>
      <pubDate>Thu, 01 Aug 2019 16:33:26 -0000</pubDate>
      <itunes:title>Developing a brain atlas using deep learning with Theofanis Karayannis</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>287</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/57237668-ee98-11eb-9502-27d41aea0a10/image/TWIMLAI_Background_800x800_TheoK.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Theofanis Karayannis, Assistant Professor at the Brain Research Institute of the University of Zurich. Theo’s research is currently focused on understanding how circuits in the brain are formed during development and modified...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Theofanis Karayannis, Assistant Professor at the Brain Research Institute of the University of Zurich. Theo’s research is focused on brain circuit development and uses deep learning methods to segment brain regions and then detect the connections around each region. He then examines the distribution of connections that underlie the everyday neurological decisions of both animals and humans. From the way images of the brain are collected to genetic trackability, this episode has it all.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Theofanis Karayannis, Assistant Professor at the Brain Research Institute of the University of Zurich. Theo’s research is focused on brain circuit development and uses deep learning methods to segment brain regions and then detect the connections around each region. He then examines the distribution of connections that underlie the everyday neurological decisions of both animals and humans. From the way images of the brain are collected to genetic trackability, this episode has it all.]]>
      </content:encoded>
      <itunes:duration>2243</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[db9fbbccbc7d451b9531a4fd2fa3f23f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6169268788.mp3?updated=1629244752"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Environmental Impact of Large-Scale NLP Model Training with Emma Strubell - TWIML Talk #286</title>
      <link>https://twimlai.com/twiml-talk-286-environmental-impact-of-large-scale-nlp-model-training-with-emma-strubell</link>
      <description>Today we’re joined by Emma Strubell, currently a visiting scientist at Facebook AI Research. Emma’s focus is bringing state of the art NLP systems to practitioners by developing efficient and robust machine learning models. Her paper, ‘Energy and Policy Considerations for Deep Learning in NLP’, examines the carbon emissions of training neural networks, a cost that keeps rising as models chase ever-higher accuracy. In this episode, we discuss Emma’s research methods, how companies are reacting to environmental concerns, and how we can do better.</description>
      <pubDate>Mon, 29 Jul 2019 18:26:08 -0000</pubDate>
      <itunes:title>Environmental Impact of Large-Scale NLP Model Training with Emma Strubell</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>286</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/574b3194-ee98-11eb-9502-cb2f6779d54f/image/TWIMLAI_Background_800x800_EmmaStrubell.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Emma Strubell, currently a visiting scientist at Facebook AI Research. Emma’s focus is on NLP and bringing state of the art NLP systems to practitioners by developing efficient and robust machine learning models. Her paper,...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Emma Strubell, currently a visiting scientist at Facebook AI Research. Emma’s focus is bringing state of the art NLP systems to practitioners by developing efficient and robust machine learning models. Her paper, ‘Energy and Policy Considerations for Deep Learning in NLP’, examines the carbon emissions of training neural networks, a cost that keeps rising as models chase ever-higher accuracy. In this episode, we discuss Emma’s research methods, how companies are reacting to environmental concerns, and how we can do better.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Emma Strubell, currently a visiting scientist at Facebook AI Research. Emma’s focus is bringing state of the art NLP systems to practitioners by developing efficient and robust machine learning models. Her paper, ‘Energy and Policy Considerations for Deep Learning in NLP’, examines the carbon emissions of training neural networks, a cost that keeps rising as models chase ever-higher accuracy. In this episode, we discuss Emma’s research methods, how companies are reacting to environmental concerns, and how we can do better.]]>
      </content:encoded>
      <itunes:duration>2242</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0d986f0306bc42a8810217570a53836b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5753102354.mp3?updated=1629244743"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>“Fairwashing” and the Folly of ML Solutionism with Zachary Lipton - TWIML Talk #285</title>
      <link>https://twimlai.com/twiml-talk-285-fairwashing-and-the-folly-of-ml-solutionism-with-zachary-lipton</link>
      <description>Today we’re joined by Zachary Lipton, Assistant Professor in the Tepper School of Business. With a theme of data interpretation, Zachary’s research is focused on machine learning in healthcare, with the goal of assisting physicians through the diagnosis and treatment process. We discuss supervised learning in the medical field, robustness under distribution shifts, ethics in machine learning systems across industries, the concept of ‘fairwashing’, and more.</description>
      <pubDate>Thu, 25 Jul 2019 15:47:19 -0000</pubDate>
      <itunes:title>“Fairwashing” and the Folly of ML Solutionism with Zachary Lipton</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>285</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/576cf66c-ee98-11eb-9502-6fda46f13934/image/TWIMLAI_Background_800x800_-ZacharyLipton.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Zachary Lipton, Assistant Professor in the Tepper School of Business. With an overarching theme of data quality and interpretation, Zachary's research and work is focused on machine learning in healthcare, with the goal of not...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Zachary Lipton, Assistant Professor in the Tepper School of Business. With a theme of data interpretation, Zachary’s research is focused on machine learning in healthcare, with the goal of assisting physicians through the diagnosis and treatment process. We discuss supervised learning in the medical field, robustness under distribution shifts, ethics in machine learning systems across industries, the concept of ‘fairwashing’, and more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Zachary Lipton, Assistant Professor in the Tepper School of Business. With a theme of data interpretation, Zachary’s research is focused on machine learning in healthcare, with the goal of assisting physicians through the diagnosis and treatment process. We discuss supervised learning in the medical field, robustness under distribution shifts, ethics in machine learning systems across industries, the concept of ‘fairwashing’, and more.]]>
      </content:encoded>
      <itunes:duration>4513</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c96ab7a5984f4b82a471b4a1ce57934c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9605744678.mp3?updated=1629244860"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Retinal Image Generation for Disease Discovery with Stephen Odaibo - TWIML Talk #284</title>
      <link>https://twimlai.com/twiml-talk-284-retinal-image-generation-for-disease-discovery-with-stephen-odaibo</link>
      <description>Today we’re joined by Dr. Stephen Odaibo, Founder and CEO of RETINA-AI Health Inc. Stephen’s journey to machine learning and AI includes degrees in math, medicine and computer science, which led him to an ophthalmology practice before becoming an entrepreneur. In this episode we discuss his expertise in ophthalmology and engineering along with the current state of both industries that led him to build autonomous systems that diagnose and treat retinal diseases.</description>
      <pubDate>Mon, 22 Jul 2019 16:05:26 -0000</pubDate>
      <itunes:title>Retinal Image Generation for Disease Discovery with Stephen Odaibo</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>284</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/57980136-ee98-11eb-9502-ef717fc1fe4f/image/TWIMLAI_Background_800x800_StephenOdaibo.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Dr. Stephen Odaibo, Founder and CEO of RETINA-AI Health Inc. Stephen’s unique journey to machine learning and AI includes degrees in math, medicine and computer science, which led him to an ophthalmology practice before...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Dr. Stephen Odaibo, Founder and CEO of RETINA-AI Health Inc. Stephen’s journey to machine learning and AI includes degrees in math, medicine and computer science, which led him to an ophthalmology practice before becoming an entrepreneur. In this episode we discuss his expertise in ophthalmology and engineering along with the current state of both industries that led him to build autonomous systems that diagnose and treat retinal diseases.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Dr. Stephen Odaibo, Founder and CEO of RETINA-AI Health Inc. Stephen’s journey to machine learning and AI includes degrees in math, medicine and computer science, which led him to an ophthalmology practice before becoming an entrepreneur. In this episode we discuss his expertise in ophthalmology and engineering along with the current state of both industries that led him to build autonomous systems that diagnose and treat retinal diseases.]]>
      </content:encoded>
      <itunes:duration>2471</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c132d6c93c074c3da7c88928e49ab5fe]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2953846855.mp3?updated=1629244744"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Real world model explainability with Rayid Ghani - TWiML Talk #283</title>
      <link>https://twimlai.com/twiml-talk-283-real-world-model-explainability-with-rayid-ghani</link>
      <description>Today we’re joined by Rayid Ghani, Director of the Center for Data Science and Public Policy at the University of Chicago. Drawing on his range of experience, Rayid saw that while automated predictions can be helpful, they don’t always paint a full picture. The key is relevant context when making tough decisions involving humans and their lives. We delve into the world of explainability methods, necessary human involvement, machine feedback loops, and more.</description>
      <pubDate>Thu, 18 Jul 2019 16:00:00 -0000</pubDate>
      <itunes:title>Real world model explainability with Rayid Ghani</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>283</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/57c9150a-ee98-11eb-9502-879025256546/image/TWIMLAI_Background_800x800_RayidGhani.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Rayid Ghani, Director of the Center for Data Science and Public Policy at the University of Chicago. Rayid’s goal is to combine his skills in machine learning and data with his desire to improve public policy and the social...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Rayid Ghani, Director of the Center for Data Science and Public Policy at the University of Chicago. Drawing on his range of experience, Rayid saw that while automated predictions can be helpful, they don’t always paint a full picture. The key is relevant context when making tough decisions involving humans and their lives. We delve into the world of explainability methods, necessary human involvement, machine feedback loops, and more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Rayid Ghani, Director of the Center for Data Science and Public Policy at the University of Chicago. Drawing on his range of experience, Rayid saw that while automated predictions can be helpful, they don’t always paint a full picture. The key is relevant context when making tough decisions involving humans and their lives. We delve into the world of explainability methods, necessary human involvement, machine feedback loops, and more.]]>
      </content:encoded>
      <itunes:duration>3034</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[49cc44a8e1eb45aab567b58841ba78d3]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7925838245.mp3?updated=1629244771"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Inspiring New Machine Learning Platforms w/ Bioelectric Computation with Michael Levin - TWiML Talk #282</title>
      <link>https://twimlai.com/twiml-talk-282-inspiring-new-machine-learning-platforms-with-bioelectric-computation-with-michael-levin</link>
      <description>Today we’re joined by Michael Levin, Director of the Allen Discovery Center at Tufts University. In our conversation, we talk about synthetic living machines, novel AI architectures and brain-body plasticity. Michael explains how our DNA doesn’t control everything and how the behavior of cells in living organisms can be modified and adapted. Drawing on research into the dynamic remodeling of biological systems, Michael discusses the future of developmental biology and regenerative medicine.</description>
      <pubDate>Mon, 15 Jul 2019 16:38:01 -0000</pubDate>
      <itunes:title>Inspiring New Machine Learning Platforms with Bioelectric Computation with Michael Levin</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>282</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/57eb6a9c-ee98-11eb-9502-cfec29075dfe/image/TWIMLAI_Background_800x800_MichaelLevin.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Michael Levin, Director of the Allen Discovery Center at Tufts University. Michael joined us back at NeurIPS to discuss his invited talk “What Bodies Think About: Bioelectric Computation Beyond the Nervous System as...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Michael Levin, Director of the Allen Discovery Center at Tufts University. In our conversation, we talk about synthetic living machines, novel AI architectures and brain-body plasticity. Michael explains how our DNA doesn’t control everything and how the behavior of cells in living organisms can be modified and adapted. Drawing on research into the dynamic remodeling of biological systems, Michael discusses the future of developmental biology and regenerative medicine.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Michael Levin, Director of the Allen Discovery Center at Tufts University. In our conversation, we talk about synthetic living machines, novel AI architectures and brain-body plasticity. Michael explains how our DNA doesn’t control everything and how the behavior of cells in living organisms can be modified and adapted. Drawing on research into the dynamic remodeling of biological systems, Michael discusses the future of developmental biology and regenerative medicine.]]>
      </content:encoded>
      <itunes:duration>1530</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[28ac624b8ccb4fa7a421f8f90e373062]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6066934860.mp3?updated=1629244739"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Simulation and Synthetic Data for Computer Vision with Batu Arisoy - TWiML Talk #281</title>
      <link>https://twimlai.com/twiml-talk-281-simulation-and-synthetic-data-for-computer-vision</link>
      <description>Today we’re joined by Batu Arisoy, Research Manager with the Vision Technologies &amp; Solutions team at Siemens Corporate Technology. Batu’s research focus is solving limited-data computer vision problems, providing R&amp;D for business units throughout the company. In our conversation, Batu details his group's ongoing projects, like an activity recognition project with the ONR, and their many CVPR submissions, which include an emulation of a teacher teaching students information without the use of memorization.</description>
      <pubDate>Tue, 09 Jul 2019 17:38:51 -0000</pubDate>
      <itunes:title>Simulation and Synthetic Data for Computer Vision with Batu Arisoy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>281</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/58137a28-ee98-11eb-9502-733ff6fd5ce9/image/TWIMLAI_Background_800x800_BatuArisoy.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Batu Arisoy, Research Manager with the Vision Technologies &amp; Solutions team at Siemens Corporate Technology. Currently, Batu’s research focus is solving limited data computer vision problems, providing R&amp;D for many of...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Batu Arisoy, Research Manager with the Vision Technologies &amp; Solutions team at Siemens Corporate Technology. Batu’s research focus is solving limited-data computer vision problems, providing R&amp;D for business units throughout the company. In our conversation, Batu details his group's ongoing projects, like an activity recognition project with the ONR, and their many CVPR submissions, which include an emulation of a teacher teaching students information without the use of memorization.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Batu Arisoy, Research Manager with the Vision Technologies & Solutions team at Siemens Corporate Technology. Batu’s research focus is solving limited-data computer vision problems, providing R&D for business units throughout the company. In our conversation, Batu details his group's ongoing projects, like an activity recognition project with the ONR, and their many CVPR submissions, which include an emulation of a teacher teaching students information without the use of memorization.]]>
      </content:encoded>
      <itunes:duration>2488</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4d019e404b1543e0bbe31df4e4800f37]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4063136659.mp3?updated=1629244753"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Spiking Neural Nets and ML as a Systems Challenge with Jeff Gehlhaar - TWIML Talk #280</title>
      <link>https://twimlai.com/twiml-talk-280-spiking-neural-nets-and-ml-as-a-systems-challenge-with-jeff-gehlhaar</link>
      <description>Today we’re joined by Jeff Gehlhaar, VP of Technology and Head of AI Software Platforms at Qualcomm. Qualcomm has a hand in tons of machine learning research and hardware, and in our conversation with Jeff we discuss:

• How the various training frameworks fit into the developer experience when working with their chipsets.

• Examples of federated learning in the wild.

• The role inference will play in data center devices and much more.</description>
      <pubDate>Mon, 08 Jul 2019 19:07:07 -0000</pubDate>
      <itunes:title>Spiking Neural Nets and ML as a Systems Challenge with Jeff Gehlhaar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>280</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5835ee8c-ee98-11eb-9502-e7472674a19c/image/TWIMLAI_Background_800x800_JeffGehlhaar.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Jeff Gehlhaar, VP of Technology and Head of AI Software Platforms at Qualcomm. As we’ve explored in our conversations with both Gary Brotman and Max Welling, Qualcomm has a hand in tons of machine learning research and...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Jeff Gehlhaar, VP of Technology and Head of AI Software Platforms at Qualcomm. Qualcomm has a hand in tons of machine learning research and hardware, and in our conversation with Jeff we discuss:

• How the various training frameworks fit into the developer experience when working with their chipsets.

• Examples of federated learning in the wild.

• The role inference will play in data center devices and much more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Jeff Gehlhaar, VP of Technology and Head of AI Software Platforms at Qualcomm. Qualcomm has a hand in tons of machine learning research and hardware, and in our conversation with Jeff we discuss:

• How the various training frameworks fit into the developer experience when working with their chipsets.

• Examples of federated learning in the wild.

• The role inference will play in data center devices and much more.]]>
      </content:encoded>
      <itunes:duration>3154</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[922a74327aa84456ad5a6d98ff333820]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3802178431.mp3?updated=1629244781"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Transforming Oil &amp; Gas with AI with Adi Bhashyam and Daniel Jeavons - TWIML Talk #279</title>
      <link>https://twimlai.com/twiml-talk-279-transforming-oil-gas-with-ai-with-adi-bhashyam-and-daniel-jeavons</link>
      <description>Today we’re joined by return guest Daniel Jeavons, GM of Data Science at Shell, and Adi Bhashyam, GM of Data Science at C3, who we had the pleasure of speaking to at this year's C3 Transform Conference. In our conversation, we discuss:

• The progress that Dan and his team have made since our last conversation, including an overview of their data platform.

• Adi gives us an overview of the evolution of C3 and their platform, along with a breakdown of a few Shell-specific use cases.</description>
      <pubDate>Mon, 01 Jul 2019 18:33:09 -0000</pubDate>
      <itunes:title>Transforming Oil &amp; Gas with AI with Adi Bhashyam and Daniel Jeavons</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>279</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/58631e0c-ee98-11eb-9502-cbc01828febc/image/TWIMLAI_Background_800x800_AB-DJ.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by return guest Daniel Jeavons, GM of Data Science at Shell, and Adi Bhashyam, GM of Data Science at C3, who we had the pleasure of speaking to at this year's C3 Transform Conference. In our conversation, we discuss: • The...</itunes:subtitle>
      <itunes:summary>Today we’re joined by return guest Daniel Jeavons, GM of Data Science at Shell, and Adi Bhashyam, GM of Data Science at C3, who we had the pleasure of speaking to at this year's C3 Transform Conference. In our conversation, we discuss:

• The progress that Dan and his team have made since our last conversation, including an overview of their data platform.

• Adi gives us an overview of the evolution of C3 and their platform, along with a breakdown of a few Shell-specific use cases.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by return guest Daniel Jeavons, GM of Data Science at Shell, and Adi Bhashyam, GM of Data Science at C3, who we had the pleasure of speaking to at this year's C3 Transform Conference. In our conversation, we discuss:

• The progress that Dan and his team have made since our last conversation, including an overview of their data platform.

• Adi gives us an overview of the evolution of C3 and their platform, along with a breakdown of a few Shell-specific use cases.]]>
      </content:encoded>
      <itunes:duration>2768</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[599d120f45bb457caad28be746dfb9fb]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8259771507.mp3?updated=1629244767"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Fast Radio Burst Pulse Detection with Gerry Zhang - TWIML Talk #278</title>
      <link>https://twimlai.com/twiml-talk-278-fast-radio-burst-pulse-detection-with-gerry-zhang</link>
      <description>Today we’re joined by Yunfan Gerry Zhang, a PhD student at UC Berkeley, and an affiliate of Berkeley’s SETI research center. In our conversation, we discuss:

• Gerry's research on applying machine learning techniques to astrophysics and astronomy.

• His paper “Fast Radio Burst 121102 Pulse Detection and Periodicity: A Machine Learning Approach”.

• We explore the types of data sources used for this project, challenges Gerry encountered along the way, the role of GANs and much more.</description>
      <pubDate>Thu, 27 Jun 2019 18:18:20 -0000</pubDate>
      <itunes:title>Fast Radio Burst Pulse Detection with Gerry Zhang</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>278</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/588b4f12-ee98-11eb-9502-df2f7d07db5c/image/TWIMLAI_Background_800x800_GerryZhang.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Yunfan Gerry Zhang, a PhD student in the Department of Astrophysics at UC Berkeley, and an affiliate of Berkeley’s SETI research center. In our conversation, we discuss: • Gerry's research on applying machine...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Yunfan Gerry Zhang, a PhD student at UC Berkeley, and an affiliate of Berkeley’s SETI research center. In our conversation, we discuss:

• Gerry's research on applying machine learning techniques to astrophysics and astronomy.

• His paper “Fast Radio Burst 121102 Pulse Detection and Periodicity: A Machine Learning Approach”.

• We explore the types of data sources used for this project, challenges Gerry encountered along the way, the role of GANs and much more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Yunfan Gerry Zhang, a PhD student at UC Berkeley, and an affiliate of Berkeley’s SETI research center. In our conversation, we discuss:

• Gerry's research on applying machine learning techniques to astrophysics and astronomy.

• His paper “Fast Radio Burst 121102 Pulse Detection and Periodicity: A Machine Learning Approach”.

• We explore the types of data sources used for this project, challenges Gerry encountered along the way, the role of GANs and much more.]]>
      </content:encoded>
      <itunes:duration>2314</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[71994fde0ac64c908a70411ebadef13f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1979872372.mp3?updated=1629244739"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Tracking CO2 Emissions with Machine Learning with Laurence Watson - TWIML Talk #277</title>
      <link>https://twimlai.com/twiml-talk-277-tracking-co2-emissions-with-machine-learning-with-laurence-watson</link>
      <description>Today we’re joined by Laurence Watson, Co-Founder and CTO of Plentiful Energy and a former data scientist at Carbon Tracker. In our conversation, we discuss:

• Carbon Tracker's goals, and their report “Nowhere to hide: Using satellite imagery to estimate the utilisation of fossil fuel power plants”. 

• How they are using computer vision to process satellite images of coal plants, including how the images are labeled.

• Various challenges with the scope and scale of this project.</description>
      <pubDate>Mon, 24 Jun 2019 19:29:08 -0000</pubDate>
      <itunes:title>Tracking CO2 Emissions with Machine Learning with Laurence Watson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>277</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/58ab9dd0-ee98-11eb-9502-5b49fa9604b5/image/TWIMLAI_Background_800x800_LW_277.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Laurence Watson, Co-Founder and CTO of Plentiful Energy and a former data scientist at Carbon Tracker. In our conversation, we discuss: • Carbon Tracker's goals, and their report “Nowhere to hide: Using satellite imagery to...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Laurence Watson, Co-Founder and CTO of Plentiful Energy and a former data scientist at Carbon Tracker. In our conversation, we discuss:

• Carbon Tracker's goals, and their report “Nowhere to hide: Using satellite imagery to estimate the utilisation of fossil fuel power plants”. 

• How they are using computer vision to process satellite images of coal plants, including how the images are labeled.

• Various challenges with the scope and scale of this project.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Laurence Watson, Co-Founder and CTO of Plentiful Energy and a former data scientist at Carbon Tracker. In our conversation, we discuss:

• Carbon Tracker's goals, and their report “Nowhere to hide: Using satellite imagery to estimate the utilisation of fossil fuel power plants”. 

• How they are using computer vision to process satellite images of coal plants, including how the images are labeled.

• Various challenges with the scope and scale of this project.
]]>
      </content:encoded>
      <itunes:duration>2497</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[af8b3cbaffe2497297315fbe72aa8ded]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5909500362.mp3?updated=1629244748"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Topic Modeling for Customer Insights at USAA with William Fehlman - TWIML Talk #276</title>
      <link>https://twimlai.com/twiml-talk-276-topic-modeling-for-customer-insight-at-usaa-with-william-fehlman</link>
      <description>Today we’re joined by William Fehlman, director of data science at USAA, to discuss: 

• His work on topic modeling, which USAA uses in various scenarios, including member chat channels.

• How their datasets are generated.

• The topic modeling methodologies they explored, including latent semantic indexing, latent Dirichlet allocation, and non-negative matrix factorization.

• We also explore how terms are represented via a document-term matrix, and how they are scored based on coherence.</description>
      <pubDate>Thu, 20 Jun 2019 19:26:52 -0000</pubDate>
      <itunes:title>Topic Modeling for Customer Insights at USAA with William Fehlman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>276</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/58d3206c-ee98-11eb-9502-936f50b1459d/image/TWIMLAI_Background_800x800_WF_276.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by William Fehlman, director of data science at USAA. We caught up with William a while back to discuss:  His work on topic modeling, which USAA uses in various scenarios, including chat channels with members via mobile and...</itunes:subtitle>
      <itunes:summary>Today we’re joined by William Fehlman, director of data science at USAA, to discuss: 

• His work on topic modeling, which USAA uses in various scenarios, including member chat channels.

• How their datasets are generated.

• The topic modeling methodologies they explored, including latent semantic indexing, latent Dirichlet allocation, and non-negative matrix factorization.

• We also explore how terms are represented via a document-term matrix, and how they are scored based on coherence.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by William Fehlman, director of data science at USAA, to discuss: 

• His work on topic modeling, which USAA uses in various scenarios, including member chat channels.

• How their datasets are generated.

• The topic modeling methodologies they explored, including latent semantic indexing, latent Dirichlet allocation, and non-negative matrix factorization.

• We also explore how terms are represented via a document-term matrix, and how they are scored based on coherence. 
]]>
      </content:encoded>
      <itunes:duration>2696</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e2ac436540c545cda75339373d0078a3]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6513325256.mp3?updated=1629244753"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Phronesis of AI in Radiology with Judy Gichoya - TWIML Talk #275</title>
      <link>https://twimlai.com/twiml-talk-275-phronesis-of-ai-in-radiology-with-judy-gichoya</link>
      <description>Today we’re joined by Judy Gichoya, an interventional radiology fellow at the Dotter Institute at Oregon Health and Science University. In our conversation, we discuss:

• Judy's research on the paper “Phronesis of AI in Radiology: Superhuman meets Natural Stupidity,” reviewing the claims of “superhuman” AI performance in radiology.

• Potential roles in which AI can have success in radiology, along with some of the different types of biases that can manifest themselves across multiple use cases.</description>
      <pubDate>Tue, 18 Jun 2019 20:46:53 -0000</pubDate>
      <itunes:title>Phronesis of AI in Radiology with Judy Gichoya</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>275</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/58fc43ac-ee98-11eb-9502-4b05bfd0f5ab/image/TWIMLAI_Background_800x800_JG_275.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Judy Gichoya, an interventional radiology fellow at the Dotter Institute at Oregon Health and Science University. In our conversation, we discuss: • Judy's research in “Phronesis of AI in Radiology: Superhuman meets Natural...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Judy Gichoya, an interventional radiology fellow at the Dotter Institute at Oregon Health and Science University. In our conversation, we discuss:

• Judy's research on the paper “Phronesis of AI in Radiology: Superhuman meets Natural Stupidity,” reviewing the claims of “superhuman” AI performance in radiology.

• Potential roles in which AI can have success in radiology, along with some of the different types of biases that can manifest themselves across multiple use cases.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Judy Gichoya, an interventional radiology fellow at the Dotter Institute at Oregon Health and Science University. In our conversation, we discuss:

• Judy's research on the paper “Phronesis of AI in Radiology: Superhuman meets Natural Stupidity,” reviewing the claims of “superhuman” AI performance in radiology.

• Potential roles in which AI can have success in radiology, along with some of the different types of biases that can manifest themselves across multiple use cases.]]>
      </content:encoded>
      <itunes:duration>2613</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[451017a511f74dad8cab2bf37a87e0aa]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8436369596.mp3?updated=1629244753"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Ethics of AI-Enabled Surveillance with Karen Levy - TWIML Talk #274</title>
      <link>https://twimlai.com/twiml-talk-274-the-ethics-of-ai-enabled-surveillance-with-karen-levy</link>
      <description>Today we’re joined by Karen Levy, assistant professor in the department of information science at Cornell University. Karen’s research focuses on how rules and technologies interact to regulate behavior, especially the legal, organizational, and social aspects of surveillance and monitoring. In our conversation, we discuss how data tracking and surveillance can be used in ways that can be abusive to various marginalized groups, including detailing her extensive research into truck driver surveillance.</description>
      <pubDate>Fri, 14 Jun 2019 19:31:37 -0000</pubDate>
      <itunes:title>The Ethics of AI-Enabled Surveillance with Karen Levy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>274</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/592c61f4-ee98-11eb-9502-9bd1a6309ea2/image/TWIMLAI_Background_800x800_KL_274.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Karen Levy, assistant professor in the department of information science at Cornell University. Karen’s research focuses on how rules and technologies interact to regulate behavior, especially the legal, organizational, and...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Karen Levy, assistant professor in the department of information science at Cornell University. Karen’s research focuses on how rules and technologies interact to regulate behavior, especially the legal, organizational, and social aspects of surveillance and monitoring. In our conversation, we discuss how data tracking and surveillance can be used in ways that can be abusive to various marginalized groups, including detailing her extensive research into truck driver surveillance.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Karen Levy, assistant professor in the department of information science at Cornell University. Karen’s research focuses on how rules and technologies interact to regulate behavior, especially the legal, organizational, and social aspects of surveillance and monitoring. In our conversation, we discuss how data tracking and surveillance can be used in ways that can be abusive to various marginalized groups, including detailing her extensive research into truck driver surveillance.]]>
      </content:encoded>
      <itunes:duration>2583</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6f5e07a100764942b9100df05380fcce]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9627081475.mp3?updated=1629244758"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Supporting Rapid Model Development at Two Sigma with Matt Adereth &amp; Scott Clark - TWIML Talk #273</title>
      <link>https://twimlai.com/twiml-talk-273-supporting-rapid-model-development-at-two-sigma-with-matt-adereth-scott-clark</link>
      <description>Today we’re joined by Matt Adereth, managing director of investments at Two Sigma, and return guest Scott Clark, co-founder and CEO of SigOpt, to discuss:

• The end-to-end modeling platform at Two Sigma, who it serves, and challenges faced in production and modeling.

• How Two Sigma has attacked the experimentation challenge with their platform.

• What motivates companies that aren’t already heavily invested in platforms, optimization or automation, to do so, and much more!</description>
      <pubDate>Tue, 11 Jun 2019 17:16:47 -0000</pubDate>
      <itunes:title>Supporting Rapid Model Development at Two Sigma with Matt Adereth &amp; Scott Clark</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>273</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5953e9b8-ee98-11eb-9502-eba627692a02/image/TWIMLAI_Background_800x800_MA-SC_273.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Matt Adereth, managing director of investments at Two Sigma, and return guest Scott Clark, co-founder and CEO of SigOpt, to discuss: • The end-to-end modeling platform at Two Sigma, who it serves, and challenges faced in...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Matt Adereth, managing director of investments at Two Sigma, and return guest Scott Clark, co-founder and CEO of SigOpt, to discuss:

• The end-to-end modeling platform at Two Sigma, who it serves, and challenges faced in production and modeling.

• How Two Sigma has attacked the experimentation challenge with their platform.

• What motivates companies that aren’t already heavily invested in platforms, optimization or automation, to do so, and much more!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Matt Adereth, managing director of investments at Two Sigma, and return guest Scott Clark, co-founder and CEO of SigOpt, to discuss:

• The end-to-end modeling platform at Two Sigma, who it serves, and challenges faced in production and modeling.

• How Two Sigma has attacked the experimentation challenge with their platform.

• What motivates companies that aren’t already heavily invested in platforms, optimization or automation, to do so, and much more!]]>
      </content:encoded>
      <itunes:duration>2779</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d922f57f87be4bc7a1abdb9cb33c5d90]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9480198831.mp3?updated=1629244759"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scaling Model Training with Kubernetes at Stripe with Kelley Rivoire - TWIML Talk #272</title>
      <link>https://twimlai.com/twiml-talk-272-scaling-model-training-with-kubernetes-at-stripe-with-kelley-rivoire</link>
      <description>Today we’re joined by Kelley Rivoire, engineering manager working on machine learning infrastructure at Stripe. Kelley and I caught up at a recent Strata Data conference to discuss:

• Her talk "Scaling model training: From flexible training APIs to resource management with Kubernetes."

• Stripe’s machine learning infrastructure journey, including their start from a production focus.

• Internal tools used at Stripe, including Railyard, an API built to manage model training at scale &amp; more!</description>
      <pubDate>Thu, 06 Jun 2019 16:34:42 -0000</pubDate>
      <itunes:title>Scaling Model Training with Kubernetes at Stripe with Kelley Rivoire</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>272</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/597c55a6-ee98-11eb-9502-6786a9126dab/image/TWIMLAI_Background_800x800_KR_272_.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Kelley Rivoire, engineering manager working on machine learning infrastructure at Stripe. Kelley and I caught up at a recent Strata Data conference to discuss: • Her talk "Scaling model training: From flexible training APIs...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Kelley Rivoire, engineering manager working on machine learning infrastructure at Stripe. Kelley and I caught up at a recent Strata Data conference to discuss:

• Her talk "Scaling model training: From flexible training APIs to resource management with Kubernetes."

• Stripe’s machine learning infrastructure journey, including their start from a production focus.

• Internal tools used at Stripe, including Railyard, an API built to manage model training at scale &amp; more!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Kelley Rivoire, engineering manager working on machine learning infrastructure at Stripe. Kelley and I caught up at a recent Strata Data conference to discuss:

• Her talk "Scaling model training: From flexible training APIs to resource management with Kubernetes."

• Stripe’s machine learning infrastructure journey, including their start from a production focus.

• Internal tools used at Stripe, including Railyard, an API built to manage model training at scale &amp; more!]]>
      </content:encoded>
      <itunes:duration>2534</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1a8728a0023a4d3ea3d7dc701d1fbc18]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4342562882.mp3?updated=1629244755"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Productizing ML at Scale at Twitter with Yi Zhuang - TWIML Talk #271</title>
      <link>https://twimlai.com/twiml-talk-271-productizing-ml-at-scale-at-twitter-with-yi-zhaung</link>
      <description>Today we continue our AI Platforms series joined by Yi Zhuang, Senior Staff Engineer at Twitter. In our conversation, we cover: 

• The machine learning landscape at Twitter, including the history of the Cortex team.

• Deepbird v2, which is used for model training and evaluation solutions, and its integration with TensorFlow 2.0.

• The newly assembled “Meta” team, which is tasked with exploring the bias, fairness, and accountability of their machine learning models, and much more!</description>
      <pubDate>Mon, 03 Jun 2019 18:05:58 -0000</pubDate>
      <itunes:title>Productizing ML at Scale at Twitter with Yi Zhuang</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>271</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/59a01b1c-ee98-11eb-9502-df1f28288f22/image/TWIMLAI_Background_800x800_YZ_271.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue our AI Platforms series joined by Yi Zhuang, Senior Staff Engineer at Twitter &amp; Tech Lead for Machine Learning Core Environment at Twitter Cortex. In our conversation, we cover:  • The machine learning landscape at...</itunes:subtitle>
      <itunes:summary>Today we continue our AI Platforms series joined by Yi Zhuang, Senior Staff Engineer at Twitter. In our conversation, we cover: 

• The machine learning landscape at Twitter, including the history of the Cortex team.

• Deepbird v2, which is used for model training and evaluation solutions, and its integration with TensorFlow 2.0.

• The newly assembled “Meta” team, which is tasked with exploring the bias, fairness, and accountability of their machine learning models, and much more!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we continue our AI Platforms series joined by Yi Zhuang, Senior Staff Engineer at Twitter. In our conversation, we cover: 

• The machine learning landscape at Twitter, including the history of the Cortex team.

• Deepbird v2, which is used for model training and evaluation solutions, and its integration with TensorFlow 2.0.

• The newly assembled “Meta” team, which is tasked with exploring the bias, fairness, and accountability of their machine learning models, and much more!]]>
      </content:encoded>
      <itunes:duration>2788</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2efafa0ea60746e8b95c254629f1fc22]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3183632248.mp3?updated=1629244761"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Snorkel: A System for Fast Training Data Creation with Alex Ratner - TWiML Talk #270</title>
      <link>https://twimlai.com/twiml-talk-270-snorkel-a-system-for-fast-training-data-creation-with-alex-ratner</link>
      <description>Today we’re joined by Alex Ratner, Ph.D. student at Stanford, to discuss:

• Snorkel, the open source framework that is the successor to Stanford's Deep Dive project.

• How Snorkel is used as a framework for creating training data with weakly supervised learning techniques.

• Multiple use cases for Snorkel, including how it is used by companies like Google. 

The complete show notes can be found at twimlai.com/talk/270.

Follow along with AI Platforms Vol. 2 at twimlai.com/aiplatforms2.</description>
      <pubDate>Thu, 30 May 2019 18:35:21 -0000</pubDate>
      <itunes:title>Snorkel: A System for Fast Training Data Creation with Alex Ratner - TWiML Talk #270</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>270</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/59c84c0e-ee98-11eb-9502-7b4d6d5a2bf5/image/TWIMLAI_Background_800x800_AJR_270.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Alex Ratner, Ph.D. student at Stanford. In our conversation, we discuss: • Snorkel, the open source framework that is the successor to Stanford's Deep Dive project. • How Snorkel is used as a framework for creating...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Alex Ratner, Ph.D. student at Stanford, to discuss:

• Snorkel, the open source framework that is the successor to Stanford's Deep Dive project.

• How Snorkel is used as a framework for creating training data with weakly supervised learning techniques.

• Multiple use cases for Snorkel, including how it is used by companies like Google. 

The complete show notes can be found at twimlai.com/talk/270.

Follow along with AI Platforms Vol. 2 at twimlai.com/aiplatforms2.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Alex Ratner, Ph.D. student at Stanford, to discuss:

• Snorkel, the open source framework that is the successor to Stanford's Deep Dive project.

• How Snorkel is used as a framework for creating training data with weakly supervised learning techniques.

• Multiple use cases for Snorkel, including how it is used by companies like Google. 

The complete show notes can be found at twimlai.com/talk/270.

Follow along with AI Platforms Vol. 2 at twimlai.com/aiplatforms2.]]>
      </content:encoded>
      <itunes:duration>2618</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[57e2c72e55ea4e35b6a06156e317045c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5689496344.mp3?updated=1629244749"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Advancing Autonomous Vehicle Development Using Distributed Deep Learning with Adrien Gaidon - TWiML Talk #269</title>
      <link>https://twimlai.com/twiml-talk-269-advancing-autonomous-vehicle-development-using-distributed-deep-learning-with-adrien-gaidon</link>
      <description>In this, the kickoff episode of AI Platforms Vol. 2, we're joined by Adrien Gaidon, Machine Learning Lead at Toyota Research Institute. Adrien and I caught up to discuss his team’s work on deploying distributed deep learning in the cloud, at scale. In our conversation, we discuss: 

• The beginning and gradual scaling up of TRI's platform.

• Their distributed deep learning methods, including their use of stock PyTorch, and much more!</description>
      <pubDate>Tue, 28 May 2019 18:26:49 -0000</pubDate>
      <itunes:title>Advancing Autonomous Vehicle Development Using Distributed Deep Learning with Adrien Gaidon</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>269</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/59e44c42-ee98-11eb-9502-63d204e57748/image/TWIMLAI_Background_800x800_AG_269.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this, the kickoff episode of AI Platforms Vol. 2, we're joined by Adrien Gaidon, Machine Learning Lead at Toyota Research Institute. Adrien and I caught up to discuss his team’s work on deploying distributed deep learning in the cloud, at...</itunes:subtitle>
      <itunes:summary>In this, the kickoff episode of AI Platforms Vol. 2, we're joined by Adrien Gaidon, Machine Learning Lead at Toyota Research Institute. Adrien and I caught up to discuss his team’s work on deploying distributed deep learning in the cloud, at scale. In our conversation, we discuss: 

• The beginning and gradual scaling up of TRI's platform.

• Their distributed deep learning methods, including their use of stock PyTorch, and much more!</itunes:summary>
      <content:encoded>
        <![CDATA[In this, the kickoff episode of AI Platforms Vol. 2, we're joined by Adrien Gaidon, Machine Learning Lead at Toyota Research Institute. Adrien and I caught up to discuss his team’s work on deploying distributed deep learning in the cloud, at scale. In our conversation, we discuss: 

• The beginning and gradual scaling up of TRI's platform.

• Their distributed deep learning methods, including their use of stock PyTorch, and much more!]]>
      </content:encoded>
      <itunes:duration>2881</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6c23f043e64242d188aff10e0941e7f9]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6236475464.mp3?updated=1629244768"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Are We Being Honest About How Difficult AI Really Is? w/ David Ferrucci - TWiML Talk #268</title>
      <link>https://twimlai.com/twiml-talk-268-are-we-being-honest-about-how-difficult-ai-really-is-with-david-ferrucci</link>
      <description>Today we’re joined by David Ferrucci, Founder, CEO, and Chief Scientist at Elemental Cognition, a company focused on building natural learning systems that understand the world the way people do, to discuss:

• The role of “understanding” in the context of AI systems, and the types of commitments and investments needed to achieve even modest levels of understanding.

• His thoughts on the power of deep learning, what the path to AGI looks like, and the need for hybrid systems to get there.</description>
      <pubDate>Thu, 23 May 2019 22:31:00 -0000</pubDate>
      <itunes:title>Are We Being Honest About How Difficult AI Really Is? w/ David Ferrucci</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>268</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5a0b2f74-ee98-11eb-9502-8b56bfc58f59/image/TWIMLAI_Background_800x800_DF_268_copy.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by David Ferrucci, Founder, CEO, and Chief Scientist at Elemental Cognition, a company focused on building natural learning systems that understand the world the way people do. In our conversation, we discuss:  • His...</itunes:subtitle>
      <itunes:summary>Today we’re joined by David Ferrucci, Founder, CEO, and Chief Scientist at Elemental Cognition, a company focused on building natural learning systems that understand the world the way people do, to discuss:

• The role of “understanding” in the context of AI systems, and the types of commitments and investments needed to achieve even modest levels of understanding.

• His thoughts on the power of deep learning, what the path to AGI looks like, and the need for hybrid systems to get there.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today we’re joined by David Ferrucci, Founder, CEO, and Chief Scientist at Elemental Cognition, a company focused on building natural learning systems that understand the world the way people do, to discuss:</p><p>• The role of “understanding” in the context of AI systems, and the types of commitments and investments needed to achieve even modest levels of understanding.</p><p>• His thoughts on the power of deep learning, what the path to AGI looks like, and the need for hybrid systems to get there.</p>]]>
      </content:encoded>
      <itunes:duration>3007</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[80183f8b4f0748c9b1ac9f931b6ad5dc]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3149875416.mp3?updated=1629244765"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Gauge Equivariant CNNs, Generative Models, and the Future of AI with Max Welling - TWiML Talk #267</title>
      <link>https://twimlai.com/twiml-talk-267-gauge-equivariant-cnns-generative-models-and-the-future-of-ai-with-max-welling</link>
      <description>Today we’re joined by Max Welling, research chair in machine learning at the University of Amsterdam, and VP of Technologies at Qualcomm, to discuss: 

• Max’s research at Qualcomm AI Research and the University of Amsterdam, including his work on Bayesian deep learning, Graph CNNs and Gauge Equivariant CNNs, power efficiency for AI via compression, quantization, and compilation.

• Max’s thoughts on the future of the AI industry, in particular, the relative importance of models, data and compute.</description>
      <pubDate>Mon, 20 May 2019 19:58:52 -0000</pubDate>
      <itunes:title>Gauge Equivariant CNNs, Generative Models, and the Future of AI with Max Welling</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>267</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5a2b313e-ee98-11eb-9502-c7ba90be1f07/image/TWIMLAI_Background_800x800_MW_267.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Max Welling, research chair in machine learning at the University of Amsterdam, as well as VP of technologies at Qualcomm, and Fellow at the Canadian Institute for Advanced Research, or CIFAR. In our conversation, we...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Max Welling, research chair in machine learning at the University of Amsterdam, and VP of Technologies at Qualcomm, to discuss: 

• Max’s research at Qualcomm AI Research and the University of Amsterdam, including his work on Bayesian deep learning, Graph CNNs and Gauge Equivariant CNNs, power efficiency for AI via compression, quantization, and compilation.

• Max’s thoughts on the future of the AI industry, in particular, the relative importance of models, data and compute.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Max Welling, research chair in machine learning at the University of Amsterdam, and VP of Technologies at Qualcomm, to discuss: 

• Max’s research at Qualcomm AI Research and the University of Amsterdam, including his work on Bayesian deep learning, Graph CNNs and Gauge Equivariant CNNs, power efficiency for AI via compression, quantization, and compilation.

• Max’s thoughts on the future of the AI industry, in particular, the relative importance of models, data and compute.]]>
      </content:encoded>
      <itunes:duration>3802</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[53d8633741cc4c769a2d0169d7e4e6a1]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1600424551.mp3?updated=1629243505"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Can We Trust Scientific Discoveries Made Using Machine Learning? with Genevera Allen - TWiML Talk #266</title>
      <link>https://twimlai.com/twiml-talk-266-can-we-trust-scientific-discoveries-made-using-machine-learning-with-genevera-allen</link>
      <description>Today we’re joined by Genevera Allen, associate professor of statistics in the EECS Department at Rice University. 

Genevera caused quite the stir at the American Association for the Advancement of Science meeting earlier this year with her presentation “Can We Trust Data-Driven Discoveries?" In our conversation, we discuss the goal of Genevera's talk, the issues surrounding reproducibility in Machine Learning, and much more!</description>
      <pubDate>Thu, 16 May 2019 16:48:55 -0000</pubDate>
      <itunes:title>Can We Trust Scientific Discoveries Made Using Machine Learning? with Genevera Allen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>266</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5a562a60-ee98-11eb-9502-a7a6a38dff95/image/TWIMLAI_Background_800x800_GA_266.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Genevera Allen, associate professor of statistics in the EECS Department at Rice University, Founder and Director of the Rice Center for Transforming Data to Knowledge and Investigator with the Neurological Research Institute...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Genevera Allen, associate professor of statistics in the EECS Department at Rice University. 

Genevera caused quite the stir at the American Association for the Advancement of Science meeting earlier this year with her presentation “Can We Trust Data-Driven Discoveries?" In our conversation, we discuss the goal of Genevera's talk, the issues surrounding reproducibility in Machine Learning, and much more!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Genevera Allen, associate professor of statistics in the EECS Department at Rice University. 

Genevera caused quite the stir at the American Association for the Advancement of Science meeting earlier this year with her presentation “Can We Trust Data-Driven Discoveries?" In our conversation, we discuss the goal of Genevera's talk, the issues surrounding reproducibility in Machine Learning, and much more!]]>
      </content:encoded>
      <itunes:duration>2562</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f853d0c288744be089735a10f245800e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5340214600.mp3?updated=1629243431"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Creative Adversarial Networks for Art Generation with Ahmed Elgammal - TWiML Talk #265</title>
      <link>https://twimlai.com/twiml-talk-265-creative-adversarial-networks-for-art-generation-with-ahmed-elgammal</link>
      <description>Today we’re joined by Ahmed Elgammal, a professor in the department of computer science at Rutgers, and director of The Art and Artificial Intelligence Lab. We discuss his work on AICAN, a creative adversarial network that produces original portraits, trained with over 500 years of European canonical art.

The complete show notes for this episode can be found at twimlai.com/talk/265.</description>
      <pubDate>Mon, 13 May 2019 18:25:12 -0000</pubDate>
      <itunes:title>Creative Adversarial Networks for Art Generation with Ahmed Elgammal</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>265</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5aaab0f8-ee98-11eb-9502-4bff927642a5/image/TWIMLAI_Background_800x800_AG_265.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Ahmed Elgammal, a professor in the department of computer science at Rutgers, and director of The Art and Artificial Intelligence Lab. In my conversation with Ahmed, we discuss: • His work on AICAN, a creative adversarial...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Ahmed Elgammal, a professor in the department of computer science at Rutgers, and director of The Art and Artificial Intelligence Lab. We discuss his work on AICAN, a creative adversarial network that produces original portraits, trained with over 500 years of European canonical art.

The complete show notes for this episode can be found at twimlai.com/talk/265.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Ahmed Elgammal, a professor in the department of computer science at Rutgers, and director of The Art and Artificial Intelligence Lab. We discuss his work on AICAN, a creative adversarial network that produces original portraits, trained with over 500 years of European canonical art.

The complete show notes for this episode can be found at twimlai.com/talk/265.]]>
      </content:encoded>
      <itunes:duration>2281</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[eff1e772385147dbbf5567ee59ca1164]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4197109019.mp3?updated=1629243471"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Diagnostic Visualization for Machine Learning with YellowBrick w/ Rebecca Bilbro - TWiML Talk #264</title>
      <link>https://twimlai.com/twiml-talk-264-diagnostic-visualization-for-machine-learning-with-yellowbrick-w-rebecca-bilbro</link>
      <description>Today we close out our PyDataSci series joined by Rebecca Bilbro, head of data science at ICX media and co-creator of the popular open-source visualization library YellowBrick. 

In our conversation, Rebecca details:

• Her relationship with toolmaking, which led to the eventual creation of YellowBrick.

• Popular tools within YellowBrick, including a summary of their unit testing approach.

• Interesting use cases that she’s seen over time.</description>
      <pubDate>Fri, 10 May 2019 16:22:40 -0000</pubDate>
      <itunes:title>Diagnostic Visualization for Machine Learning with YellowBrick</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>264</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5adc269c-ee98-11eb-9502-4b0401505edf/image/TWIMLAI_Background_800x800_RB_264.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we close out our PyDataSci series joined by Rebecca Bilbro, head of data science at ICX media and co-creator of the popular open-source visualization library YellowBrick. In our conversation, Rebecca details: • Her relationship with...</itunes:subtitle>
      <itunes:summary>Today we close out our PyDataSci series joined by Rebecca Bilbro, head of data science at ICX media and co-creator of the popular open-source visualization library YellowBrick. 

In our conversation, Rebecca details:

• Her relationship with toolmaking, which led to the eventual creation of YellowBrick.

• Popular tools within YellowBrick, including a summary of their unit testing approach.

• Interesting use cases that she’s seen over time.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we close out our PyDataSci series joined by Rebecca Bilbro, head of data science at ICX media and co-creator of the popular open-source visualization library YellowBrick. 

In our conversation, Rebecca details:

• Her relationship with toolmaking, which led to the eventual creation of YellowBrick.

• Popular tools within YellowBrick, including a summary of their unit testing approach.

• Interesting use cases that she’s seen over time.
]]>
      </content:encoded>
      <itunes:duration>2504</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[20322854755a4b0b846fb98d35ceb7fe]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3414309881.mp3?updated=1629243432"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Librosa: Audio and Music Processing in Python with Brian McFee - TWiML Talk #263</title>
      <link>https://twimlai.com/twiml-talk-263-librosa-audio-and-music-processing-in-python-with-brian-mcfee</link>
      <description>Today we continue our PyDataSci series joined by Brian McFee, assistant professor of music technology and data science at NYU, and creator of LibROSA, a Python package for music and audio analysis.

Brian walks us through his experience building LibROSA, including:

• The core functions provided in the library

• His experience working in Jupyter Notebook

• A typical LibROSA workflow, &amp; more!

The complete show notes for this episode can be found at twimlai.com/talk/26</description>
      <pubDate>Thu, 09 May 2019 18:13:39 -0000</pubDate>
      <itunes:title>Librosa: Audio and Music Processing in Python with Brian McFee</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>263</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5b072f54-ee98-11eb-9502-6760216d6986/image/TWIMLAI_Background_800x800_BM_263.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue our PyDataSci series joined by Brian McFee, assistant professor of music technology and data science at NYU, and creator of LibROSA, a python package for music and audio analysis. Brian walks us through his experience building...</itunes:subtitle>
      <itunes:summary>Today we continue our PyDataSci series joined by Brian McFee, assistant professor of music technology and data science at NYU, and creator of LibROSA, a Python package for music and audio analysis.

Brian walks us through his experience building LibROSA, including:

• The core functions provided in the library

• His experience working in Jupyter Notebook

• A typical LibROSA workflow, &amp; more!

The complete show notes for this episode can be found at twimlai.com/talk/26</itunes:summary>
      <content:encoded>
        <![CDATA[Today we continue our PyDataSci series joined by Brian McFee, assistant professor of music technology and data science at NYU, and creator of LibROSA, a Python package for music and audio analysis.

Brian walks us through his experience building LibROSA, including:

• The core functions provided in the library

• His experience working in Jupyter Notebook

• A typical LibROSA workflow, & more!

The complete show notes for this episode can be found at twimlai.com/talk/26]]>
      </content:encoded>
      <itunes:duration>2299</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[7090fbfccd0b492ab494888a94382f1f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3035019629.mp3?updated=1629243388"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Practical Natural Language Processing with spaCy and Prodigy w/ Ines Montani - TWiML Talk #262</title>
      <link>https://twimlai.com/twiml-talk-262-practical-natural-language-processing-with-spacy-and-prodigy-w-ines-montani</link>
      <description>In this episode of PyDataSci, we’re joined by Ines Montani, co-founder of Explosion, co-developer of spaCy, and lead developer of Prodigy.

Ines and I caught up to discuss her various projects, including the aforementioned spaCy, an open-source NLP library built with a focus on industry and production use cases.

The complete show notes for this episode can be found at twimlai.com/talk/262. Check out the rest of the PyDataSci series at twimlai.com/pydatasci.</description>
      <pubDate>Tue, 07 May 2019 19:48:32 -0000</pubDate>
      <itunes:title>Practical Natural Language Processing with spaCy and Prodigy</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>262</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5b2fbc12-ee98-11eb-9502-d785dd1d9f41/image/TWIMLAI_Background_800x800_IM_262.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of PyDataSci, we’re joined by Ines Montani, Cofounder of Explosion, Co-developer of SpaCy and lead developer of Prodigy. Ines and I caught up to discuss her various projects, including the aforementioned SpaCy, an open-source NLP...</itunes:subtitle>
      <itunes:summary>In this episode of PyDataSci, we’re joined by Ines Montani, co-founder of Explosion, co-developer of spaCy, and lead developer of Prodigy.

Ines and I caught up to discuss her various projects, including the aforementioned spaCy, an open-source NLP library built with a focus on industry and production use cases.

The complete show notes for this episode can be found at twimlai.com/talk/262. Check out the rest of the PyDataSci series at twimlai.com/pydatasci.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of PyDataSci, we’re joined by Ines Montani, co-founder of Explosion, co-developer of spaCy, and lead developer of Prodigy.

Ines and I caught up to discuss her various projects, including the aforementioned spaCy, an open-source NLP library built with a focus on industry and production use cases.

The complete show notes for this episode can be found at twimlai.com/talk/262. Check out the rest of the PyDataSci series at twimlai.com/pydatasci.]]>
      </content:encoded>
      <itunes:duration>2929</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b6d9539aaff947fcaefea605d0e6f835]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1580183899.mp3?updated=1629243441"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scaling Jupyter Notebooks with Luciano Resende - TWiML Talk #261</title>
      <link>https://twimlai.com/twiml-talk-261-scaling-jupyter-notebooks-with-luciano-resende</link>
      <description>Today we're joined by Luciano Resende, an Open Source AI Platform Architect at IBM, to discuss his work on Jupyter Enterprise Gateway.

In our conversation, we address challenges that arise while using Jupyter Notebooks at scale and the role of open source projects like JupyterHub and Enterprise Gateway. We also explore some common requests like tighter integration with git repositories, as well as the Python-centricity of the vast Jupyter ecosystem.</description>
      <pubDate>Mon, 06 May 2019 17:11:44 -0000</pubDate>
      <itunes:title>Scaling Jupyter Notebooks with Luciano Resende</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>261</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5b4e0726-ee98-11eb-9502-1ba90bcd76ac/image/TWIMLAI_Background_800x800_LR_261.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we kick off PyDataSci with Luciano Resende, an Open Source AI Platform Architect at IBM and part of the Center for Open Source Data and AI Technology. Luciano and I caught up to discuss his work on Jupyter Enterprise Gateway, a scalable way to...</itunes:subtitle>
      <itunes:summary>Today we're joined by Luciano Resende, an Open Source AI Platform Architect at IBM, to discuss his work on Jupyter Enterprise Gateway.

In our conversation, we address challenges that arise while using Jupyter Notebooks at scale and the role of open source projects like JupyterHub and Enterprise Gateway. We also explore some common requests like tighter integration with git repositories, as well as the Python-centricity of the vast Jupyter ecosystem.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we're joined by Luciano Resende, an Open Source AI Platform Architect at IBM, to discuss his work on Jupyter Enterprise Gateway.

        <![CDATA[Today we're joined by Luciano Resende, an Open Source AI Platform Architect at IBM, to discuss his work on Jupyter Enterprise Gateway.
      </content:encoded>
      <itunes:duration>2017</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[8df3d9e79d244dbc9aaf540ea6813e77]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1795672993.mp3?updated=1629243368"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Fighting Fake News and Deep Fakes with Machine Learning w/ Delip Rao - TWiML Talk #260</title>
      <link>https://twimlai.com/twiml-talk-260-fighting-fake-news-and-deep-fakes-with-machine-learning-w-delip-rao</link>
      <description>Today we’re joined by Delip Rao, vice president of research at the AI Foundation, co-author of the book Natural Language Processing with PyTorch, and creator of the Fake News Challenge.

In our conversation, we discuss the generation and detection of artificial content, including “fake news” and “deep fakes,” the state of generation and detection for text, video, and audio, the key challenges in each of these modalities, the role of GANs on both sides of the equation, and other potential solutions.</description>
      <pubDate>Fri, 03 May 2019 18:47:29 -0000</pubDate>
      <itunes:title>Fighting Fake News and Deep Fakes with Machine Learning w/ Delip Rao</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>260</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5b72a8a6-ee98-11eb-9502-6f9f67abb66e/image/TWIMLAI_Background_800x800_DR_260.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Delip Rao, vice president of research at the AI Foundation, co-author of the book Natural Language Processing with PyTorch, and creator of the Fake News Challenge. Our conversation begins with the origin story of the Fake News...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Delip Rao, vice president of research at the AI Foundation, co-author of the book Natural Language Processing with PyTorch, and creator of the Fake News Challenge.

In our conversation, we discuss the generation and detection of artificial content, including “fake news” and “deep fakes,” the state of generation and detection for text, video, and audio, the key challenges in each of these modalities, the role of GANs on both sides of the equation, and other potential solutions.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Delip Rao, vice president of research at the AI Foundation, co-author of the book Natural Language Processing with PyTorch, and creator of the Fake News Challenge.

In our conversation, we discuss the generation and detection of artificial content, including “fake news” and “deep fakes,” the state of generation and detection for text, video, and audio, the key challenges in each of these modalities, the role of GANs on both sides of the equation, and other potential solutions.]]>
      </content:encoded>
      <itunes:duration>3525</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[bf990fc4ca5947f389ef187940d772bc]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5120025741.mp3?updated=1627362808"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Maintaining Human Control of Artificial Intelligence with Joanna Bryson - TWiML Talk #259</title>
      <link>https://twimlai.com/twiml-talk-259-maintaining-human-control-of-artificial-intelligence-with-joanna-bryson</link>
      <description>Today we’re joined by Joanna Bryson, Reader at the University of Bath.

I was fortunate to catch up with Joanna at the conference, where she presented on “Maintaining Human Control of Artificial Intelligence.” In our conversation, we explore our current understanding of “natural intelligence” and how it can inform the development of AI, the context in which she uses the term “human control” and its implications, and the meaning of and need to apply “DevOps” principles when developing AI systems.</description>
      <pubDate>Wed, 01 May 2019 19:25:50 -0000</pubDate>
      <itunes:title>Maintaining Human Control of Artificial Intelligence with Joanna Bryson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>259</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5b95cd22-ee98-11eb-9502-4f3cd8d9cc83/image/TWIMLAI_Background_800x800_JB_259.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Joanna Bryson, Reader at the University of Bath. I was fortunate to catch up with Joanna at the AI Conference where she presented on “Maintaining Human Control of Artificial Intelligence,“ focusing on technological and...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Joanna Bryson, Reader at the University of Bath.

I was fortunate to catch up with Joanna at the conference, where she presented on “Maintaining Human Control of Artificial Intelligence.” In our conversation, we explore our current understanding of “natural intelligence” and how it can inform the development of AI, the context in which she uses the term “human control” and its implications, and the meaning of and need to apply “DevOps” principles when developing AI systems.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Joanna Bryson, Reader at the University of Bath.

I was fortunate to catch up with Joanna at the conference, where she presented on “Maintaining Human Control of Artificial Intelligence.” In our conversation, we explore our current understanding of “natural intelligence” and how it can inform the development of AI, the context in which she uses the term “human control” and its implications, and the meaning of and need to apply “DevOps” principles when developing AI systems.]]>
      </content:encoded>
      <itunes:duration>2296</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[db05c75b2fd640f39a055fa0f4cfd54f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7649348955.mp3?updated=1629243464"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Intelligent Infrastructure Management with Pankaj Goyal &amp; Rochna Dhand - TWiML Talk #258</title>
      <link>https://twimlai.com/twiml-talk-258-intelligent-infrastructure-management-with-pankaj-goyal-rochna-dhand</link>
<description>Today we're joined by Pankaj Goyal and Rochna Dhand to discuss HPE InfoSight.

In our conversation, Pankaj gives a look into how HPE as a company views AI, from their customers to the future of AI at HPE through investment. Rochna details the role of HPE’s InfoSight in deploying AI operations at an enterprise level, including a look at where it fits into the infrastructure for their current customer base, along with a walkthrough of how InfoSight is deployed in a real-world use case.</description>
      <pubDate>Mon, 29 Apr 2019 17:58:43 -0000</pubDate>
      <itunes:title>Intelligent Infrastructure Management with Pankaj Goyal &amp; Rochna Dhand</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>258</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5bbc0172-ee98-11eb-9502-df943e4f6b0d/image/TWIMLAI_Background_800x800_PG-RD_258_.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
<itunes:subtitle>Today we kick off our AI conference NY series with Pankaj Goyal, VP for AI &amp; HPC product management at HPE, and Rochna Dhand, director of product management for HPE InfoSight.</itunes:subtitle>
<itunes:summary>Today we're joined by Pankaj Goyal and Rochna Dhand to discuss HPE InfoSight.

In our conversation, Pankaj gives a look into how HPE as a company views AI, from their customers to the future of AI at HPE through investment. Rochna details the role of HPE’s InfoSight in deploying AI operations at an enterprise level, including a look at where it fits into the infrastructure for their current customer base, along with a walkthrough of how InfoSight is deployed in a real-world use case.</itunes:summary>
      <content:encoded>
<![CDATA[Today we're joined by Pankaj Goyal and Rochna Dhand to discuss HPE InfoSight.

In our conversation, Pankaj gives a look into how HPE as a company views AI, from their customers to the future of AI at HPE through investment. Rochna details the role of HPE’s InfoSight in deploying AI operations at an enterprise level, including a look at where it fits into the infrastructure for their current customer base, along with a walkthrough of how InfoSight is deployed in a real-world use case.]]>
      </content:encoded>
      <itunes:duration>2673</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e11b3b8dbb174a3fb8eda0186a212a65]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8978267270.mp3?updated=1629243418"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Organizing for Successful Data Science at Stitch Fix with Eric Colson - TWiML Talk #257</title>
<link>https://twimlai.com/twiml-talk-257-organizing-for-successful-data-science-at-stitchfix-with-eric-colson</link>
      <description>Today we’re joined by Eric Colson, Chief Algorithms Officer at Stitch Fix, whose presentation at the Strata Data conference explored “How to make fewer bad decisions.”

Our discussion focuses on the three key organizational principles for data science teams that he’s developed while at Stitch Fix. Along the way, we also talk through various roles data science plays, exploring a few of the 800+ algorithms in use at the company spanning recommendations, inventory management, demand forecasting, and more.</description>
      <pubDate>Fri, 26 Apr 2019 16:26:18 -0000</pubDate>
      <itunes:title>Organizing for Successful Data Science at Stitch Fix with Eric Colson</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>257</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5be5e73a-ee98-11eb-9502-cb8b1b934046/image/TWIMLAI_Background_800x800_EC_257.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>For the final episode of our Strata Data series, we’re joined by Eric Colson, Chief Algorithms Officer at Stitch Fix, whose presentation at the conference explored “How to make fewer bad decisions.” Our discussion focuses in on the three key...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Eric Colson, Chief Algorithms Officer at Stitch Fix, whose presentation at the Strata Data conference explored “How to make fewer bad decisions.”

Our discussion focuses on the three key organizational principles for data science teams that he’s developed while at Stitch Fix. Along the way, we also talk through various roles data science plays, exploring a few of the 800+ algorithms in use at the company spanning recommendations, inventory management, demand forecasting, and more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Eric Colson, Chief Algorithms Officer at Stitch Fix, whose presentation at the Strata Data conference explored “How to make fewer bad decisions.”

Our discussion focuses on the three key organizational principles for data science teams that he’s developed while at Stitch Fix. Along the way, we also talk through various roles data science plays, exploring a few of the 800+ algorithms in use at the company spanning recommendations, inventory management, demand forecasting, and more.]]>
      </content:encoded>
      <itunes:duration>3134</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[dbbfd1c0c4bc4f948d23f95810e73da9]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1275286134.mp3?updated=1629216922"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>End-to-End Data Science to Drive Business Decisions at LinkedIn with Burcu Baran - TWiML Talk #256</title>
      <link>https://twimlai.com/twiml-talk-256-end-to-end-data-science-to-drive-business-decisions-at-linkedin-with-burcu-baran</link>
      <description>In this episode of our Strata Data conference series, we’re joined by Burcu Baran, Senior Data Scientist at LinkedIn.

At Strata, Burcu, along with a few members of her team, delivered the presentation “Using the full spectrum of data science to drive business decisions,” which outlines how LinkedIn manages their entire machine learning production process. In our conversation, Burcu details each phase of the process, including problem formulation, monitoring features, A/B testing and more.</description>
      <pubDate>Wed, 24 Apr 2019 17:45:54 -0000</pubDate>
      <itunes:title>End-to-End Data Science to Drive Business Decisions at LinkedIn with Burcu Baran</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>256</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5c0b85d0-ee98-11eb-9502-c3e8e60e142a/image/TWIMLAI_Background_800x800_BB_256_.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our Strata Data conference series, we’re joined by Burcu Baran, Senior Data Scientist at LinkedIn. At Strata, Burcu, along with a few members of her team, delivered the presentation “Using the full spectrum of data science to...</itunes:subtitle>
      <itunes:summary>In this episode of our Strata Data conference series, we’re joined by Burcu Baran, Senior Data Scientist at LinkedIn.

At Strata, Burcu, along with a few members of her team, delivered the presentation “Using the full spectrum of data science to drive business decisions,” which outlines how LinkedIn manages their entire machine learning production process. In our conversation, Burcu details each phase of the process, including problem formulation, monitoring features, A/B testing and more.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our Strata Data conference series, we’re joined by Burcu Baran, Senior Data Scientist at LinkedIn.

At Strata, Burcu, along with a few members of her team, delivered the presentation “Using the full spectrum of data science to drive business decisions,” which outlines how LinkedIn manages their entire machine learning production process. In our conversation, Burcu details each phase of the process, including problem formulation, monitoring features, A/B testing and more. ]]>
      </content:encoded>
      <itunes:duration>2929</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9e716af688d245a290fdb93271587bc7]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3206309671.mp3?updated=1629243397"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Learning with Limited Labeled Data with Shioulin Sam - TWiML Talk #255</title>
      <link>https://twimlai.com/twiml-talk-255-learning-with-limited-labeled-data-with-shioulin-sam</link>
      <description>Today we’re joined by Shioulin Sam, Research Engineer with Cloudera Fast Forward Labs. 

Shioulin and I caught up to discuss the newest report to come out of CFFL, “Learning with Limited Labeled Data,” which explores active learning as a means to build applications requiring only a relatively small set of labeled data. We start our conversation with a review of active learning and some of the reasons why it’s recently become an interesting technology for folks building systems based on deep learning.</description>
      <pubDate>Mon, 22 Apr 2019 22:11:47 -0000</pubDate>
      <itunes:title>Learning with Limited Labeled Data with Shioulin Sam</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>255</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5c3804fc-ee98-11eb-9502-d7146e60b71c/image/TWIMLAI_Background_800x800_SS_255.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today, in the first episode of our Strata Data conference series, we’re joined by Shioulin Sam, Research Engineer with Cloudera Fast Forward Labs. Shioulin and I caught up to discuss the newest report to come out of CFFL, “Learning with Limited...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Shioulin Sam, Research Engineer with Cloudera Fast Forward Labs. 

Shioulin and I caught up to discuss the newest report to come out of CFFL, “Learning with Limited Labeled Data,” which explores active learning as a means to build applications requiring only a relatively small set of labeled data. We start our conversation with a review of active learning and some of the reasons why it’s recently become an interesting technology for folks building systems based on deep learning.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Shioulin Sam, Research Engineer with Cloudera Fast Forward Labs. 

Shioulin and I caught up to discuss the newest report to come out of CFFL, “Learning with Limited Labeled Data,” which explores active learning as a means to build applications requiring only a relatively small set of labeled data. We start our conversation with a review of active learning and some of the reasons why it’s recently become an interesting technology for folks building systems based on deep learning.]]>
      </content:encoded>
      <itunes:duration>2653</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fbad207ba8024d898cf1cd09f740519f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7785657199.mp3?updated=1629243430"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>cuDF, cuML &amp; RAPIDS: GPU Accelerated Data Science with Paul Mahler - TWiML Talk #254</title>
      <link>https://twimlai.com/twiml-talk-254-cudf-cuml-rapids-gpu-accelerated-data-science-with-paul-mahler</link>
      <description>Today we're joined by Paul Mahler, senior data scientist and technical product manager for ML at NVIDIA. 

In our conversation, Paul and I discuss NVIDIA's RAPIDS open source project, which aims to bring GPU acceleration to traditional data science workflows and ML tasks. We dig into the various subprojects like cuDF and cuML that make up the RAPIDS ecosystem, as well as the role of lower-level libraries like mlprims and the relationship to other open-source projects like Scikit-learn, XGBoost and Dask.</description>
      <pubDate>Fri, 19 Apr 2019 17:33:30 -0000</pubDate>
      <itunes:title>cuDF, cuML &amp; RAPIDS: GPU Accelerated Data Science with Paul Mahler</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>254</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5c5f50a2-ee98-11eb-9502-53c089a4758e/image/TWIMLAI_Background_800x800_PM_254.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we're joined by Paul Mahler, senior data scientist and technical product manager for machine learning at NVIDIA. In our conversation, Paul and I discuss NVIDIA's RAPIDS open source project, which aims to bring GPU acceleration to traditional...</itunes:subtitle>
      <itunes:summary>Today we're joined by Paul Mahler, senior data scientist and technical product manager for ML at NVIDIA. 

In our conversation, Paul and I discuss NVIDIA's RAPIDS open source project, which aims to bring GPU acceleration to traditional data science workflows and ML tasks. We dig into the various subprojects like cuDF and cuML that make up the RAPIDS ecosystem, as well as the role of lower-level libraries like mlprims and the relationship to other open-source projects like Scikit-learn, XGBoost and Dask.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we're joined by Paul Mahler, senior data scientist and technical product manager for ML at NVIDIA. 

In our conversation, Paul and I discuss NVIDIA's RAPIDS open source project, which aims to bring GPU acceleration to traditional data science workflows and ML tasks. We dig into the various subprojects like cuDF and cuML that make up the RAPIDS ecosystem, as well as the role of lower-level libraries like mlprims and the relationship to other open-source projects like Scikit-learn, XGBoost and Dask.]]>
      </content:encoded>
      <itunes:duration>2290</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2181afc16f9d4e9aa7306b0992cf17cc]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2407678511.mp3?updated=1629216913"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Edge AI for Smart Manufacturing with Trista Chen - TWiML Talk #253</title>
      <link>https://twimlai.com/twiml-talk-253-edge-ai-for-smart-manufacturing-with-trista-chen</link>
<description>Today we’re joined by Trista Chen, chief scientist of machine learning at Inventec, who spoke on “Edge AI in Smart Manufacturing: Defect Detection and Beyond” at GTC. In our conversation, we discuss the challenges that Industry 4.0 initiatives aim to address and dig into a few of the use cases she’s worked on, such as the deployment of ML in an industrial setting to perform various tasks. We also discuss the challenges associated with estimating the ROI of industrial AI projects.</description>
      <pubDate>Thu, 18 Apr 2019 17:26:20 -0000</pubDate>
      <itunes:title>Edge AI for Smart Manufacturing with Trista Chen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>253</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5c855a40-ee98-11eb-9502-8358b26020cd/image/TWIMLAI_Background_800x800_TC_253.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Trista Chen, chief scientist of machine learning at Inventec. At GTC, Trista spoke on “Edge AI in Smart Manufacturing: Defect Detection and Beyond.” In our conversation, we discuss a few of the challenges that Industry 4.0...</itunes:subtitle>
<itunes:summary>Today we’re joined by Trista Chen, chief scientist of machine learning at Inventec, who spoke on “Edge AI in Smart Manufacturing: Defect Detection and Beyond” at GTC. In our conversation, we discuss the challenges that Industry 4.0 initiatives aim to address and dig into a few of the use cases she’s worked on, such as the deployment of ML in an industrial setting to perform various tasks. We also discuss the challenges associated with estimating the ROI of industrial AI projects.</itunes:summary>
      <content:encoded>
<![CDATA[Today we’re joined by Trista Chen, chief scientist of machine learning at Inventec, who spoke on “Edge AI in Smart Manufacturing: Defect Detection and Beyond” at GTC. In our conversation, we discuss the challenges that Industry 4.0 initiatives aim to address and dig into a few of the use cases she’s worked on, such as the deployment of ML in an industrial setting to perform various tasks. We also discuss the challenges associated with estimating the ROI of industrial AI projects.]]>
      </content:encoded>
      <itunes:duration>2315</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[e907230a9d9f4811bfea6117bdd1559e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7399939632.mp3?updated=1629243493"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Machine Learning for Security and Security for Machine Learning with Nicole Nichols - TWiML Talk #252</title>
      <link>https://twimlai.com/talk/252</link>
<description>Today we’re joined by Nicole Nichols, a senior research scientist at the Pacific Northwest National Lab. We discuss her recent presentation at GTC, which was titled “Machine Learning for Security, and Security for Machine Learning.” We explore two use cases, insider threat detection and software fuzz testing, discussing the effectiveness of standard and bidirectional RNN language models for detecting malicious activity, the augmentation of software fuzzing techniques using deep learning, and much more.</description>
      <pubDate>Tue, 16 Apr 2019 17:01:59 -0000</pubDate>
      <itunes:title>Machine Learning for Security and Security for Machine Learning with Nicole Nichols</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>252</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5cb41de4-ee98-11eb-9502-5bc6d6a16a2b/image/TWIMLAI_Background_800x800_NN_252.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Nicole Nichols, a senior research scientist at the Pacific Northwest National Lab. Nicole joined me to discuss her recent presentation at GTC, which was titled “Machine Learning for Security, and Security for Machine...</itunes:subtitle>
<itunes:summary>Today we’re joined by Nicole Nichols, a senior research scientist at the Pacific Northwest National Lab. We discuss her recent presentation at GTC, which was titled “Machine Learning for Security, and Security for Machine Learning.” We explore two use cases, insider threat detection and software fuzz testing, discussing the effectiveness of standard and bidirectional RNN language models for detecting malicious activity, the augmentation of software fuzzing techniques using deep learning, and much more.</itunes:summary>
      <content:encoded>
<![CDATA[Today we’re joined by Nicole Nichols, a senior research scientist at the Pacific Northwest National Lab. We discuss her recent presentation at GTC, which was titled “Machine Learning for Security, and Security for Machine Learning.” We explore two use cases, insider threat detection and software fuzz testing, discussing the effectiveness of standard and bidirectional RNN language models for detecting malicious activity, the augmentation of software fuzzing techniques using deep learning, and much more.]]>
      </content:encoded>
      <itunes:duration>2512</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[697980b3a926428395b37fdb891868db]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7125594364.mp3?updated=1629243375"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Domain Adaptation and Generative Models for Single Cell Genomics with Gerald Quon - TWiML Talk #251</title>
      <link>https://twimlai.com/twiml-talk-251-domain-adaptation-and-generative-models-for-single-cell-genomics-with-gerald-quon</link>
      <description>Today we’re joined by Gerald Quon, assistant professor at UC Davis.

Gerald presented his work on Deep Domain Adaptation and Generative Models for Single Cell Genomics at GTC this year, which explores single cell genomics as a means of disease identification for treatment. In our conversation, we discuss how he uses deep learning to generate novel insights across diseases, the different types of data used, and the development of ‘nested’ generative models for single cell measurement.</description>
      <pubDate>Mon, 15 Apr 2019 19:48:17 -0000</pubDate>
      <itunes:title>Domain Adaptation and Generative Models for Single Cell Genomics with Gerald Quon</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>251</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5cda03b0-ee98-11eb-9502-8ba2429b101c/image/TWIMLAI_Background_800x800_GQ_251.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Gerald Quon, assistant professor in the Molecular and Cellular Biology department at UC Davis. Gerald presented his work on Deep Domain Adaptation and Generative Models for Single Cell Genomics at GTC this year, which explores...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Gerald Quon, assistant professor at UC Davis.

Gerald presented his work on Deep Domain Adaptation and Generative Models for Single Cell Genomics at GTC this year, which explores single cell genomics as a means of disease identification for treatment. In our conversation, we discuss how he uses deep learning to generate novel insights across diseases, the different types of data used, and the development of ‘nested’ generative models for single cell measurement.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Gerald Quon, assistant professor at UC Davis.

Gerald presented his work on Deep Domain Adaptation and Generative Models for Single Cell Genomics at GTC this year, which explores single cell genomics as a means of disease identification for treatment. In our conversation, we discuss how he uses deep learning to generate novel insights across diseases, the different types of data used, and the development of ‘nested’ generative models for single cell measurement.]]>
      </content:encoded>
      <itunes:duration>1941</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[db8ebce962dc4ab3b23317097356c8b3]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8100456410.mp3?updated=1629243430"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Mapping Dark Matter with Bayesian Neural Networks w/ Yashar Hezaveh - TWiML Talk #250</title>
      <link>https://twimlai.com/talk/250</link>
<description>Today we’re joined by Yashar Hezaveh, Assistant Professor at the University of Montreal. Yashar and I caught up to discuss his work on gravitational lensing, which is the bending of light from distant sources due to the effects of gravity. In our conversation, we discuss how ML can be applied to undistort images, the intertwined roles of simulation and ML in generating images, incorporating other techniques such as domain transfer or GANs, and how he assesses the results of this project.</description>
      <pubDate>Thu, 11 Apr 2019 19:01:55 -0000</pubDate>
      <itunes:title>Mapping Dark Matter with Bayesian Neural Networks w/ Yashar Hezaveh</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>250</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5d01fbcc-ee98-11eb-9502-d7fcf209f088/image/TWIMLAI_Background_800x800_YH_250.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>You might have seen the news yesterday that MIT researcher Katie Bouman produced the first image of a black hole. What’s been less reported is that the algorithm she developed to accomplish this is based on machine learning. Machine learning is...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Yashar Hezaveh, Assistant Professor at the University of Montreal. Yashar and I caught up to discuss his work on gravitational lensing, which is the bending of light from distant sources due to the effects of gravity. In our conversation, Yashar and I discuss how ML can be applied to undistort images, the intertwined roles of simulation and ML in generating images, incorporating other techniques such as domain transfer or GANs, and how he assesses the results of this project.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Yashar Hezaveh, Assistant Professor at the University of Montreal. Yashar and I caught up to discuss his work on gravitational lensing, which is the bending of light from distant sources due to the effects of gravity. In our conversation, Yashar and I discuss how ML can be applied to undistort images, the intertwined roles of simulation and ML in generating images, incorporating other techniques such as domain transfer or GANs, and how he assesses the results of this project.]]>
      </content:encoded>
      <itunes:duration>2061</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[47d191f242884a76962d9c216310a01e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8578428810.mp3?updated=1629243470"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Learning for Population Genetic Inference with Dan Schrider - TWiML Talk #249</title>
      <link>https://twimlai.com/twiml-talk-249-deep-learning-for-population-genetic-inference-with-dan-schrider</link>
      <description>Today we’re joined by Dan Schrider, assistant professor in the department of genetics at UNC Chapel Hill. 

My discussion with Dan starts with an overview of population genomics and a look into his application of ML in the field. We then dig into Dan’s paper “The Unreasonable Effectiveness of Convolutional Neural Networks in Population Genetic Inference,” which examines the idea that CNNs can outperform expert-derived statistical methods for some key problems in the field.</description>
      <pubDate>Tue, 09 Apr 2019 03:39:27 -0000</pubDate>
      <itunes:title>Deep Learning for Population Genetic Inference with Dan Schrider</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>249</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5d22a84a-ee98-11eb-9502-774cdc589f6e/image/TWIMLAI_Background_800x800_DS_249.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Dan Schrider, assistant professor in the department of genetics at The University of North Carolina at Chapel Hill. My discussion with Dan starts with an overview of population genomics and from there digs into his application...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Dan Schrider, assistant professor in the department of genetics at UNC Chapel Hill. 

My discussion with Dan starts with an overview of population genomics and a look into his application of ML in the field. We then dig into Dan’s paper “The Unreasonable Effectiveness of Convolutional Neural Networks in Population Genetic Inference,” which examines the idea that CNNs can outperform expert-derived statistical methods for some key problems in the field.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Dan Schrider, assistant professor in the department of genetics at UNC Chapel Hill. 

My discussion with Dan starts with an overview of population genomics and a look into his application of ML in the field. We then dig into Dan’s paper “The Unreasonable Effectiveness of Convolutional Neural Networks in Population Genetic Inference,” which examines the idea that CNNs can outperform expert-derived statistical methods for some key problems in the field.]]>
      </content:encoded>
      <itunes:duration>2951</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[bd1342431d0a423e8b838ce81bcd7050]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4357124382.mp3?updated=1629243567"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Empathy in AI with Rob Walker - TWiML Talk #248</title>
      <link>https://twimlai.com/twiml-talk-248-empathy-in-ai-with-rob-walker</link>
      <description>Today we’re joined by Rob Walker, Vice President of Decision Management at Pegasystems. 

Rob joined us back in episode 127 to discuss “Hyperpersonalizing the customer experience.” Today, he’s back for a discussion about the role of empathy in AI systems. In our conversation, we dig into the role empathy plays in consumer-facing human-AI interactions, the differences between empathy and ethics, and a few examples of ways empathy should be considered in enterprise AI systems.</description>
      <pubDate>Fri, 05 Apr 2019 18:31:23 -0000</pubDate>
      <itunes:title>Empathy in AI with Rob Walker</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>248</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5d555e2a-ee98-11eb-9502-9331f873fab3/image/TWIMLAI_Background_800x800_RW_248.png?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Rob Walker, Vice President of Decision Management at Pegasystems. Rob joined us back in episode 127 to discuss “Hyperpersonalizing the customer experience.” Today, he’s back for a discussion about the role of empathy in AI systems....</itunes:subtitle>
      <itunes:summary>Today we’re joined by Rob Walker, Vice President of Decision Management at Pegasystems. 

Rob joined us back in episode 127 to discuss “Hyperpersonalizing the customer experience.” Today, he’s back for a discussion about the role of empathy in AI systems. In our conversation, we dig into the role empathy plays in consumer-facing human-AI interactions, the differences between empathy and ethics, and a few examples of ways empathy should be considered in enterprise AI systems.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Rob Walker, Vice President of Decision Management at Pegasystems. 

Rob joined us back in episode 127 to discuss “Hyperpersonalizing the customer experience.” Today, he’s back for a discussion about the role of empathy in AI systems. In our conversation, we dig into the role empathy plays in consumer-facing human-AI interactions, the differences between empathy and ethics, and a few examples of ways empathy should be considered in enterprise AI systems.]]>
      </content:encoded>
      <itunes:duration>2446</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[3832e1ba95bd4fe2a968d943fefe539c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7908378568.mp3?updated=1629243493"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Benchmarking Custom Computer Vision Services at Urban Outfitters with Tom Szumowski - TWiML Talk #247</title>
      <link>https://twimlai.com/twiml-talk-247-benchmarking-custom-computer-vision-services-at-urban-outfitters-with-tom-szumowski</link>
      <description>Today we’re joined by Tom Szumowski, Data Scientist at URBN, parent company of Urban Outfitters and other consumer fashion brands. Tom and I caught up to discuss his project “Exploring Custom Vision Services for Automated Fashion Product Attribution.” We look at the process Tom and his team took to build custom attribution models, and the results of their evaluation of various custom vision APIs for this purpose, with a focus on the various roadblocks and lessons he and his team encountered along the way.</description>
      <pubDate>Wed, 03 Apr 2019 21:24:29 -0000</pubDate>
      <itunes:title>Benchmarking Custom Computer Vision Services at Urban Outfitters with Tom Szumowski</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>247</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5d8001e8-ee98-11eb-9502-4f0b0d3a6b02/image/TWIMLAI_Background_800x800_TS_247.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Tom Szumowski, Data Scientist at URBN, the parent company of Urban Outfitters, Anthropologie, and other consumer fashion brands. Tom and I caught up recently to discuss his project “Exploring Custom Vision Services for...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Tom Szumowski, Data Scientist at URBN, parent company of Urban Outfitters and other consumer fashion brands. Tom and I caught up to discuss his project “Exploring Custom Vision Services for Automated Fashion Product Attribution.” We look at the process Tom and his team took to build custom attribution models, and the results of their evaluation of various custom vision APIs for this purpose, with a focus on the various roadblocks and lessons he and his team encountered along the way.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Tom Szumowski, Data Scientist at URBN, parent company of Urban Outfitters and other consumer fashion brands. Tom and I caught up to discuss his project “Exploring Custom Vision Services for Automated Fashion Product Attribution.” We look at the process Tom and his team took to build custom attribution models, and the results of their evaluation of various custom vision APIs for this purpose, with a focus on the various roadblocks and lessons he and his team encountered along the way.]]>
      </content:encoded>
      <itunes:duration>3009</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ebc197ec4c594f63a05fed27dab4c4ac]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3217041378.mp3?updated=1629243525"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Pragmatic Quantum Machine Learning with Peter Wittek - TWiML Talk #245</title>
      <link>https://twimlai.com/twiml-talk-245-pragmatic-quantum-machine-learning-with-peter-wittek</link>
      <description>Today we’re joined by Peter Wittek, Assistant Professor at the University of Toronto working on quantum-enhanced machine learning and the application of high-performance learning algorithms in quantum physics.

In our conversation, we discuss the current state of quantum computing, a look ahead to what the next 20 years of quantum computing might hold, and the limitations of current quantum computers. We then dive into our discussion on quantum machine learning, and Peter’s new course on the topic, which debuted in February.</description>
      <pubDate>Mon, 01 Apr 2019 21:27:12 -0000</pubDate>
      <itunes:title>Pragmatic Quantum Machine Learning with Peter Wittek</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>245</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5da9b5f6-ee98-11eb-9502-b38244e0cca8/image/TWIMLAI_Background_800x800_PW_245.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Peter Wittek, Assistant Professor at the University of Toronto working on quantum-enhanced machine learning and the application of high-performance learning algorithms in quantum physics. Peter and I caught up back in November...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Peter Wittek, Assistant Professor at the University of Toronto working on quantum-enhanced machine learning and the application of high-performance learning algorithms in quantum physics.

In our conversation, we discuss the current state of quantum computing, a look ahead to what the next 20 years of quantum computing might hold, and the limitations of current quantum computers. We then dive into our discussion on quantum machine learning, and Peter’s new course on the topic, which debuted in February.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Peter Wittek, Assistant Professor at the University of Toronto working on quantum-enhanced machine learning and the application of high-performance learning algorithms in quantum physics.

In our conversation, we discuss the current state of quantum computing, a look ahead to what the next 20 years of quantum computing might hold, and the limitations of current quantum computers. We then dive into our discussion on quantum machine learning, and Peter’s new course on the topic, which debuted in February.]]>
      </content:encoded>
      <itunes:duration>3903</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c2d31581765f43409c2efefb87ed75d1]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5517938176.mp3?updated=1629243573"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>*Bonus Episode* A Quantum Machine Learning Algorithm Takedown with Ewin Tang - TWiML Talk #246</title>
      <link>https://twimlai.com/twiml-talk-246-a-quantum-machine-learning-algorithm-takedown-with-ewin-tang</link>
      <description>In this special bonus episode of the podcast, I’m joined by Ewin Tang, a PhD student in the Theoretical Computer Science group at the University of Washington. 

In our conversation, Ewin and I dig into her paper “A quantum-inspired classical algorithm for recommendation systems,” which took the quantum computing community by storm last summer. We haven’t called out a Nerd-Alert interview in a long time, but this interview inspired us to dust off that designation, so get your notepad ready!</description>
      <pubDate>Mon, 01 Apr 2019 18:40:41 -0000</pubDate>
      <itunes:title>*Bonus Episode* A Quantum Machine Learning Algorithm Takedown with Ewin Tang</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>246</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5dcb9f54-ee98-11eb-9502-03c2eaa628f7/image/TWIMLAI_Background_800x800_ET_246.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this special bonus episode of the podcast, I’m joined by Ewin Tang, a PhD student in the Theoretical Computer Science group at the University of Washington. In our conversation, Ewin and I dig into her paper “A quantum-inspired classical...</itunes:subtitle>
      <itunes:summary>In this special bonus episode of the podcast, I’m joined by Ewin Tang, a PhD student in the Theoretical Computer Science group at the University of Washington. 

In our conversation, Ewin and I dig into her paper “A quantum-inspired classical algorithm for recommendation systems,” which took the quantum computing community by storm last summer. We haven’t called out a Nerd-Alert interview in a long time, but this interview inspired us to dust off that designation, so get your notepad ready!</itunes:summary>
      <content:encoded>
        <![CDATA[In this special bonus episode of the podcast, I’m joined by Ewin Tang, a PhD student in the Theoretical Computer Science group at the University of Washington. 

In our conversation, Ewin and I dig into her paper “A quantum-inspired classical algorithm for recommendation systems,” which took the quantum computing community by storm last summer. We haven’t called out a Nerd-Alert interview in a long time, but this interview inspired us to dust off that designation, so get your notepad ready! ]]>
      </content:encoded>
      <itunes:duration>2427</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[79de69611de44b199fd50bbd1908b928]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2138030989.mp3?updated=1629243413"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Supporting TensorFlow at Airbnb with Alfredo Luque - TWiML Talk #244</title>
      <link>https://twimlai.com/twiml-talk-244-supporting-tensorflow-at-airbnb-with-alfredo-luque</link>
      <description>Today we're joined by Alfredo Luque, a software engineer on the machine infrastructure team at Airbnb.

If you’re interested in AI Platforms and ML infrastructure, you probably remember my interview with Airbnb’s Atul Kale, in which we discussed their Bighead platform. In my conversation with Alfredo, we dig a bit deeper into Bighead’s support for TensorFlow, discuss a recent image categorization challenge they solved with the framework, and explore what the new 2.0 release means for their users.</description>
      <pubDate>Thu, 28 Mar 2019 19:38:45 -0000</pubDate>
      <itunes:title>Supporting TensorFlow at Airbnb with Alfredo Luque</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>244</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5df7e83e-ee98-11eb-9502-57541bf47c1c/image/TWIMLAI_Background_800x800_AL_244.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This interview features my conversation with Alfredo Luque, a software engineer on the machine infrastructure team at Airbnb. If you’re among the many TWiML fans interested in AI Platforms and ML infrastructure, you probably remember my interview...</itunes:subtitle>
      <itunes:summary>Today we're joined by Alfredo Luque, a software engineer on the machine infrastructure team at Airbnb.

If you’re interested in AI Platforms and ML infrastructure, you probably remember my interview with Airbnb’s Atul Kale, in which we discussed their Bighead platform. In my conversation with Alfredo, we dig a bit deeper into Bighead’s support for TensorFlow, discuss a recent image categorization challenge they solved with the framework, and explore what the new 2.0 release means for their users.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we're joined by Alfredo Luque, a software engineer on the machine infrastructure team at Airbnb.

If you’re interested in AI Platforms and ML infrastructure, you probably remember my interview with Airbnb’s Atul Kale, in which we discussed their Bighead platform. In my conversation with Alfredo, we dig a bit deeper into Bighead’s support for TensorFlow, discuss a recent image categorization challenge they solved with the framework, and explore what the new 2.0 release means for their users.]]>
      </content:encoded>
      <itunes:duration>2425</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[049c5cd2d44d4f358f027b25595b569a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9241282547.mp3?updated=1629243417"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Mining the Vatican Secret Archives with TensorFlow w/ Elena Nieddu - TWiML Talk #243</title>
      <link>https://twimlai.com/twiml-talk-243-mining-the-vatican-secret-archives-with-tensorflow-w-elena-nieddu</link>
      <description>Today we’re joined by Elena Nieddu, PhD student at Roma Tre University, who presented on her project “In Codice Ratio” at the TF Dev Summit.

In our conversation, Elena provides an overview of the project, which aims to annotate and transcribe Vatican secret archive documents via machine learning. We discuss the many challenges associated with transcribing this vast archive of handwritten documents, including overcoming the high cost of data annotation.</description>
      <pubDate>Wed, 27 Mar 2019 16:20:32 -0000</pubDate>
      <itunes:title>Mining the Vatican Secret Archives with TensorFlow w/ Elena Nieddu</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>243</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5e20e586-ee98-11eb-9502-1b35e464db51/image/TWIMLAI_Background_800x800_EN_243.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Elena Nieddu, PhD Student at Roma Tre University, who presented on her project “In Codice Ratio” at the TF Dev Summit. In our conversation, Elena provides an overview of the project, which aims to annotate and transcribe...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Elena Nieddu, PhD student at Roma Tre University, who presented on her project “In Codice Ratio” at the TF Dev Summit.

In our conversation, Elena provides an overview of the project, which aims to annotate and transcribe Vatican secret archive documents via machine learning. We discuss the many challenges associated with transcribing this vast archive of handwritten documents, including overcoming the high cost of data annotation.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Elena Nieddu, PhD student at Roma Tre University, who presented on her project “In Codice Ratio” at the TF Dev Summit.

In our conversation, Elena provides an overview of the project, which aims to annotate and transcribe Vatican secret archive documents via machine learning. We discuss the many challenges associated with transcribing this vast archive of handwritten documents, including overcoming the high cost of data annotation.]]>
      </content:encoded>
      <itunes:duration>2596</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1bb74b28dad04ccaa487e1e0dc0ee243]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8738082333.mp3?updated=1629243425"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Exploring TensorFlow 2.0 with Paige Bailey - TWiML Talk #242</title>
      <link>https://twimlai.com/twiml-talk-242-exploring-tensorflow-2-0-with-paige-bailey</link>
      <description>Today we're joined by Paige Bailey, TensorFlow developer advocate at Google, to discuss the TensorFlow 2.0 alpha release. Paige and I talk through the latest TensorFlow updates, including the evolution of the TensorFlow APIs and the role of eager mode, tf.keras and tf.function, the evolution of TensorFlow for Swift and its inclusion in the new fast.ai course, new updates to TFX (or TensorFlow Extended), Google’s end-to-end ML platform, the emphasis on community collaboration with TF 2.0, and more.</description>
      <pubDate>Mon, 25 Mar 2019 21:01:27 -0000</pubDate>
      <itunes:title>Exploring TensorFlow 2.0 with Paige Bailey</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>242</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5e4359c2-ee98-11eb-9502-1f2d0aca8707/image/TWIMLAI_Background_800x800_PB_242.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we're joined by Paige Bailey, a TensorFlow developer advocate at Google to discuss the TensorFlow 2.0 alpha release. Paige and I sat down to talk through the latest TensorFlow updates, and we cover a lot of ground, including the evolution of the...</itunes:subtitle>
      <itunes:summary>Today we're joined by Paige Bailey, TensorFlow developer advocate at Google, to discuss the TensorFlow 2.0 alpha release. Paige and I talk through the latest TensorFlow updates, including the evolution of the TensorFlow APIs and the role of eager mode, tf.keras and tf.function, the evolution of TensorFlow for Swift and its inclusion in the new fast.ai course, new updates to TFX (or TensorFlow Extended), Google’s end-to-end ML platform, the emphasis on community collaboration with TF 2.0, and more.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we're joined by Paige Bailey, TensorFlow developer advocate at Google, to discuss the TensorFlow 2.0 alpha release. Paige and I talk through the latest TensorFlow updates, including the evolution of the TensorFlow APIs and the role of eager mode, tf.keras and tf.function, the evolution of TensorFlow for Swift and its inclusion in the new fast.ai course, new updates to TFX (or TensorFlow Extended), Google’s end-to-end ML platform, the emphasis on community collaboration with TF 2.0, and more.]]>
      </content:encoded>
      <itunes:duration>2397</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[802d7f750a3849d29f4abbb46a976688]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7804504807.mp3?updated=1629243388"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Privacy-Preserving Decentralized Data Science with Andrew Trask - TWiML Talk #241</title>
      <link>https://twimlai.com/twiml-talk-241-privacy-preserving-decentralized-data-science</link>
      <description>Today we’re joined by Andrew Trask, PhD student at the University of Oxford and Leader of the OpenMined Project, an open-source community focused on researching, developing, and promoting tools for secure, privacy-preserving, value-aligned artificial intelligence. We dig into why OpenMined is important, exploring some of the basic research and technologies supporting Private, Decentralized Data Science, including ideas such as Differential Privacy and Secure Multi-Party Computation.</description>
      <pubDate>Thu, 21 Mar 2019 16:27:46 -0000</pubDate>
      <itunes:title>Privacy-Preserving Decentralized Data Science with Andrew Trask</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>241</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5e6b143a-ee98-11eb-9502-63e5aa55f23c/image/TWIMLAI_Background_800x800_AT_241.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Andrew Trask, PhD student at the University of Oxford and Leader of the OpenMined Project. OpenMined is an open-source community focused on researching, developing, and promoting tools for secure, privacy-preserving,...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Andrew Trask, PhD student at the University of Oxford and Leader of the OpenMined Project, an open-source community focused on researching, developing, and promoting tools for secure, privacy-preserving, value-aligned artificial intelligence. We dig into why OpenMined is important, exploring some of the basic research and technologies supporting Private, Decentralized Data Science, including ideas such as Differential Privacy and Secure Multi-Party Computation.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Andrew Trask, PhD student at the University of Oxford and Leader of the OpenMined Project, an open-source community focused on researching, developing, and promoting tools for secure, privacy-preserving, value-aligned artificial intelligence. We dig into why OpenMined is important, exploring some of the basic research and technologies supporting Private, Decentralized Data Science, including ideas such as Differential Privacy and Secure Multi-Party Computation.]]>
      </content:encoded>
      <itunes:duration>2027</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fd47302377a1431f86a0134295c87530]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9472247116.mp3?updated=1629243325"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Unreasonable Effectiveness of the Forget Gate with Jos Van Der Westhuizen - TWiML Talk #240 </title>
      <link>https://twimlai.com/twiml-talk-240-the-unreasonable-effectiveness-of-the-forget-gate-with-jos-van-der-westhuizen</link>
      <description>Today we’re joined by Jos Van Der Westhuizen, PhD student in Engineering at Cambridge University.

Jos’ research focuses on applying LSTMs, or Long Short-Term Memory neural networks, to biological data for various tasks. In our conversation, we discuss his paper "The unreasonable effectiveness of the forget gate," in which he explores the various “gates” that make up an LSTM module and the impact of removing gates on the computational cost of training the networks.</description>
      <pubDate>Mon, 18 Mar 2019 19:31:31 -0000</pubDate>
      <itunes:title>The Unreasonable Effectiveness of the Forget Gate with Jos Van Der Westhuizen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>240</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5e974f64-ee98-11eb-9502-7ba369f8a915/image/TWIMLAI_Background_800x800_JVDW_240.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Jos Van Der Westhuizen, PhD student in Engineering at Cambridge University. Jos’ research focuses on applying LSTMs, or Long Short-Term Memory neural networks, to biological data for various tasks. In our conversation, we...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Jos Van Der Westhuizen, PhD student in Engineering at Cambridge University.

Jos’ research focuses on applying LSTMs, or Long Short-Term Memory neural networks, to biological data for various tasks. In our conversation, we discuss his paper "The unreasonable effectiveness of the forget gate," in which he explores the various “gates” that make up an LSTM module and the general impact of getting rid of gates on the computational intensity of training the networks.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Jos Van Der Westhuizen, PhD student in Engineering at Cambridge University.

Jos’ research focuses on applying LSTMs, or Long Short-Term Memory neural networks, to biological data for various tasks. In our conversation, we discuss his paper "The unreasonable effectiveness of the forget gate," in which he explores the various “gates” that make up an LSTM module and the general impact of getting rid of gates on the computational intensity of training the networks.]]>
      </content:encoded>
      <itunes:duration>1926</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a710fd1d03f04706b69b9ebf4f238ac2]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9654943197.mp3?updated=1629243420"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building a Recommendation Agent for The North Face with Andrew Guldman - TWiML Talk #239</title>
      <link>https://twimlai.com/twiml-talk-239-building-a-recommendation-agent-for-the-north-face-with-andrew-guldman</link>
      <description>Today we’re joined by Andrew Guldman, VP of Product Engineering and R&amp;D at Fluid to discuss Fluid XPS, a user experience built to help the casual shopper decide on the best product choices during online retail interactions. We specifically discuss its origins as a product to assist outerwear retailer The North Face. In our conversation, we discuss their use of heat-sink algorithms and graph databases, challenges associated with staying on top of a constantly changing landscape, and more!</description>
      <pubDate>Thu, 14 Mar 2019 16:42:41 -0000</pubDate>
      <itunes:title>Building a Recommendation Agent for The North Face with Andrew Guldman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>239</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5ec04a9a-ee98-11eb-9502-cb7cd82ef2db/image/TWIMLAI_Background_800x800_AG_239.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Andrew Guldman, VP of Product Engineering and Research and Development at Fluid. Andrew and I caught up a while back to discuss Fluid XPS, a user experience built to help the casual shopper decide on the best product choices...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Andrew Guldman, VP of Product Engineering and R&amp;D at Fluid to discuss Fluid XPS, a user experience built to help the casual shopper decide on the best product choices during online retail interactions. We specifically discuss its origins as a product to assist outerwear retailer The North Face. In our conversation, we discuss their use of heat-sink algorithms and graph databases, challenges associated with staying on top of a constantly changing landscape, and more!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Andrew Guldman, VP of Product Engineering and R&amp;D at Fluid to discuss Fluid XPS, a user experience built to help the casual shopper decide on the best product choices during online retail interactions. We specifically discuss its origins as a product to assist outerwear retailer The North Face. In our conversation, we discuss their use of heat-sink algorithms and graph databases, challenges associated with staying on top of a constantly changing landscape, and more!]]>
      </content:encoded>
      <itunes:duration>2868</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[45f551713ae848688a9121cc67e47d26]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5468262890.mp3?updated=1629243510"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Active Learning for Materials Design with Kevin Tran - TWiML Talk #238</title>
      <link>https://twimlai.com/twiml-talk-238-active-learning-for-materials-design-with-kevin-tran</link>
      <description>Today we’re joined by Kevin Tran, PhD student at Carnegie Mellon University. In our conversation, we explore the challenges surrounding the creation of renewable energy fuel cells, which is discussed in his recent Nature paper “Active learning across intermetallics to guide discovery of electrocatalysts for CO2 reduction and H2 evolution.” 

The AI Conference is returning to New York in April and we have one FREE conference pass for a lucky listener! Visit twimlai.com/ainygiveaway to enter!</description>
      <pubDate>Mon, 11 Mar 2019 18:28:33 -0000</pubDate>
      <itunes:title>Active Learning for Materials Design with Kevin Tran</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>238</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5ee8cb46-ee98-11eb-9502-db400b1dc1a5/image/TWIMLAI_Background_800x800_KT_238.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Kevin Tran, PhD student in the department of chemical engineering at Carnegie Mellon University. Kevin’s research focuses on creating and using automated, active learning workflows to perform density functional theory, or...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Kevin Tran, PhD student at Carnegie Mellon University. In our conversation, we explore the challenges surrounding the creation of renewable energy fuel cells, which is discussed in his recent Nature paper “Active learning across intermetallics to guide discovery of electrocatalysts for CO2 reduction and H2 evolution.” 

The AI Conference is returning to New York in April and we have one FREE conference pass for a lucky listener! Visit twimlai.com/ainygiveaway to enter!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Kevin Tran, PhD student at Carnegie Mellon University. In our conversation, we explore the challenges surrounding the creation of renewable energy fuel cells, which is discussed in his recent Nature paper “Active learning across intermetallics to guide discovery of electrocatalysts for CO2 reduction and H2 evolution.” 

The AI Conference is returning to New York in April and we have one FREE conference pass for a lucky listener! Visit twimlai.com/ainygiveaway to enter!]]>
      </content:encoded>
      <itunes:duration>2022</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a5c1c2b7a972455580abba1f00d8a3d8]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5796422004.mp3?updated=1629243453"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Learning in Optics with Aydogan Ozcan - TWiML Talk #237</title>
      <link>https://twimlai.com/twiml-talk-237-deep-learning-in-optics-with-aydogan-ozcan</link>
      <description>Today we’re joined by Aydogan Ozcan, Professor of Electrical and Computer Engineering at UCLA, exploring his group's research into the intersection of deep learning and optics, holography and computational imaging. We specifically look at a really interesting project to create all-optical neural networks which work based on diffraction, where the printed pixels of the network are analogous to neurons. We also explore practical applications for their research and other areas of interest.</description>
      <pubDate>Thu, 07 Mar 2019 19:08:13 -0000</pubDate>
      <itunes:title>Deep Learning in Optics with Aydogan Ozcan</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>237</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5f21defe-ee98-11eb-9502-ef48acfb6213/image/TWIMLAI_Background_800x800_AO_237.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today, we’re joined by Aydogan Ozcan, Professor of Electrical and Computer Engineering at UCLA, where his research group focuses on photonics and its applications to nano- and biotechnology. In our conversation, we explore his group's research into...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Aydogan Ozcan, Professor of Electrical and Computer Engineering at UCLA, exploring his group's research into the intersection of deep learning and optics, holography and computational imaging. We specifically look at a really interesting project to create all-optical neural networks which work based on diffraction, where the printed pixels of the network are analogous to neurons. We also explore practical applications for their research and other areas of interest.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Aydogan Ozcan, Professor of Electrical and Computer Engineering at UCLA, exploring his group's research into the intersection of deep learning and optics, holography and computational imaging. We specifically look at a really interesting project to create all-optical neural networks which work based on diffraction, where the printed pixels of the network are analogous to neurons. We also explore practical applications for their research and other areas of interest.]]>
      </content:encoded>
      <itunes:duration>2544</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a87d6a45e6d441669193efeb8eb763c6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7199548229.mp3?updated=1629243406"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scaling Machine Learning on Graphs at LinkedIn with Hema Raghavan and Scott Meyer - TWiML Talk #236</title>
      <link>https://twimlai.com/twiml-talk-236-scaling-machine-learning-on-graphs-at-linkedin-with-hema-raghavan-and-scott-meyer</link>
      <description>Today we’re joined by Hema Raghavan and Scott Meyer of LinkedIn to discuss the graph database and machine learning systems that power LinkedIn features such as “People You May Know” and second-degree connections. Hema shares her insight into the motivations for LinkedIn’s use of graph-based models and some of the challenges surrounding using graphical models at LinkedIn’s scale, while Scott details his work on the software used at the company to support its biggest graph databases.</description>
      <pubDate>Mon, 04 Mar 2019 17:00:00 -0000</pubDate>
      <itunes:title>Scaling Machine Learning on Graphs at LinkedIn with Hema Raghavan and Scott Meyer</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>236</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5f490efc-ee98-11eb-9502-a38c1c2d4290/image/TWIMLAI_Background_800x800_HR-SM_236.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Hema Raghavan and Scott Meyer of LinkedIn. Hema is an Engineering Director Responsible for AI for Growth and Notifications, while Scott serves as a Principal Software Engineer. In this conversation, Hema, Scott and I dig into...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Hema Raghavan and Scott Meyer of LinkedIn to discuss the graph database and machine learning systems that power LinkedIn features such as “People You May Know” and second-degree connections. Hema shares her insight into the motivations for LinkedIn’s use of graph-based models and some of the challenges surrounding using graphical models at LinkedIn’s scale, while Scott details his work on the software used at the company to support its biggest graph databases.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Hema Raghavan and Scott Meyer of LinkedIn to discuss the graph database and machine learning systems that power LinkedIn features such as “People You May Know” and second-degree connections. Hema shares her insight into the motivations for LinkedIn’s use of graph-based models and some of the challenges surrounding using graphical models at LinkedIn’s scale, while Scott details his work on the software used at the company to support its biggest graph databases.]]>
      </content:encoded>
      <itunes:duration>2788</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c4eda59341c344d18c0cc8914d1edbb3]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9456025023.mp3?updated=1629243441"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Safer Exploration in Deep Reinforcement Learning using Action Priors with Sicelukwanda Zwane - TWiML Talk #235</title>
      <link>https://twimlai.com/twiml-talk-235-safer-exploration-in-deep-reinforcement-learning-using-action-priors-with-sicelukwanda-zwane</link>
      <description>Today we conclude our Black in AI series with Sicelukwanda Zwane, a masters student at the University of Witwatersrand and graduate research assistant at the CSIR, who presented on “Safer Exploration in Deep Reinforcement Learning using Action Priors” at the workshop. In our conversation, we discuss what “safer exploration” means in this sense, the difference between this work and other techniques like imitation learning, and how this fits in with the goal of “lifelong learning.”</description>
      <pubDate>Fri, 01 Mar 2019 17:00:00 -0000</pubDate>
      <itunes:title>Safer Exploration in Deep Reinforcement Learning using Action Priors with Sicelukwanda Zwane</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>235</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5f7b501a-ee98-11eb-9502-d761b228d123/image/TWIMLAI_Background_800x800_SZ_235.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we conclude our Black in AI series with Sicelukwanda Zwane, a masters student at the University of Witwatersrand and graduate research assistant at the CSIR. At the workshop, he presented on “Safer Exploration in Deep Reinforcement Learning...</itunes:subtitle>
      <itunes:summary>Today we conclude our Black in AI series with Sicelukwanda Zwane, a masters student at the University of Witwatersrand and graduate research assistant at the CSIR, who presented on “Safer Exploration in Deep Reinforcement Learning using Action Priors” at the workshop. In our conversation, we discuss what “safer exploration” means in this sense, the difference between this work and other techniques like imitation learning, and how this fits in with the goal of “lifelong learning.”</itunes:summary>
      <content:encoded>
        <![CDATA[Today we conclude our Black in AI series with Sicelukwanda Zwane, a masters student at the University of Witwatersrand and graduate research assistant at the CSIR, who presented on “Safer Exploration in Deep Reinforcement Learning using Action Priors” at the workshop. In our conversation, we discuss what “safer exploration” means in this sense, the difference between this work and other techniques like imitation learning, and how this fits in with the goal of “lifelong learning.”]]>
      </content:encoded>
      <itunes:duration>3226</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[31f55982ef2f4fd3a64c63c75ae26697]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4337350825.mp3?updated=1629243457"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Dissecting the Controversy around OpenAI's New Language Model - TWiML Talk #234</title>
      <link>https://twimlai.com/twiml-talk-234-dissecting-the-controversy-surrounding-openais-new-language-model</link>
      <description>In the inaugural TWiML Live, Sam Charrington is joined by Amanda Askell (OpenAI), Anima Anandkumar (NVIDIA/CalTech), Miles Brundage (OpenAI), Robert Munro (Lilt), and Stephen Merity to discuss the controversial recent release of the OpenAI GPT-2 Language Model. 

We cover the basics, like what language models are and why they’re important, explore why this announcement caused such a stir, and dig deep into why the lack of a full release of the model raised concerns for so many.</description>
      <pubDate>Mon, 25 Feb 2019 17:58:34 -0000</pubDate>
      <itunes:title>Dissecting the Controversy around OpenAI's New Language Model</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>234</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5fa9ac08-ee98-11eb-9502-5bb5e53ca058/image/TWIMLAI_Background_800x800_234.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>If you’re listening to this podcast, you’ve likely seen some of the press coverage and discussion surrounding the release, or lack thereof, of OpenAI’s new GPT-2 Language Model. The announcement caused quite a stir, with reactions spanning...</itunes:subtitle>
      <itunes:summary>In the inaugural TWiML Live, Sam Charrington is joined by Amanda Askell (OpenAI), Anima Anandkumar (NVIDIA/CalTech), Miles Brundage (OpenAI), Robert Munro (Lilt), and Stephen Merity to discuss the controversial recent release of the OpenAI GPT-2 Language Model. 

We cover the basics, like what language models are and why they’re important, explore why this announcement caused such a stir, and dig deep into why the lack of a full release of the model raised concerns for so many.</itunes:summary>
      <content:encoded>
        <![CDATA[In the inaugural TWiML Live, Sam Charrington is joined by Amanda Askell (OpenAI), Anima Anandkumar (NVIDIA/CalTech), Miles Brundage (OpenAI), Robert Munro (Lilt), and Stephen Merity to discuss the controversial recent release of the OpenAI GPT-2 Language Model. 

We cover the basics, like what language models are and why they’re important, explore why this announcement caused such a stir, and dig deep into why the lack of a full release of the model raised concerns for so many.
]]>
      </content:encoded>
      <itunes:duration>3915</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[09eeb6d5a43a46afb3ccd92bf9686d78]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7012580390.mp3?updated=1629243419"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Human-Centered Design with Mira Lane - TWiML Talk #233</title>
      <link>https://twimlai.com/talk/233</link>
      <description>Today we present the final episode in our AI for the Benefit of Society series, in which we’re joined by Mira Lane, Partner Director for Ethics and Society at Microsoft.

Mira and I focus our conversation on the role of culture and human-centered design in AI. We discuss how Mira defines human-centered design, its connections to culture and responsible innovation, and how these ideas can be scalably implemented across large engineering organizations.</description>
      <pubDate>Fri, 22 Feb 2019 15:26:34 -0000</pubDate>
      <itunes:title>Human-Centered Design with Mira Lane</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>233</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5fc633c8-ee98-11eb-9502-c3e563b5de56/image/TWIMLAI_Background_800x800_ML_233.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we present the final episode in our AI for the Benefit of Society series, in which we’re joined by Mira Lane, Partner Director for Ethics and Society at Microsoft. Mira and I focus our conversation on the role of culture and human-centered...</itunes:subtitle>
      <itunes:summary>Today we present the final episode in our AI for the Benefit of Society series, in which we’re joined by Mira Lane, Partner Director for Ethics and Society at Microsoft.

Mira and I focus our conversation on the role of culture and human-centered design in AI. We discuss how Mira defines human-centered design, its connections to culture and responsible innovation, and how these ideas can be scalably implemented across large engineering organizations.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we present the final episode in our AI for the Benefit of Society series, in which we’re joined by Mira Lane, Partner Director for Ethics and Society at Microsoft.

Mira and I focus our conversation on the role of culture and human-centered design in AI. We discuss how Mira defines human-centered design, its connections to culture and responsible innovation, and how these ideas can be scalably implemented across large engineering organizations.]]>
      </content:encoded>
      <itunes:duration>2808</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c47b734884f84e598f7490aae474b569]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4769638813.mp3?updated=1629243563"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Fairness in Machine Learning with Hanna Wallach - TWiML Talk #232</title>
      <link>https://twimlai.com/twiml-talk-232-fairness-in-machine-learning-with-hanna-wallach</link>
      <description>Today we’re joined by Hanna Wallach, a Principal Researcher at Microsoft Research.

Hanna and I really dig into how bias and a lack of interpretability and transparency show up across ML. We discuss the role that human biases, even those that are inadvertent, play in tainting data, and whether deployment of “fair” ML models can actually be achieved in practice, and much more. Hanna points us to a TON of resources to further explore the topic of fairness in ML, which you’ll find at twimlai.com/talk</description>
      <pubDate>Mon, 18 Feb 2019 23:06:39 -0000</pubDate>
      <itunes:title>Fairness in Machine Learning with Hanna Wallach</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>232</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/5ff8f452-ee98-11eb-9502-aff3ad7a0b05/image/TWIMLAI_Background_800x800_HW_232.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Hanna Wallach, a Principal Researcher at Microsoft Research. Hanna and I really dig into how bias and a lack of interpretability and transparency show up across machine learning. We discuss the role that human biases, even...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Hanna Wallach, a Principal Researcher at Microsoft Research.

Hanna and I really dig into how bias and a lack of interpretability and transparency show up across ML. We discuss the role that human biases, even those that are inadvertent, play in tainting data, and whether deployment of “fair” ML models can actually be achieved in practice, and much more. Hanna points us to a TON of resources to further explore the topic of fairness in ML, which you’ll find at twimlai.com/talk</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Hanna Wallach, a Principal Researcher at Microsoft Research.

Hanna and I really dig into how bias and a lack of interpretability and transparency show up across ML. We discuss the role that human biases, even those that are inadvertent, play in tainting data, and whether deployment of “fair” ML models can actually be achieved in practice, and much more. Hanna points us to a TON of resources to further explore the topic of fairness in ML, which you’ll find at twimlai.com/talk]]>
      </content:encoded>
      <itunes:duration>2914</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ec747377bfab416086da73a5bc51e016]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4333041910.mp3?updated=1629243553"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI for Healthcare with Peter Lee - TWiML Talk #231</title>
      <link>https://twimlai.com/twiml-talk-231-ai-in-healthcare-with-peter-lee</link>
      <description>In this episode, we’re joined by Peter Lee, Corporate Vice President at Microsoft Research responsible for the company’s healthcare initiatives. Peter and I met back at Microsoft Ignite, where he gave me some really interesting takes on AI development in China, which is linked in the show notes. This conversation centers around impact areas Peter sees for AI in healthcare, namely diagnostics and therapeutics, tools, and the future of precision medicine.</description>
      <pubDate>Mon, 18 Feb 2019 02:06:25 -0000</pubDate>
      <itunes:title>AI in Healthcare with Peter Lee</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>231</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/603a3ed0-ee98-11eb-9502-1fa95fcbb06e/image/TWIMLAI_Background_800x800_PL_231.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, we’re joined by Peter Lee, Corporate Vice President at Microsoft Research responsible for the company’s healthcare initiatives. Peter and I met a few months ago at the Microsoft Ignite conference, where he gave me some really...</itunes:subtitle>
      <itunes:summary>In this episode, we’re joined by Peter Lee, Corporate Vice President at Microsoft Research responsible for the company’s healthcare initiatives. Peter and I met back at Microsoft Ignite, where he gave me some really interesting takes on AI development in China, which is linked in the show notes. This conversation centers around impact areas Peter sees for AI in healthcare, namely diagnostics and therapeutics, tools, and the future of precision medicine.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, we’re joined by Peter Lee, Corporate Vice President at Microsoft Research responsible for the company’s healthcare initiatives. Peter and I met back at Microsoft Ignite, where he gave me some really interesting takes on AI development in China, which is linked in the show notes. This conversation centers around impact areas Peter sees for AI in healthcare, namely diagnostics and therapeutics, tools, and the future of precision medicine.]]>
      </content:encoded>
      <itunes:duration>3411</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b99e687c27e64cf0a35c7e2fde2c3501]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4732418663.mp3?updated=1629243552"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>An Optimized Recurrent Unit for Ultra-Low Power Acoustic Event Detection with Justice Amoh Jr. - TWiML Talk #230</title>
      <link>https://twimlai.com/twiml-talk-230-an-optimized-recurrent-unit-for-ultra-low-power-acoustic-event-detection-with-justice-amoh-jr</link>
      <description>Today, we're joined by Justice Amoh Jr., a Ph.D. student at Dartmouth’s Thayer School of Engineering.

Justice presented his work on “An Optimized Recurrent Unit for Ultra-Low Power Acoustic Event Detection.” In our conversation, we discuss his goal of bringing low cost, high-efficiency wearables to market for monitoring asthma. We explore the challenges of using classical machine learning models on microcontrollers, and how he went about developing models optimized for constrained hardware environments.</description>
      <pubDate>Mon, 11 Feb 2019 21:43:35 -0000</pubDate>
      <itunes:title>An Optimized Recurrent Unit for Ultra-Low Power Acoustic Event Detection with Justice Amoh</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>230</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6063feaa-ee98-11eb-9502-379802ea7208/image/TWIMLAI_Background_800x800_JA_230.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today, we're joined by Justice Amoh Jr., a Ph.D. student at Dartmouth’s Thayer School of Engineering. Justice presented his work on “An Optimized Recurrent Unit for Ultra-Low Power Acoustic Event Detection.” In our conversation, we discuss his...</itunes:subtitle>
      <itunes:summary>Today, we're joined by Justice Amoh Jr., a Ph.D. student at Dartmouth’s Thayer School of Engineering.

Justice presented his work on “An Optimized Recurrent Unit for Ultra-Low Power Acoustic Event Detection.” In our conversation, we discuss his goal of bringing low-cost, high-efficiency wearables to market for monitoring asthma. We explore the challenges of using classical machine learning models on microcontrollers, and how he went about developing models optimized for constrained hardware environments.</itunes:summary>
      <content:encoded>
        <![CDATA[Today, we're joined by Justice Amoh Jr., a Ph.D. student at Dartmouth’s Thayer School of Engineering.

Justice presented his work on “An Optimized Recurrent Unit for Ultra-Low Power Acoustic Event Detection.” In our conversation, we discuss his goal of bringing low-cost, high-efficiency wearables to market for monitoring asthma. We explore the challenges of using classical machine learning models on microcontrollers, and how he went about developing models optimized for constrained hardware environments.]]>
      </content:encoded>
      <itunes:duration>2739</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b7f2455858d642d9a40128200ff6db2f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3203184550.mp3?updated=1629243504"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Pathologies of Neural Models and Interpretability with Alvin Grissom II - TWiML Talk #229</title>
      <link>https://twimlai.com/twiml-talk-229-pathologies-of-neural-models-and-interpretability-with-alvin-grissom-ii</link>
      <description>Today, we continue our Black in AI series with Alvin Grissom II, Assistant Professor of Computer Science at Ursinus College. In our conversation, we dive into the paper he presented at the Black in AI workshop, “Pathologies of Neural Models Make Interpretations Difficult.” We talk through some of the “pathological behaviors” he identified in the paper, how we can better understand the overconfidence of trained deep learning models in certain settings, and how we can improve model training with entropy regularization.</description>
      <pubDate>Mon, 11 Feb 2019 17:49:21 -0000</pubDate>
      <itunes:title>Pathologies of Neural Models and Interpretability with Alvin Grissom II</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>229</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/608b63c8-ee98-11eb-9502-eb237b6e62d1/image/TWIMLAI_Background_800x800_AG_229.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today, we're excited to continue our Black in AI series with Alvin Grissom II, Assistant Professor of Computer Science at Ursinus College. Alvin’s research is focused on computational linguistics, and we begin with a brief chat about some of his...</itunes:subtitle>
      <itunes:summary>Today, we continue our Black in AI series with Alvin Grissom II, Assistant Professor of Computer Science at Ursinus College. In our conversation, we dive into the paper he presented at the Black in AI workshop, “Pathologies of Neural Models Make Interpretations Difficult.” We talk through some of the “pathological behaviors” he identified in the paper, how we can better understand the overconfidence of trained deep learning models in certain settings, and how we can improve model training with entropy regularization.</itunes:summary>
      <content:encoded>
        <![CDATA[Today, we continue our Black in AI series with Alvin Grissom II, Assistant Professor of Computer Science at Ursinus College. In our conversation, we dive into the paper he presented at the Black in AI workshop, “Pathologies of Neural Models Make Interpretations Difficult.” We talk through some of the “pathological behaviors” he identified in the paper, how we can better understand the overconfidence of trained deep learning models in certain settings, and how we can improve model training with entropy regularization.]]>
      </content:encoded>
      <itunes:duration>1951</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9309c72831964db09f25f241d84c947c]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8570371979.mp3?updated=1629243457"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI for Earth with Lucas Joppa - TWiML Talk #228</title>
      <link>https://twimlai.com/twiml-talk-228-ai-for-earth-with-lucas-joppa</link>
      <description>Today we’re joined by Lucas Joppa, Chief Environmental Officer at Microsoft and Zach Parisa, Co-founder and president of Silvia Terra, a Microsoft AI for Earth grantee.

In our conversation, we explore the ways that ML &amp; AI can be used to advance our understanding of forests and other ecosystems, supporting conservation efforts. We discuss how Silvia Terra uses computer vision and data from a wide array of sensors, combined with AI, to yield more detailed estimates of the various species in our forests.</description>
      <pubDate>Fri, 08 Feb 2019 16:00:00 -0000</pubDate>
      <itunes:title>AI for Earth with Lucas Joppa</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>228</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/60af7998-ee98-11eb-9502-0f9994ca5487/image/TWIMLAI_Background_800x800_LJ_228.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our AI For the Benefit of Society with Microsoft series, we’re joined by Lucas Joppa and Zach Parisa. Lucas is the Chief Environmental Officer at Microsoft, spearheading their 5 year, $50 million AI for Earth commitment, which...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Lucas Joppa, Chief Environmental Officer at Microsoft and Zach Parisa, Co-founder and president of Silvia Terra, a Microsoft AI for Earth grantee.

In our conversation, we explore the ways that ML &amp; AI can be used to advance our understanding of forests and other ecosystems, supporting conservation efforts. We discuss how Silvia Terra uses computer vision and data from a wide array of sensors, combined with AI, to yield more detailed estimates of the various species in our forests.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Lucas Joppa, Chief Environmental Officer at Microsoft and Zach Parisa, Co-founder and president of Silvia Terra, a Microsoft AI for Earth grantee.

In our conversation, we explore the ways that ML &amp; AI can be used to advance our understanding of forests and other ecosystems, supporting conservation efforts. We discuss how Silvia Terra uses computer vision and data from a wide array of sensors, combined with AI, to yield more detailed estimates of the various species in our forests.]]>
      </content:encoded>
      <itunes:duration>3371</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9556e57e45a8424a9903213ce2af9f75]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3553446433.mp3?updated=1629243643"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI for Accessibility with Wendy Chisholm - TWiML Talk #227</title>
      <link>https://twimlai.com/talk/227</link>
      <description>Today we’re joined by Wendy Chisholm, a principal accessibility architect at Microsoft, and one of the chief proponents of the AI for Accessibility program, which extends grants to AI-powered accessibility projects in the areas of Employment, Daily Life, and Communication &amp; Connection. In our conversation, we discuss the intersection of AI and accessibility, the lasting impact that innovation in AI can have for people with disabilities and society as a whole, and the importance of projects in this area.</description>
      <pubDate>Wed, 06 Feb 2019 16:00:00 -0000</pubDate>
      <itunes:title>AI for Accessibility with Wendy Chisholm</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>227</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/60d892e2-ee98-11eb-9502-9fef216acebd/image/TWIMLAI_Background_800x800_WC_227.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Wendy Chisholm, Lois Brady, and Matthew Guggemos. Wendy is a principal accessibility architect at Microsoft, and one of the chief proponents of the AI for Accessibility program, which extends grants to AI-powered accessibility...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Wendy Chisholm, a principal accessibility architect at Microsoft, and one of the chief proponents of the AI for Accessibility program, which extends grants to AI-powered accessibility projects in the areas of Employment, Daily Life, and Communication &amp; Connection. In our conversation, we discuss the intersection of AI and accessibility, the lasting impact that innovation in AI can have for people with disabilities and society as a whole, and the importance of projects in this area.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Wendy Chisholm, a principal accessibility architect at Microsoft, and one of the chief proponents of the AI for Accessibility program, which extends grants to AI-powered accessibility projects in the areas of Employment, Daily Life, and Communication &amp; Connection. In our conversation, we discuss the intersection of AI and accessibility, the lasting impact that innovation in AI can have for people with disabilities and society as a whole, and the importance of projects in this area.]]>
      </content:encoded>
      <itunes:duration>3016</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[147595dbf6794a609e443642a31accf4]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9365590966.mp3?updated=1629243575"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI for Humanitarian Action with Justin Spelhaug - TWiML Talk #226</title>
      <link>https://twimlai.com/talk/226</link>
      <description>Today we're joined by Justin Spelhaug, General Manager of Technology for Social Impact at Microsoft. 

In our conversation, we discuss the company’s efforts in AI for Humanitarian Action, covering Microsoft’s overall approach to technology for social impact, how his group helps mission-driven organizations best leverage technologies like AI, and how AI is being used at places like the World Bank, Operation Smile, and Mission Measurement to create greater impact.</description>
      <pubDate>Mon, 04 Feb 2019 16:00:00 -0000</pubDate>
      <itunes:title>AI for Humanitarian Action with Justin Spelhaug</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>226</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6107e4a2-ee98-11eb-9502-9b3f1ed3ede5/image/TWIMLAI_Background_800x800_JS_227.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we're joined by Justin Spelhaug, General Manager of Technology for Social Impact at Microsoft. In our conversation, Justin and I discuss the company’s efforts in AI for Humanitarian Action, a program which extends grants to fund AI-powered...</itunes:subtitle>
      <itunes:summary>Today we're joined by Justin Spelhaug, General Manager of Technology for Social Impact at Microsoft. 

In our conversation, we discuss the company’s efforts in AI for Humanitarian Action, covering Microsoft’s overall approach to technology for social impact, how his group helps mission-driven organizations best leverage technologies like AI, and how AI is being used at places like the World Bank, Operation Smile, and Mission Measurement to create greater impact.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we're joined by Justin Spelhaug, General Manager of Technology for Social Impact at Microsoft. 

In our conversation, we discuss the company’s efforts in AI for Humanitarian Action, covering Microsoft’s overall approach to technology for social impact, how his group helps mission-driven organizations best leverage technologies like AI, and how AI is being used at places like the World Bank, Operation Smile, and Mission Measurement to create greater impact.]]>
      </content:encoded>
      <itunes:duration>3530</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a2d4cbe4ad66431bba2d56cef9a4a4b4]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7611135972.mp3?updated=1629243390"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Teaching AI to Preschoolers with Randi Williams - TWiML Talk #225</title>
      <link>https://twimlai.com/talk/225</link>
      <description>Today, in the first episode of our Black in AI series, we’re joined by Randi Williams, a PhD student at the MIT Media Lab.

At the Black in AI workshop, Randi presented her research on Popbots: An Early Childhood AI Curriculum, which is geared towards teaching preschoolers the fundamentals of artificial intelligence. In our conversation, we discuss the origins of the project, the three AI concepts that are taught in the program, and the goals that Randi hopes to accomplish with her work.</description>
      <pubDate>Thu, 31 Jan 2019 05:58:09 -0000</pubDate>
      <itunes:title>Teaching AI to Preschoolers with Randi Williams</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>225</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/612c9036-ee98-11eb-9502-83177747a891/image/TWIMLAI_Background_800x800_RW_225.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today, in the first episode of our Black in AI series, we’re joined by Randi Williams, a PhD student at the MIT Media Lab. At the Black in AI workshop, Randi presented her research on Popbots: An Early Childhood AI Curriculum, which is geared towards...</itunes:subtitle>
      <itunes:summary>Today, in the first episode of our Black in AI series, we’re joined by Randi Williams, a PhD student at the MIT Media Lab.

At the Black in AI workshop, Randi presented her research on Popbots: An Early Childhood AI Curriculum, which is geared towards teaching preschoolers the fundamentals of artificial intelligence. In our conversation, we discuss the origins of the project, the three AI concepts that are taught in the program, and the goals that Randi hopes to accomplish with her work.</itunes:summary>
      <content:encoded>
        <![CDATA[Today, in the first episode of our Black in AI series, we’re joined by Randi Williams, a PhD student at the MIT Media Lab.

At the Black in AI workshop, Randi presented her research on Popbots: An Early Childhood AI Curriculum, which is geared towards teaching preschoolers the fundamentals of artificial intelligence. In our conversation, we discuss the origins of the project, the three AI concepts that are taught in the program, and the goals that Randi hopes to accomplish with her work.]]>
      </content:encoded>
      <itunes:duration>2616</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d676ee0025a145a7a658a3ff6730413f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3441862740.mp3?updated=1629243391"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Holistic Optimization of the LinkedIn News Feed - TWiML Talk #224</title>
      <link>https://twimlai.com/talk/224</link>
      <description>Today we’re joined by Tim Jurka, Head of Feed AI at LinkedIn. In our conversation, Tim describes the holistic optimization of the feed and we discuss some of the interesting technical and business challenges associated with trying to do this. We talk through some of the specific techniques used at LinkedIn like multi-armed bandits and content embeddings, and also jump into a really interesting discussion about organizing for machine learning at scale.</description>
      <pubDate>Mon, 28 Jan 2019 16:28:15 -0000</pubDate>
      <itunes:title>Holistic Optimization of the LinkedIn News Feed</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>224</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6158edc0-ee98-11eb-9502-f7f459e94cc4/image/TWIMLAI_Background_800x800_TJ_224.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Tim Jurka, Head of Feed AI at LinkedIn. As you can imagine Feed AI is responsible for curating all the content you see daily on the LinkedIn site. What’s less apparent to those that don’t work on this type of product is the...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Tim Jurka, Head of Feed AI at LinkedIn. In our conversation, Tim describes the holistic optimization of the feed and we discuss some of the interesting technical and business challenges associated with trying to do this. We talk through some of the specific techniques used at LinkedIn like multi-armed bandits and content embeddings, and also jump into a really interesting discussion about organizing for machine learning at scale.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Tim Jurka, Head of Feed AI at LinkedIn. In our conversation, Tim describes the holistic optimization of the feed and we discuss some of the interesting technical and business challenges associated with trying to do this. We talk through some of the specific techniques used at LinkedIn like multi-armed bandits and content embeddings, and also jump into a really interesting discussion about organizing for machine learning at scale.]]>
      </content:encoded>
      <itunes:duration>2882</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[df39da79b6f345538640ad774c9597c6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5823183150.mp3?updated=1629243605"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI at the Edge at Qualcomm with Gary Brotman - TWiML Talk #223</title>
      <link>https://twimlai.com/talk/223</link>
      <description>Today we’re joined by Gary Brotman, Senior Director of Product Management at Qualcomm Technologies, Inc.

Gary, who got his start in AI through music, now leads strategy and product planning for the company’s AI and ML technologies, including those that make up the Qualcomm Snapdragon mobile platforms. In our conversation, we discuss AI on mobile devices and at the edge, including popular use cases, and explore some of the various acceleration technologies offered by Qualcomm and others that enable them.</description>
      <pubDate>Thu, 24 Jan 2019 16:50:22 -0000</pubDate>
      <itunes:title>AI at the Edge at Qualcomm with Gary Brotman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>223</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/61831de8-ee98-11eb-9502-2bc9db82563e/image/TWIMLAI_Background_800x800_GB_223.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Gary Brotman, Senior Director of Product Management at Qualcomm Technologies, Inc. Gary, who got his start in AI through music, now leads strategy and product planning for the company’s Artificial Intelligence and Machine...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Gary Brotman, Senior Director of Product Management at Qualcomm Technologies, Inc.

Gary, who got his start in AI through music, now leads strategy and product planning for the company’s AI and ML technologies, including those that make up the Qualcomm Snapdragon mobile platforms. In our conversation, we discuss AI on mobile devices and at the edge, including popular use cases, and explore some of the various acceleration technologies offered by Qualcomm and others that enable them.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Gary Brotman, Senior Director of Product Management at Qualcomm Technologies, Inc.

Gary, who got his start in AI through music, now leads strategy and product planning for the company’s AI and ML technologies, including those that make up the Qualcomm Snapdragon mobile platforms. In our conversation, we discuss AI on mobile devices and at the edge, including popular use cases, and explore some of the various acceleration technologies offered by Qualcomm and others that enable them.]]>
      </content:encoded>
      <itunes:duration>3088</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d2e7130026b04373a0d16af6f2237d69]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3442820098.mp3?updated=1629243520"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Innovation at CES - TWiML Talk #222</title>
      <link>https://twimlai.com/talk/222</link>
      <description>A few weeks ago, I made the trek to Las Vegas for the world’s biggest electronics conference, CES. In this special visual-only episode, we’re going to check out some of the interesting examples of machine learning and AI that I found at the event.

Check out the video at https://twimlai.com/ces2019, and be sure to hit the like and subscribe buttons and let us know how you like the show via a comment!

For the show notes, visit https://twimlai.com/talk/222.</description>
      <pubDate>Mon, 21 Jan 2019 19:18:58 -0000</pubDate>
      <itunes:title>AI Innovation at CES - TWiML Talk #222</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>222</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/61a4abfc-ee98-11eb-9502-ab411643acad/image/TWIMLAI_Background_800x800_CES_222.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>A few weeks ago, I made the trek to Las Vegas for the world’s biggest electronics conference, CES. CES is one of those things that’s hard to fully understand without having seen, so I thought it’d be fun to give you a look at it from my vantage...</itunes:subtitle>
      <itunes:summary>A few weeks ago, I made the trek to Las Vegas for the world’s biggest electronics conference, CES. In this special visual-only episode, we’re going to check out some of the interesting examples of machine learning and AI that I found at the event.

Check out the video at https://twimlai.com/ces2019, and be sure to hit the like and subscribe buttons and let us know how you like the show via a comment!

For the show notes, visit https://twimlai.com/talk/222.</itunes:summary>
      <content:encoded>
        <![CDATA[A few weeks ago, I made the trek to Las Vegas for the world’s biggest electronics conference, CES. In this special visual-only episode, we’re going to check out some of the interesting examples of machine learning and AI that I found at the event.

Check out the video at https://twimlai.com/ces2019, and be sure to hit the like and subscribe buttons and let us know how you like the show via a comment!

For the show notes, visit https://twimlai.com/talk/222.]]>
      </content:encoded>
      <itunes:duration>120</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[bdeed7808cee46d1b2e5c3bc481ddda0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4882019404.mp3?updated=1627362818"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Self-Tuning Services via Real-Time Machine Learning with Vladimir Bychkovsky - TWiML Talk #221</title>
      <link>https://twimlai.com/talk/221</link>
      <description>Today we’re joined by Vladimir Bychkovsky, Engineering Manager at Facebook, to discuss Spiral, a system they’ve developed for self-tuning high-performance infrastructure services at scale, using real-time machine learning. In our conversation, we explore how the system works, how it was developed, and how infrastructure teams at Facebook can use it to replace hand-tuned parameters set using heuristics with services that automatically optimize themselves in minutes rather than in weeks.</description>
      <pubDate>Thu, 17 Jan 2019 19:34:02 -0000</pubDate>
      <itunes:title>Self-Tuning Services via Real-Time Machine Learning with Vladimir Bychkovsky</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>221</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/61c9e318-ee98-11eb-9502-43eb7ab7d29b/image/TWIMLAI_Background_800x800_VB_221.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Vladimir Bychkovsky, Engineering Manager at Facebook, to discuss Spiral. Spiral is a system they’ve developed for self-tuning high-performance infrastructure services at scale, using real-time machine learning. In our...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Vladimir Bychkovsky, Engineering Manager at Facebook, to discuss Spiral, a system they’ve developed for self-tuning high-performance infrastructure services at scale, using real-time machine learning. In our conversation, we explore how the system works, how it was developed, and how infrastructure teams at Facebook can use it to replace hand-tuned parameters set using heuristics with services that automatically optimize themselves in minutes rather than in weeks.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Vladimir Bychkovsky, Engineering Manager at Facebook, to discuss Spiral, a system they’ve developed for self-tuning high-performance infrastructure services at scale, using real-time machine learning. In our conversation, we explore how the system works, how it was developed, and how infrastructure teams at Facebook can use it to replace hand-tuned parameters set using heuristics with services that automatically optimize themselves in minutes rather than in weeks.]]>
      </content:encoded>
      <itunes:duration>2768</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2c4af7e4e11e44628ec3a799f7791e38]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7614957695.mp3?updated=1629243502"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building a Recommender System from Scratch at 20th Century Fox with JJ Espinoza - TWiML Talk #220</title>
      <link>https://twimlai.com/talk/220</link>
      <description>Today we’re joined by JJ Espinoza, former Director of Data Science at 20th Century Fox.

In this talk we dig into JJ and his team’s experience building and deploying a content recommendation system from the ground up. In our conversation, we explore the design of a couple of key components of their system, the first of which processes movie scripts to make recommendations about which movies the studio should make, and the second processes trailers to determine which should be recommended to users.</description>
      <pubDate>Mon, 14 Jan 2019 20:15:32 -0000</pubDate>
      <itunes:title>Building a Recommender System from Scratch at 20th Century Fox with JJ Espinoza</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>220</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/61eb9fda-ee98-11eb-9502-0b0cd2c9ad09/image/TWIMLAI_Background_800x800_JJE_220.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by JJ Espinoza, former Director of Data Science at 20th Century Fox. In this talk we start out with a discussion JJ’s transition from econometrician to data scientist, and then dig into his and his team’s experience building...</itunes:subtitle>
      <itunes:summary>Today we’re joined by JJ Espinoza, former Director of Data Science at 20th Century Fox.

In this talk we dig into his and his team’s experience building and deploying a content recommendation system from the ground up. In our conversation, we explore the design of two key components of their system: the first processes movie scripts to make recommendations about which movies the studio should make, and the second processes trailers to determine which should be recommended to users.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by JJ Espinoza, former Director of Data Science at 20th Century Fox.

In this talk we dig into his and his team’s experience building and deploying a content recommendation system from the ground up. In our conversation, we explore the design of two key components of their system: the first processes movie scripts to make recommendations about which movies the studio should make, and the second processes trailers to determine which should be recommended to users.]]>
      </content:encoded>
      <itunes:duration>2099</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1354fd59c6c14c79b91b83d06522c0b0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4433848168.mp3?updated=1629243436"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Legal and Policy Implications of Model Interpretability with Solon Barocas - TWiML Talk #219</title>
      <link>https://twimlai.com/talk/219</link>
      <description>Today we’re joined by Solon Barocas, Assistant Professor of Information Science at Cornell University.

Solon and I caught up to discuss his work on model interpretability and the legal and policy implications of the use of machine learning models. In our conversation, we explore the gap between law, policy, and ML, and how to build the bridge between them, including formalizing ethical frameworks for machine learning. We also look at his paper “The Intuitive Appeal of Explainable Machines.”</description>
      <pubDate>Thu, 10 Jan 2019 18:22:32 -0000</pubDate>
      <itunes:title>Legal and Policy Implications of Model Interpretability with Solon Barocas</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>219</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/62116a9e-ee98-11eb-9502-531bf6179eb4/image/TWIMLAI_Background_800x800_SB_219.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Solon Barocas, Assistant Professor of Information Science at Cornell University. Solon is also the co-founder of the Fairness, Accountability, and Transparency in Machine Learning workshop that is hosted annually at conferences...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Solon Barocas, Assistant Professor of Information Science at Cornell University.

Solon and I caught up to discuss his work on model interpretability and the legal and policy implications of the use of machine learning models. In our conversation, we explore the gap between law, policy, and ML, and how to build the bridge between them, including formalizing ethical frameworks for machine learning. We also look at his paper “The Intuitive Appeal of Explainable Machines.”</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Solon Barocas, Assistant Professor of Information Science at Cornell University.

Solon and I caught up to discuss his work on model interpretability and the legal and policy implications of the use of machine learning models. In our conversation, we explore the gap between law, policy, and ML, and how to build the bridge between them, including formalizing ethical frameworks for machine learning. We also look at his paper “The Intuitive Appeal of Explainable Machines.”]]>
      </content:encoded>
      <itunes:duration>2812</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2a02a2b54de946a18dbca7142ed94d91]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8838048681.mp3?updated=1629243535"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Computer Vision with Siddha Ganju - TWiML Talk #218</title>
      <link>https://twimlai.com/talk/218</link>
      <description>In the final episode of our AI Rewind series, we’re excited to have Siddha Ganju back on the show.

Siddha, who is now an autonomous vehicles solutions architect at Nvidia, shares her thoughts on trends in Computer Vision in 2018 and beyond. We cover her favorite CV papers of the year in areas such as neural architecture search, learning from simulation, application of CV to augmented reality, and more, as well as a bevy of tools and open source projects.</description>
      <pubDate>Mon, 07 Jan 2019 21:00:09 -0000</pubDate>
      <itunes:title>Trends in Computer Vision with Siddha Ganju</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>218</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/623a0f08-ee98-11eb-9502-b794bf36421e/image/TWIMLAI_Background_800x800_SG_218.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In the final episode of our AI Rewind series, we’re excited to have Siddha Ganju back on the show. Siddha, who is now an autonomous vehicles solutions architect at Nvidia shares her thoughts on trends in Computer Vision in 2018 and beyond. We cover...</itunes:subtitle>
      <itunes:summary>In the final episode of our AI Rewind series, we’re excited to have Siddha Ganju back on the show.

Siddha, who is now an autonomous vehicles solutions architect at Nvidia, shares her thoughts on trends in Computer Vision in 2018 and beyond. We cover her favorite CV papers of the year in areas such as neural architecture search, learning from simulation, application of CV to augmented reality, and more, as well as a bevy of tools and open source projects.</itunes:summary>
      <content:encoded>
        <![CDATA[In the final episode of our AI Rewind series, we’re excited to have Siddha Ganju back on the show.

Siddha, who is now an autonomous vehicles solutions architect at Nvidia, shares her thoughts on trends in Computer Vision in 2018 and beyond. We cover her favorite CV papers of the year in areas such as neural architecture search, learning from simulation, application of CV to augmented reality, and more, as well as a bevy of tools and open source projects.]]>
      </content:encoded>
      <itunes:duration>1973</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4fb2c5039cff4f1aa5e1502d985f9515]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8816431426.mp3?updated=1629243405"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Reinforcement Learning with Simon Osindero - TWiML Talk #217</title>
      <link>https://twimlai.com/talk/217</link>
      <description>In this episode of our AI Rewind series, we introduce a new friend of the show, Simon Osindero, Staff Research Scientist at DeepMind.

We discuss trends in Deep Reinforcement Learning in 2018 and beyond. We’ve packed a bunch into this show, as Simon walks us through many of the important papers and developments seen this year in areas like Imitation Learning, Unsupervised RL, Meta-learning, and more.

The complete show notes for this episode can be found at https://twimlai.com/talk/217.</description>
      <pubDate>Thu, 03 Jan 2019 18:26:57 -0000</pubDate>
      <itunes:title>Trends in Reinforcement Learning with Simon Osindero</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>217</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/625fbc1c-ee98-11eb-9502-f7d9eac79407/image/TWIMLAI_Background_800x800_SO_217.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our AI Rewind series, we introduce a new friend of the show, Simon Osindero, Staff Research Scientist at DeepMind. We discuss trends in Deep Reinforcement Learning in 2018 and beyond. We’ve packed a bunch into this show, as Simon...</itunes:subtitle>
      <itunes:summary>In this episode of our AI Rewind series, we introduce a new friend of the show, Simon Osindero, Staff Research Scientist at DeepMind.

We discuss trends in Deep Reinforcement Learning in 2018 and beyond. We’ve packed a bunch into this show, as Simon walks us through many of the important papers and developments seen this year in areas like Imitation Learning, Unsupervised RL, Meta-learning, and more.

The complete show notes for this episode can be found at https://twimlai.com/talk/217.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our AI Rewind series, we introduce a new friend of the show, Simon Osindero, Staff Research Scientist at DeepMind.

We discuss trends in Deep Reinforcement Learning in 2018 and beyond. We’ve packed a bunch into this show, as Simon walks us through many of the important papers and developments seen this year in areas like Imitation Learning, Unsupervised RL, Meta-learning, and more.

The complete show notes for this episode can be found at https://twimlai.com/talk/217.]]>
      </content:encoded>
      <itunes:duration>3133</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c42c7afbe91b4f33889cee598ce79ceb]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5890812963.mp3?updated=1629243572"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Natural Language Processing with Sebastian Ruder - TWiML Talk #216</title>
      <link>https://twimlai.com/talk/216</link>
      <description>In this episode of our AI Rewind series, we’ve brought back recent guest Sebastian Ruder, PhD Student at the National University of Ireland and Research Scientist at Aylien, to discuss trends in Natural Language Processing in 2018 and beyond. 

In our conversation, we cover a bunch of interesting papers spanning topics such as pre-trained language models, common sense inference datasets, and large document reasoning, and talk through Sebastian’s predictions for the new year.</description>
      <pubDate>Mon, 31 Dec 2018 16:53:28 -0000</pubDate>
      <itunes:title>Trends in Natural Language Processing with Sebastian Ruder</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>216</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/628cba78-ee98-11eb-9502-3f898ca15e04/image/TWIMLAI_Background_800x800_SR_216.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our AI Rewind series, we’ve brought back recent guest Sebastian Ruder, PhD Student at the National University of Ireland and Research Scientist at Aylien, to discuss trends in Natural Language Processing in 2018 and beyond. In our...</itunes:subtitle>
      <itunes:summary>In this episode of our AI Rewind series, we’ve brought back recent guest Sebastian Ruder, PhD Student at the National University of Ireland and Research Scientist at Aylien, to discuss trends in Natural Language Processing in 2018 and beyond. 

In our conversation, we cover a bunch of interesting papers spanning topics such as pre-trained language models, common sense inference datasets, and large document reasoning, and talk through Sebastian’s predictions for the new year.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our AI Rewind series, we’ve brought back recent guest Sebastian Ruder, PhD Student at the National University of Ireland and Research Scientist at Aylien, to discuss trends in Natural Language Processing in 2018 and beyond. 

In our conversation, we cover a bunch of interesting papers spanning topics such as pre-trained language models, common sense inference datasets, and large document reasoning, and talk through Sebastian’s predictions for the new year.]]>
      </content:encoded>
      <itunes:duration>3174</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0fdea567458845119837280a94f4b0e5]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7077673472.mp3?updated=1629243530"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Machine Learning with Anima Anandkumar - TWiML Talk #215</title>
      <link>https://twimlai.com/talk/215</link>
      <description>In this episode of our AI Rewind series, we’re back with Anima Anandkumar, Bren Professor at Caltech and now Director of Machine Learning Research at NVIDIA. 

Anima joins us to discuss her take on trends in the broader Machine Learning field in 2018 and beyond. In our conversation, we cover not only technical breakthroughs in the field but also those around inclusivity and diversity. 

For this episode's complete show notes, visit twimlai.com/talk/215.</description>
      <pubDate>Thu, 27 Dec 2018 15:48:55 -0000</pubDate>
      <itunes:title>Trends in Machine Learning with Anima Anandkumar</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>215</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/62b499e4-ee98-11eb-9502-1773d28afa14/image/TWIMLAI_Background_800x800_AA_215.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our AI Rewind series, we’re back with Anima Anandkumar, Bren Professor at Caltech and now Director of Machine Learning Research at NVIDIA. Anima joins us to discuss her take on trends in the broader Machine Learning field in 2018...</itunes:subtitle>
      <itunes:summary>In this episode of our AI Rewind series, we’re back with Anima Anandkumar, Bren Professor at Caltech and now Director of Machine Learning Research at NVIDIA. 

Anima joins us to discuss her take on trends in the broader Machine Learning field in 2018 and beyond. In our conversation, we cover not only technical breakthroughs in the field but also those around inclusivity and diversity. 

For this episode's complete show notes, visit twimlai.com/talk/215.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our AI Rewind series, we’re back with Anima Anandkumar, Bren Professor at Caltech and now Director of Machine Learning Research at NVIDIA. 

Anima joins us to discuss her take on trends in the broader Machine Learning field in 2018 and beyond. In our conversation, we cover not only technical breakthroughs in the field but also those around inclusivity and diversity. 

For this episode's complete show notes, visit twimlai.com/talk/215.]]>
      </content:encoded>
      <itunes:duration>3083</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0ea4fec642c742d8a65cfa67b6ba20eb]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1308392109.mp3?updated=1629243567"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trends in Deep Learning with Jeremy Howard - TWiML Talk #214</title>
      <link>https://twimlai.com/talk/214</link>
      <description>In this episode of our AI Rewind series, we’re bringing back one of your favorite guests of the year, Jeremy Howard, founder and researcher at Fast.ai.

Jeremy joins us to discuss trends in Deep Learning in 2018 and beyond. We cover many of the papers, tools and techniques that have contributed to making deep learning more accessible than ever to so many developers and data scientists.</description>
      <pubDate>Mon, 24 Dec 2018 16:43:45 -0000</pubDate>
      <itunes:title>Trends in Deep Learning with Jeremy Howard</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>214</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/62d50986-ee98-11eb-9502-77bfcc05231c/image/TWIMLAI_Background_800x800_JH_214.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our AI Rewind series, we’re bringing back one of your favorite guests of the year, Jeremy Howard, founder and researcher at Fast.ai. Jeremy joins us to discuss trends in Deep Learning in 2018 and beyond. We cover many of the...</itunes:subtitle>
      <itunes:summary>In this episode of our AI Rewind series, we’re bringing back one of your favorite guests of the year, Jeremy Howard, founder and researcher at Fast.ai.

Jeremy joins us to discuss trends in Deep Learning in 2018 and beyond. We cover many of the papers, tools and techniques that have contributed to making deep learning more accessible than ever to so many developers and data scientists.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our AI Rewind series, we’re bringing back one of your favorite guests of the year, Jeremy Howard, founder and researcher at Fast.ai.

Jeremy joins us to discuss trends in Deep Learning in 2018 and beyond. We cover many of the papers, tools and techniques that have contributed to making deep learning more accessible than ever to so many developers and data scientists.]]>
      </content:encoded>
      <itunes:duration>4097</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[dff41053004e4f8c8ed5b83833db2289]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7517895808.mp3?updated=1629243634"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Training Large-Scale Deep Nets with RL with Nando de Freitas - TWiML Talk #213</title>
      <link>https://twimlai.com/talk/213</link>
      <description>Today we close out our NeurIPS series, joined by Nando de Freitas, Team Lead &amp; Principal Scientist at DeepMind. In our conversation, we explore his interest in understanding the brain and working towards artificial general intelligence. In particular, we dig into a couple of his team’s NeurIPS papers: “Playing hard exploration games by watching YouTube” and “One-Shot high-fidelity imitation: Training large-scale deep nets with RL.”</description>
      <pubDate>Thu, 20 Dec 2018 17:34:52 -0000</pubDate>
      <itunes:title>Training Large-Scale Deep Nets with RL with Nando de Freitas</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>213</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/62fc7890-ee98-11eb-9502-fb274b607a6c/image/TWIMLAI_Background_800x800_NF_213.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we close out both our NeurIPS series and our 2018 conference coverage with this interview with Nando de Freitas, Team Lead &amp; Principal Scientist at DeepMind and Fellow at the Canadian Institute for Advanced Research. In our conversation, we...</itunes:subtitle>
      <itunes:summary>Today we close out our NeurIPS series, joined by Nando de Freitas, Team Lead &amp; Principal Scientist at DeepMind. In our conversation, we explore his interest in understanding the brain and working towards artificial general intelligence. In particular, we dig into a couple of his team’s NeurIPS papers: “Playing hard exploration games by watching YouTube” and “One-Shot high-fidelity imitation: Training large-scale deep nets with RL.”</itunes:summary>
      <content:encoded>
        <![CDATA[Today we close out our NeurIPS series, joined by Nando de Freitas, Team Lead &amp; Principal Scientist at DeepMind. In our conversation, we explore his interest in understanding the brain and working towards artificial general intelligence. In particular, we dig into a couple of his team’s NeurIPS papers: “Playing hard exploration games by watching YouTube” and “One-Shot high-fidelity imitation: Training large-scale deep nets with RL.”]]>
      </content:encoded>
      <itunes:duration>3324</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[49c27ac120ad49d2a86b673950577fe0]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2580763273.mp3?updated=1629243536"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Making Algorithms Trustworthy with David Spiegelhalter - TWiML Talk #212</title>
      <link>https://twimlai.com/talk/212</link>
      <description>Today we’re joined by David Spiegelhalter, Chair of the Winton Centre for Risk and Evidence Communication at Cambridge University and President of the Royal Statistical Society. David, an invited speaker at NeurIPS, presented on “Making Algorithms Trustworthy: What Can Statistical Science Contribute to Transparency, Explanation and Validation?” In our conversation, we explore the nuanced difference between being trusted and being trustworthy, and its implications for those building AI systems.</description>
      <pubDate>Thu, 20 Dec 2018 01:00:26 -0000</pubDate>
      <itunes:title>Making Algorithms Trustworthy with David Spiegelhalter</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>212</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/631cdb30-ee98-11eb-9502-1bea18324baf/image/TWIMLAI_Background_800x800_DS_212.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this, the second episode of our NeurIPS series, we’re joined by David Spiegelhalter, Chair of the Winton Centre for Risk and Evidence Communication at Cambridge University and President of the Royal Statistical Society. David, an invited speaker at...</itunes:subtitle>
      <itunes:summary>Today we’re joined by David Spiegelhalter, Chair of the Winton Centre for Risk and Evidence Communication at Cambridge University and President of the Royal Statistical Society. David, an invited speaker at NeurIPS, presented on “Making Algorithms Trustworthy: What Can Statistical Science Contribute to Transparency, Explanation and Validation?” In our conversation, we explore the nuanced difference between being trusted and being trustworthy, and its implications for those building AI systems.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by David Spiegelhalter, Chair of the Winton Centre for Risk and Evidence Communication at Cambridge University and President of the Royal Statistical Society. David, an invited speaker at NeurIPS, presented on “Making Algorithms Trustworthy: What Can Statistical Science Contribute to Transparency, Explanation and Validation?” In our conversation, we explore the nuanced difference between being trusted and being trustworthy, and its implications for those building AI systems.]]>
      </content:encoded>
      <itunes:duration>1405</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b46bc40412d84e8f8afbbbad1707f692]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7389205365.mp3?updated=1629243374"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Designing Computer Systems for Software with Kunle Olukotun - TWiML Talk #211</title>
      <link>https://twimlai.com/talk/211</link>
      <description>Today we’re joined by Kunle Olukotun, Professor in the department of Electrical Engineering and Computer Science at Stanford University, and Chief Technologist at SambaNova Systems. Kunle was an invited speaker at NeurIPS this year, presenting on “Designing Computer Systems for Software 2.0.” In our conversation, we discuss various aspects of designing hardware systems for machine learning and deep learning, touching on multicore processor design, domain-specific languages, and graph-based hardware. This was a fun one!</description>
      <pubDate>Tue, 18 Dec 2018 00:38:14 -0000</pubDate>
      <itunes:title>Designing Computer Systems for Software with Kunle Olukotun</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>211</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/634045e8-ee98-11eb-9502-33b1e47cfc45/image/TWIMLAI_Background_800x800_KO_211.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Kunle Olukotun, Professor in the department of Electrical Engineering and Computer Science at Stanford University, and Chief Technologist at Sambanova Systems. Kunle was an invited speaker at NeurIPS this year, presenting on...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Kunle Olukotun, Professor in the department of Electrical Engineering and Computer Science at Stanford University, and Chief Technologist at SambaNova Systems. Kunle was an invited speaker at NeurIPS this year, presenting on “Designing Computer Systems for Software 2.0.” In our conversation, we discuss various aspects of designing hardware systems for machine learning and deep learning, touching on multicore processor design, domain-specific languages, and graph-based hardware. This was a fun one!</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Kunle Olukotun, Professor in the department of Electrical Engineering and Computer Science at Stanford University, and Chief Technologist at SambaNova Systems. Kunle was an invited speaker at NeurIPS this year, presenting on “Designing Computer Systems for Software 2.0.” In our conversation, we discuss various aspects of designing hardware systems for machine learning and deep learning, touching on multicore processor design, domain-specific languages, and graph-based hardware. This was a fun one!]]>
      </content:encoded>
      <itunes:duration>3344</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[95497918885e42a9a192f42e762c7fa2]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3328780511.mp3?updated=1629243529"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Operationalizing Ethical AI with Kathryn Hume - TWiML Talk #210</title>
      <link>https://twimlai.com/talk/210</link>
      <description>Today we conclude our Trust in AI series, joined by Kathryn Hume, VP of Strategy at Integrate AI. We discuss her newly released white paper, “Responsible AI in the Consumer Enterprise,” which details a framework for ethical AI deployment in e-commerce companies and other consumer-facing enterprises. We look at the structure of the ethical framework she proposes and some of the many questions that need to be considered when deploying AI in an ethical manner.</description>
      <pubDate>Fri, 14 Dec 2018 17:49:06 -0000</pubDate>
      <itunes:title>Operationalizing Ethical AI with Kathryn Hume</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>210</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/635ef52e-ee98-11eb-9502-535315e459b2/image/TWIMLAI_Background_800x800_KH_210.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we conclude our Trust in AI series, joined by Kathryn Hume, VP of Strategy at Integrate AI. You might remember Kathryn from our interview last year on “Selling AI to the Enterprise.” This time around, we discuss...</itunes:subtitle>
      <itunes:summary>Today we conclude our Trust in AI series, joined by Kathryn Hume, VP of Strategy at Integrate AI. We discuss her newly released white paper, “Responsible AI in the Consumer Enterprise,” which details a framework for ethical AI deployment in e-commerce companies and other consumer-facing enterprises. We look at the structure of the ethical framework she proposes and some of the many questions that need to be considered when deploying AI in an ethical manner.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we conclude our Trust in AI series, joined by Kathryn Hume, VP of Strategy at Integrate AI. We discuss her newly released white paper, “Responsible AI in the Consumer Enterprise,” which details a framework for ethical AI deployment in e-commerce companies and other consumer-facing enterprises. We look at the structure of the ethical framework she proposes and some of the many questions that need to be considered when deploying AI in an ethical manner.]]>
      </content:encoded>
      <itunes:duration>3224</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[74a55a74dc664451a84c2f7e45e680e6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2898157023.mp3?updated=1629243523"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Approaches to Fairness in Machine Learning with Richard Zemel - TWiML Talk #209</title>
      <link>https://twimlai.com/talk/209</link>
      <description>Today we continue our exploration of Trust in AI with this interview with Richard Zemel, Professor in the department of Computer Science at the University of Toronto and Research Director at Vector Institute.

In our conversation, Rich describes some of his work on fairness in machine learning algorithms, including how he defines both group and individual fairness and his group’s recent NeurIPS poster, “Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer.”</description>
      <pubDate>Wed, 12 Dec 2018 22:29:49 -0000</pubDate>
      <itunes:title>Approaches to Fairness in Machine Learning with Richard Zemel</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>209</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/638927f4-ee98-11eb-9502-93e039a89b28/image/TWIMLAI_Background_800x800_RZ_209.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue our exploration of Trust in AI with this interview with Richard Zemel, Professor in the department of Computer Science at the University of Toronto and Research Director at Vector Institute. In our conversation, Rich describes some...</itunes:subtitle>
      <itunes:summary>Today we continue our exploration of Trust in AI with this interview with Richard Zemel, Professor in the department of Computer Science at the University of Toronto and Research Director at Vector Institute.

In our conversation, Rich describes some of his work on fairness in machine learning algorithms, including how he defines both group and individual fairness and his group’s recent NeurIPS poster, “Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer.”</itunes:summary>
      <content:encoded>
        <![CDATA[Today we continue our exploration of Trust in AI with this interview with Richard Zemel, Professor in the department of Computer Science at the University of Toronto and Research Director at Vector Institute.

In our conversation, Rich describes some of his work on fairness in machine learning algorithms, including how he defines both group and individual fairness and his group’s recent NeurIPS poster, “Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer.” 

]]>
      </content:encoded>
      <itunes:duration>2731</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[11a4cfc106454c51af58a511dc161a87]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3145335145.mp3?updated=1629243531"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trust and AI with Parinaz Sobhani - TWiML Talk #208</title>
      <link>https://twimlai.com/talk/208</link>
      <description>In today’s episode we’re joined by Parinaz Sobhani, Director of Machine Learning at Georgian Partners. 

In our conversation, Parinaz and I discuss some of the main issues falling under the “trust” umbrella, such as transparency, fairness and accountability. We also explore some of the trust-related projects she and her team at Georgian are working on, as well as some of the interesting trust and privacy papers coming out of the NeurIPS conference.</description>
      <pubDate>Tue, 11 Dec 2018 16:53:15 -0000</pubDate>
      <itunes:title>Trust and AI with Parinaz Sobhani</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>208</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/63b35286-ee98-11eb-9502-27016623f9a9/image/TWIMLAI_Background_800x800_PS_208.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In today’s episode we’re joined by Parinaz Sobhani, Director of Machine Learning at Georgian Partners. In our conversation, Parinaz and I discuss some of the main issues falling under the “trust” umbrella, such as transparency, fairness and...</itunes:subtitle>
      <itunes:summary>In today’s episode we’re joined by Parinaz Sobhani, Director of Machine Learning at Georgian Partners. 

In our conversation, Parinaz and I discuss some of the main issues falling under the “trust” umbrella, such as transparency, fairness and accountability. We also explore some of the trust-related projects she and her team at Georgian are working on, as well as some of the interesting trust and privacy papers coming out of the NeurIPS conference.</itunes:summary>
      <content:encoded>
        <![CDATA[In today’s episode we’re joined by Parinaz Sobhani, Director of Machine Learning at Georgian Partners. 

In our conversation, Parinaz and I discuss some of the main issues falling under the “trust” umbrella, such as transparency, fairness and accountability. We also explore some of the trust-related projects she and her team at Georgian are working on, as well as some of the interesting trust and privacy papers coming out of the NeurIPS conference.]]>
      </content:encoded>
      <itunes:duration>2786</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[cf093d8708a84744bcfe48cce2eb935d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4306084494.mp3?updated=1629243536"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Unbiased Learning from Biased User Feedback with Thorsten Joachims - TWiML Talk #207</title>
      <link>https://twimlai.com/talk/207</link>
      <description>In the final episode of our re:Invent series, we're joined by Thorsten Joachims, Professor in the Department of Computer Science at Cornell University. We discuss his presentation “Unbiased Learning from Biased User Feedback,” looking at some of the inherent and introduced biases in recommender systems, and the ways to avoid them. We also discuss how inference techniques can be used to make learning algorithms more robust to bias, and how these can be enabled with the correct type of logging policies.</description>
      <pubDate>Fri, 07 Dec 2018 19:04:12 -0000</pubDate>
      <itunes:title>Unbiased Learning from Biased User Feedback with Thorsten Joachims</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>207</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/63d01402-ee98-11eb-9502-6b87715084b0/image/TWIMLAI_Background_800x800_TJ_207.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In the final episode of our re:Invent series, we're joined by Thorsten Joachims, Professor in the Department of Computer Science at Cornell University. Thorsten participated at the conference’s AI Summit, presenting his research on “Unbiased...</itunes:subtitle>
      <itunes:summary>In the final episode of our re:Invent series, we're joined by Thorsten Joachims, Professor in the Department of Computer Science at Cornell University. We discuss his presentation “Unbiased Learning from Biased User Feedback,” looking at some of the inherent and introduced biases in recommender systems, and the ways to avoid them. We also discuss how inference techniques can be used to make learning algorithms more robust to bias, and how these can be enabled with the correct type of logging policies.</itunes:summary>
      <content:encoded>
        <![CDATA[In the final episode of our re:Invent series, we're joined by Thorsten Joachims, Professor in the Department of Computer Science at Cornell University. We discuss his presentation “Unbiased Learning from Biased User Feedback,” looking at some of the inherent and introduced biases in recommender systems, and the ways to avoid them. We also discuss how inference techniques can be used to make learning algorithms more robust to bias, and how these can be enabled with the correct type of logging policies. ]]>
      </content:encoded>
      <itunes:duration>2444</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[1e5889152cc942e29339b1e2b709c484]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3890619515.mp3?updated=1629243459"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Language Parsing and Character Mining with Jinho Choi - TWiML Talk #206</title>
      <link>https://twimlai.com/talk/206</link>
      <description>Today we’re joined by Jinho Choi, assistant professor of computer science at Emory University.

Jinho presented at the conference on ELIT, their cloud-based NLP platform. In our conversation, we discuss some of the key NLP challenges that Jinho and his group are tackling, including language parsing and character mining. We also discuss their vision for ELIT, which is to make it easy for researchers to develop, access, and deploy cutting-edge NLP tools and models on the cloud.</description>
      <pubDate>Wed, 05 Dec 2018 22:31:54 -0000</pubDate>
      <itunes:title>Language Parsing and Character Mining with Jinho Choi</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>206</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/63f5ed1c-ee98-11eb-9502-d349fda79317/image/TWIMLAI_Background_800x800_JC_206.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today, in the second episode of our re:Invent series, we’re joined by Jinho Choi, assistant professor of computer science at Emory University. Jinho presented at the conference on ELIT — a cloud-based NLP platform — which is short for Evolution...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Jinho Choi, assistant professor of computer science at Emory University.

Jinho presented at the conference on ELIT, their cloud-based NLP platform. In our conversation, we discuss some of the key NLP challenges that Jinho and his group are tackling, including language parsing and character mining. We also discuss their vision for ELIT, which is to make it easy for researchers to develop, access, and deploy cutting-edge NLP tools and models on the cloud.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Jinho Choi, assistant professor of computer science at Emory University.

Jinho presented at the conference on ELIT, their cloud-based NLP platform. In our conversation, we discuss some of the key NLP challenges that Jinho and his group are tackling, including language parsing and character mining. We also discuss their vision for ELIT, which is to make it easy for researchers to develop, access, and deploy cutting-edge NLP tools and models on the cloud.

]]>
      </content:encoded>
      <itunes:duration>2853</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c1dbe828faca489abbd94ce8a414f11b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9744355856.mp3?updated=1629243502"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>re:Invent Roundup Roundtable 2018 with Dave McCrory and Val Bercovici - TWiML Talk #205</title>
      <link>https://twimlai.com/talk/205</link>
      <description>I’m excited to present our second annual re:Invent Roundtable Roundup. This year I’m joined by Dave McCrory, VP of Software Engineering at Wise.io at GE Digital, and Val Bercovici, Founder and CEO of Pencil Data. If you missed the news coming out of re:Invent, we cover all of AWS’ most important ML and AI announcements, including SageMaker Ground Truth, Reinforcement Learning, DeepRacer, Inferentia and Elastic Inference, ML Marketplace and much more.

For the show notes visit https://twimlai.com/ta</description>
      <pubDate>Mon, 03 Dec 2018 19:36:00 -0000</pubDate>
      <itunes:title>re:Invent Roundup Roundtable 2018 with Dave McCrory and Val Bercovici</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>205</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/64183a5c-ee98-11eb-9502-97b3ae736117/image/TWIMLAI_Background_800x800_re_205.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>For today’s show, I’m excited to present our second annual re:Invent Roundtable Roundup. This year I’m joined by my friends Dave McCrory, VP of Software Engineering at Wise.io at GE Digital, and Val Bercovici, Founder and CEO of Pencil...</itunes:subtitle>
      <itunes:summary>I’m excited to present our second annual re:Invent Roundtable Roundup. This year I’m joined by Dave McCrory, VP of Software Engineering at Wise.io at GE Digital, and Val Bercovici, Founder and CEO of Pencil Data. If you missed the news coming out of re:Invent, we cover all of AWS’ most important ML and AI announcements, including SageMaker Ground Truth, Reinforcement Learning, DeepRacer, Inferentia and Elastic Inference, ML Marketplace and much more.

For the show notes visit https://twimlai.com/ta</itunes:summary>
      <content:encoded>
        <![CDATA[I’m excited to present our second annual re:Invent Roundtable Roundup. This year I’m joined by Dave McCrory, VP of Software Engineering at Wise.io at GE Digital, and Val Bercovici, Founder and CEO of Pencil Data. If you missed the news coming out of re:Invent, we cover all of AWS’ most important ML and AI announcements, including SageMaker Ground Truth, Reinforcement Learning, DeepRacer, Inferentia and Elastic Inference, ML Marketplace and much more.

For the show notes visit https://twimlai.com/ta]]>
      </content:encoded>
      <itunes:duration>4055</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fcb66ff323d045d88d62a9563e87cc6f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6897734994.mp3?updated=1629243582"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Knowledge Graphs and Expert Augmentation with Marisa Boston - TWiML Talk #204</title>
      <link>https://twimlai.com/talk/204</link>
      <description>Today we’re joined by Marisa Boston, Director of Cognitive Technology in KPMG’s Cognitive Automation Lab. We caught up to discuss some of the ways that KPMG is using AI to build tools that help augment the knowledge of their teams of professionals. We discuss knowledge graphs and how they can be used to map out and relate various concepts and how they use these in conjunction with NLP tools to create insight engines. We also look at tools that curate and contextualize news and other text-based data sources.</description>
      <pubDate>Thu, 29 Nov 2018 23:34:58 -0000</pubDate>
      <itunes:title>Knowledge Graphs and Expert Augmentation with Marisa Boston</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>204</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/64390700-ee98-11eb-9502-e38894e37279/image/TWIMLAI_Background_800x800_MB_204.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Marisa Boston, Director of Cognitive Technology in KPMG’s Cognitive Automation Lab. Marisa and I caught up to discuss some of the ways that they’re using AI to build tools that help augment the knowledge of KPMG’s teams...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Marisa Boston, Director of Cognitive Technology in KPMG’s Cognitive Automation Lab. We caught up to discuss some of the ways that KPMG is using AI to build tools that help augment the knowledge of their teams of professionals. We discuss knowledge graphs and how they can be used to map out and relate various concepts and how they use these in conjunction with NLP tools to create insight engines. We also look at tools that curate and contextualize news and other text-based data sources.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Marisa Boston, Director of Cognitive Technology in KPMG’s Cognitive Automation Lab. We caught up to discuss some of the ways that KPMG is using AI to build tools that help augment the knowledge of their teams of professionals. We discuss knowledge graphs and how they can be used to map out and relate various concepts and how they use these in conjunction with NLP tools to create insight engines. We also look at tools that curate and contextualize news and other text-based data sources.]]>
      </content:encoded>
      <itunes:duration>2817</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d446b51d7d1640eda505da0905737481]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4985476878.mp3?updated=1629243577"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>ML/DL for Non-Stationary Time Series Analysis in Financial Markets and Beyond with Stuart Reid - TWiML Talk #203</title>
      <link>https://twimlai.com/talk/203</link>
      <description>Today, we’re joined by Stuart Reid, Chief Scientist at NMRQL Research.

NMRQL is an investment management firm that uses ML algorithms to make adaptive, unbiased, scalable, and testable trading decisions for its funds. In our conversation, Stuart and I dig into the way NMRQL uses ML and DL models to support the firm’s investment decisions. We focus on techniques for modeling non-stationary time-series, stationary vs non-stationary time-series, and challenges of building models using financial data.</description>
      <pubDate>Mon, 26 Nov 2018 21:59:47 -0000</pubDate>
      <itunes:title>ML/DL for Non-Stationary Time Series Analysis in Financial Markets and Beyond with Stuart Reid</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>203</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6462b5fa-ee98-11eb-9502-8be7ecf339e4/image/TWIMLAI_Background_800x800_SR_203.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today, we’re joined by Stuart Reid, Chief Scientist at NMRQL Research. NMRQL, based in Stellenbosch, South Africa, is an investment management firm that uses machine learning algorithms to make adaptive, unbiased, scalable, and testable trading...</itunes:subtitle>
      <itunes:summary>Today, we’re joined by Stuart Reid, Chief Scientist at NMRQL Research.

NMRQL is an investment management firm that uses ML algorithms to make adaptive, unbiased, scalable, and testable trading decisions for its funds. In our conversation, Stuart and I dig into the way NMRQL uses ML and DL models to support the firm’s investment decisions. We focus on techniques for modeling non-stationary time-series, stationary vs non-stationary time-series, and challenges of building models using financial data.</itunes:summary>
      <content:encoded>
        <![CDATA[Today, we’re joined by Stuart Reid, Chief Scientist at NMRQL Research.

NMRQL is an investment management firm that uses ML algorithms to make adaptive, unbiased, scalable, and testable trading decisions for its funds. In our conversation, Stuart and I dig into the way NMRQL uses ML and DL models to support the firm’s investment decisions. We focus on techniques for modeling non-stationary time-series, stationary vs non-stationary time-series, and challenges of building models using financial data.]]>
      </content:encoded>
      <itunes:duration>3509</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2aeda73f50f443fb9f0465d18c8d5169]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3296690977.mp3?updated=1629243618"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Industrializing Machine Learning at Shell with Daniel Jeavons - TWiML Talk #202</title>
      <link>https://twimlai.com/talk/202</link>
      <description>In this episode of our AI Platforms series, we’re joined by Daniel Jeavons, General Manager of Data Science at Shell.

In our conversation, we explore the evolution of analytics and data science at Shell, discussing IoT-related applications and issues, such as inference at the edge, federated ML, and digital twins, all key considerations for the way they apply ML. We also talk about the data science process at Shell and the importance of platform technologies to the company as a whole.</description>
      <pubDate>Wed, 21 Nov 2018 16:32:20 -0000</pubDate>
      <itunes:title>Industrializing Machine Learning at Shell with Daniel Jeavons</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>202</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6492934c-ee98-11eb-9502-7bd91cdacd17/image/TWIMLAI_Background_800x800_DJ_202.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our AI Platforms series, we’re joined by Daniel Jeavons, General Manager of Data Science at Shell. In our conversation, Daniel and I explore the evolution of analytics and data science at Shell, and cover a ton of interesting...</itunes:subtitle>
      <itunes:summary>In this episode of our AI Platforms series, we’re joined by Daniel Jeavons, General Manager of Data Science at Shell.

In our conversation, we explore the evolution of analytics and data science at Shell, discussing IoT-related applications and issues, such as inference at the edge, federated ML, and digital twins, all key considerations for the way they apply ML. We also talk about the data science process at Shell and the importance of platform technologies to the company as a whole.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our AI Platforms series, we’re joined by Daniel Jeavons, General Manager of Data Science at Shell.

In our conversation, we explore the evolution of analytics and data science at Shell, discussing IoT-related applications and issues, such as inference at the edge, federated ML, and digital twins, all key considerations for the way they apply ML. We also talk about the data science process at Shell and the importance of platform technologies to the company as a whole.]]>
      </content:encoded>
      <itunes:duration>2719</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b76a3a7f7e6a4df8847639df2ba80ed6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7676868639.mp3?updated=1629243478"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Resurrecting a Recommendations Platform at Comcast with Leemay Nassery - TWiML Talk #201</title>
      <link>https://twimlai.com/talk/201</link>
      <description>In this episode of our AI Platforms series, we’re joined by Leemay Nassery, Senior Engineering Manager and head of the recommendations team at Comcast. In our conversation, Leemay and I discuss just how she and her team resurrected the Xfinity X1 recommendations platform, including rebuilding the data pipeline, the machine learning process, and the deployment and training of their updated models. We also touch on the importance of A/B testing and maintaining their rebuilt infrastructure.</description>
      <pubDate>Mon, 19 Nov 2018 19:19:55 -0000</pubDate>
      <itunes:title>Resurrecting a Recommendations Platform at Comcast with Leemay Nassery</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>201</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/64b84e8e-ee98-11eb-9502-7bc5588c73bb/image/TWIMLAI_Background_800x800_LN_201.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our AI Platforms series, we’re joined by Leemay Nassery, Senior Engineering Manager and head of the recommendations team at Comcast. Leemay spoke at the Strange Loop conference a few months ago on “Resurrecting a recommendations...</itunes:subtitle>
      <itunes:summary>In this episode of our AI Platforms series, we’re joined by Leemay Nassery, Senior Engineering Manager and head of the recommendations team at Comcast. In our conversation, Leemay and I discuss just how she and her team resurrected the Xfinity X1 recommendations platform, including rebuilding the data pipeline, the machine learning process, and the deployment and training of their updated models. We also touch on the importance of A/B testing and maintaining their rebuilt infrastructure.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our AI Platforms series, we’re joined by Leemay Nassery, Senior Engineering Manager and head of the recommendations team at Comcast. In our conversation, Leemay and I discuss just how she and her team resurrected the Xfinity X1 recommendations platform, including rebuilding the data pipeline, the machine learning process, and the deployment and training of their updated models. We also touch on the importance of A/B testing and maintaining their rebuilt infrastructure.]]>
      </content:encoded>
      <itunes:duration>2856</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[c1b1d5782541499fa8608bea81617917]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2834472319.mp3?updated=1629243546"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Productive Machine Learning at LinkedIn with Bee-Chung Chen - TWiML Talk #200</title>
      <link>https://twimlai.com/talk/200</link>
      <description>In this episode of our AI Platforms series, we’re joined by Bee-Chung Chen, Principal Staff Engineer and Applied Researcher at LinkedIn. Bee-Chung and I caught up to discuss LinkedIn’s internal AI automation platform, Pro-ML. Bee-Chung breaks down some of the major pieces of the pipeline, LinkedIn’s experience bringing Pro-ML to the company's developers and the role the LinkedIn AI Academy plays in helping them get up to speed.

For the complete show notes, visit https://twimlai.com/talk/200.</description>
      <pubDate>Thu, 15 Nov 2018 20:05:16 -0000</pubDate>
      <itunes:title>Productive Machine Learning at LinkedIn with Bee-Chung Chen</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>200</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/64dbb414-ee98-11eb-9502-2b60839e10d1/image/TWIMLAI_Background_800x800_BCC_200.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our AI Platforms series, we’re joined by Bee-Chung Chen, Principal Staff Engineer and Applied Researcher at LinkedIn. Bee-Chung and I caught up to discuss LinkedIn’s internal AI automation platform, Pro-ML, which was built with...</itunes:subtitle>
      <itunes:summary>In this episode of our AI Platforms series, we’re joined by Bee-Chung Chen, Principal Staff Engineer and Applied Researcher at LinkedIn. Bee-Chung and I caught up to discuss LinkedIn’s internal AI automation platform, Pro-ML. Bee-Chung breaks down some of the major pieces of the pipeline, LinkedIn’s experience bringing Pro-ML to the company's developers and the role the LinkedIn AI Academy plays in helping them get up to speed.

For the complete show notes, visit https://twimlai.com/talk/200.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our AI Platforms series, we’re joined by Bee-Chung Chen, Principal Staff Engineer and Applied Researcher at LinkedIn. Bee-Chung and I caught up to discuss LinkedIn’s internal AI automation platform, Pro-ML. Bee-Chung breaks down some of the major pieces of the pipeline, LinkedIn’s experience bringing Pro-ML to the company's developers and the role the LinkedIn AI Academy plays in helping them get up to speed.

For the complete show notes, visit https://twimlai.com/talk/200.

]]>
      </content:encoded>
      <itunes:duration>2858</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fcd8cb6833534f6aafd713e4e4180827]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1407537945.mp3?updated=1629243504"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scaling Deep Learning on Kubernetes at OpenAI with Christopher Berner - TWiML Talk #199</title>
      <link>https://twimlai.com/talk/199</link>
      <description>In this episode of our AI Platforms series we’re joined by OpenAI’s Head of Infrastructure, Christopher Berner. In our conversation, we discuss the evolution of OpenAI’s deep learning platform, the core principles which have guided that evolution, and its current architecture. We dig deep into their use of Kubernetes and discuss various ecosystem players and projects that support running deep learning at scale on the open source project.</description>
      <pubDate>Mon, 12 Nov 2018 20:15:06 -0000</pubDate>
      <itunes:title>Scaling Deep Learning on Kubernetes at OpenAI with Christopher Berner</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>199</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6504faea-ee98-11eb-9502-37a948e00e17/image/TWIMLAI_Background_800x800_CB_199.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our AI Platforms series we’re joined by OpenAI’s Head of Infrastructure, Christopher Berner. Chris has played a key role in overhauling OpenAI’s deep learning infrastructure over the course of his two years with the company. In...</itunes:subtitle>
      <itunes:summary>In this episode of our AI Platforms series we’re joined by OpenAI’s Head of Infrastructure, Christopher Berner. In our conversation, we discuss the evolution of OpenAI’s deep learning platform, the core principles which have guided that evolution, and its current architecture. We dig deep into their use of Kubernetes and discuss various ecosystem players and projects that support running deep learning at scale on the open source project.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our AI Platforms series we’re joined by OpenAI’s Head of Infrastructure, Christopher Berner. In our conversation, we discuss the evolution of OpenAI’s deep learning platform, the core principles which have guided that evolution, and its current architecture. We dig deep into their use of Kubernetes and discuss various ecosystem players and projects that support running deep learning at scale on the open source project.
]]>
      </content:encoded>
      <itunes:duration>2997</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[719a00d19f644b2bb4f12b0a9dc17dd9]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5546802736.mp3?updated=1629243516"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Bighead: Airbnb's Machine Learning Platform with Atul Kale - TWiML Talk #198</title>
      <link>https://twimlai.com/talk/198</link>
      <description>In this episode of our AI Platforms series, we’re joined by Atul Kale, Engineering Manager on the machine learning infrastructure team at Airbnb.

In our conversation, we discuss Airbnb’s internal machine learning platform, Bighead. Atul outlines the ML lifecycle at Airbnb and how the various components of Bighead support it. We then dig into the major components of Bighead, some of Atul’s best practices for scaling machine learning, and a special announcement that Atul and his team made at Strata.</description>
      <pubDate>Thu, 08 Nov 2018 20:17:11 -0000</pubDate>
      <itunes:title>Bighead: Airbnb's Machine Learning Platform with Atul Kale</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>198</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/652d1872-ee98-11eb-9502-3be2eb4de603/image/TWIMLAI_Background_800x800_AK_198.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our AI Platforms series, we’re joined by Atul Kale, Engineering Manager on the machine learning infrastructure team at Airbnb. Atul and I met at the Strata Data conference a while back to discuss Airbnb’s internal machine...</itunes:subtitle>
      <itunes:summary>In this episode of our AI Platforms series, we’re joined by Atul Kale, Engineering Manager on the machine learning infrastructure team at Airbnb.

In our conversation, we discuss Airbnb’s internal machine learning platform, Bighead. Atul outlines the ML lifecycle at Airbnb and how the various components of Bighead support it. We then dig into the major components of Bighead, some of Atul’s best practices for scaling machine learning, and a special announcement that Atul and his team made at Strata.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our AI Platforms series, we’re joined by Atul Kale, Engineering Manager on the machine learning infrastructure team at Airbnb.

In our conversation, we discuss Airbnb’s internal machine learning platform, Bighead. Atul outlines the ML lifecycle at Airbnb and how the various components of Bighead support it. We then dig into the major components of Bighead, some of Atul’s best practices for scaling machine learning, and a special announcement that Atul and his team made at Strata.
]]>
      </content:encoded>
      <itunes:duration>2984</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[80576f1140d04148b9499761afc2dd51]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7150254751.mp3?updated=1629243508"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Facebook's FBLearner Platform with Aditya Kalro - TWiML Talk #197</title>
      <link>https://twimlai.com/talk/197</link>
      <description>In the kickoff episode of our AI Platforms series, we’re joined by Aditya Kalro, Engineering Manager at Facebook, to discuss their internal machine learning platform FBLearner Flow. FBLearner Flow is the workflow management platform at the heart of the Facebook ML engineering ecosystem. We discuss the history and development of the platform, as well as its functionality and its evolution from an initial focus on model training to supporting the entire ML lifecycle at Facebook.</description>
      <pubDate>Tue, 06 Nov 2018 21:53:16 -0000</pubDate>
      <itunes:title>Facebook's FBLearner Platform with Aditya Kalro</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>197</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/655a05c6-ee98-11eb-9502-6b3036122315/image/TWIMLAI_Background_800x800_AK_197.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this, the kickoff episode of our AI Platforms series, we’re joined by Aditya Kalro, Engineering Manager at Facebook, to discuss their internal machine learning platform FBLearner Flow. Introduced in May of 2016, FBLearner Flow is the workflow...</itunes:subtitle>
      <itunes:summary>In the kickoff episode of our AI Platforms series, we’re joined by Aditya Kalro, Engineering Manager at Facebook, to discuss their internal machine learning platform FBLearner Flow. FBLearner Flow is the workflow management platform at the heart of the Facebook ML engineering ecosystem. We discuss the history and development of the platform, as well as its functionality and its evolution from an initial focus on model training to supporting the entire ML lifecycle at Facebook.</itunes:summary>
      <content:encoded>
        <![CDATA[In the kickoff episode of our AI Platforms series, we’re joined by Aditya Kalro, Engineering Manager at Facebook, to discuss their internal machine learning platform FBLearner Flow. FBLearner Flow is the workflow management platform at the heart of the Facebook ML engineering ecosystem. We discuss the history and development of the platform, as well as its functionality and its evolution from an initial focus on model training to supporting the entire ML lifecycle at Facebook. ]]>
      </content:encoded>
      <itunes:duration>2318</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2f76c22666274630ad858fb35d2f06ba]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5757726710.mp3?updated=1629243349"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Geometric Statistics in Machine Learning w/ geomstats with Nina Miolane - TWiML Talk #196</title>
      <link>https://twimlai.com/talk/196</link>
      <description>In this episode we’re joined by Nina Miolane, researcher and lecturer at Stanford University. Nina and I spoke about her work in the field of geometric statistics in ML, specifically the application of Riemannian geometry, which is the study of curved surfaces, to ML. In our discussion we review the differences between Riemannian and Euclidean geometry in theory and her new Geomstats project, which is a Python package that simplifies computations and statistics on manifolds with geometric structures.</description>
      <pubDate>Thu, 01 Nov 2018 16:40:44 -0000</pubDate>
      <itunes:title>Geometric Statistics in Machine Learning w/ geomstats with Nina Miolane</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>196</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/65807288-ee98-11eb-9502-0b412f499ad5/image/TWIMLAI_Background_800x800_NM_196.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode we’re joined by Nina Miolane, researcher and lecturer at Stanford University. Nina and I recently spoke about her work in the field of geometric statistics in machine learning. Specifically, we discuss the application of Riemannian...</itunes:subtitle>
      <itunes:summary>In this episode we’re joined by Nina Miolane, researcher and lecturer at Stanford University. Nina and I spoke about her work in the field of geometric statistics in ML, specifically the application of Riemannian geometry, which is the study of curved surfaces, to ML. In our discussion we review the differences between Riemannian and Euclidean geometry in theory and her new Geomstats project, which is a Python package that simplifies computations and statistics on manifolds with geometric structures.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode we’re joined by Nina Miolane, researcher and lecturer at Stanford University. Nina and I spoke about her work in the field of geometric statistics in ML, specifically the application of Riemannian geometry, which is the study of curved surfaces, to ML. In our discussion we review the differences between Riemannian and Euclidean geometry in theory and her new Geomstats project, which is a Python package that simplifies computations and statistics on manifolds with geometric structures.]]>
      </content:encoded>
      <itunes:duration>2623</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[fa24fcb890834d7f8f7be9d6f0ce3dde]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6600420400.mp3?updated=1629243540"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Milestones in Neural Natural Language Processing with Sebastian Ruder - TWiML Talk #195</title>
      <link>https://twimlai.com/talk/195</link>
      <description>In this episode, we’re joined by Sebastian Ruder, a PhD student studying NLP at the National University of Ireland and a Research Scientist at text analysis startup Aylien. We discuss recent milestones in neural NLP, including multi-task learning and pretrained language models. We also look at the use of attention-based models, Tree RNNs and LSTMs, and memory-based networks. Finally, Sebastian walks us through his ULMFit paper, which he co-authored with Jeremy Howard of fast.ai, who I interviewed in episode 186.</description>
      <pubDate>Mon, 29 Oct 2018 20:16:23 -0000</pubDate>
      <itunes:title>Milestones in Neural Natural Language Processing with Sebastian Ruder</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>195</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/65a70600-ee98-11eb-9502-6f9006a1e78d/image/TWIMLAI_Background_800x800_SR_195.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, we’re joined by Sebastian Ruder, a PhD student studying natural language processing at the National University of Ireland and a Research Scientist at text analysis startup Aylien. In our conversation, Sebastian and I discuss recent...</itunes:subtitle>
      <itunes:summary>In this episode, we’re joined by Sebastian Ruder, a PhD student studying NLP at the National University of Ireland and a Research Scientist at text analysis startup Aylien. We discuss recent milestones in neural NLP, including multi-task learning and pretrained language models. We also look at the use of attention-based models, Tree RNNs and LSTMs, and memory-based networks. Finally, Sebastian walks us through his ULMFit paper, which he co-authored with Jeremy Howard of fast.ai, who I interviewed in episode 186.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, we’re joined by Sebastian Ruder, a PhD student studying NLP at the National University of Ireland and a Research Scientist at text analysis startup Aylien. We discuss recent milestones in neural NLP, including multi-task learning and pretrained language models. We also look at the use of attention-based models, Tree RNNs and LSTMs, and memory-based networks. Finally, Sebastian walks us through his ULMFit paper, which he co-authored with Jeremy Howard of fast.ai, who I interviewed in episode 186.]]>
      </content:encoded>
      <itunes:duration>3675</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[677975a2314042a6bf2de3117f6d9e75]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6712140794.mp3?updated=1629216926"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Natural Language Processing at StockTwits with Garrett Hoffman - TWiML Talk #194</title>
      <link>https://twimlai.com/talk/194</link>
      <description>In this episode, we’re joined by Garrett Hoffman, Director of Data Science at Stocktwits. Stocktwits is a social network for the investing community which has its roots in the use of the $cashtag on Twitter. In our conversation, we discuss applications such as Stocktwits’ own use of “social sentiment graphs” built on multilayer LSTM networks to gauge community sentiment about certain stocks in real time, as well as the more general use of natural language processing for generating trading ideas.</description>
      <pubDate>Thu, 25 Oct 2018 21:22:02 -0000</pubDate>
      <itunes:title>Natural Language Processing at StockTwits with Garrett Hoffman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>194</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/65d31272-ee98-11eb-9502-6b5ed025807b/image/TWIMLAI_Background_800x800_GH_194.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, we’re joined by Garrett Hoffman, Director of Data Science at Stocktwits. Garrett and I caught up at last month’s Strata Data conference, where he presented a tutorial on “Deep Learning Methods for NLP with Emphasis on Financial...</itunes:subtitle>
      <itunes:summary>In this episode, we’re joined by Garrett Hoffman, Director of Data Science at Stocktwits. Stocktwits is a social network for the investing community which has its roots in the use of the $cashtag on Twitter. In our conversation, we discuss applications such as Stocktwits’ own use of “social sentiment graphs” built on multilayer LSTM networks to gauge community sentiment about certain stocks in real time, as well as the more general use of natural language processing for generating trading ideas.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, we’re joined by Garrett Hoffman, Director of Data Science at Stocktwits. Stocktwits is a social network for the investing community which has its roots in the use of the $cashtag on Twitter. In our conversation, we discuss applications such as Stocktwits’ own use of “social sentiment graphs” built on multilayer LSTM networks to gauge community sentiment about certain stocks in real time, as well as the more general use of natural language processing for generating trading ideas. ]]>
      </content:encoded>
      <itunes:duration>3056</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[bb5217b84ac84ce8a6bdf372c9ce3e64]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6211095501.mp3?updated=1629243524"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Advanced Reinforcement Learning &amp; Data Science for Social Impact with Vukosi Marivate - TWiML Talk #193</title>
      <link>https://twimlai.com/talk/193</link>
      <description>In the final episode of our Deep Learning Indaba series, we speak with Vukosi Marivate, Chair of Data Science at the University of Pretoria and a co-organizer of the Indaba.

My conversation with Vukosi falls into two distinct parts, his PhD research in reinforcement learning, and his current research, which falls under the banner of data science with social impact. We discuss several advanced RL scenarios, along with several applications he is currently exploring in areas like public safety and energy.</description>
      <pubDate>Tue, 23 Oct 2018 19:30:30 -0000</pubDate>
      <itunes:title>Advanced Reinforcement Learning &amp; Data Science for Social Impact with Vukosi Marivate</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>193</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/65fb921a-ee98-11eb-9502-5f48b987b874/image/TWIMLAI_Background_800x800_VM_193.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this, the final show of our Deep Learning Indaba Series, we speak with Vukosi Marivate, Chair of Data Science at the University of Pretoria and a co-organizer of the Indaba. My conversation with Vukosi fell into two distinct parts. The first part...</itunes:subtitle>
      <itunes:summary>In the final episode of our Deep Learning Indaba series, we speak with Vukosi Marivate, Chair of Data Science at the University of Pretoria and a co-organizer of the Indaba.

My conversation with Vukosi falls into two distinct parts, his PhD research in reinforcement learning, and his current research, which falls under the banner of data science with social impact. We discuss several advanced RL scenarios, along with several applications he is currently exploring in areas like public safety and energy.</itunes:summary>
      <content:encoded>
        <![CDATA[In the final episode of our Deep Learning Indaba series, we speak with Vukosi Marivate, Chair of Data Science at the University of Pretoria and a co-organizer of the Indaba.

My conversation with Vukosi falls into two distinct parts, his PhD research in reinforcement learning, and his current research, which falls under the banner of data science with social impact. We discuss several advanced RL scenarios, along with several applications he is currently exploring in areas like public safety and energy.
]]>
      </content:encoded>
      <itunes:duration>2796</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0683057d2629447b87146772285229a6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1166081876.mp3?updated=1629216912"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Ethics, Strategic Decisioning and Game Theory with Osonde Osoba - TWiML Talk #192</title>
      <link>https://twimlai.com/talk/192</link>
      <description>In this episode of our Deep Learning Indaba Series, we’re joined by Osonde Osoba, Engineer at RAND Corporation.

Osonde and I spoke on the heels of the Indaba, where he presented on AI Ethics and Policy. We discuss his framework-based approach for evaluating ethical issues and how to build an intuition for where ethical flashpoints may exist in these discussions. We also discuss Osonde’s own model development research, including the application of machine learning to strategic decisions and game theory.</description>
      <pubDate>Thu, 18 Oct 2018 14:59:28 -0000</pubDate>
      <itunes:title>AI Ethics, Strategic Decisioning and Game Theory with Osonde Osoba</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>192</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/66217a7a-ee98-11eb-9502-437a91cda331/image/TWIMLAI_Background_800x800_OO_192.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our Deep Learning Indaba Series, we’re joined by Osonde Osoba, Engineer at RAND Corporation and Professor at the Pardee RAND Graduate School. Osonde and I spoke on the heels of the Indaba, where he presented on AI Ethics and...</itunes:subtitle>
      <itunes:summary>In this episode of our Deep Learning Indaba Series, we’re joined by Osonde Osoba, Engineer at RAND Corporation.

Osonde and I spoke on the heels of the Indaba, where he presented on AI Ethics and Policy. We discuss his framework-based approach for evaluating ethical issues and how to build an intuition for where ethical flashpoints may exist in these discussions. We also discuss Osonde’s own model development research, including the application of machine learning to strategic decisions and game theory.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our Deep Learning Indaba Series, we’re joined by Osonde Osoba, Engineer at RAND Corporation.

Osonde and I spoke on the heels of the Indaba, where he presented on AI Ethics and Policy. We discuss his framework-based approach for evaluating ethical issues and how to build an intuition for where ethical flashpoints may exist in these discussions. We also discuss Osonde’s own model development research, including the application of machine learning to strategic decisions and game theory.]]>
      </content:encoded>
      <itunes:duration>2823</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[aa0ae70808074cd5b79a6fc8fa5b3ad9]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6999187891.mp3?updated=1629216915"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Acoustic Word Embeddings for Low Resource Speech Processing with Herman Kamper - TWiML Talk #191</title>
      <link>https://twimlai.com/talk/191</link>
      <description>In this episode of our Deep Learning Indaba Series, we’re joined by Herman Kamper, lecturer at Stellenbosch University in SA and a co-organizer of the Indaba.

We discuss his work on limited- and zero-resource speech recognition, how those differ from regular speech recognition, and the tension between linguistic and statistical methods in this space. We also dive into the specifics of the methods being used and developed in Herman’s lab.</description>
      <pubDate>Tue, 16 Oct 2018 16:47:40 -0000</pubDate>
      <itunes:title>Acoustic Word Embeddings for Low Resource Speech Processing with Herman Kamper</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>191</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6647db70-ee98-11eb-9502-4b58aabed3bc/image/TWIMLAI_Background_800x800_HH_191.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our Deep Learning Indaba Series, we’re joined by Herman Kamper, Lecturer in the electrical and electronics engineering department at Stellenbosch University in SA and a co-organizer of the Indaba. Herman and I discuss his work on...</itunes:subtitle>
      <itunes:summary>In this episode of our Deep Learning Indaba Series, we’re joined by Herman Kamper, lecturer at Stellenbosch University in SA and a co-organizer of the Indaba.

We discuss his work on limited- and zero-resource speech recognition, how those differ from regular speech recognition, and the tension between linguistic and statistical methods in this space. We also dive into the specifics of the methods being used and developed in Herman’s lab.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our Deep Learning Indaba Series, we’re joined by Herman Kamper, lecturer at Stellenbosch University in SA and a co-organizer of the Indaba.

We discuss his work on limited- and zero-resource speech recognition, how those differ from regular speech recognition, and the tension between linguistic and statistical methods in this space. We also dive into the specifics of the methods being used and developed in Herman’s lab.
]]>
      </content:encoded>
      <itunes:duration>3687</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d1fb432e28874f9c8fdf0a5c1cc3cffa]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1192508464.mp3?updated=1629216930"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Learning Representations for Visual Search with Naila Murray - TWiML Talk #190</title>
      <link>https://twimlai.com/talk/190</link>
      <description>In this episode of our Deep Learning Indaba series, we’re joined by Naila Murray, Senior Research Scientist and Group Lead in the computer vision group at Naver Labs Europe.

Naila presented at the Indaba on computer vision. In this discussion, we explore her work on visual attention, including why visual attention is important and the trajectory of work in the field over time. We also discuss her paper “Generalized Max Pooling,” and much more!

For the complete show notes, visit twimlai.com/tal</description>
      <pubDate>Fri, 12 Oct 2018 16:52:54 -0000</pubDate>
      <itunes:title>Learning Representations for Visual Search with Naila Murray</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>190</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/66752a08-ee98-11eb-9502-030b60bcec86/image/TWIMLAI_Background_800x800_NM_190.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our Deep Learning Indaba series, we’re joined by Naila Murray, Senior Research Scientist and Group Lead in the computer vision group at Naver Labs Europe. Naila presented at the Indaba on computer vision, and in this discussion we...</itunes:subtitle>
      <itunes:summary>In this episode of our Deep Learning Indaba series, we’re joined by Naila Murray, Senior Research Scientist and Group Lead in the computer vision group at Naver Labs Europe.

Naila presented at the Indaba on computer vision. In this discussion, we explore her work on visual attention, including why visual attention is important and the trajectory of work in the field over time. We also discuss her paper “Generalized Max Pooling,” and much more!

For the complete show notes, visit twimlai.com/tal</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our Deep Learning Indaba series, we’re joined by Naila Murray, Senior Research Scientist and Group Lead in the computer vision group at Naver Labs Europe.

Naila presented at the Indaba on computer vision. In this discussion, we explore her work on visual attention, including why visual attention is important and the trajectory of work in the field over time. We also discuss her paper “Generalized Max Pooling,” and much more!

For the complete show notes, visit twimlai.com/tal]]>
      </content:encoded>
      <itunes:duration>2493</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0e689b1f02df40269811e491668b4ee8]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4395686493.mp3?updated=1629216912"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Evaluating Model Explainability Methods with Sara Hooker - TWiML Talk #189</title>
      <link>https://twimlai.com/talk/189</link>
      <description>In this, the first episode of the Deep Learning Indaba series, we’re joined by Sara Hooker, AI Resident at Google Brain. I spoke with Sara in the run-up to the Indaba about her work on interpretability in deep neural networks. We discuss what interpretability means and nuances like the distinction between interpreting model decisions vs model function. We also talk about the relationship between Google Brain and the rest of the Google AI landscape and the significance of the Google AI Lab in Accra, Ghana.</description>
      <pubDate>Wed, 10 Oct 2018 18:24:51 -0000</pubDate>
      <itunes:title>Evaluating Model Explainability Methods with Sara Hooker</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>189</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/66950742-ee98-11eb-9502-9f4406f40bc7/image/TWIMLAI_Background_800x800_SH_189.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this, the first episode of the Deep Learning Indaba series, we’re joined by Sara Hooker, AI Resident at Google Brain. I had the pleasure of speaking with Sara in the run-up to the Indaba about her work on interpretability in deep neural networks....</itunes:subtitle>
      <itunes:summary>In this, the first episode of the Deep Learning Indaba series, we’re joined by Sara Hooker, AI Resident at Google Brain. I spoke with Sara in the run-up to the Indaba about her work on interpretability in deep neural networks. We discuss what interpretability means and nuances like the distinction between interpreting model decisions vs model function. We also talk about the relationship between Google Brain and the rest of the Google AI landscape and the significance of the Google AI Lab in Accra, Ghana.</itunes:summary>
      <content:encoded>
        <![CDATA[In this, the first episode of the Deep Learning Indaba series, we’re joined by Sara Hooker, AI Resident at Google Brain. I spoke with Sara in the run-up to the Indaba about her work on interpretability in deep neural networks. We discuss what interpretability means and nuances like the distinction between interpreting model decisions vs model function. We also talk about the relationship between Google Brain and the rest of the Google AI landscape and the significance of the Google AI Lab in Accra, Ghana.]]>
      </content:encoded>
      <itunes:duration>3837</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[932e014f009b4318b6b8e238ffe04c3e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6010694699.mp3?updated=1629216932"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Graph Analytic Systems with Zachary Hanif - TWiML Talk #188</title>
      <link>https://twimlai.com/talk/188</link>
      <description>In this, the final episode of our Strata Data Conference series, we’re joined by Zachary Hanif, Director of Machine Learning at Capital One’s Center for Machine Learning. 

We start our discussion with a look at the role of graph analytics in the ML toolkit, including some important application areas for graph-based systems. Zach gives us an overview of the different ways to implement graph analytics, including what he calls graphical processing engines, which excel at handling large datasets, and much more!</description>
      <pubDate>Mon, 08 Oct 2018 19:49:27 -0000</pubDate>
      <itunes:title>Graph Analytic Systems with Zachary Hanif</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>188</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/66b541d8-ee98-11eb-9502-1b737ff08cc6/image/TWIMLAI_Background_800x800_ZH_188.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this, the final episode of our Strata Data Conference series, we’re joined by Zachary Hanif, Director of Machine Learning at Capital One’s Center for Machine Learning. Zach led a session at Strata called “Network effects: Working with modern...</itunes:subtitle>
      <itunes:summary>In this, the final episode of our Strata Data Conference series, we’re joined by Zachary Hanif, Director of Machine Learning at Capital One’s Center for Machine Learning. 

We start our discussion with a look at the role of graph analytics in the ML toolkit, including some important application areas for graph-based systems. Zach gives us an overview of the different ways to implement graph analytics, including what he calls graphical processing engines, which excel at handling large datasets, and much more!</itunes:summary>
      <content:encoded>
        <![CDATA[In this, the final episode of our Strata Data Conference series, we’re joined by Zachary Hanif, Director of Machine Learning at Capital One’s Center for Machine Learning. 

We start our discussion with a look at the role of graph analytics in the ML toolkit, including some important application areas for graph-based systems. Zach gives us an overview of the different ways to implement graph analytics, including what he calls graphical processing engines, which excel at handling large datasets, and much more!]]>
      </content:encoded>
      <itunes:duration>3247</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f05b6dd3c1e14a96b37694d4180d9608]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6446801866.mp3?updated=1629216926"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Diversification in Recommender Systems with Ahsan Ashraf - TWiML Talk #187</title>
      <link>https://twimlai.com/talk/187</link>
<description>In this episode of our Strata Data conference series, we’re joined by Ahsan Ashraf, data scientist at Pinterest. We discuss his presentation, “Diversification in recommender systems: Using topical variety to increase user satisfaction,” covering the experiments his team ran to explore the impact of diversification in users’ boards, the methodology his team used to incorporate variety into the Pinterest recommendation system, and much more!

The show notes can be found at https://twimlai.com/talk/18</description>
      <pubDate>Thu, 04 Oct 2018 17:28:05 -0000</pubDate>
      <itunes:title>Diversification in Recommender Systems with Ahsan Ashraf</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>187</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/66dacbce-ee98-11eb-9502-4b307b99fab7/image/TWIMLAI_Background_800x800_AA_187.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our Strata Data conference series, we’re joined by Ahsan Ashraf, data scientist at Pinterest. In our conversation, Ahsan and I discuss his presentation from the conference, “Diversification in recommender systems: Using topical...</itunes:subtitle>
<itunes:summary>In this episode of our Strata Data conference series, we’re joined by Ahsan Ashraf, data scientist at Pinterest. We discuss his presentation, “Diversification in recommender systems: Using topical variety to increase user satisfaction,” covering the experiments his team ran to explore the impact of diversification in users’ boards, the methodology his team used to incorporate variety into the Pinterest recommendation system, and much more!

The show notes can be found at https://twimlai.com/talk/18</itunes:summary>
      <content:encoded>
<![CDATA[In this episode of our Strata Data conference series, we’re joined by Ahsan Ashraf, data scientist at Pinterest. We discuss his presentation, “Diversification in recommender systems: Using topical variety to increase user satisfaction,” covering the experiments his team ran to explore the impact of diversification in users’ boards, the methodology his team used to incorporate variety into the Pinterest recommendation system, and much more!

The show notes can be found at https://twimlai.com/talk/18]]>
      </content:encoded>
      <itunes:duration>2674</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a1c525436282408cbee9170d1ad60497]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6945124653.mp3?updated=1629216907"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Fastai v1 Deep Learning Framework with Jeremy Howard - TWiML Talk #186</title>
      <link>https://twimlai.com/twiml-talk-186-the-fastai-v1-deep-learning-framework-with-jeremy-howard</link>
      <description>In today's episode we're presenting a special conversation with Jeremy Howard, founder and researcher at Fast.ai. This episode is being released today in conjunction with the company’s announcement of version 1.0 of their fastai library at the inaugural Pytorch Devcon in San Francisco. 

In our conversation, we dive into the new library, exploring why it’s important and what’s changed, the unique way in which it was developed, what it means for the future of the fast.ai courses, and much more!</description>
      <pubDate>Tue, 02 Oct 2018 16:13:49 -0000</pubDate>
      <itunes:title>The Fastai v1 Deep Learning Framework with Jeremy Howard</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>186</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/66f64368-ee98-11eb-9502-3764a2480f85/image/TWIMLAI_Background_800x800_JH_186.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
<itunes:subtitle>In today's episode we’ll be taking a break from our Strata Data series and presenting a special conversation with Jeremy Howard, founder and researcher at Fast.ai. Fast.ai is a company many of our listeners are quite familiar with due to their popular deep learning...</itunes:subtitle>
      <itunes:summary>In today's episode we're presenting a special conversation with Jeremy Howard, founder and researcher at Fast.ai. This episode is being released today in conjunction with the company’s announcement of version 1.0 of their fastai library at the inaugural Pytorch Devcon in San Francisco. 

In our conversation, we dive into the new library, exploring why it’s important and what’s changed, the unique way in which it was developed, what it means for the future of the fast.ai courses, and much more!</itunes:summary>
      <content:encoded>
        <![CDATA[In today's episode we're presenting a special conversation with Jeremy Howard, founder and researcher at Fast.ai. This episode is being released today in conjunction with the company’s announcement of version 1.0 of their fastai library at the inaugural Pytorch Devcon in San Francisco. 

In our conversation, we dive into the new library, exploring why it’s important and what’s changed, the unique way in which it was developed, what it means for the future of the fast.ai courses, and much more!]]>
      </content:encoded>
      <itunes:duration>4277</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[77e22a05aad6443a88aa31d2740691f3]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5657245830.mp3?updated=1629216955"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Federated ML for Edge Applications with Justin Norman - TWiML Talk #185</title>
      <link>https://twimlai.com/twiml-talk-185-federated-ml-for-edge-applications-with-justin-norman</link>
      <description>In this episode we’re joined by Justin Norman, Director of Research and Data Science Services at Cloudera Fast Forward Labs. In my chat with Justin we start with an update on the company before diving into a look at some of recent and upcoming research projects. Specifically, we discuss their recent report on Multi-Task Learning and their upcoming research into Federated Machine Learning for AI at the edge. 

For the complete show notes, visit https://twimlai.com/talk/185.</description>
      <pubDate>Thu, 27 Sep 2018 21:40:25 -0000</pubDate>
      <itunes:title>Federated ML for Edge Applications with Justin Norman</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>185</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/671915aa-ee98-11eb-9502-4fabf7068753/image/TWIMLAI_Background_800x800_JN_185.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our Strata Data conference series, we’re joined by Justin Norman, Director of Research and Data Science Services at Cloudera Fast Forward Labs. Fast Forward Labs was an Applied AI research firm and consultancy founded by Hilary...</itunes:subtitle>
      <itunes:summary>In this episode we’re joined by Justin Norman, Director of Research and Data Science Services at Cloudera Fast Forward Labs. In my chat with Justin we start with an update on the company before diving into a look at some of recent and upcoming research projects. Specifically, we discuss their recent report on Multi-Task Learning and their upcoming research into Federated Machine Learning for AI at the edge. 

For the complete show notes, visit https://twimlai.com/talk/185.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode we’re joined by Justin Norman, Director of Research and Data Science Services at Cloudera Fast Forward Labs. In my chat with Justin we start with an update on the company before diving into a look at some of recent and upcoming research projects. Specifically, we discuss their recent report on Multi-Task Learning and their upcoming research into Federated Machine Learning for AI at the edge. 

For the complete show notes, visit https://twimlai.com/talk/185.]]>
      </content:encoded>
      <itunes:duration>2864</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5bd7c851eb87447e917717405c606ae6]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9262126757.mp3?updated=1629216916"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Exploring Dark Energy &amp; Star Formation w/ ML with Viviana Acquaviva - TWiML Talk #184</title>
      <link>https://twimlai.com/talk/184</link>
      <description>In today’s episode of our Strata Data series, we’re joined by Viviana Acquaviva, Associate Professor at City Tech, the New York City College of Technology. In our conversation, we discuss an ongoing project she’s a part of called the “Hobby-Eberly Telescope Dark Energy eXperiment,” her motivation for undertaking this project, how she gets her data, the models she uses, and how she evaluates their performance. 

The complete show notes can be found at https://twimlai.com/talk/184. </description>
      <pubDate>Wed, 26 Sep 2018 17:49:27 -0000</pubDate>
      <itunes:title>Exploring Dark Energy &amp; Star Formation w/ ML with Viviana Acquaviva</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>184</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/673e9f3c-ee98-11eb-9502-13efeca44bd8/image/TWIMLAI_Background_800x800_VA_184.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In today’s episode of our Strata Data series, we’re joined by Viviana Acquaviva, Associate Professor at City Tech, the New York City College of Technology. Viviana led a tutorial at the conference, titled “Learning Machine Learning using...</itunes:subtitle>
      <itunes:summary>In today’s episode of our Strata Data series, we’re joined by Viviana Acquaviva, Associate Professor at City Tech, the New York City College of Technology. In our conversation, we discuss an ongoing project she’s a part of called the “Hobby-Eberly Telescope Dark Energy eXperiment,” her motivation for undertaking this project, how she gets her data, the models she uses, and how she evaluates their performance. 

The complete show notes can be found at https://twimlai.com/talk/184. </itunes:summary>
      <content:encoded>
        <![CDATA[In today’s episode of our Strata Data series, we’re joined by Viviana Acquaviva, Associate Professor at City Tech, the New York City College of Technology. In our conversation, we discuss an ongoing project she’s a part of called the “Hobby-Eberly Telescope Dark Energy eXperiment,” her motivation for undertaking this project, how she gets her data, the models she uses, and how she evaluates their performance. 

The complete show notes can be found at https://twimlai.com/talk/184.]]>
      </content:encoded>
      <itunes:duration>2412</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ae133852ceed4e9a85105eef7e65a8f2]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2740311103.mp3?updated=1629216908"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Document Vectors in the Wild with James Dreiss - TWiML Talk #183</title>
      <link>https://twimlai.com/talk/183</link>
      <description>In this episode of our Strata Data series we’re joined by James Dreiss, Senior Data Scientist at international news syndicate Reuters. James and I sat down to discuss his talk from the conference “Document vectors in the wild, building a content recommendation system,” in which he details how Reuters implemented document vectors to recommend content to users of their new “infinite scroll” page layout.</description>
      <pubDate>Mon, 24 Sep 2018 18:13:13 -0000</pubDate>
      <itunes:title>Document Vectors in the Wild with James Dreiss</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>183</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/67615900-ee98-11eb-9502-f31a508f5007/image/TWIMLAI_Background_800x800_JD_183.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our Strata Data series we’re joined by James Dreiss, Senior Data Scientist at international news syndicate Reuters. James and I sat down to discuss his talk from the conference “Document vectors in the wild, building a content...</itunes:subtitle>
      <itunes:summary>In this episode of our Strata Data series we’re joined by James Dreiss, Senior Data Scientist at international news syndicate Reuters. James and I sat down to discuss his talk from the conference “Document vectors in the wild, building a content recommendation system,” in which he details how Reuters implemented document vectors to recommend content to users of their new “infinite scroll” page layout.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our Strata Data series we’re joined by James Dreiss, Senior Data Scientist at international news syndicate Reuters. James and I sat down to discuss his talk from the conference “Document vectors in the wild, building a content recommendation system,” in which he details how Reuters implemented document vectors to recommend content to users of their new “infinite scroll” page layout. ]]>
      </content:encoded>
      <itunes:duration>2459</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[f40eeff82403450e9ed87c0a5688992b]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5237287726.mp3?updated=1629216909"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Applied Machine Learning for Publishers with Naveed Ahmad - TWiML Talk #182</title>
      <link>https://twimlai.com/talk/182</link>
<description>In today’s episode we’re joined by Naveed Ahmad, Senior Director of data engineering and machine learning at Hearst Newspapers. In our conversation, we dive into the role of ML at Hearst, including their motivations for implementing it and some of their early projects, the challenges of data acquisition within a large organization, and the benefits they enjoy from using Google’s BigQuery as their data warehouse. 

For the complete show notes for this episode, visit https://twimlai.com/talk/182.</description>
      <pubDate>Thu, 20 Sep 2018 20:56:07 -0000</pubDate>
      <itunes:title>Applied Machine Learning for Publishers with Naveed Ahmad</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>182</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/67807664-ee98-11eb-9502-07380ab5f31a/image/TWIMLAI_Background_800x800_NA_182.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In today’s episode we’re joined by Naveed Ahmad, Senior Director of data engineering and machine learning at Hearst Newspapers. A few months ago, Naveed gave a talk at the Google Cloud Next Conference on “How Publishers Can Take Advantage of...</itunes:subtitle>
<itunes:summary>In today’s episode we’re joined by Naveed Ahmad, Senior Director of data engineering and machine learning at Hearst Newspapers. In our conversation, we dive into the role of ML at Hearst, including their motivations for implementing it and some of their early projects, the challenges of data acquisition within a large organization, and the benefits they enjoy from using Google’s BigQuery as their data warehouse. 

For the complete show notes for this episode, visit https://twimlai.com/talk/182.</itunes:summary>
      <content:encoded>
<![CDATA[In today’s episode we’re joined by Naveed Ahmad, Senior Director of data engineering and machine learning at Hearst Newspapers. In our conversation, we dive into the role of ML at Hearst, including their motivations for implementing it and some of their early projects, the challenges of data acquisition within a large organization, and the benefits they enjoy from using Google’s BigQuery as their data warehouse. 

For the complete show notes for this episode, visit https://twimlai.com/talk/182.
]]>
      </content:encoded>
      <itunes:duration>2397</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[202bcf2f25914f5aa6f3cb0643574795]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8095123018.mp3?updated=1629216907"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Anticipating Superintelligence with Nick Bostrom - TWiML Talk #181</title>
      <link>https://twimlai.com/talk/181</link>
      <description>In this episode, we’re joined by Nick Bostrom, professor at the University of Oxford and head of the Future of Humanity Institute, a multidisciplinary institute focused on answering big-picture questions for humanity with regards to AI safety and ethics. In our conversation, we discuss the risks associated with Artificial General Intelligence, advanced AI systems Nick refers to as superintelligence, openness in AI development and more! The notes for this episode can be found at https://twimlai.com/talk/18</description>
      <pubDate>Mon, 17 Sep 2018 19:49:25 -0000</pubDate>
      <itunes:title>Anticipating Superintelligence with Nick Bostrom</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>181</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/67a5736a-ee98-11eb-9502-f39edb03a4d8/image/TWIMLAI_Background_800x800_NB_181.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, we’re joined by Nick Bostrom, professor in the faculty of philosophy at the University of Oxford, where he also heads the Future of Humanity Institute, a multidisciplinary institute focused on answering big-picture questions for...</itunes:subtitle>
      <itunes:summary>In this episode, we’re joined by Nick Bostrom, professor at the University of Oxford and head of the Future of Humanity Institute, a multidisciplinary institute focused on answering big-picture questions for humanity with regards to AI safety and ethics. In our conversation, we discuss the risks associated with Artificial General Intelligence, advanced AI systems Nick refers to as superintelligence, openness in AI development and more! The notes for this episode can be found at https://twimlai.com/talk/18</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, we’re joined by Nick Bostrom, professor at the University of Oxford and head of the Future of Humanity Institute, a multidisciplinary institute focused on answering big-picture questions for humanity with regards to AI safety and ethics. In our conversation, we discuss the risks associated with Artificial General Intelligence, advanced AI systems Nick refers to as superintelligence, openness in AI development and more! The notes for this episode can be found at https://twimlai.com/talk/18]]>
      </content:encoded>
      <itunes:duration>2696</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[4d33bab2777e4afab6941727bbfe99ad]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6548884781.mp3?updated=1629216910"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Can We Train an AI to Understand Body Language? with Hanbyul Joo - TWIML Talk #180</title>
      <link>https://twimlai.com/talk/180</link>
      <description>In this episode, we’re joined by Hanbyul Joo, a PhD student at CMU. 

Han is working on what is called the “Panoptic Studio,” a multi-dimensional motion capture studio used to capture human body behavior and body language. His work focuses on understanding how humans interact and behave so that we can teach AI-based systems to react to humans more naturally. We also discuss his CVPR best student paper award winner “Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies.”</description>
      <pubDate>Thu, 13 Sep 2018 19:46:18 -0000</pubDate>
      <itunes:title>Can We Train an AI to Understand Body Language? with Hanbyul Joo</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>180</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/67cf890c-ee98-11eb-9502-b3ead8c5a032/image/TWIMLAI_Background_800x800_HJ_180.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, we’re joined by Hanbyul Joo, a PhD student in the Robotics Institute at Carnegie Mellon University. Han, who is on track to complete his thesis at the end of the year, is working on what is called the “Panoptic Studio,” a...</itunes:subtitle>
      <itunes:summary>In this episode, we’re joined by Hanbyul Joo, a PhD student at CMU. 

Han is working on what is called the “Panoptic Studio,” a multi-dimensional motion capture studio used to capture human body behavior and body language. His work focuses on understanding how humans interact and behave so that we can teach AI-based systems to react to humans more naturally. We also discuss his CVPR best student paper award winner “Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies.”</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, we’re joined by Hanbyul Joo, a PhD student at CMU. 

Han is working on what is called the “Panoptic Studio,” a multi-dimensional motion capture studio used to capture human body behavior and body language. His work focuses on understanding how humans interact and behave so that we can teach AI-based systems to react to humans more naturally. We also discuss his CVPR best student paper award winner “Total Capture: A 3D Deformation Model for Tracking Faces, Hands, and Bodies.”]]>
      </content:encoded>
      <itunes:duration>3113</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b0c575dec6e1457f83a7066c28598c23]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9432342781.mp3?updated=1629216922"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Biological Particle Identification and Tracking with Jay Newby - TWiML Talk #179</title>
      <link>https://twimlai.com/talk/179</link>
      <description>In today’s episode we’re joined by Jay Newby, Assistant Professor in the Department of Mathematical and Statistical Sciences at the University of Alberta. 

Jay joins us to discuss his work applying deep learning to biology, including his paper “Deep neural networks automate detection for tracking of submicron scale particles in 2D and 3D.” He gives us an overview of particle tracking and a look at how he combines neural networks with physics-based particle filter models.</description>
      <pubDate>Mon, 10 Sep 2018 18:08:00 -0000</pubDate>
      <itunes:title>Biological Particle Identification and Tracking with Jay Newby</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>179</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/67f884e2-ee98-11eb-9502-ef800b1475ea/image/TWIMLAI_Background_800x800_JN_179.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In today’s episode we’re joined by Jay Newby, Assistant Professor in the Department of Mathematical and Statistical Sciences at the University of Alberta. Jay joins us to discuss his work applying deep learning to biology, including his paper...</itunes:subtitle>
      <itunes:summary>In today’s episode we’re joined by Jay Newby, Assistant Professor in the Department of Mathematical and Statistical Sciences at the University of Alberta. 

Jay joins us to discuss his work applying deep learning to biology, including his paper “Deep neural networks automate detection for tracking of submicron scale particles in 2D and 3D.” He gives us an overview of particle tracking and a look at how he combines neural networks with physics-based particle filter models.</itunes:summary>
      <content:encoded>
        <![CDATA[In today’s episode we’re joined by Jay Newby, Assistant Professor in the Department of Mathematical and Statistical Sciences at the University of Alberta. 

Jay joins us to discuss his work applying deep learning to biology, including his paper “Deep neural networks automate detection for tracking of submicron scale particles in 2D and 3D.” He gives us an overview of particle tracking and a look at how he combines neural networks with physics-based particle filter models.]]>
      </content:encoded>
      <itunes:duration>2731</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ba3b59dbf2da4b728fad6a585f176a37]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5924885137.mp3?updated=1629216910"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI for Content Creation with Debajyoti Ray - TWiML Talk #178</title>
      <link>https://twimlai.com/talk/178</link>
      <description>In today’s episode we’re joined by Debajyoti Ray, Founder and CEO of RivetAI, a startup producing AI-powered tools for storytellers and filmmakers.

Deb and I discuss some of what he’s learned in the journey to apply AI to content creation, including how Rivet approaches the use of machine learning to automate creative processes, the company’s use of hierarchical LSTM models and autoencoders, and the tech stack that they’ve put in place to support the business.</description>
      <pubDate>Thu, 06 Sep 2018 19:09:46 -0000</pubDate>
      <itunes:title>AI for Content Creation with Debajyoti Ray</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>178</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/68271e42-ee98-11eb-9502-673400e05ff0/image/TWIMLAI_Background_800x800_DR_178.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In today’s episode we’re joined by Debajyoti Ray, Founder and CEO of RivetAI, a startup producing AI-powered tools for storytellers and filmmakers. Rivet’s tools are inspired in part by the founders’ collaboration with the team that created...</itunes:subtitle>
      <itunes:summary>In today’s episode we’re joined by Debajyoti Ray, Founder and CEO of RivetAI, a startup producing AI-powered tools for storytellers and filmmakers.

Deb and I discuss some of what he’s learned in the journey to apply AI to content creation, including how Rivet approaches the use of machine learning to automate creative processes, the company’s use of hierarchical LSTM models and autoencoders, and the tech stack that they’ve put in place to support the business.</itunes:summary>
      <content:encoded>
        <![CDATA[In today’s episode we’re joined by Debajyoti Ray, Founder and CEO of RivetAI, a startup producing AI-powered tools for storytellers and filmmakers.

Deb and I discuss some of what he’s learned in the journey to apply AI to content creation, including how Rivet approaches the use of machine learning to automate creative processes, the company’s use of hierarchical LSTM models and autoencoders, and the tech stack that they’ve put in place to support the business.
]]>
      </content:encoded>
      <itunes:duration>3315</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[ec2b2990813a4d1d8f97878da789b993]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3315428469.mp3?updated=1629216922"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Reinforcement Learning Primer and Research Frontiers with Kamyar Azizzadenesheli - TWiML Talk #177</title>
      <link>https://twimlai.com/talk/177</link>
      <description>Today we’re joined by Kamyar Azizzadenesheli, PhD student at the University of California, Irvine. He joins us to review the core elements of RL, along with a pair of his RL-related papers: “Efficient Exploration through Bayesian Deep Q-Networks” and “Sample-Efficient Deep RL with Generative Adversarial Tree Search.” 

To skip the Deep Reinforcement Learning primer conversation and jump to the research discussion, skip to the 34:30 mark of the episode. Show notes at https://twimlai.com/talk/177</description>
      <pubDate>Thu, 30 Aug 2018 20:07:16 -0000</pubDate>
      <itunes:title>Deep Reinforcement Learning Primer and Research Frontiers with Kamyar Azizzadenesheli</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>177</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/684ba0e6-ee98-11eb-9502-bf34c7206482/image/TWIMLAI_Background_800x800_KA_177.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Kamyar Azizzadenesheli, PhD student at the University of California, Irvine, and visiting researcher at Caltech, where he works with Anima Anandkumar. We begin with a reinforcement learning primer...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Kamyar Azizzadenesheli, PhD student at the University of California, Irvine. He joins us to review the core elements of RL, along with a pair of his RL-related papers: “Efficient Exploration through Bayesian Deep Q-Networks” and “Sample-Efficient Deep RL with Generative Adversarial Tree Search.” 

To skip the Deep Reinforcement Learning primer conversation and jump to the research discussion, skip to the 34:30 mark of the episode. Show notes at https://twimlai.com/talk/177</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Kamyar Azizzadenesheli, PhD student at the University of California, Irvine. He joins us to review the core elements of RL, along with a pair of his RL-related papers: “Efficient Exploration through Bayesian Deep Q-Networks” and “Sample-Efficient Deep RL with Generative Adversarial Tree Search.” 

To skip the Deep Reinforcement Learning primer conversation and jump to the research discussion, skip to the 34:30 mark of the episode. Show notes at https://twimlai.com/talk/177]]>
      </content:encoded>
      <itunes:duration>5689</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[88559a1dbe084fe0aaf55189050b14a9]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1936697421.mp3?updated=1629217069"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>OpenAI Five with Christy Dennison - TWiML Talk #176</title>
      <link>https://twimlai.com/talk/176</link>
      <description>Today we’re joined by Christy Dennison, Machine Learning Engineer at OpenAI, who has been working on OpenAI’s efforts to build an AI-powered agent to play the DOTA 2 video game. In our conversation, we get an overview of DOTA 2 gameplay and the recent OpenAI Five benchmark, and dig into the underlying technology used to create OpenAI Five, including their use of deep reinforcement learning, LSTM recurrent neural networks, and entity embeddings, plus some tricks and techniques they use to train the models.</description>
      <pubDate>Mon, 27 Aug 2018 19:20:01 -0000</pubDate>
      <itunes:title>OpenAI Five with Christy Dennison</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>176</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/687769a6-ee98-11eb-9502-df38880e0140/image/TWIMLAI_Background_800x800_CD_176.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Christy Dennison, Machine Learning Engineer at OpenAI. Since joining OpenAI earlier this year, Christy has been working on OpenAI’s efforts to build an AI-powered agent to play the DOTA 2 video game. Our conversation begins...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Christy Dennison, Machine Learning Engineer at OpenAI, who has been working on OpenAI’s efforts to build an AI-powered agent to play the DOTA 2 video game. In our conversation, we get an overview of DOTA 2 gameplay and the recent OpenAI Five benchmark, and dig into the underlying technology used to create OpenAI Five, including their use of deep reinforcement learning, LSTM recurrent neural networks, and entity embeddings, plus some tricks and techniques they use to train the models.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Christy Dennison, Machine Learning Engineer at OpenAI, who has been working on OpenAI’s efforts to build an AI-powered agent to play the DOTA 2 video game. In our conversation, we get an overview of DOTA 2 gameplay and the recent OpenAI Five benchmark, and dig into the underlying technology used to create OpenAI Five, including their use of deep reinforcement learning, LSTM recurrent neural networks, and entity embeddings, plus some tricks and techniques they use to train the models.]]>
      </content:encoded>
      <itunes:duration>2905</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b80166e726864076934714cabcffa8e2]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3134918704.mp3?updated=1629216913"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>How ML Keeps Shelves Stocked at Home Depot with Pat Woowong - TWiML Talk #175</title>
      <link>https://twimlai.com/talk/175</link>
      <description>Today we’re joined by Pat Woowong, principal engineer in the applied machine intelligence group at The Home Depot. 

We discuss a project that Pat recently presented at the Google Cloud Next conference, which used machine learning to predict shelf-out scenarios within stores. We dig into the motivation for this system and how the team went about building it, their use of Kubernetes to support future growth in the platform, and much more. 

For complete show notes, visit https://twimlai.com/talk/175.</description>
      <pubDate>Thu, 23 Aug 2018 18:37:20 -0000</pubDate>
      <itunes:title>How ML Keeps Shelves Stocked at Home Depot with Pat Woowong</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>175</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/68a12d9a-ee98-11eb-9502-a7525582b615/image/TWIMLAI_Background_800x800_PW_175.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Pat Woowong, principal engineer in the applied machine intelligence group at The Home Depot. We discuss a project that Pat recently presented at the Google Cloud Next conference which used machine learning to predict shelf-out...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Pat Woowong, principal engineer in the applied machine intelligence group at The Home Depot. 

We discuss a project that Pat recently presented at the Google Cloud Next conference, which used machine learning to predict shelf-out scenarios within stores. We dig into the motivation for this system and how the team went about building it, their use of Kubernetes to support future growth in the platform, and much more. 

For complete show notes, visit https://twimlai.com/talk/175.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Pat Woowong, principal engineer in the applied machine intelligence group at The Home Depot. 

We discuss a project that Pat recently presented at the Google Cloud Next conference, which used machine learning to predict shelf-out scenarios within stores. We dig into the motivation for this system and how the team went about building it, their use of Kubernetes to support future growth in the platform, and much more. 

For complete show notes, visit https://twimlai.com/talk/175.]]>
      </content:encoded>
      <itunes:duration>2725</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[70124fe0fa954cceb1bbd05deceef95d]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8812916627.mp3?updated=1629216911"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Contextual Modeling for Language and Vision with Nasrin Mostafazadeh - TWiML Talk #174</title>
      <link>https://twimlai.com/talk/174</link>
      <description>Today we’re joined by Nasrin Mostafazadeh, Senior AI Research Scientist at New York-based Elemental Cognition.

Our conversation focuses on Nasrin’s work in event-centric contextual modeling in language and vision including her work on the Story Cloze Test, a reasoning framework for evaluating story understanding and generation. We explore the details of this task, some of the challenges it presents and approaches for solving it.</description>
      <pubDate>Mon, 20 Aug 2018 19:59:02 -0000</pubDate>
      <itunes:title>Contextual Modeling for Language and Vision with Nasrin Mostafazadeh</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>174</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/68c1755a-ee98-11eb-9502-878bcf7f9255/image/TWIMLAI_Background_800x800_NM_174.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Nasrin Mostafazadeh, Senior AI Research Scientist at New York-based Elemental Cognition. Our conversation focuses on Nasrin’s work in event-centric contextual modeling in language and vision, which she sees as a means of...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Nasrin Mostafazadeh, Senior AI Research Scientist at New York-based Elemental Cognition.

Our conversation focuses on Nasrin’s work in event-centric contextual modeling in language and vision including her work on the Story Cloze Test, a reasoning framework for evaluating story understanding and generation. We explore the details of this task, some of the challenges it presents and approaches for solving it.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Nasrin Mostafazadeh, Senior AI Research Scientist at New York-based Elemental Cognition.

Our conversation focuses on Nasrin’s work in event-centric contextual modeling in language and vision including her work on the Story Cloze Test, a reasoning framework for evaluating story understanding and generation. We explore the details of this task, some of the challenges it presents and approaches for solving it. ]]>
      </content:encoded>
      <itunes:duration>2960</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[07f831c1967749759d9093229f6c636a]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7442016936.mp3?updated=1629216911"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>ML for Understanding Satellite Imagery at Scale with Kyle Story - TWiML Talk #173</title>
      <link>https://twimlai.com/talk/173</link>
      <description>Today we’re joined by Kyle Story, computer vision engineer at Descartes Labs.

Kyle and I caught up after his recent talk at the Google Cloud Next Conference titled “How Computers See the Earth: A Machine Learning Approach to Understanding Satellite Imagery at Scale.” We discuss some of the interesting computer vision problems he’s worked on at Descartes, and the key challenges they’ve had to overcome in scaling them.</description>
      <pubDate>Thu, 16 Aug 2018 17:18:44 -0000</pubDate>
      <itunes:title>ML for Understanding Satellite Imagery at Scale with Kyle Story</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>173</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/68ec154e-ee98-11eb-9502-9f8bdac7c605/image/TWIMLAI_Background_800x800_KS_173.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Kyle Story, computer vision engineer at Descartes Labs. Kyle and I caught up after his recent talk at the Google Cloud Next Conference titled “How Computers See the Earth: A Machine Learning Approach to Understanding...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Kyle Story, computer vision engineer at Descartes Labs.

Kyle and I caught up after his recent talk at the Google Cloud Next Conference titled “How Computers See the Earth: A Machine Learning Approach to Understanding Satellite Imagery at Scale.” We discuss some of the interesting computer vision problems he’s worked on at Descartes, and the key challenges they’ve had to overcome in scaling them.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Kyle Story, computer vision engineer at Descartes Labs.

Kyle and I caught up after his recent talk at the Google Cloud Next Conference titled “How Computers See the Earth: A Machine Learning Approach to Understanding Satellite Imagery at Scale.” We discuss some of the interesting computer vision problems he’s worked on at Descartes, and the key challenges they’ve had to overcome in scaling them. 

]]>
      </content:encoded>
      <itunes:duration>3385</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[0e4b514b53c54a4189d8630f00c7509f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2149176933.mp3?updated=1629216920"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Generating Ground-Level Images From Overhead Imagery Using GANs with Yi Zhu - TWiML Talk #172</title>
      <link>https://twimlai.com/talk/172</link>
      <description>Today we’re joined by Yi Zhu, a PhD candidate at UC Merced focused on geospatial image analysis. In our conversation, Yi and I take a look at his recent paper “What Is It Like Down There? Generating Dense Ground-Level Views and Image Features From Overhead Imagery Using Conditional Generative Adversarial Networks.” We discuss the goal of this research and how he uses conditional GANs to generate artificial ground-level images.</description>
      <pubDate>Mon, 13 Aug 2018 20:47:23 -0000</pubDate>
      <itunes:title>Generating Ground-Level Images From Overhead Imagery Using GANs with Yi Zhu</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>172</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/690f3fec-ee98-11eb-9502-dba62011dde5/image/TWIMLAI_Background_800x800_YZ_172.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Yi Zhu, a PhD candidate at UC Merced focused on geospatial image analysis. In our conversation, Yi and I take a look at his recent paper “What Is It Like Down There? Generating Dense Ground-Level Views and Image Features From Overhead Imagery Using Conditional Generative Adversarial Networks.” We discuss the goal of this research, which is to train effective land-use...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Yi Zhu, a PhD candidate at UC Merced focused on geospatial image analysis. In our conversation, Yi and I take a look at his recent paper “What Is It Like Down There? Generating Dense Ground-Level Views and Image Features From Overhead Imagery Using Conditional Generative Adversarial Networks.” We discuss the goal of this research and how he uses conditional GANs to generate artificial ground-level images.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Yi Zhu, a PhD candidate at UC Merced focused on geospatial image analysis. In our conversation, Yi and I take a look at his recent paper “What Is It Like Down There? Generating Dense Ground-Level Views and Image Features From Overhead Imagery Using Conditional Generative Adversarial Networks.” We discuss the goal of this research and how he uses conditional GANs to generate artificial ground-level images.]]>
      </content:encoded>
      <itunes:duration>2288</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5d7992af7e63476184212880789f2dfd]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1924343019.mp3?updated=1629216904"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Vision Systems for Planetary Landers and Drones with Larry Matthies - TWiML Talk #171</title>
      <link>https://twimlai.com/talk/171</link>
      <description>Today we’re joined by Larry Matthies, Sr. Research Scientist and head of computer vision in the mobility and robotics division at JPL. In our conversation, we discuss two talks he gave at CVPR a few weeks back, his work on vision systems for the first iteration of Mars rovers in 2004, and the future of planetary landing projects. 

For the complete show notes, visit https://twimlai.com/talk/171.</description>
      <pubDate>Thu, 09 Aug 2018 15:39:52 -0000</pubDate>
      <itunes:title>Vision Systems for Planetary Landers and Drones with Larry Matthies</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>171</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/693b415a-ee98-11eb-9502-fbc9c062ac75/image/TWIMLAI_Background_800x800_LM_171.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we’re joined by Larry Matthies, Sr. Research Scientist and head of computer vision in the mobility and robotics division at JPL. Larry joins us on the heels of two presentations at this year’s CVPR conference, the first on Onboard Stereo...</itunes:subtitle>
      <itunes:summary>Today we’re joined by Larry Matthies, Sr. Research Scientist and head of computer vision in the mobility and robotics division at JPL. In our conversation, we discuss two talks he gave at CVPR a few weeks back, his work on vision systems for the first iteration of Mars rovers in 2004, and the future of planetary landing projects. 

For the complete show notes, visit https://twimlai.com/talk/171.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we’re joined by Larry Matthies, Sr. Research Scientist and head of computer vision in the mobility and robotics division at JPL. In our conversation, we discuss two talks he gave at CVPR a few weeks back, his work on vision systems for the first iteration of Mars rovers in 2004, and the future of planetary landing projects. 

For the complete show notes, visit https://twimlai.com/talk/171.]]>
      </content:encoded>
      <itunes:duration>2612</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[64d95f5981444a09ac4a24f5e65d5004]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2607386293.mp3?updated=1629216908"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Learning Semantically Meaningful and Actionable Representations with Ashutosh Saxena - TWiML Talk #170</title>
      <link>https://twimlai.com/talk/170</link>
      <description>In this episode I'm joined by Ashutosh Saxena, a veteran of Andrew Ng’s Stanford Machine Learning Group, and co-founder and CEO of Caspar.ai. Ashutosh and I discuss his RoboBrain project, a computational system that creates semantically meaningful and actionable representations of the objects, actions and observations that a robot experiences in its environment, and allows these to be shared and queried by other robots to learn new actions. 

For complete show notes, visit https://twimlai.com/talk/170.</description>
      <pubDate>Mon, 06 Aug 2018 20:26:09 -0000</pubDate>
      <itunes:title>Learning Semantically Meaningful and Actionable Representations with Ashutosh Saxena</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>170</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/696037bc-ee98-11eb-9502-6fc83a2aed8e/image/TWIMLAI_Background_800x800_AS_170.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode I'm joined by Ashutosh Saxena, a veteran of Andrew Ng’s Stanford Machine Learning Group, and co-founder and CEO of Caspar.ai. Ashutosh and I discuss his RoboBrain project, a computational system that creates semantically meaningful...</itunes:subtitle>
      <itunes:summary>In this episode I'm joined by Ashutosh Saxena, a veteran of Andrew Ng’s Stanford Machine Learning Group, and co-founder and CEO of Caspar.ai. Ashutosh and I discuss his RoboBrain project, a computational system that creates semantically meaningful and actionable representations of the objects, actions and observations that a robot experiences in its environment, and allows these to be shared and queried by other robots to learn new actions. 

For complete show notes, visit https://twimlai.com/talk/170.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode I'm joined by Ashutosh Saxena, a veteran of Andrew Ng’s Stanford Machine Learning Group, and co-founder and CEO of Caspar.ai. Ashutosh and I discuss his RoboBrain project, a computational system that creates semantically meaningful and actionable representations of the objects, actions and observations that a robot experiences in its environment, and allows these to be shared and queried by other robots to learn new actions. 

For complete show notes, visit https://twimlai.com/talk/170.]]>
      </content:encoded>
      <itunes:duration>2755</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[55cee340657f4026ab51e775ef1cabda]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6841738167.mp3?updated=1629216906"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Innovation for Clinical Decision Support with Joe Connor - TWiML Talk #169</title>
      <link>https://twimlai.com/talk/169</link>
      <description>In this episode I speak with Joe Connor, Founder of Experto Crede.

In our conversation, we explore his experiences bringing AI-powered healthcare projects to market in collaboration with the UK National Health Service and its clinicians, some of the various challenges he’s run into when applying ML and AI in healthcare, as well as some of his successes. We also discuss data protections, especially GDPR, and potential ways to include clinicians in the building of applications.</description>
      <pubDate>Thu, 02 Aug 2018 17:44:41 -0000</pubDate>
      <itunes:title>AI Innovation for Clinical Decision Support with Joe Connor</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>169</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/698b726a-ee98-11eb-9502-c3f5ff886a6e/image/TWIMLAI_Background_800x800_JC_169.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode I speak with Joe Connor, Founder of Experto Crede. Joe’s been listening to the podcast for a while and he and I connected after he reached out to discuss an article I wrote regarding AI in the healthcare space. In this conversation,...</itunes:subtitle>
      <itunes:summary>In this episode I speak with Joe Connor, Founder of Experto Crede.

In our conversation, we explore his experiences bringing AI-powered healthcare projects to market in collaboration with the UK National Health Service and its clinicians, some of the various challenges he’s run into when applying ML and AI in healthcare, as well as some of his successes. We also discuss data protections, especially GDPR, and potential ways to include clinicians in the building of applications.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode I speak with Joe Connor, Founder of Experto Crede.

In our conversation, we explore his experiences bringing AI-powered healthcare projects to market in collaboration with the UK National Health Service and its clinicians, some of the various challenges he’s run into when applying ML and AI in healthcare, as well as some of his successes. We also discuss data protections, especially GDPR, and potential ways to include clinicians in the building of applications.]]>
      </content:encoded>
      <itunes:duration>2549</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[a1c55698d996444db3edbb523ca1a6f7]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8799929283.mp3?updated=1629216912"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Dynamic Visual Localization and Segmentation with Laura Leal-Taixé - TWiML Talk #168</title>
      <link>https://twimlai.com/talk/168</link>
      <description>In this episode I'm joined by Laura Leal-Taixé, Professor at the Technical University of Munich where she leads the Dynamic Vision and Learning Group.

In our conversation, we discuss several of her recent projects, including work on image-based localization techniques that fuse traditional model-based computer vision approaches with a data-driven approach based on deep learning, her paper on one-shot video object segmentation, and the broader vision for her research.</description>
      <pubDate>Mon, 30 Jul 2018 19:52:18 -0000</pubDate>
      <itunes:title>Dynamic Visual Localization and Segmentation with Laura Leal-Taixé</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>168</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/69aee2a4-ee98-11eb-9502-27b0115c56a3/image/TWIMLAI_Background_800x800_LLT_167.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode I'm joined by Laura Leal-Taixé, Professor at the Technical University of Munich where she leads the Dynamic Vision and Learning Group, and 2017 recipient of prestigious Sofja Kovalevskaja Award. In our conversation, we discuss several...</itunes:subtitle>
      <itunes:summary>In this episode I'm joined by Laura Leal-Taixé, Professor at the Technical University of Munich where she leads the Dynamic Vision and Learning Group.

In our conversation, we discuss several of her recent projects, including work on image-based localization techniques that fuse traditional model-based computer vision approaches with a data-driven approach based on deep learning, her paper on one-shot video object segmentation, and the broader vision for her research.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode I'm joined by Laura Leal-Taixé, Professor at the Technical University of Munich where she leads the Dynamic Vision and Learning Group.

In our conversation, we discuss several of her recent projects, including work on image-based localization techniques that fuse traditional model-based computer vision approaches with a data-driven approach based on deep learning, her paper on one-shot video object segmentation, and the broader vision for her research.]]>
      </content:encoded>
      <itunes:duration>2697</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[3ad857ddbabb4ebda560650be3ae7891]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5518154601.mp3?updated=1629216905"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Conversational AI for the Intelligent Workplace with Gillian McCann - TWiML Talk #167</title>
      <link>https://twimlai.com/talk/167</link>
      <description>In this episode I'm joined by Gillian McCann, Head of Cloud Engineering and AI at Workgrid Software. In our conversation, which focuses on Workgrid’s use of cloud-based AI services, Gillian details some of the underlying systems that make Workgrid tick, their engineering pipeline and how they build high-quality systems that incorporate external APIs, and her view on factors that contribute to misunderstandings and impatience on the part of users of AI-based products.</description>
      <pubDate>Thu, 26 Jul 2018 13:49:38 -0000</pubDate>
      <itunes:title>Conversational AI for the Intelligent Workplace with Gillian McCann</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>167</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/69d418da-ee98-11eb-9502-9f31940a91f8/image/TWIMLAI_Background_800x800_GM_168.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode I'm joined by Gillian McCann, Head of Cloud Engineering and AI at Workgrid Software.   Workgrid offers an intelligent workplace assistant that integrates with a variety of business tools and systems. In our conversation, which...</itunes:subtitle>
      <itunes:summary>In this episode I'm joined by Gillian McCann, Head of Cloud Engineering and AI at Workgrid Software. In our conversation, which focuses on Workgrid’s use of cloud-based AI services, Gillian details some of the underlying systems that make Workgrid tick, their engineering pipeline and how they build high-quality systems that incorporate external APIs, and her view on factors that contribute to misunderstandings and impatience on the part of users of AI-based products.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode I'm joined by Gillian McCann, Head of Cloud Engineering and AI at Workgrid Software. In our conversation, which focuses on Workgrid’s use of cloud-based AI services, Gillian details some of the underlying systems that make Workgrid tick, their engineering pipeline and how they build high-quality systems that incorporate external APIs, and her view on factors that contribute to misunderstandings and impatience on the part of users of AI-based products.
]]>
      </content:encoded>
      <itunes:duration>2199</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[59961e436bfc47ea88959145607aef45]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6388430289.mp3?updated=1629216903"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Computer Vision and Intelligent Agents for Wildlife Conservation with Jason Holmberg - TWiML Talk #166</title>
      <link>https://twimlai.com/talk/166</link>
      <description>In this episode, I'm joined by Jason Holmberg, Executive Director and Director of Engineering at WildMe. Jason and I discuss WildMe's pair of open-source, computer vision-based conservation projects, Wildbook and Whaleshark.org. Jason kicks us off with the interesting story of how Wildbook came to be, the eventual expansion of the project, and the evolution of these projects’ use of computer vision and deep learning.

For the complete show notes, visit twimlai.com/talk/166</description>
      <pubDate>Sun, 22 Jul 2018 03:58:40 -0000</pubDate>
      <itunes:title>Computer Vision and Intelligent Agents for Wildlife Conservation with Jason Holmberg</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>166</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/69fd5d44-ee98-11eb-9502-57ba05f40f6a/image/TWIMLAI_Background_800x800_JH_166.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I'm joined by Jason Holmberg, Executive Director and Director of Engineering at WildMe. Wildme’s Wildbook and Whaleshark.org are both open source computer vision based conservation projects, that have been compared to a facebook for...</itunes:subtitle>
      <itunes:summary>In this episode, I'm joined by Jason Holmberg, Executive Director and Director of Engineering at WildMe. Jason and I discuss WildMe's pair of open-source, computer vision-based conservation projects, Wildbook and Whaleshark.org. Jason kicks us off with the interesting story of how Wildbook came to be, the eventual expansion of the project, and the evolution of these projects’ use of computer vision and deep learning.

For the complete show notes, visit twimlai.com/talk/166</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I'm joined by Jason Holmberg, Executive Director and Director of Engineering at WildMe. Jason and I discuss WildMe's pair of open-source, computer vision-based conservation projects, Wildbook and Whaleshark.org. Jason kicks us off with the interesting story of how Wildbook came to be, the eventual expansion of the project, and the evolution of these projects’ use of computer vision and deep learning.

For the complete show notes, visit twimlai.com/talk/166]]>
      </content:encoded>
      <itunes:duration>2904</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[70da5218068947bda630f361042f651f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1342219854.mp3?updated=1629216906"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Pragmatic Deep Learning for Medical Imagery with Prashant Warier - TWiML Talk #165</title>
      <link>https://twimlai.com/talk/165</link>
      <description>In this episode I'm joined by Prashant Warier, CEO and Co-Founder of Qure.ai. We discuss the company’s work building products for interpreting head CT scans and chest x-rays. We look at knowledge gained in bringing a commercial product to market, including the gap between academic research papers and commercially viable software, the challenge of data acquisition, and more. We also touch on the application of transfer learning.

For the complete show notes, visit https://twimlai.com/talk/165.</description>
      <pubDate>Thu, 19 Jul 2018 17:52:52 -0000</pubDate>
      <itunes:title>Pragmatic Deep Learning for Medical Imagery with Prashant Warier</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>165</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6a22a176-ee98-11eb-9502-5b9ddd79c767/image/TWIMLAI_Background_800x800_PW_165.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode I'm joined by Prashant Warier, CEO and Co-Founder of Qure.ai, a company building AI-powered software for radiology. In our conversation, Prashant and I discuss the company’s work building products for interpreting head CT scans and...</itunes:subtitle>
      <itunes:summary>In this episode I'm joined by Prashant Warier, CEO and Co-Founder of Qure.ai. We discuss the company’s work building products for interpreting head CT scans and chest x-rays. We look at knowledge gained in bringing a commercial product to market, including the gap between academic research papers and commercially viable software, the challenge of data acquisition, and more. We also touch on the application of transfer learning.

For the complete show notes, visit https://twimlai.com/talk/165.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode I'm joined by Prashant Warier, CEO and Co-Founder of Qure.ai. We discuss the company’s work building products for interpreting head CT scans and chest x-rays. We look at knowledge gained in bringing a commercial product to market, including the gap between academic research papers and commercially viable software, the challenge of data acquisition, and more. We also touch on the application of transfer learning.

For the complete show notes, visit https://twimlai.com/talk/165.]]>
      </content:encoded>
      <itunes:duration>2196</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[b3ca7db6ce314f3a80ab318aedf6be6e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6045258256.mp3?updated=1629216904"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Taskonomy: Disentangling Transfer Learning for Perception (CVPR 2018 Best Paper Winner) with Amir Zamir - TWiML Talk #164</title>
      <link>https://twimlai.com/twiml-talk-164-taskonomy-disentangling-transfer-learning-for-perception-cvpr-2018-best-paper-winner-with-amir-zamir</link>
      <description>In this episode I'm joined by Amir Zamir, Postdoctoral researcher at both Stanford &amp; UC Berkeley, who joins us fresh off of winning the 2018 CVPR Best Paper Award for co-authoring "Taskonomy: Disentangling Task Transfer Learning." In our conversation, we discuss the nature and consequences of the relationships that Amir and his team discovered, and how they can be used to build more effective visual systems with machine learning. 

https://twimlai.com/talk/164</description>
      <pubDate>Mon, 16 Jul 2018 16:27:39 -0000</pubDate>
      <itunes:title>Taskonomy: Disentangling Transfer Learning for Perception (CVPR 2018 Best Paper Winner) with Amir Zamir</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>164</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6a707b80-ee98-11eb-9502-430caed2cc06/image/TWIMLAI_Background_800x800_AZ_164.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode I'm joined by Amir Zamir, Postdoctoral researcher at both Stanford &amp; UC Berkeley. Amir joins us fresh off of winning the 2018 CVPR Best Paper Award for co-authoring "Taskonomy: Disentangling Task Transfer Learning." In this work, Amir and his coauthors explore the relationships...</itunes:subtitle>
      <itunes:summary>In this episode I'm joined by Amir Zamir, Postdoctoral researcher at both Stanford &amp; UC Berkeley, who joins us fresh off of winning the 2018 CVPR Best Paper Award for co-authoring "Taskonomy: Disentangling Task Transfer Learning." In our conversation, we discuss the nature and consequences of the relationships that Amir and his team discovered, and how they can be used to build more effective visual systems with machine learning. 

https://twimlai.com/talk/164</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode I'm joined by Amir Zamir, Postdoctoral researcher at both Stanford & UC Berkeley, who joins us fresh off of winning the 2018 CVPR Best Paper Award for co-authoring "Taskonomy: Disentangling Task Transfer Learning." In our conversation, we discuss the nature and consequences of the relationships that Amir and his team discovered, and how they can be used to build more effective visual systems with machine learning.

https://twimlai.com/talk/164]]>
      </content:encoded>
      <itunes:duration>2853</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[05371b0010964e71a16af2db19dbf9f2]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8621492485.mp3?updated=1629216905"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Predicting Metabolic Pathway Dynamics w/ Machine Learning with Zak Costello - TWiML Talk #163</title>
      <link>https://twimlai.com/talk/163</link>
      <description>In today’s episode I’m joined by Zak Costello, post-doctoral fellow at the Joint BioEnergy Institute to discuss his recent paper, “A machine learning approach to predict metabolic pathway dynamics from time-series multiomics data.” Zak gives us an overview of synthetic biology and the use of ML techniques to optimize metabolic reactions for engineering biofuels at scale. 

Visit twimlai.com/talk/163 for the complete show notes.</description>
      <pubDate>Wed, 11 Jul 2018 21:27:15 -0000</pubDate>
      <itunes:title>Predicting Metabolic Pathway Dynamics w/ Machine Learning with Zak Costello</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>163</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6a946608-ee98-11eb-9502-ff8e04e51359/image/TWIMLAI_Background_800x800_ZC_163.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In today’s episode I’m joined by Zak Costello, post-doctoral fellow at the Joint BioEnergy Institute. Zak joins me to discuss his recent paper, “A machine learning approach to predict metabolic pathway dynamics from time-series multiomics...</itunes:subtitle>
      <itunes:summary>In today’s episode I’m joined by Zak Costello, post-doctoral fellow at the Joint BioEnergy Institute to discuss his recent paper, “A machine learning approach to predict metabolic pathway dynamics from time-series multiomics data.” Zak gives us an overview of synthetic biology and the use of ML techniques to optimize metabolic reactions for engineering biofuels at scale. 

Visit twimlai.com/talk/163 for the complete show notes.</itunes:summary>
      <content:encoded>
        <![CDATA[In today’s episode I’m joined by Zak Costello, post-doctoral fellow at the Joint BioEnergy Institute to discuss his recent paper, “A machine learning approach to predict metabolic pathway dynamics from time-series multiomics data.” Zak gives us an overview of synthetic biology and the use of ML techniques to optimize metabolic reactions for engineering biofuels at scale. 

Visit twimlai.com/talk/163 for the complete show notes.]]>
      </content:encoded>
      <itunes:duration>2378</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[2797ec8f4ac8452393c472d69b598382]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1633551572.mp3?updated=1629216904"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Machine Learning to Discover Physics and Engineering Principles with Nathan Kutz - TWiML Talk #162</title>
      <link>https://twimlai.com/talk/162</link>
      <description>In this episode, I’m joined by Nathan Kutz, Professor of applied mathematics, electrical engineering and physics at the University of Washington to discuss his research into the use of machine learning to help discover the fundamental governing equations for physical and engineering systems from time series measurements.

For complete show notes visit twimlai.com/talk/162</description>
      <pubDate>Mon, 09 Jul 2018 16:28:53 -0000</pubDate>
      <itunes:title>Machine Learning to Discover Physics and Engineering Principles with Nathan Kutz</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>162</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6abd4c8a-ee98-11eb-9502-972778562e98/image/TWIMLAI_Background_800x800_NK_162.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I’m joined by Nathan Kutz, Professor of applied mathematics, electrical engineering and physics at the University of Washington. Nathan and I met a few months ago at the Prepare.AI conference in St. Louis where he gave a talk on...</itunes:subtitle>
      <itunes:summary>In this episode, I’m joined by Nathan Kutz, Professor of applied mathematics, electrical engineering and physics at the University of Washington to discuss his research into the use of machine learning to help discover the fundamental governing equations for physical and engineering systems from time series measurements.

For complete show notes visit twimlai.com/talk/162</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I’m joined by Nathan Kutz, Professor of applied mathematics, electrical engineering and physics at the University of Washington to discuss his research into the use of machine learning to help discover the fundamental governing equations for physical and engineering systems from time series measurements.

For complete show notes visit twimlai.com/talk/162]]>
      </content:encoded>
      <itunes:duration>2588</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[12a57f8806534ed0aa0c2289032e53c4]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9992188974.mp3?updated=1629216904"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Automating Complex Internal Processes w/ AI with Alexander Chukovski - TWiML Talk #161</title>
      <link>https://twimlai.com/talk/161</link>
      <description>In this episode, I'm joined by Alexander Chukovski, Director of Data Services at Experteer, a Munich, Germany-based career platform. In our conversation, we explore Alex’s journey to implement machine learning at Experteer, the Experteer NLP pipeline and how it’s evolved, Alex’s work with deep learning-based ML models, including VDCNN and Facebook’s FastText offering, and a few recent papers that look at transfer learning for NLP.

Check out the complete show notes at twimlai.com/talk/161</description>
      <pubDate>Thu, 05 Jul 2018 16:38:12 -0000</pubDate>
      <itunes:title>Automating Complex Internal Processes w/ AI with Alexander Chukovski</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>161</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6ae631a4-ee98-11eb-9502-ffc89a5d5584/image/TWIMLAI_Background_800x800_AC_161.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I'm joined by Alexander Chukovski, Director of Data Services at Experteer, a Munich, Germany-based career platform. In our conversation, we explore Alex’s journey to implement machine learning at Experteer. Alex and I discuss the...</itunes:subtitle>
      <itunes:summary>In this episode, I'm joined by Alexander Chukovski, Director of Data Services at Experteer, a Munich, Germany-based career platform. In our conversation, we explore Alex’s journey to implement machine learning at Experteer, the Experteer NLP pipeline and how it’s evolved, Alex’s work with deep learning-based ML models, including VDCNN and Facebook’s FastText offering, and a few recent papers that look at transfer learning for NLP.

Check out the complete show notes at twimlai.com/talk/161</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I'm joined by Alexander Chukovski, Director of Data Services at Experteer, a Munich, Germany-based career platform. In our conversation, we explore Alex’s journey to implement machine learning at Experteer, the Experteer NLP pipeline and how it’s evolved, Alex’s work with deep learning-based ML models, including VDCNN and Facebook’s FastText offering, and a few recent papers that look at transfer learning for NLP.

Check out the complete show notes at twimlai.com/talk/161]]>
      </content:encoded>
      <itunes:duration>2382</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9e10baddf6824a9381fce819157b1643]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2414746331.mp3?updated=1629216900"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Designing Better Sequence Models with RNNs with Adji Bousso Dieng - TWiML Talk #160</title>
      <link>https://twimlai.com/talk/160</link>
      <description>In this episode, I'm joined by Adji Bousso Dieng, PhD Student in the Department of Statistics at Columbia University to discuss two of her recent papers, “Noisin: Unbiased Regularization for Recurrent Neural Networks” and “TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency.” We dive into the details behind both of these papers and learn a ton along the way.</description>
      <pubDate>Mon, 02 Jul 2018 17:36:26 -0000</pubDate>
      <itunes:title>Designing Better Sequence Models with RNNs with Adji Bousso Dieng</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>160</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6b11781e-ee98-11eb-9502-5fa173989820/image/TWIMLAI_Background_800x800_ABD_160.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I'm joined by Adji Bousso Dieng, PhD Student in the Department of Statistics at Columbia University. In this interview, Adji and I discuss two of her recent papers, the first, an accepted paper from this year’s ICML conference...</itunes:subtitle>
      <itunes:summary>In this episode, I'm joined by Adji Bousso Dieng, PhD Student in the Department of Statistics at Columbia University to discuss two of her recent papers, “Noisin: Unbiased Regularization for Recurrent Neural Networks” and “TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency.” We dive into the details behind both of these papers and learn a ton along the way.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I'm joined by Adji Bousso Dieng, PhD Student in the Department of Statistics at Columbia University to discuss two of her recent papers, “Noisin: Unbiased Regularization for Recurrent Neural Networks” and “TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency.” We dive into the details behind both of these papers and learn a ton along the way.]]>
      </content:encoded>
      <itunes:duration>2302</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[9d8c53245b884538b97d469ac2d1f0ba]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1779490303.mp3?updated=1629216902"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Love Love: AI and ML in Tennis with Stephanie Kovalchik - TWiML Talk #159</title>
      <link>https://twimlai.com/twiml-talk-159-love-love-ai-and-ml-in-tennis-with-stephanie-kovalchik</link>
      <description>In the final show in our AI in Sports series, I’m joined by Stephanie Kovalchik, Research Fellow at Victoria University and Senior Sports Scientist at Tennis Australia. In our conversation, we discuss Tennis Australia's use of data to develop a player rating system based on ability and probability, some of the interesting products her Game Insight Group is developing, including a win-forecasting algorithm, and a statistic that measures a given player’s workload during a match.</description>
      <pubDate>Fri, 29 Jun 2018 16:24:15 -0000</pubDate>
      <itunes:title>Love Love: AI and ML in Tennis with Stephanie Kovalchik</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>159</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6b3b7ef2-ee98-11eb-9502-83eaac084357/image/TWIMLAI_Background_800x800_SK_159.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this, the final show in our AI in Sports series, I’m joined by Stephanie Kovalchik, Research Fellow at Victoria University and Senior Sports Scientist at Tennis Australia. Stephanie and I had a great conversation about a few of the many...</itunes:subtitle>
      <itunes:summary>In the final show in our AI in Sports series, I’m joined by Stephanie Kovalchik, Research Fellow at Victoria University and Senior Sports Scientist at Tennis Australia. In our conversation, we discuss Tennis Australia's use of data to develop a player rating system based on ability and probability, some of the interesting products her Game Insight Group is developing, including a win-forecasting algorithm, and a statistic that measures a given player’s workload during a match.</itunes:summary>
      <content:encoded>
        <![CDATA[In the final show in our AI in Sports series, I’m joined by Stephanie Kovalchik, Research Fellow at Victoria University and Senior Sports Scientist at Tennis Australia. In our conversation, we discuss Tennis Australia's use of data to develop a player rating system based on ability and probability, some of the interesting products her Game Insight Group is developing, including a win-forecasting algorithm, and a statistic that measures a given player’s workload during a match.]]>
      </content:encoded>
      <itunes:duration>2810</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[d0b048635d924244aca6753dbb6d4947]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7786788745.mp3?updated=1635370830"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Growth Hacking Sports w/ Machine Learning with Noah Gift - TWiML Talk #158</title>
      <link>https://twimlai.com/twiml-talk-158-growth-hacking-sports-w-machine-learning-with-noah-gift</link>
      <description>In this episode of our AI in Sports series I'm joined by Noah Gift, Founder and Consulting CTO at Pragmatic Labs and professor at UC Davis. Noah and I discuss some of his recent work in using social media to predict which players hold the most on-court value, and how this work could lead to more complete approaches to player valuation.

Check out the show notes at twimlai.com/talk/158</description>
      <pubDate>Thu, 28 Jun 2018 14:55:03 -0000</pubDate>
      <itunes:title>Growth Hacking Sports w/ Machine Learning with Noah Gift</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>158</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6b60d1d4-ee98-11eb-9502-bff322548b44/image/TWIMLAI_Background_800x800_NG_158.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our AI in Sports series I'm joined by Noah Gift, Founder and Consulting CTO at Pragmatic Labs and professor at UC Davis. Noah previously worked for a startup called Score Sports, which used machine learning to uncover athlete...</itunes:subtitle>
      <itunes:summary>In this episode of our AI in Sports series I'm joined by Noah Gift, Founder and Consulting CTO at Pragmatic Labs and professor at UC Davis. Noah and I discuss some of his recent work in using social media to predict which players hold the most on-court value, and how this work could lead to more complete approaches to player valuation.

Check out the show notes at twimlai.com/talk/158</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our AI in Sports series I'm joined by Noah Gift, Founder and Consulting CTO at Pragmatic Labs and professor at UC Davis. Noah and I discuss some of his recent work in using social media to predict which players hold the most on-court value, and how this work could lead to more complete approaches to player valuation.

Check out the show notes at twimlai.com/talk/158]]>
      </content:encoded>
      <itunes:duration>3035</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[7fdab228deac4a4287fa7fb32cce1d07]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9527981306.mp3?updated=1629216904"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Fine-Grained Player Prediction in Sports with Jennifer Hobbs - TWiML Talk #157</title>
      <link>https://twimlai.com/twiml-talk-157-fine-grained-player-prediction-in-sports-with-jennifer-hobbs</link>
      <description>In this episode of our AI in Sports series, I'm joined by Jennifer Hobbs, Senior Data Scientist at STATS, a collector and distributor of sports data, to discuss the STATS data pipeline and how they collect and store different types of data for easy consumption and application. We also look into a paper she co-authored, Mythbusting Set-Pieces in Soccer, which was presented at the MIT Sloan Conference this year. 

https://twimlai.com/talk/157</description>
      <pubDate>Wed, 27 Jun 2018 16:08:15 -0000</pubDate>
      <itunes:title>Fine-Grained Player Prediction in Sports with Jennifer Hobbs</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>157</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6b86e932-ee98-11eb-9502-7f0a296b8ff2/image/TWIMLAI_Background_800x800_JH_157.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of the series, I'm joined by Jennifer Hobbs, Senior Data Scientist at STATS, a collector and distributor of sports data, covering sports like basketball, soccer, American football and rugby. Jennifer and I explore the STATS data...</itunes:subtitle>
      <itunes:summary>In this episode of our AI in Sports series, I'm joined by Jennifer Hobbs, Senior Data Scientist at STATS, a collector and distributor of sports data, to discuss the STATS data pipeline and how they collect and store different types of data for easy consumption and application. We also look into a paper she co-authored, Mythbusting Set-Pieces in Soccer, which was presented at the MIT Sloan Conference this year. 

https://twimlai.com/talk/157</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our AI in Sports series, I'm joined by Jennifer Hobbs, Senior Data Scientist at STATS, a collector and distributor of sports data, to discuss the STATS data pipeline and how they collect and store different types of data for easy consumption and application. We also look into a paper she co-authored, Mythbusting Set-Pieces in Soccer, which was presented at the MIT Sloan Conference this year. 

https://twimlai.com/talk/157]]>
      </content:encoded>
      <itunes:duration>2568</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[5d04d0e57f324aff95ecb0b540ba8d2f]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3497768025.mp3?updated=1629216903"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Targeted Ticket Sales Using Azure ML with the Trail Blazers w/ Mike Schumacher &amp; Chenhui Hu - TWiML Talk #156</title>
      <link>https://twimlai.com/twiml-talk-156-targeted-ticket-sales-using-azure-ml-with-the-trail-blazers-w-mike-schumacher-chenhui-hu</link>
      <description>In today’s episode of our AI in Sports series I'm joined by Mike Schumacher, director of business analytics for the Portland Trail Blazers, and Chenhui Hu, a data scientist at Microsoft, to discuss how the Blazers are using machine learning to produce better-targeted sales campaigns for both single-game and season-ticket buyers.</description>
      <pubDate>Tue, 26 Jun 2018 16:21:46 -0000</pubDate>
      <itunes:title>Targeted Ticket Sales Using Azure ML with the Trail Blazers w/ Mike Schumacher &amp; Chenhui Hu</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>156</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6ba5c686-ee98-11eb-9502-33be69cc2bc2/image/TWIMLAI_Background_800x800_MSCH_156.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In today’s episode of our AI in Sports series I'm joined by Mike Schumacher, director of business analytics for the Portland Trail Blazers, and Chenhui Hu, a data scientist at Microsoft. In our conversation, Mike, Chenhui and I discuss how the...</itunes:subtitle>
      <itunes:summary>In today’s episode of our AI in Sports series I'm joined by Mike Schumacher, director of business analytics for the Portland Trail Blazers, and Chenhui Hu, a data scientist at Microsoft, to discuss how the Blazers are using machine learning to produce better-targeted sales campaigns for both single-game and season-ticket buyers.</itunes:summary>
      <content:encoded>
        <![CDATA[In today’s episode of our AI in Sports series I'm joined by Mike Schumacher, director of business analytics for the Portland Trail Blazers, and Chenhui Hu, a data scientist at Microsoft, to discuss how the Blazers are using machine learning to produce better-targeted sales campaigns for both single-game and season-ticket buyers.]]>
      </content:encoded>
      <itunes:duration>2248</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[6850ea2ce5b941dba87d3ac388251904]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8096453341.mp3?updated=1629216899"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI for Athlete Optimization with Sinead Flahive - TWiML Talk #155</title>
      <link>https://twimlai.com/twiml-talk-155-ai-for-athlete-optimization-with-sinead-flahive</link>
      <description>This week we’re excited to kick off a series of shows on AI in sports. In this episode I'm joined by Sinead Flahive, data scientist at Dublin, Ireland-based Kitman Labs, to discuss Kitman’s Athlete Optimization System, which allows sports trainers and coaches to collect and analyze data for player performance optimization and injury reduction. Enjoy!</description>
      <pubDate>Mon, 25 Jun 2018 19:57:32 -0000</pubDate>
      <itunes:title>AI for Athlete Optimization with Sinead Flahive</itunes:title>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>155</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6bc79216-ee98-11eb-9502-f7c18e96876e/image/TWIMLAI_Background_800x800_SF_155.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Perhaps especially appropriate given that much of the globe is glued to the World Cup at the moment, this week we’re excited to kick off a series of shows on AI in sports. While I'm not personally the biggest sports fan, my producer Imari is a huge...</itunes:subtitle>
      <itunes:summary>This week we’re excited to kick off a series of shows on AI in sports. In this episode I'm joined by Sinead Flahive, data scientist at Dublin, Ireland-based Kitman Labs, to discuss Kitman’s Athlete Optimization System, which allows sports trainers and coaches to collect and analyze data for player performance optimization and injury reduction. Enjoy!</itunes:summary>
      <content:encoded>
        <![CDATA[This week we’re excited to kick off a series of shows on AI in sports. In this episode I'm joined by Sinead Flahive, data scientist at Dublin, Ireland-based Kitman Labs, to discuss Kitman’s Athlete Optimization System, which allows sports trainers and coaches to collect and analyze data for player performance optimization and injury reduction. Enjoy!]]>
      </content:encoded>
      <itunes:duration>2424</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[7bc002d239894a778a9de75d8185a92e]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4747443801.mp3?updated=1629216900"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Omni-Channel Customer Experiences with Vince Jeffs - TWiML Talk #154</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/461483106-twiml-twiml-talk-154-omni-channel-customer-experiences-with-vince-jeffs.mp3</link>
      <description>In this, the final episode of our PegaWorld series I’m joined by Vince Jeffs, Senior Director of Product Strategy for AI and Decisioning at Pegasystems. Vince and I had a great talk about the role AI and advanced analytics will play in defining future customer experiences. We do this in the context provided by one of his presentations from the conference, which explores four technology scenarios from Pegasystems’ innovation labs. These look at a connected car experience, the use of deep learning for diagnostics, dynamic notifications, and continuously optimized marketing. We also get into an interesting discussion about how much is too much when it comes to hyperpersonalized experiences, and how businesses can manage this challenge. The notes for this show can be found at twimlai.com/talk/154. For more information on the Pegaworld series, visit twimlai.com/pegaworld2018.</description>
      <pubDate>Thu, 21 Jun 2018 17:25:10 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>154</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6be5e568-ee98-11eb-9502-639c601a99f9/image/artworks-000363516117-687x1a-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this, the final episode of our PegaWorld serie…</itunes:subtitle>
      <itunes:summary>In this, the final episode of our PegaWorld series I’m joined by Vince Jeffs, Senior Director of Product Strategy for AI and Decisioning at Pegasystems. Vince and I had a great talk about the role AI and advanced analytics will play in defining future customer experiences. We do this in the context provided by one of his presentations from the conference, which explores four technology scenarios from Pegasystems’ innovation labs. These look at a connected car experience, the use of deep learning for diagnostics, dynamic notifications, and continuously optimized marketing. We also get into an interesting discussion about how much is too much when it comes to hyperpersonalized experiences, and how businesses can manage this challenge. The notes for this show can be found at twimlai.com/talk/154. For more information on the Pegaworld series, visit twimlai.com/pegaworld2018.</itunes:summary>
      <content:encoded>
        <![CDATA[In this, the final episode of our PegaWorld series I’m joined by Vince Jeffs, Senior Director of Product Strategy for AI and Decisioning at Pegasystems. Vince and I had a great talk about the role AI and advanced analytics will play in defining future customer experiences. We do this in the context provided by one of his presentations from the conference, which explores four technology scenarios from Pegasystems’ innovation labs. These look at a connected car experience, the use of deep learning for diagnostics, dynamic notifications, and continuously optimized marketing. We also get into an interesting discussion about how much is too much when it comes to hyperpersonalized experiences, and how businesses can manage this challenge. The notes for this show can be found at twimlai.com/talk/154. For more information on the Pegaworld series, visit twimlai.com/pegaworld2018.]]>
      </content:encoded>
      <itunes:duration>2580</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/461483106]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5652656149.mp3?updated=1629216898"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Workforce Intelligence for Automation &amp; Productivity with Michael Kempe - TWiML Talk #153</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/461036040-twiml-twiml-talk-153-workforce-intelligence-for-automation-productivity-with-michael-kempe.mp3</link>
      <description>In this episode of our PegaWorld series, I’m joined by Michael Kempe, chief operating officer at global share registry and financial services provider Link Market Services. In the interview, Michael and I dig into Link’s use of workforce intelligence software to track and analyze the performance of its workforce and business processes. Michael and I discuss some of the initial challenges associated with implementing this type of system, including skepticism amongst employees, and how it ultimately sets the stage for Link’s broader use of machine learning, AI, and so-called “robotic process automation” to increase workforce productivity. The notes for this show can be found at twimlai.com/talk/153. For more information on our PegaWorld series, visit twimlai.com/pegaworld2018.</description>
      <pubDate>Wed, 20 Jun 2018 18:45:58 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>153</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6c03e608-ee98-11eb-9502-4bc2a1324a3d/image/artworks-000363129210-wakj3p-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our PegaWorld series, I’m join…</itunes:subtitle>
      <itunes:summary>In this episode of our PegaWorld series, I’m joined by Michael Kempe, chief operating officer at global share registry and financial services provider Link Market Services. In the interview, Michael and I dig into Link’s use of workforce intelligence software to track and analyze the performance of its workforce and business processes. Michael and I discuss some of the initial challenges associated with implementing this type of system, including skepticism amongst employees, and how it ultimately sets the stage for Link’s broader use of machine learning, AI, and so-called “robotic process automation” to increase workforce productivity. The notes for this show can be found at twimlai.com/talk/153. For more information on our PegaWorld series, visit twimlai.com/pegaworld2018.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our PegaWorld series, I’m joined by Michael Kempe, chief operating officer at global share registry and financial services provider Link Market Services. In the interview, Michael and I dig into Link’s use of workforce intelligence software to track and analyze the performance of its workforce and business processes. Michael and I discuss some of the initial challenges associated with implementing this type of system, including skepticism amongst employees, and how it ultimately sets the stage for Link’s broader use of machine learning, AI, and so-called “robotic process automation” to increase workforce productivity. The notes for this show can be found at twimlai.com/talk/153. For more information on our PegaWorld series, visit twimlai.com/pegaworld2018.]]>
      </content:encoded>
      <itunes:duration>2186</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/461036040]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5662788492.mp3?updated=1629216898"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Data Platforms for Decision Automation at Scotiabank with Jim Saleh - TWiML Talk #152</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/460489560-twiml-twiml-talk-152-data-platforms-for-decision-automation-at-scotiabank-with-jim-saleh.mp3</link>
      <description>In this show, part of our PegaWorld 18 series, I'm joined by Jim Saleh, Senior Director of process and decision automation at Scotiabank. Jim is tasked with helping the bank transition from a world where customer interactions are based on historical analytics to one where they’re based on real-time decisioning and automation. In our conversation we discuss what’s required to deliver real-time decisioning, starting from the ground up with the data platform. In this vein we explore topics like data lakes, data warehouses, integration, and more, and the effort required to take advantage of these. The notes for this show can be found at twimlai.com/talk/152. For more info on our PegaWorld 2018 series, visit twimlai.com/pegaworld2018.</description>
      <pubDate>Tue, 19 Jun 2018 16:47:37 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>152</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6c276f9c-ee98-11eb-9502-ab66397ec530/image/artworks-000362650389-w0g4vf-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this show, part of our PegaWorld 18 series, I'…</itunes:subtitle>
      <itunes:summary>In this show, part of our PegaWorld 18 series, I'm joined by Jim Saleh, Senior Director of process and decision automation at Scotiabank. Jim is tasked with helping the bank transition from a world where customer interactions are based on historical analytics to one where they’re based on real-time decisioning and automation. In our conversation we discuss what’s required to deliver real-time decisioning, starting from the ground up with the data platform. In this vein we explore topics like data lakes, data warehouses, integration, and more, and the effort required to take advantage of these. The notes for this show can be found at twimlai.com/talk/152. For more info on our PegaWorld 2018 series, visit twimlai.com/pegaworld2018.</itunes:summary>
      <content:encoded>
        <![CDATA[In this show, part of our PegaWorld 18 series, I'm joined by Jim Saleh, Senior Director of process and decision automation at Scotiabank. Jim is tasked with helping the bank transition from a world where customer interactions are based on historical analytics to one where they’re based on real-time decisioning and automation. In our conversation we discuss what’s required to deliver real-time decisioning, starting from the ground up with the data platform. In this vein we explore topics like data lakes, data warehouses, integration, and more, and the effort required to take advantage of these. The notes for this show can be found at twimlai.com/talk/152. For more info on our PegaWorld 2018 series, visit twimlai.com/pegaworld2018.]]>
      </content:encoded>
      <itunes:duration>1951</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/460489560]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1233150233.mp3?updated=1629216894"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Towards the Self-Driving Enterprise with Kirk Borne - TWiML Talk #151</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/460032054-twiml-twiml-talk-151-towards-the-self-driving-enterprise-with-kirk-borne.mp3</link>
      <description>In this show, the first of our PegaWorld 18 series, I'm joined by Kirk Borne, Principal Data Scientist at management consulting firm Booz Allen Hamilton. In our conversation, Kirk shares his views on automation as it applies to enterprises and their customers. We discuss his experiences evangelizing data science within the context of a large organization, and the role of AI in helping organizations achieve automation. Along the way, Kirk shares a great analogy for intelligent automation, comparing it to an autonomous vehicle. We covered a ton of ground in this chat, which I think you’ll get a kick out of. The notes for this show can be found at twimlai.com/talk/151. For more info about our PegaWorld 2018 Series, visit twimlai.com/pegaworld2018.</description>
      <pubDate>Mon, 18 Jun 2018 16:54:53 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>151</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6c41b398-ee98-11eb-9502-1b3f0958425c/image/artworks-000362228457-00bgcq-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this show, the first of our PegaWorld 18 serie…</itunes:subtitle>
      <itunes:summary>In this show, the first of our PegaWorld 18 series, I'm joined by Kirk Borne, Principal Data Scientist at management consulting firm Booz Allen Hamilton. In our conversation, Kirk shares his views on automation as it applies to enterprises and their customers. We discuss his experiences evangelizing data science within the context of a large organization, and the role of AI in helping organizations achieve automation. Along the way, Kirk shares a great analogy for intelligent automation, comparing it to an autonomous vehicle. We covered a ton of ground in this chat, which I think you’ll get a kick out of. The notes for this show can be found at twimlai.com/talk/151. For more info about our PegaWorld 2018 Series, visit twimlai.com/pegaworld2018.</itunes:summary>
      <content:encoded>
        <![CDATA[In this show, the first of our PegaWorld 18 series, I'm joined by Kirk Borne, Principal Data Scientist at management consulting firm Booz Allen Hamilton. In our conversation, Kirk shares his views on automation as it applies to enterprises and their customers. We discuss his experiences evangelizing data science within the context of a large organization, and the role of AI in helping organizations achieve automation. Along the way, Kirk shares a great analogy for intelligent automation, comparing it to an autonomous vehicle. We covered a ton of ground in this chat, which I think you’ll get a kick out of. The notes for this show can be found at twimlai.com/talk/151. For more info about our PegaWorld 2018 Series, visit twimlai.com/pegaworld2018.]]>
      </content:encoded>
      <itunes:duration>2477</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/460032054]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8989314035.mp3?updated=1629216900"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>How a Global Energy Company Adopts ML &amp; AI with Nicholas Osborn - TWiML Talk #150</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/458347614-twiml-twiml-talk-150-how-a-global-energy-company-adopts-ml-ai-with-nicholas-osborn.mp3</link>
      <description>On today’s show I’m excited to share this interview with Nick Osborn, a longtime listener of the show and Leader of the Global Machine Learning Project Management Office at AES Corporation, a Fortune 200 power company. Nick and I met at my AI Summit a few weeks back, and after a brief chat about some of the things he was up to at AES, I knew I needed to get him on the show! In this interview, Nick and I explore how AES is implementing machine learning across multiple domains at the company. We dig into several examples falling under the Natural Language, Computer Vision, and Cognitive Assets categories he’s established for his projects. Along the way we cover some of the key podcast episodes that helped Nick discover potentially applicable ML techniques, and how those are helping his team broaden the use of machine learning at AES. This was a fun and informative conversation that has a lot to offer. Thanks, Nick! The notes for this episode can be found at twimlai.com/talk/150.</description>
      <pubDate>Thu, 14 Jun 2018 16:50:31 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>150</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6c5feb74-ee98-11eb-9502-2369539d4807/image/artworks-000360727515-rnc5cb-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>On today’s show I’m excited to share this intervi…</itunes:subtitle>
      <itunes:summary>On today’s show I’m excited to share this interview with Nick Osborn, a longtime listener of the show and Leader of the Global Machine Learning Project Management Office at AES Corporation, a Fortune 200 power company. Nick and I met at my AI Summit a few weeks back, and after a brief chat about some of the things he was up to at AES, I knew I needed to get him on the show! In this interview, Nick and I explore how AES is implementing machine learning across multiple domains at the company. We dig into several examples falling under the Natural Language, Computer Vision, and Cognitive Assets categories he’s established for his projects. Along the way we cover some of the key podcast episodes that helped Nick discover potentially applicable ML techniques, and how those are helping his team broaden the use of machine learning at AES. This was a fun and informative conversation that has a lot to offer. Thanks, Nick! The notes for this episode can be found at twimlai.com/talk/150.</itunes:summary>
      <content:encoded>
        <![CDATA[On today’s show I’m excited to share this interview with Nick Osborn, a longtime listener of the show and Leader of the Global Machine Learning Project Management Office at AES Corporation, a Fortune 200 power company. Nick and I met at my AI Summit a few weeks back, and after a brief chat about some of the things he was up to at AES, I knew I needed to get him on the show! In this interview, Nick and I explore how AES is implementing machine learning across multiple domains at the company. We dig into several examples falling under the Natural Language, Computer Vision, and Cognitive Assets categories he’s established for his projects. Along the way we cover some of the key podcast episodes that helped Nick discover potentially applicable ML techniques, and how those are helping his team broaden the use of machine learning at AES. This was a fun and informative conversation that has a lot to offer. Thanks, Nick! The notes for this episode can be found at twimlai.com/talk/150.]]>
      </content:encoded>
      <itunes:duration>2769</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/458347614]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7198452204.mp3?updated=1629216900"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Problem Formulation for Machine Learning with Romer Rosales - TWiML Talk #149</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/457012779-twiml-twiml-talk-149-problem-formulation-for-machine-learning-with-romer-rosales.mp3</link>
      <description>In this episode, I'm joined by Romer Rosales, Director of AI at LinkedIn. We begin with a discussion of graphical models and approximate probability inference, and he helps me make an important connection in the way I think about that topic. We then review some of the applications of machine learning at LinkedIn, and how what Romer calls their ‘holistic approach’ guides the evolution of ML projects at LinkedIn. This leads us into a really interesting discussion about problem formulation and selecting the right objective function for a given problem. We then talk through some of the tools they’ve built to scale their data science efforts, including large-scale constrained optimization solvers, online hyperparameter optimization and more. This was a really fun conversation that I’m sure you’ll enjoy! The notes for this show can be found at twimlai.com/talk/149.</description>
      <pubDate>Mon, 11 Jun 2018 20:55:21 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>149</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6c8440e6-ee98-11eb-9502-ef158287125e/image/artworks-000359530896-ceri9i-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I'm joined by Romer Rosales, Dir…</itunes:subtitle>
      <itunes:summary>In this episode, I'm joined by Romer Rosales, Director of AI at LinkedIn. We begin with a discussion of graphical models and approximate probability inference, and he helps me make an important connection in the way I think about that topic. We then review some of the applications of machine learning at LinkedIn, and how what Romer calls their ‘holistic approach’ guides the evolution of ML projects at LinkedIn. This leads us into a really interesting discussion about problem formulation and selecting the right objective function for a given problem. We then talk through some of the tools they’ve built to scale their data science efforts, including large-scale constrained optimization solvers, online hyperparameter optimization and more. This was a really fun conversation that I’m sure you’ll enjoy! The notes for this show can be found at twimlai.com/talk/149.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I'm joined by Romer Rosales, Director of AI at LinkedIn. We begin with a discussion of graphical models and approximate probability inference, and he helps me make an important connection in the way I think about that topic. We then review some of the applications of machine learning at LinkedIn, and how what Romer calls their ‘holistic approach’ guides the evolution of ML projects at LinkedIn. This leads us into a really interesting discussion about problem formulation and selecting the right objective function for a given problem. We then talk through some of the tools they’ve built to scale their data science efforts, including large-scale constrained optimization solvers, online hyperparameter optimization and more. This was a really fun conversation that I’m sure you’ll enjoy! The notes for this show can be found at twimlai.com/talk/149.]]>
      </content:encoded>
      <itunes:duration>3028</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/457012779]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5895762780.mp3?updated=1629216901"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI for Materials Discovery with Greg Mulholland - TWiML Talk #148</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/455258127-twiml-twiml-talk-148-ai-for-materials-discovery-with-greg-mulholland.mp3</link>
      <description>In this episode I’m joined by Greg Mulholland, Founder and CEO of Citrine Informatics, which is applying AI to the discovery and development of new materials. Greg and I start out with an exploration of some of the challenges of the status quo in materials science, and what’s to be gained by introducing machine learning into this process. We discuss how limitations in materials manifest themselves, and Greg shares a few examples from the company’s work optimizing battery components and solar cells. We dig into the role and sources of data used in applying ML in materials, and some of the unique challenges to collecting it, and discuss the pipeline and algorithms Citrine uses to deliver its service. This was a fun conversation that spans physics, chemistry, and of course machine learning, and I hope you enjoy it. The notes for this show can be found at twimlai.com/talk/148.</description>
      <pubDate>Thu, 07 Jun 2018 20:07:58 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>148</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6c9f91ac-ee98-11eb-9502-b73e9e4a6516/image/artworks-000358049838-p5m1xr-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode I’m joined by Greg Mulholland, Fo…</itunes:subtitle>
      <itunes:summary>In this episode I’m joined by Greg Mulholland, Founder and CEO of Citrine Informatics, which is applying AI to the discovery and development of new materials. Greg and I start out with an exploration of some of the challenges of the status quo in materials science, and what’s to be gained by introducing machine learning into this process. We discuss how limitations in materials manifest themselves, and Greg shares a few examples from the company’s work optimizing battery components and solar cells. We dig into the role and sources of data used in applying ML in materials, and some of the unique challenges to collecting it, and discuss the pipeline and algorithms Citrine uses to deliver its service. This was a fun conversation that spans physics, chemistry, and of course machine learning, and I hope you enjoy it. The notes for this show can be found at twimlai.com/talk/148.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode I’m joined by Greg Mulholland, Founder and CEO of Citrine Informatics, which is applying AI to the discovery and development of new materials. Greg and I start out with an exploration of some of the challenges of the status quo in materials science, and what’s to be gained by introducing machine learning into this process. We discuss how limitations in materials manifest themselves, and Greg shares a few examples from the company’s work optimizing battery components and solar cells. We dig into the role and sources of data used in applying ML in materials, and some of the unique challenges to collecting it, and discuss the pipeline and algorithms Citrine uses to deliver its service. This was a fun conversation that spans physics, chemistry, and of course machine learning, and I hope you enjoy it. The notes for this show can be found at twimlai.com/talk/148.]]>
      </content:encoded>
      <itunes:duration>2544</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/455258127]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1053483403.mp3?updated=1629216893"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Data Innovation &amp; AI at Capital One with Adam Wenchel - TWiML Talk #147</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/453659808-twiml-twiml-talk-147-data-innovation-ai-at-capital-one-with-adam-wenchel.mp3</link>
      <description>In this episode I’m joined by Adam Wenchel, vice president of AI and Data Innovation at Capital One, to discuss how Machine Learning &amp; AI are being integrated into their day-to-day practices, and how those advances benefit the customer. In our conversation, we look into a few of the many applications of AI at the bank, including fraud detection, money laundering, customer service, and automating back office processes. Adam describes some of the challenges of applying ML in financial services and how Capital One maintains consistent portfolio management practices across the organization. We also discuss how the bank has organized to scale their machine learning efforts, and the steps they’ve taken to overcome the talent shortage in the space. The notes for this show can be found at twimlai.com/talk/147.</description>
      <pubDate>Mon, 04 Jun 2018 17:17:06 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>147</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6cbf0262-ee98-11eb-9502-83b64dd59328/image/artworks-000356593788-2e4ulv-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode I’m joined by Adam Wenchel, vice …</itunes:subtitle>
      <itunes:summary>In this episode I’m joined by Adam Wenchel, vice president of AI and Data Innovation at Capital One, to discuss how Machine Learning &amp; AI are being integrated into their day-to-day practices, and how those advances benefit the customer. In our conversation, we look into a few of the many applications of AI at the bank, including fraud detection, money laundering, customer service, and automating back office processes. Adam describes some of the challenges of applying ML in financial services and how Capital One maintains consistent portfolio management practices across the organization. We also discuss how the bank has organized to scale their machine learning efforts, and the steps they’ve taken to overcome the talent shortage in the space. The notes for this show can be found at twimlai.com/talk/147.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>In this episode I’m joined by Adam Wenchel, vice president of AI and Data Innovation at Capital One, to discuss how Machine Learning &amp; AI are being integrated into their day-to-day practices, and how those advances benefit the customer. In our conversation, we look into a few of the many applications of AI at the bank, including fraud detection, money laundering, customer service, and automating back office processes. Adam describes some of the challenges of applying ML in financial services and how Capital One maintains consistent portfolio management practices across the organization. We also discuss how the bank has organized to scale their machine learning efforts, and the steps they’ve taken to overcome the talent shortage in the space. The notes for this show can be found at twimlai.com/talk/147.</p>]]>
      </content:encoded>
      <itunes:duration>2706</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/453659808]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4026822057.mp3?updated=1629216893"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Gradient Compression for Distributed Training with Song Han - TWiML Talk #146</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/451775115-twiml-twiml-talk-146-deep-gradient-compression-for-distributed-training-with-song-han.mp3</link>
      <description>On today’s show I chat with Song Han, assistant professor in MIT’s EECS department, about his research on Deep Gradient Compression. In our conversation, we explore the challenge of distributed training for deep neural networks and the idea of compressing the gradient exchange to allow it to be done more efficiently. Song details the evolution of distributed training systems based on this idea, and provides a few examples of centralized and decentralized distributed training architectures such as Uber’s Horovod, as well as the approaches native to PyTorch and TensorFlow. Song also addresses potential issues that arise when considering distributed training, such as loss of accuracy and generalizability, and much more. The notes for this show can be found at twimlai.com/talk/146.</description>
      <pubDate>Thu, 31 May 2018 15:47:20 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>146</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6cda8410-ee98-11eb-9502-777d00fd2a9a/image/artworks-000354978468-zb4soa-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>On today’s show I chat with Song Han, assistant p…</itunes:subtitle>
      <itunes:summary>On today’s show I chat with Song Han, assistant professor in MIT’s EECS department, about his research on Deep Gradient Compression. In our conversation, we explore the challenge of distributed training for deep neural networks and the idea of compressing the gradient exchange to allow it to be done more efficiently. Song details the evolution of distributed training systems based on this idea, and provides a few examples of centralized and decentralized distributed training architectures such as Uber’s Horovod, as well as the approaches native to PyTorch and TensorFlow. Song also addresses potential issues that arise when considering distributed training, such as loss of accuracy and generalizability, and much more. The notes for this show can be found at twimlai.com/talk/146.</itunes:summary>
      <content:encoded>
        <![CDATA[On today’s show I chat with Song Han, assistant professor in MIT’s EECS department, about his research on Deep Gradient Compression. In our conversation, we explore the challenge of distributed training for deep neural networks and the idea of compressing the gradient exchange to allow it to be done more efficiently. Song details the evolution of distributed training systems based on this idea, and provides a few examples of centralized and decentralized distributed training architectures such as Uber’s Horovod, as well as the approaches native to PyTorch and TensorFlow. Song also addresses potential issues that arise when considering distributed training, such as loss of accuracy and generalizability, and much more. The notes for this show can be found at twimlai.com/talk/146.]]>
      </content:encoded>
      <itunes:duration>2772</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/451775115]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2724395611.mp3?updated=1629216895"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Masked Autoregressive Flow for Density Estimation with George Papamakarios - TWiML Talk #145</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/450384765-twiml-twiml-talk-145-masked-autoregressive-flow-for-density-estimation-with-george-papamakarios.mp3</link>
      <description>In this episode, University of Edinburgh PhD student George Papamakarios and I discuss his paper “Masked Autoregressive Flow for Density Estimation.” George walks us through the idea of Masked Autoregressive Flow, which uses neural networks to produce estimates of probability densities from a set of input examples. We discuss some of the related work that’s laid the groundwork for his research, including Inverse Autoregressive Flow, Real NVP and Masked Auto-encoders. We also look at the properties of probability density networks and discuss some of the challenges associated with this effort. The notes for this show can be found at twimlai.com/talk/145.</description>
      <pubDate>Mon, 28 May 2018 19:20:13 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>145</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6cf56cbc-ee98-11eb-9502-1f54db7bf996/image/artworks-000353783694-ut5pvc-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, University of Edinburgh PhD stud…</itunes:subtitle>
      <itunes:summary>In this episode, University of Edinburgh PhD student George Papamakarios and I discuss his paper “Masked Autoregressive Flow for Density Estimation.” George walks us through the idea of Masked Autoregressive Flow, which uses neural networks to produce estimates of probability densities from a set of input examples. We discuss some of the related work that’s laid the groundwork for his research, including Inverse Autoregressive Flow, Real NVP and Masked Auto-encoders. We also look at the properties of probability density networks and discuss some of the challenges associated with this effort. The notes for this show can be found at twimlai.com/talk/145.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, University of Edinburgh PhD student George Papamakarios and I discuss his paper “Masked Autoregressive Flow for Density Estimation.” George walks us through the idea of Masked Autoregressive Flow, which uses neural networks to produce estimates of probability densities from a set of input examples. We discuss some of the related work that’s laid the groundwork for his research, including Inverse Autoregressive Flow, Real NVP and Masked Auto-encoders. We also look at the properties of probability density networks and discuss some of the challenges associated with this effort. The notes for this show can be found at twimlai.com/talk/145.]]>
      </content:encoded>
      <itunes:duration>2077</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/450384765]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7967599252.mp3?updated=1629216881"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Training Data for Computer Vision at Figure Eight with Qazaleh Mirsharif - TWiML Talk #144</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/449148126-twiml-twiml-talk-144-training-data-for-computer-vision-at-figure-eight-with-qazaleh-mirsharif.mp3</link>
      <description>For today’s show, the last in our TrainAI series, I'm joined by Qazaleh Mirsharif, a machine learning scientist working on computer vision at Figure Eight. Qazaleh and I caught up at the TrainAI conference to discuss a couple of the projects she’s worked on in that field, namely her research into the classification of retinal images and her work on parking sign detection from Google Street View images. The former, which attempted to diagnose diseases like diabetic retinopathy using retinal scan images, is similar to the work I spoke with Ryan Poplin about on TWiML Talk #122. In my conversation with Qazaleh we focus on how she built her datasets for each of these projects and some of the key lessons she’s learned along the way. The notes for this show can be found at twimlai.com/talk/144. For series details, visit twimlai.com/trainai2018.</description>
      <pubDate>Fri, 25 May 2018 19:27:38 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>144</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6d16b2fa-ee98-11eb-9502-17b376b68528/image/artworks-000352729746-u9sqa2-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>For today’s show, the last in our TrainAI series,…</itunes:subtitle>
      <itunes:summary>For today’s show, the last in our TrainAI series, I'm joined by Qazaleh Mirsharif, a machine learning scientist working on computer vision at Figure Eight. Qazaleh and I caught up at the TrainAI conference to discuss a couple of the projects she’s worked on in that field, namely her research into the classification of retinal images and her work on parking sign detection from Google Street View images. The former, which attempted to diagnose diseases like diabetic retinopathy using retinal scan images, is similar to the work I spoke with Ryan Poplin about on TWiML Talk #122. In my conversation with Qazaleh we focus on how she built her datasets for each of these projects and some of the key lessons she’s learned along the way. The notes for this show can be found at twimlai.com/talk/144. For series details, visit twimlai.com/trainai2018.</itunes:summary>
      <content:encoded>
        <![CDATA[For today’s show, the last in our TrainAI series, I'm joined by Qazaleh Mirsharif, a machine learning scientist working on computer vision at Figure Eight. Qazaleh and I caught up at the TrainAI conference to discuss a couple of the projects she’s worked on in that field, namely her research into the classification of retinal images and her work on parking sign detection from Google Street View images. The former, which attempted to diagnose diseases like diabetic retinopathy using retinal scan images, is similar to the work I spoke with Ryan Poplin about on TWiML Talk #122. In my conversation with Qazaleh we focus on how she built her datasets for each of these projects and some of the key lessons she’s learned along the way. The notes for this show can be found at twimlai.com/talk/144. For series details, visit twimlai.com/trainai2018.]]>
      </content:encoded>
      <itunes:duration>1314</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/449148126]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8911639054.mp3?updated=1629216858"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Agile Data Science with Sarah Aerni - TWiML Talk #143</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/448687032-twiml-twiml-talk-143-agile-data-science-with-sarah-aerni.mp3</link>
      <description>Today we continue our TrainAI series with Sarah Aerni, Director of Data Science at Salesforce Einstein. Sarah and I sat down at the TrainAI conference to discuss her talk “Notes from the Field: The Platform, People, and Processes of Agile Data Science.” Sarah and I dig into the concept of agile data science, exploring what it means to her and how she’s seen it done at Salesforce and other places she’s worked. We also dig into the notion of machine learning platforms, which is also a keen area of interest for me. We discuss some of the common elements we’ve seen in ML platforms, and when it makes sense for an organization to start building one. The notes for this show can be found at twimlai.com/talk/143. For more details on the TrainAI series, visit twimlai.com/trainai2018.</description>
      <pubDate>Thu, 24 May 2018 19:55:59 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>143</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6d3170a4-ee98-11eb-9502-bb33a6029d2c/image/artworks-000352323105-gpyh5x-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we continue our TrainAI series with Sarah A…</itunes:subtitle>
      <itunes:summary>Today we continue our TrainAI series with Sarah Aerni, Director of Data Science at Salesforce Einstein. Sarah and I sat down at the TrainAI conference to discuss her talk “Notes from the Field: The Platform, People, and Processes of Agile Data Science.” Sarah and I dig into the concept of agile data science, exploring what it means to her and how she’s seen it done at Salesforce and other places she’s worked. We also dig into the notion of machine learning platforms, which is also a keen area of interest for me. We discuss some of the common elements we’ve seen in ML platforms, and when it makes sense for an organization to start building one. The notes for this show can be found at twimlai.com/talk/143. For more details on the TrainAI series, visit twimlai.com/trainai2018.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we continue our TrainAI series with Sarah Aerni, Director of Data Science at Salesforce Einstein. Sarah and I sat down at the TrainAI conference to discuss her talk “Notes from the Field: The Platform, People, and Processes of Agile Data Science.” Sarah and I dig into the concept of agile data science, exploring what it means to her and how she’s seen it done at Salesforce and other places she’s worked. We also dig into the notion of machine learning platforms, which is also a keen area of interest for me. We discuss some of the common elements we’ve seen in ML platforms, and when it makes sense for an organization to start building one. The notes for this show can be found at twimlai.com/talk/143. For more details on the TrainAI series, visit twimlai.com/trainai2018.]]>
      </content:encoded>
      <itunes:duration>2308</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/448687032]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2099369190.mp3?updated=1629216885"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Tensor Operations for Machine Learning with Anima Anandkumar - TWiML Talk #142</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/448207584-twiml-twiml-talk-142-tensor-operations-for-machine-learning-with-anima-anandkumar.mp3</link>
      <description>In this episode of our TrainAI series, I sit down with Anima Anandkumar, Bren Professor at Caltech and Principal Scientist with Amazon Web Services. Anima joined me to discuss the research coming out of her “Tensorlab” at Caltech. In our conversation, we review the application of tensor operations to machine learning and discuss how an example problem–document categorization–might be approached using 3-dimensional tensors to discover topics and relationships between topics. We touch on multidimensionality, expectation maximization, and the Amazon products SageMaker and Comprehend. Anima also goes into how to tensorize neural networks and apply our understanding of tensor algebra to perform better architecture searches. The notes for this show can be found at twimlai.com/talk/142. For series info, visit twimlai.com/trainai2018.</description>
      <pubDate>Wed, 23 May 2018 20:15:55 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>142</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6d5b09f0-ee98-11eb-9502-ab1e1503bdb7/image/artworks-000351895998-6b9jtn-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our TrainAI series, I sit down…</itunes:subtitle>
      <itunes:summary>In this episode of our TrainAI series, I sit down with Anima Anandkumar, Bren Professor at Caltech and Principal Scientist with Amazon Web Services. Anima joined me to discuss the research coming out of her “Tensorlab” at Caltech. In our conversation, we review the application of tensor operations to machine learning and discuss how an example problem–document categorization–might be approached using 3-dimensional tensors to discover topics and relationships between topics. We touch on multidimensionality, expectation maximization, and the Amazon products SageMaker and Comprehend. Anima also goes into how to tensorize neural networks and apply our understanding of tensor algebra to perform better architecture searches. The notes for this show can be found at twimlai.com/talk/142. For series info, visit twimlai.com/trainai2018.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our TrainAI series, I sit down with Anima Anandkumar, Bren Professor at Caltech and Principal Scientist with Amazon Web Services. Anima joined me to discuss the research coming out of her “Tensorlab” at Caltech. In our conversation, we review the application of tensor operations to machine learning and discuss how an example problem–document categorization–might be approached using 3-dimensional tensors to discover topics and relationships between topics. We touch on multidimensionality, expectation maximization, and the Amazon products SageMaker and Comprehend. Anima also goes into how to tensorize neural networks and apply our understanding of tensor algebra to perform better architecture searches. The notes for this show can be found at twimlai.com/talk/142. For series info, visit twimlai.com/trainai2018.]]>
      </content:encoded>
      <itunes:duration>2046</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/448207584]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5796819008.mp3?updated=1629216880"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Learning for Live-Cell Imaging with David Van Valen - TWiML Talk #141</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/447696312-twiml-twiml-talk-141-deep-learning-for-live-cell-imaging-with-david-van-valen.mp3</link>
      <description>In today’s show, I sit down with David Van Valen, assistant professor of Bioengineering &amp; Biology at Caltech. David joined me after his talk at the Figure Eight TrainAI conference to chat about his research using image recognition and segmentation techniques in biological settings. In particular, we discuss his use of deep learning to automate the analysis of individual cells in live-cell imaging experiments. We had a really interesting discussion around the various practicalities he’s learned about training deep neural networks for image analysis, and he shares some great insights into which of the techniques from the deep learning research have worked for him and which haven’t. If you’re a fan of our Nerd Alert shows, you’ll really like this one. Enjoy! The notes for this show can be found at twimlai.com/talk/141. For more information on this series, visit twimlai.com/trainai2018.</description>
      <pubDate>Tue, 22 May 2018 19:33:02 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>141</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6d850610-ee98-11eb-9502-5f6caf887370/image/artworks-000351392271-ejgadz-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In today’s show, I sit down with David Van Valen,…</itunes:subtitle>
      <itunes:summary>In today’s show, I sit down with David Van Valen, assistant professor of Bioengineering &amp; Biology at Caltech. David joined me after his talk at the Figure Eight TrainAI conference to chat about his research using image recognition and segmentation techniques in biological settings. In particular, we discuss his use of deep learning to automate the analysis of individual cells in live-cell imaging experiments. We had a really interesting discussion around the various practicalities he’s learned about training deep neural networks for image analysis, and he shares some great insights into which techniques from deep learning research have worked for him and which haven’t. If you’re a fan of our Nerd Alert shows, you’ll really like this one. Enjoy! The notes for this show can be found at twimlai.com/talk/141. For more information on this series, visit twimlai.com/trainai2018.</itunes:summary>
      <content:encoded>
        <![CDATA[In today’s show, I sit down with David Van Valen, assistant professor of Bioengineering &amp; Biology at Caltech. David joined me after his talk at the Figure Eight TrainAI conference to chat about his research using image recognition and segmentation techniques in biological settings. In particular, we discuss his use of deep learning to automate the analysis of individual cells in live-cell imaging experiments. We had a really interesting discussion around the various practicalities he’s learned about training deep neural networks for image analysis, and he shares some great insights into which techniques from deep learning research have worked for him and which haven’t. If you’re a fan of our Nerd Alert shows, you’ll really like this one. Enjoy! The notes for this show can be found at twimlai.com/talk/141. For more information on this series, visit twimlai.com/trainai2018.]]>
      </content:encoded>
      <itunes:duration>2233</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/447696312]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7011878245.mp3?updated=1629216883"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Checking in with the Master w/ Garry Kasparov - TWiML Talk #140</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/447233793-twiml-twiml-talk-140-checking-in-with-the-master-with-garry-kasparov.mp3</link>
      <description>In this episode I’m joined by legendary chess champion, author, and fellow at the Oxford Martin School, Garry Kasparov. Garry and I sat down after his keynote at the Figure Eight Train AI conference in San Francisco last week. We discuss his bouts with the chess-playing computer Deep Blue, which became the first computer system to defeat a reigning world champion in their 1997 rematch, and how that experience has helped shape his thinking on artificially intelligent systems. We explore his perspective on the evolution of AI, the ways in which chess and Deep Blue differ from Go and AlphaGo, and the significance of DeepMind’s AlphaGo Zero. We also talk through his views on the relationship between humans and machines, and how he expects it to change over time. The notes for this show can be found at twimlai.com/talk/140. For more information on this series, visit twimlai.com/trainai2018.</description>
      <pubDate>Mon, 21 May 2018 20:44:29 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>140</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6da81e7a-ee98-11eb-9502-9391e7a6e3d6/image/artworks-000351344475-4e32tk-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode I’m joined by legendary chess cha…</itunes:subtitle>
      <itunes:summary>In this episode I’m joined by legendary chess champion, author, and fellow at the Oxford Martin School, Garry Kasparov. Garry and I sat down after his keynote at the Figure Eight Train AI conference in San Francisco last week. We discuss his bouts with the chess-playing computer Deep Blue, which became the first computer system to defeat a reigning world champion in their 1997 rematch, and how that experience has helped shape his thinking on artificially intelligent systems. We explore his perspective on the evolution of AI, the ways in which chess and Deep Blue differ from Go and AlphaGo, and the significance of DeepMind’s AlphaGo Zero. We also talk through his views on the relationship between humans and machines, and how he expects it to change over time. The notes for this show can be found at twimlai.com/talk/140. For more information on this series, visit twimlai.com/trainai2018.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode I’m joined by legendary chess champion, author, and fellow at the Oxford Martin School, Garry Kasparov. Garry and I sat down after his keynote at the Figure Eight Train AI conference in San Francisco last week. We discuss his bouts with the chess-playing computer Deep Blue, which became the first computer system to defeat a reigning world champion in their 1997 rematch, and how that experience has helped shape his thinking on artificially intelligent systems. We explore his perspective on the evolution of AI, the ways in which chess and Deep Blue differ from Go and AlphaGo, and the significance of DeepMind’s AlphaGo Zero. We also talk through his views on the relationship between humans and machines, and how he expects it to change over time. The notes for this show can be found at twimlai.com/talk/140. For more information on this series, visit twimlai.com/trainai2018.]]>
      </content:encoded>
      <itunes:duration>1964</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/447233793]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8044548512.mp3?updated=1629216878"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Exploring AI-Generated Music with Taryn Southern - TWiML Talk #139</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/445401585-twiml-twiml-talk-139-exploring-ai-generated-music-with-taryn-southern.mp3</link>
      <description>In this episode I’m joined by Taryn Southern, a singer, digital storyteller, and YouTuber whose upcoming album I AM AI will be produced completely with AI-based tools. Taryn and I explore all aspects of what it means to create music with modern AI-based tools, and the different processes she’s used to create her singles Break Free, Voices in My Head, and more. She also provides a rundown of the many tools she’s used in this space, including Google Magenta, Watson Beat, Amper, Landr and more. This was a super fun interview that I think you’ll get a kick out of. The notes for this show can be found at twimlai.com/talk/139.</description>
      <pubDate>Thu, 17 May 2018 17:02:38 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>139</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6dc9ef32-ee98-11eb-9502-c31cdd9e8fa4/image/artworks-000349447533-kjn6w8-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode I’m joined by Taryn Southern - a …</itunes:subtitle>
      <itunes:summary>In this episode I’m joined by Taryn Southern, a singer, digital storyteller, and YouTuber whose upcoming album I AM AI will be produced completely with AI-based tools. Taryn and I explore all aspects of what it means to create music with modern AI-based tools, and the different processes she’s used to create her singles Break Free, Voices in My Head, and more. She also provides a rundown of the many tools she’s used in this space, including Google Magenta, Watson Beat, Amper, Landr and more. This was a super fun interview that I think you’ll get a kick out of. The notes for this show can be found at twimlai.com/talk/139.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode I’m joined by Taryn Southern, a singer, digital storyteller, and YouTuber whose upcoming album I AM AI will be produced completely with AI-based tools. Taryn and I explore all aspects of what it means to create music with modern AI-based tools, and the different processes she’s used to create her singles Break Free, Voices in My Head, and more. She also provides a rundown of the many tools she’s used in this space, including Google Magenta, Watson Beat, Amper, Landr and more. This was a super fun interview that I think you’ll get a kick out of. The notes for this show can be found at twimlai.com/talk/139.]]>
      </content:encoded>
      <itunes:duration>1984</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/445401585]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5265878488.mp3?updated=1629216874"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Practical Deep Learning with Rachel Thomas - TWiML Talk #138</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/443972214-twiml-twiml-talk-138-practical-deep-learning-with-rachel-thomas.mp3</link>
      <description>In this episode, I'm joined by Rachel Thomas, founder and researcher at Fast AI. If you’re not familiar with Fast AI, the company offers a series of courses including Practical Deep Learning for Coders, Cutting Edge Deep Learning for Coders, and Rachel’s Computational Linear Algebra course. The courses are designed to make deep learning more accessible to those without the extensive math backgrounds some other courses assume. Rachel and I cover a lot of ground in this conversation, starting with the philosophy and goals behind the Fast AI courses. We also cover Fast AI’s recent decision to switch their courses from TensorFlow to PyTorch, the reasons for this, and the lessons they’ve learned in the process. We discuss the role of the Fast AI deep learning library as well, and how it recently helped their team achieve top results on a popular industry benchmark, improving training time and training cost by a factor of more than ten. The notes for this show can be found at twimlai.com/talk/138.</description>
      <pubDate>Mon, 14 May 2018 18:14:57 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>138</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6de55d6c-ee98-11eb-9502-03d6d7469df0/image/artworks-000348197598-1zzrii-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, i'm joined by Rachel Thomas, fou…</itunes:subtitle>
      <itunes:summary>In this episode, I'm joined by Rachel Thomas, founder and researcher at Fast AI. If you’re not familiar with Fast AI, the company offers a series of courses including Practical Deep Learning for Coders, Cutting Edge Deep Learning for Coders, and Rachel’s Computational Linear Algebra course. The courses are designed to make deep learning more accessible to those without the extensive math backgrounds some other courses assume. Rachel and I cover a lot of ground in this conversation, starting with the philosophy and goals behind the Fast AI courses. We also cover Fast AI’s recent decision to switch their courses from TensorFlow to PyTorch, the reasons for this, and the lessons they’ve learned in the process. We discuss the role of the Fast AI deep learning library as well, and how it recently helped their team achieve top results on a popular industry benchmark, improving training time and training cost by a factor of more than ten. The notes for this show can be found at twimlai.com/talk/138.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I'm joined by Rachel Thomas, founder and researcher at Fast AI. If you’re not familiar with Fast AI, the company offers a series of courses including Practical Deep Learning for Coders, Cutting Edge Deep Learning for Coders, and Rachel’s Computational Linear Algebra course. The courses are designed to make deep learning more accessible to those without the extensive math backgrounds some other courses assume. Rachel and I cover a lot of ground in this conversation, starting with the philosophy and goals behind the Fast AI courses. We also cover Fast AI’s recent decision to switch their courses from TensorFlow to PyTorch, the reasons for this, and the lessons they’ve learned in the process. We discuss the role of the Fast AI deep learning library as well, and how it recently helped their team achieve top results on a popular industry benchmark, improving training time and training cost by a factor of more than ten. The notes for this show can be found at twimlai.com/talk/138.]]>
      </content:encoded>
      <itunes:duration>2659</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/443972214]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1265392878.mp3?updated=1629216891"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Kinds of Intelligence w/ Jose Hernandez-Orallo - TWiML Talk #137</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/442123365-twiml-twiml-talk-137-kinds-of-intelligence-types-tests-meeting-the-needs-of-society-w-jose-hernandez-orallo.mp3</link>
      <description>In this episode, I'm joined by Jose Hernandez-Orallo, professor in the department of information systems and computing at Universitat Politècnica de València and fellow at the Leverhulme Centre for the Future of Intelligence, working on the Kinds of Intelligence Project. Jose and I caught up at NIPS last year after the Kinds of Intelligence Symposium that he helped organize there. In our conversation, we discuss the three main themes of the symposium: understanding and identifying the main types of intelligence, including non-human intelligence, developing better ways to test and measure these intelligences, and understanding how and where research efforts should focus to best benefit society. The notes for this show can be found at twimlai.com/talk/137.</description>
      <pubDate>Thu, 10 May 2018 15:35:44 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>137</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6e0787de-ee98-11eb-9502-d77181f2c326/image/artworks-000346535787-pzlt4g-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I'm joined by Jose Hernandez-Ora…</itunes:subtitle>
      <itunes:summary>In this episode, I'm joined by Jose Hernandez-Orallo, professor in the department of information systems and computing at Universitat Politècnica de València and fellow at the Leverhulme Centre for the Future of Intelligence, working on the Kinds of Intelligence Project. Jose and I caught up at NIPS last year after the Kinds of Intelligence Symposium that he helped organize there. In our conversation, we discuss the three main themes of the symposium: understanding and identifying the main types of intelligence, including non-human intelligence, developing better ways to test and measure these intelligences, and understanding how and where research efforts should focus to best benefit society. The notes for this show can be found at twimlai.com/talk/137.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I'm joined by Jose Hernandez-Orallo, professor in the department of information systems and computing at Universitat Politècnica de València and fellow at the Leverhulme Centre for the Future of Intelligence, working on the Kinds of Intelligence Project. Jose and I caught up at NIPS last year after the Kinds of Intelligence Symposium that he helped organize there. In our conversation, we discuss the three main themes of the symposium: understanding and identifying the main types of intelligence, including non-human intelligence, developing better ways to test and measure these intelligences, and understanding how and where research efforts should focus to best benefit society. The notes for this show can be found at twimlai.com/talk/137.]]>
      </content:encoded>
      <itunes:duration>2658</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/442123365]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1050722154.mp3?updated=1629216891"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Taming arXiv with Natural Language Processing w/ John Bohannon - TWiML Talk #136</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/440639943-twiml-twiml-talk-136-taming-arxiv-w-natural-language-processing-with-john-bohannon.mp3</link>
      <description>In this episode I'm joined by John Bohannon, Director of Science at AI startup Primer. As you all may know, a few weeks ago we released my interview with Google legend Jeff Dean, which, by the way, you should definitely check out if you haven’t already. Anyway, in that interview, Jeff mentions the recent explosion of machine learning papers on arXiv, which I responded to jokingly by asking whether Google had already developed the AI system to help them summarize and track all of them. While Jeff didn’t have anything specific to offer, a listener reached out and let me know that John was in fact already working on this problem. In our conversation, John and I discuss his work on Primer Science, a tool that harvests content uploaded to arXiv, sorts it into natural topics using unsupervised learning, then gives relevant summaries of the activity happening in different innovation areas. We spend a good amount of time on the inner workings of Primer Science, including their data pipeline and some of the tools they use, how they determine “ground truth” for training their models, and the use of heuristics to supplement NLP in their processing. The notes for this show can be found at twimlai.com/talk/136.</description>
      <pubDate>Mon, 07 May 2018 16:25:15 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>136</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6e270294-ee98-11eb-9502-9fa099f5a63e/image/artworks-000345294876-tkjizt-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode i'm joined by John Bohannan, Dire…</itunes:subtitle>
      <itunes:summary>In this episode I'm joined by John Bohannon, Director of Science at AI startup Primer. As you all may know, a few weeks ago we released my interview with Google legend Jeff Dean, which, by the way, you should definitely check out if you haven’t already. Anyway, in that interview, Jeff mentions the recent explosion of machine learning papers on arXiv, which I responded to jokingly by asking whether Google had already developed the AI system to help them summarize and track all of them. While Jeff didn’t have anything specific to offer, a listener reached out and let me know that John was in fact already working on this problem. In our conversation, John and I discuss his work on Primer Science, a tool that harvests content uploaded to arXiv, sorts it into natural topics using unsupervised learning, then gives relevant summaries of the activity happening in different innovation areas. We spend a good amount of time on the inner workings of Primer Science, including their data pipeline and some of the tools they use, how they determine “ground truth” for training their models, and the use of heuristics to supplement NLP in their processing. The notes for this show can be found at twimlai.com/talk/136.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode I'm joined by John Bohannon, Director of Science at AI startup Primer. As you all may know, a few weeks ago we released my interview with Google legend Jeff Dean, which, by the way, you should definitely check out if you haven’t already. Anyway, in that interview, Jeff mentions the recent explosion of machine learning papers on arXiv, which I responded to jokingly by asking whether Google had already developed the AI system to help them summarize and track all of them. While Jeff didn’t have anything specific to offer, a listener reached out and let me know that John was in fact already working on this problem. In our conversation, John and I discuss his work on Primer Science, a tool that harvests content uploaded to arXiv, sorts it into natural topics using unsupervised learning, then gives relevant summaries of the activity happening in different innovation areas. We spend a good amount of time on the inner workings of Primer Science, including their data pipeline and some of the tools they use, how they determine “ground truth” for training their models, and the use of heuristics to supplement NLP in their processing. The notes for this show can be found at twimlai.com/talk/136.]]>
      </content:encoded>
      <itunes:duration>3257</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/440639943]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6699185092.mp3?updated=1629216898"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Epsilon Software for Private Machine Learning with Chang Liu - TWiML Talk #135</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/439311513-twiml-twiml-talk-135-epsilon-software-for-private-machine-learning-with-chang-liu.mp3</link>
      <description>In this episode, our final episode in the Differential Privacy series, I speak with Chang Liu, applied research scientist at Georgian Partners, a venture capital firm that invests in growth-stage business software companies in the US and Canada. Chang joined me to discuss Georgian’s new offering, Epsilon, a software product that embodies the research, development, and lessons learned in helping their portfolio companies deliver differentially private machine learning solutions to their customers. In our conversation, Chang discusses some of the projects that led to the creation of Epsilon, including differentially private machine learning projects at Bluecore, WorkFusion and Integrate.ai. We explore some of the unique challenges of productizing differentially private ML, including business, people and technology issues. Finally, Chang provides some great pointers for those who’d like to further explore this field. The notes for this show can be found at twimlai.com/talk/135.</description>
      <pubDate>Fri, 04 May 2018 14:23:34 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>135</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6e4b096e-ee98-11eb-9502-bf3dea923462/image/artworks-000344152806-wkgwu6-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, our final episode in the Differe…</itunes:subtitle>
      <itunes:summary>In this episode, our final episode in the Differential Privacy series, I speak with Chang Liu, applied research scientist at Georgian Partners, a venture capital firm that invests in growth-stage business software companies in the US and Canada. Chang joined me to discuss Georgian’s new offering, Epsilon, a software product that embodies the research, development, and lessons learned in helping their portfolio companies deliver differentially private machine learning solutions to their customers. In our conversation, Chang discusses some of the projects that led to the creation of Epsilon, including differentially private machine learning projects at Bluecore, WorkFusion and Integrate.ai. We explore some of the unique challenges of productizing differentially private ML, including business, people and technology issues. Finally, Chang provides some great pointers for those who’d like to further explore this field. The notes for this show can be found at twimlai.com/talk/135.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, our final episode in the Differential Privacy series, I speak with Chang Liu, applied research scientist at Georgian Partners, a venture capital firm that invests in growth-stage business software companies in the US and Canada. Chang joined me to discuss Georgian’s new offering, Epsilon, a software product that embodies the research, development, and lessons learned in helping their portfolio companies deliver differentially private machine learning solutions to their customers. In our conversation, Chang discusses some of the projects that led to the creation of Epsilon, including differentially private machine learning projects at Bluecore, WorkFusion and Integrate.ai. We explore some of the unique challenges of productizing differentially private ML, including business, people and technology issues. Finally, Chang provides some great pointers for those who’d like to further explore this field. The notes for this show can be found at twimlai.com/talk/135.]]>
      </content:encoded>
      <itunes:duration>2811</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/439311513]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5160942510.mp3?updated=1629216894"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scalable Differential Privacy for Deep Learning with Nicolas Papernot - TWiML Talk #134</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/438836289-twiml-twiml-talk-134-scalable-differential-privacy-for-deep-learning-with-nicolas-papernot.mp3</link>
      <description>In this episode of our Differential Privacy series, I'm joined by Nicolas Papernot, Google PhD Fellow in Security and graduate student in the department of computer science at Penn State University. Nicolas and I continue this week’s look into differential privacy with a discussion of his recent paper, Semi-supervised Knowledge Transfer for Deep Learning From Private Training Data. In our conversation, Nicolas describes the Private Aggregation of Teacher Ensembles model proposed in this paper, and how it ensures differential privacy in a scalable manner that can be applied to deep neural networks. We also explore one of the interesting side effects of applying differential privacy to machine learning, namely that it inherently resists overfitting, leading to more generalized models. The notes for this show can be found at twimlai.com/talk/134.</description>
      <pubDate>Thu, 03 May 2018 15:52:19 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>134</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6e7229ea-ee98-11eb-9502-e33e0528e43d/image/artworks-000343724991-71rs72-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our Differential Privacy serie…</itunes:subtitle>
      <itunes:summary>In this episode of our Differential Privacy series, I'm joined by Nicolas Papernot, Google PhD Fellow in Security and graduate student in the department of computer science at Penn State University. Nicolas and I continue this week’s look into differential privacy with a discussion of his recent paper, Semi-supervised Knowledge Transfer for Deep Learning From Private Training Data. In our conversation, Nicolas describes the Private Aggregation of Teacher Ensembles model proposed in this paper, and how it ensures differential privacy in a scalable manner that can be applied to deep neural networks. We also explore one of the interesting side effects of applying differential privacy to machine learning, namely that it inherently resists overfitting, leading to more generalized models. The notes for this show can be found at twimlai.com/talk/134.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our Differential Privacy series, I'm joined by Nicolas Papernot, Google PhD Fellow in Security and graduate student in the department of computer science at Penn State University. Nicolas and I continue this week’s look into differential privacy with a discussion of his recent paper, Semi-supervised Knowledge Transfer for Deep Learning From Private Training Data. In our conversation, Nicolas describes the Private Aggregation of Teacher Ensembles model proposed in this paper, and how it ensures differential privacy in a scalable manner that can be applied to deep neural networks. We also explore one of the interesting side effects of applying differential privacy to machine learning, namely that it inherently resists overfitting, leading to more generalized models. The notes for this show can be found at twimlai.com/talk/134.]]>
      </content:encoded>
      <itunes:duration>3568</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/438836289]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5612597700.mp3?updated=1629216906"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Differential Privacy at Bluecore with Zahi Karam - TWiML Talk #133</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/437850480-twiml-twiml-talk-133-differential-privacy-at-bluecore-with-zahi-karam.mp3</link>
      <description>In this episode of our Differential Privacy series, I'm joined by Zahi Karam, Director of Data Science at Bluecore, whose retail marketing platform specializes in personalized email marketing. I sat down with Zahi at the Georgian Partners portfolio conference last year, where he gave me my initial exposure to the field of differential privacy, ultimately leading to this series. Zahi shared his insights into how differential privacy can be deployed in the real world and some of the technical and cultural challenges to doing so. We discuss the Bluecore use case in depth, including why and for whom they build differentially private machine learning models. The notes for this show can be found at twimlai.com/talk/133</description>
      <pubDate>Tue, 01 May 2018 16:11:40 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>133</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6e95012c-ee98-11eb-9502-4f9328e1041a/image/artworks-000342889320-v5w39m-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode of our Differential Privacy serie…</itunes:subtitle>
      <itunes:summary>In this episode of our Differential Privacy series, I'm joined by Zahi Karam, Director of Data Science at Bluecore, whose retail marketing platform specializes in personalized email marketing. I sat down with Zahi at the Georgian Partners portfolio conference last year, where he gave me my initial exposure to the field of differential privacy, ultimately leading to this series. Zahi shared his insights into how differential privacy can be deployed in the real world and some of the technical and cultural challenges to doing so. We discuss the Bluecore use case in depth, including why and for whom they build differentially private machine learning models. The notes for this show can be found at twimlai.com/talk/133</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode of our Differential Privacy series, I'm joined by Zahi Karam, Director of Data Science at Bluecore, whose retail marketing platform specializes in personalized email marketing. I sat down with Zahi at the Georgian Partners portfolio conference last year, where he gave me my initial exposure to the field of differential privacy, ultimately leading to this series. Zahi shared his insights into how differential privacy can be deployed in the real world and some of the technical and cultural challenges to doing so. We discuss the Bluecore use case in depth, including why and for whom they build differentially private machine learning models. The notes for this show can be found at twimlai.com/talk/133]]>
      </content:encoded>
      <itunes:duration>2288</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/437850480]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8794567454.mp3?updated=1629216887"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Differential Privacy Theory &amp; Practice with Aaron Roth - TWiML Talk #132</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/437262174-twiml-twiml-talk-132-differential-privacy-theory-practice-with-aaron-roth.mp3</link>
      <description>In the first episode of our Differential Privacy series, I'm joined by Aaron Roth, associate professor of computer science and information science at the University of Pennsylvania. Aaron is first and foremost a theoretician, and our conversation starts with him helping us understand the context and theory behind differential privacy, a research area he was fortunate to begin pursuing at its inception. We explore the application of differential privacy to machine learning systems, including the costs and challenges of doing so. Aaron also discusses quite a few examples of differential privacy in action, including work being done at Google, Apple and the US Census Bureau, along with some of the major research directions currently being explored in the field. The notes for this show can be found at twimlai.com/talk/132.</description>
      <pubDate>Mon, 30 Apr 2018 14:08:06 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>132</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6eb90f9a-ee98-11eb-9502-2f2b7b148622/image/artworks-000342392574-eq3tz0-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In the first episode of our Differential Privacy …</itunes:subtitle>
      <itunes:summary>In the first episode of our Differential Privacy series, I'm joined by Aaron Roth, associate professor of computer science and information science at the University of Pennsylvania. Aaron is first and foremost a theoretician, and our conversation starts with him helping us understand the context and theory behind differential privacy, a research area he was fortunate to begin pursuing at its inception. We explore the application of differential privacy to machine learning systems, including the costs and challenges of doing so. Aaron also discusses quite a few examples of differential privacy in action, including work being done at Google, Apple and the US Census Bureau, along with some of the major research directions currently being explored in the field. The notes for this show can be found at twimlai.com/talk/132.</itunes:summary>
      <content:encoded>
        <![CDATA[In the first episode of our Differential Privacy series, I'm joined by Aaron Roth, associate professor of computer science and information science at the University of Pennsylvania. Aaron is first and foremost a theoretician, and our conversation starts with him helping us understand the context and theory behind differential privacy, a research area he was fortunate to begin pursuing at its inception. We explore the application of differential privacy to machine learning systems, including the costs and challenges of doing so. Aaron also discusses quite a few examples of differential privacy in action, including work being done at Google, Apple and the US Census Bureau, along with some of the major research directions currently being explored in the field. The notes for this show can be found at twimlai.com/talk/132.]]>
      </content:encoded>
      <itunes:duration>2575</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/437262174]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1856722334.mp3?updated=1629216896"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Optimal Transport and Machine Learning with Marco Cuturi - TWiML Talk #131</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/435591321-twiml-twiml-talk-131-optimal-transport-and-machine-learning-with-marco-cuturi.mp3</link>
      <description>In this episode, I’m joined by Marco Cuturi, professor of statistics at Université Paris-Saclay. Marco and I spent some time discussing his work on Optimal Transport Theory at NIPS last year. In our discussion, Marco explains Optimal Transport, which provides a way for us to compare probability measures. We look at ways Optimal Transport can be used across machine learning applications, including graphical, NLP, and image examples. We also touch on GANs, or generative adversarial networks, and some of the challenges they present to the research community. The notes for this show can be found at twimlai.com/talk/131.</description>
      <pubDate>Thu, 26 Apr 2018 17:49:37 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>131</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6ed7f036-ee98-11eb-9502-236ace85114e/image/artworks-000340784280-7ntra6-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I’m joined by Marco Cuturi, prof…</itunes:subtitle>
      <itunes:summary>In this episode, I’m joined by Marco Cuturi, professor of statistics at Université Paris-Saclay. Marco and I spent some time discussing his work on Optimal Transport Theory at NIPS last year. In our discussion, Marco explains Optimal Transport, which provides a way for us to compare probability measures. We look at ways Optimal Transport can be used across machine learning applications, including graphical, NLP, and image examples. We also touch on GANs, or generative adversarial networks, and some of the challenges they present to the research community. The notes for this show can be found at twimlai.com/talk/131.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I’m joined by Marco Cuturi, professor of statistics at Université Paris-Saclay. Marco and I spent some time discussing his work on Optimal Transport Theory at NIPS last year. In our discussion, Marco explains Optimal Transport, which provides a way for us to compare probability measures. We look at ways Optimal Transport can be used across machine learning applications, including graphical, NLP, and image examples. We also touch on GANs, or generative adversarial networks, and some of the challenges they present to the research community. The notes for this show can be found at twimlai.com/talk/131.]]>
      </content:encoded>
      <itunes:duration>1957</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/435591321]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5409163050.mp3?updated=1629216873"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Collecting and Annotating Data for AI with Kiran Vajapey - TWiML Talk #130</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/433990842-twiml-twiml-talk-130-collecting-and-annotating-data-for-ai-with-kiran-vajapey.mp3</link>
      <description>In this episode, I’m joined by Kiran Vajapey, a human-computer interaction developer at Figure Eight. In this interview, Kiran shares some of what he’s learned through his work developing applications for data collection and annotation at Figure Eight and earlier in his career. We explore techniques like data augmentation, domain adaptation, and active and transfer learning for enhancing and enriching training datasets. We also touch on the use of ImageNet and other public datasets for real-world AI applications. If you like what you hear in this interview, Kiran will be speaking at my AI Summit April 30th and May 1st in Las Vegas, and I’ll be joining Kiran at the upcoming Figure Eight TrainAI conference, May 9th &amp; 10th in San Francisco. The notes for this show can be found at twimlai.com/talk/130</description>
      <pubDate>Mon, 23 Apr 2018 17:36:47 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>130</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6ef806dc-ee98-11eb-9502-57bd6c7b587f/image/artworks-000338921406-rg9oqm-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I’m joined by Kiran Vajapey, a h…</itunes:subtitle>
      <itunes:summary>In this episode, I’m joined by Kiran Vajapey, a human-computer interaction developer at Figure Eight. In this interview, Kiran shares some of what he’s learned through his work developing applications for data collection and annotation at Figure Eight and earlier in his career. We explore techniques like data augmentation, domain adaptation, and active and transfer learning for enhancing and enriching training datasets. We also touch on the use of ImageNet and other public datasets for real-world AI applications. If you like what you hear in this interview, Kiran will be speaking at my AI Summit April 30th and May 1st in Las Vegas, and I’ll be joining Kiran at the upcoming Figure Eight TrainAI conference, May 9th &amp; 10th in San Francisco. The notes for this show can be found at twimlai.com/talk/130</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I’m joined by Kiran Vajapey, a human-computer interaction developer at Figure Eight. In this interview, Kiran shares some of what he’s learned through his work developing applications for data collection and annotation at Figure Eight and earlier in his career. We explore techniques like data augmentation, domain adaptation, and active and transfer learning for enhancing and enriching training datasets. We also touch on the use of ImageNet and other public datasets for real-world AI applications. If you like what you hear in this interview, Kiran will be speaking at my AI Summit April 30th and May 1st in Las Vegas, and I’ll be joining Kiran at the upcoming Figure Eight TrainAI conference, May 9th &amp; 10th in San Francisco. The notes for this show can be found at twimlai.com/talk/130]]>
      </content:encoded>
      <itunes:duration>2418</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/433990842]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7865985058.mp3?updated=1629216884"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Autonomous Aerial Guidance, Navigation and Control Systems with Christopher Lum - TWiML Talk #129</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/431908098-twiml-twiml-talk-129-autonomous-aerial-guidance-navigation-and-control-systems-with-christopher-lum.mp3</link>
      <description>In this episode, I'm joined by Christopher Lum, Research Assistant Professor in the University of Washington’s Department of Aeronautics and Astronautics. Chris also co-heads the University’s Autonomous Flight Systems Lab, where he and his students are working on the guidance, navigation, and control of unmanned systems. In our conversation, we discuss some of the technical and regulatory challenges of building and deploying Unmanned Autonomous Systems. We also talk about some interesting work he’s doing on evolutionary path planning systems, as well as a precision agriculture use case. Finally, Chris shares some great starting places for those looking to begin a journey into autonomous systems research. The notes for this show can be found at twimlai.com/talk/129.</description>
      <pubDate>Thu, 19 Apr 2018 16:01:08 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>129</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6f19c1c8-ee98-11eb-9502-0bbf60a25072/image/artworks-000337278072-mh13g0-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I'm joined by Christopher Lu…</itunes:subtitle>
      <itunes:summary>In this episode, I'm joined by Christopher Lum, Research Assistant Professor in the University of Washington’s Department of Aeronautics and Astronautics. Chris also co-heads the University’s Autonomous Flight Systems Lab, where he and his students are working on the guidance, navigation, and control of unmanned systems. In our conversation, we discuss some of the technical and regulatory challenges of building and deploying Unmanned Autonomous Systems. We also talk about some interesting work he’s doing on evolutionary path planning systems, as well as a precision agriculture use case. Finally, Chris shares some great starting places for those looking to begin a journey into autonomous systems research. The notes for this show can be found at twimlai.com/talk/129.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I'm joined by Christopher Lum, Research Assistant Professor in the University of Washington’s Department of Aeronautics and Astronautics. Chris also co-heads the University’s Autonomous Flight Systems Lab, where he and his students are working on the guidance, navigation, and control of unmanned systems. In our conversation, we discuss some of the technical and regulatory challenges of building and deploying Unmanned Autonomous Systems. We also talk about some interesting work he’s doing on evolutionary path planning systems, as well as a precision agriculture use case. Finally, Chris shares some great starting places for those looking to begin a journey into autonomous systems research. The notes for this show can be found at twimlai.com/talk/129.]]>
      </content:encoded>
      <itunes:duration>3155</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/431908098]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9456848044.mp3?updated=1629216902"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Infrastructure for Autonomous Vehicles with Missy Cummings - TWiML Talk #128</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/430681617-twiml-twiml-talk-128-infrastructure-for-autonomous-vehicles-with-missy-cummings.mp3</link>
      <description>In this episode, I’m joined by Missy Cummings, head of Duke University’s Humans and Autonomy Lab and professor in the department of mechanical engineering. In addition to being an accomplished researcher, Missy also became one of the first female fighter pilots in the US Navy following the repeal of the Combat Exclusion Policy in 1993. We discuss Missy’s research into the infrastructural and operational challenges presented by autonomous vehicles, including cars, drones and unmanned aircraft. We also cover trust, explainability, and interactions between humans and AV systems. This was an awesome interview and I'm glad we’re able to bring it to you! The notes for this show can be found at twimlai.com/talk/128.</description>
      <pubDate>Mon, 16 Apr 2018 20:58:11 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>128</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6f43da30-ee98-11eb-9502-9be3d35550ef/image/artworks-000336048345-st6xew-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I’m joined by Missy Cummings, he…</itunes:subtitle>
      <itunes:summary>In this episode, I’m joined by Missy Cummings, head of Duke University’s Humans and Autonomy Lab and professor in the department of mechanical engineering. In addition to being an accomplished researcher, Missy also became one of the first female fighter pilots in the US Navy following the repeal of the Combat Exclusion Policy in 1993. We discuss Missy’s research into the infrastructural and operational challenges presented by autonomous vehicles, including cars, drones and unmanned aircraft. We also cover trust, explainability, and interactions between humans and AV systems. This was an awesome interview and I'm glad we’re able to bring it to you! The notes for this show can be found at twimlai.com/talk/128.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I’m joined by Missy Cummings, head of Duke University’s Humans and Autonomy Lab and professor in the department of mechanical engineering. In addition to being an accomplished researcher, Missy also became one of the first female fighter pilots in the US Navy following the repeal of the Combat Exclusion Policy in 1993. We discuss Missy’s research into the infrastructural and operational challenges presented by autonomous vehicles, including cars, drones and unmanned aircraft. We also cover trust, explainability, and interactions between humans and AV systems. This was an awesome interview and I'm glad we’re able to bring it to you! The notes for this show can be found at twimlai.com/talk/128.]]>
      </content:encoded>
      <itunes:duration>2612</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/430681617]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4248187106.mp3?updated=1629216895"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Hyper-Personalizing the Customer Experience w/ AI with Rob Walker - TWiML Talk #127</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/428915190-twiml-twiml-talk-127-hyper-personalizing-customer-experience-w-ai-rob-walker.mp3</link>
      <description>In this episode, we're joined by Rob Walker, Vice President of decision management and analytics at Pegasystems, a leading provider of software for customer engagement and operational excellence. Rob and I discuss what’s required for enterprises to fully realize the vision of providing a hyper-personalized customer experience, and how machine learning and AI can be used to determine the next best action an organization should take to optimize sales, service, retention, and risk at every step in the customer relationship. Along the way we dig into a couple of key areas, specifically some of the techniques his organization uses to allow customers to manage the tradeoff between model performance and transparency, particularly in light of new laws like GDPR, and how all this ties to an enterprise’s ability to manage bias and ethical issues when deploying ML. We cover a lot of ground in this one and I think you’ll find Rob’s perspective really interesting. The notes for this show can be found at twimlai.com/talk/127.</description>
      <pubDate>Thu, 12 Apr 2018 23:54:20 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>127</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6f630bda-ee98-11eb-9502-bb5a5d6ac705/image/artworks-000334515318-0si4v7-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, we're joined by Rob Walker, Vice…</itunes:subtitle>
      <itunes:summary>In this episode, we're joined by Rob Walker, Vice President of decision management and analytics at Pegasystems, a leading provider of software for customer engagement and operational excellence. Rob and I discuss what’s required for enterprises to fully realize the vision of providing a hyper-personalized customer experience, and how machine learning and AI can be used to determine the next best action an organization should take to optimize sales, service, retention, and risk at every step in the customer relationship. Along the way we dig into a couple of key areas, specifically some of the techniques his organization uses to allow customers to manage the tradeoff between model performance and transparency, particularly in light of new laws like GDPR, and how all this ties to an enterprise’s ability to manage bias and ethical issues when deploying ML. We cover a lot of ground in this one and I think you’ll find Rob’s perspective really interesting. The notes for this show can be found at twimlai.com/talk/127.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, we're joined by Rob Walker, Vice President of decision management and analytics at Pegasystems, a leading provider of software for customer engagement and operational excellence. Rob and I discuss what’s required for enterprises to fully realize the vision of providing a hyper-personalized customer experience, and how machine learning and AI can be used to determine the next best action an organization should take to optimize sales, service, retention, and risk at every step in the customer relationship. Along the way we dig into a couple of key areas, specifically some of the techniques his organization uses to allow customers to manage the tradeoff between model performance and transparency, particularly in light of new laws like GDPR, and how all this ties to an enterprise’s ability to manage bias and ethical issues when deploying ML. We cover a lot of ground in this one and I think you’ll find Rob’s perspective really interesting. The notes for this show can be found at twimlai.com/talk/127.]]>
      </content:encoded>
      <itunes:duration>2500</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/428915190]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9284875787.mp3?updated=1629216893"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Information Extraction from Natural Document Formats with David Rosenberg - TWiML Talk #126</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/427209351-twiml-twiml-talk-126-information-extraction-natural-document-formats-david-rosenberg.mp3</link>
      <description>In this episode, I’m joined by David Rosenberg, data scientist in the office of the CTO at financial publisher Bloomberg, to discuss his work on “Extracting Data from Tables and Charts in Natural Document Formats.” Bloomberg is dealing with tons of financial and company data in PDFs and other unstructured document formats on a daily basis. To make meaning from this information more efficiently, David and his team have implemented a deep learning pipeline for extracting data from the documents. In our conversation, we dig into the information extraction process, including how it was built, how they sourced their training data, why they used LaTeX as an intermediate representation, and how and why they optimize on pixel-perfect accuracy. There’s a lot of interesting info in this show and I think you’re going to enjoy it. The notes for this show can be found at twimlai.com/talk/126.</description>
      <pubDate>Mon, 09 Apr 2018 17:23:56 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>126</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6f8d003e-ee98-11eb-9502-ef14f2670fdb/image/artworks-000332455581-4m7oxj-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I’m joined by David Rosenberg, d…</itunes:subtitle>
      <itunes:summary>In this episode, I’m joined by David Rosenberg, data scientist in the office of the CTO at financial publisher Bloomberg, to discuss his work on “Extracting Data from Tables and Charts in Natural Document Formats.” Bloomberg is dealing with tons of financial and company data in PDFs and other unstructured document formats on a daily basis. To make meaning from this information more efficiently, David and his team have implemented a deep learning pipeline for extracting data from the documents. In our conversation, we dig into the information extraction process, including how it was built, how they sourced their training data, why they used LaTeX as an intermediate representation, and how and why they optimize on pixel-perfect accuracy. There’s a lot of interesting info in this show and I think you’re going to enjoy it. The notes for this show can be found at twimlai.com/talk/126.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I’m joined by David Rosenberg, data scientist in the office of the CTO at financial publisher Bloomberg, to discuss his work on “Extracting Data from Tables and Charts in Natural Document Formats.” Bloomberg is dealing with tons of financial and company data in PDFs and other unstructured document formats on a daily basis. To make meaning from this information more efficiently, David and his team have implemented a deep learning pipeline for extracting data from the documents. In our conversation, we dig into the information extraction process, including how it was built, how they sourced their training data, why they used LaTeX as an intermediate representation, and how and why they optimize on pixel-perfect accuracy. There’s a lot of interesting info in this show and I think you’re going to enjoy it. The notes for this show can be found at twimlai.com/talk/126.]]>
      </content:encoded>
      <itunes:duration>2736</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/427209351]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8889548921.mp3?updated=1629216896"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Human-in-the-Loop AI for Emergency Response &amp; More w/ Robert Munro - TWiML Talk #125</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/425249055-twiml-twiml-talk-125-human-loop-ai-emergency-response-robert-munro.mp3</link>
      <description>In this episode, I chat with Rob Munro, CTO of the newly branded Figure Eight, formerly known as CrowdFlower. Figure Eight’s Human-in-the-Loop AI platform supports data science &amp; machine learning teams working on autonomous vehicles, consumer product identification, natural language processing, search relevance, intelligent chatbots, and more. Rob and I had a really interesting discussion covering some of the work he’s previously done applying machine learning to disaster response and epidemiology, including a use case involving text translation in the wake of the catastrophic 2010 Haiti earthquake. We also dig into some of the technical challenges that he’s encountered in trying to scale the human-in-the-loop side of machine learning since joining Figure Eight, including identifying more efficient approaches to image annotation as well as the use of zero-shot machine learning to minimize training data requirements. Finally, we briefly discuss Figure Eight’s upcoming TrainAI conference, which takes place on May 9th &amp; 10th in San Francisco. At TrainAI you can join me and Rob, along with a host of amazing speakers like Garry Kasparov, Andrej Karpathy, Marti Hearst and many more, and receive hands-on AI, machine learning and deep learning training through real-world case studies on practical machine learning applications. For more information on TrainAI, head over to figure-eight.com/train-ai, and be sure to use code TWIMLAI for 30% off your registration! For those of you listening to this on or before April 6th, Figure Eight is offering an even better deal on event registration. Use the code figure-eight to register for only 88 dollars. The notes for this show can be found at twimlai.com/talk/125.</description>
      <pubDate>Thu, 05 Apr 2018 16:07:28 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>125</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6fb1a6d2-ee98-11eb-9502-9b3b59cb5a7d/image/artworks-000330278394-jdtw98-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I chat with Rob Munro, CTO of th…</itunes:subtitle>
      <itunes:summary>In this episode, I chat with Rob Munro, CTO of the newly rebranded Figure Eight, formerly known as CrowdFlower. Figure Eight’s Human-in-the-Loop AI platform supports data science &amp; machine learning teams working on autonomous vehicles, consumer product identification, natural language processing, search relevance, intelligent chatbots, and more. Rob and I had a really interesting discussion covering some of the work he’s previously done applying machine learning to disaster response and epidemiology, including a use case involving text translation in the wake of the catastrophic 2010 Haiti earthquake. We also dig into some of the technical challenges that he’s encountered in trying to scale the human-in-the-loop side of machine learning since joining Figure Eight, including identifying more efficient approaches to image annotation, as well as the use of zero-shot machine learning to minimize training data requirements. Finally, we briefly discuss Figure Eight’s upcoming TrainAI conference, which takes place on May 9th &amp; 10th in San Francisco. At TrainAI, you can join me and Rob, along with a host of amazing speakers like Garry Kasparov, Andrej Karpathy, Marti Hearst, and many more, and receive hands-on AI, machine learning, and deep learning training through real-world case studies on practical machine learning applications. For more information on TrainAI, head over to figure-eight.com/train-ai, and be sure to use code TWIMLAI for 30% off your registration! For those of you listening on or before April 6th, Figure Eight is offering an even better deal on event registration: use the code figure-eight to register for only $88. The notes for this show can be found at twimlai.com/talk/125.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I chat with Rob Munro, CTO of the newly rebranded Figure Eight, formerly known as CrowdFlower. Figure Eight’s Human-in-the-Loop AI platform supports data science &amp; machine learning teams working on autonomous vehicles, consumer product identification, natural language processing, search relevance, intelligent chatbots, and more. Rob and I had a really interesting discussion covering some of the work he’s previously done applying machine learning to disaster response and epidemiology, including a use case involving text translation in the wake of the catastrophic 2010 Haiti earthquake. We also dig into some of the technical challenges that he’s encountered in trying to scale the human-in-the-loop side of machine learning since joining Figure Eight, including identifying more efficient approaches to image annotation, as well as the use of zero-shot machine learning to minimize training data requirements. Finally, we briefly discuss Figure Eight’s upcoming TrainAI conference, which takes place on May 9th &amp; 10th in San Francisco. At TrainAI, you can join me and Rob, along with a host of amazing speakers like Garry Kasparov, Andrej Karpathy, Marti Hearst, and many more, and receive hands-on AI, machine learning, and deep learning training through real-world case studies on practical machine learning applications. For more information on TrainAI, head over to figure-eight.com/train-ai, and be sure to use code TWIMLAI for 30% off your registration! For those of you listening on or before April 6th, Figure Eight is offering an even better deal on event registration: use the code figure-eight to register for only $88. The notes for this show can be found at twimlai.com/talk/125.]]>
      </content:encoded>
      <itunes:duration>2906</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/425249055]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9910524445.mp3?updated=1629216899"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Systems and Software for Machine Learning at Scale with Jeff Dean - TWiML Talk #124</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/423734781-twiml-twiml-talk-124-systems-software-machine-learning-scale-jeff-dean.mp3</link>
      <description>In this episode I’m joined by Jeff Dean, Google Senior Fellow and head of the company’s deep learning research team Google Brain, who I had a chance to sit down with last week at the Googleplex in Mountain View. As you’ll hear, I was very excited for this interview, because so many of Jeff’s contributions since he started at Google in ‘99 have touched my life and work. In our conversation, Jeff and I dig into a bunch of the core machine learning innovations we’ve seen from Google. Of course we discuss TensorFlow, and its origins and evolution at Google. We also explore AI acceleration hardware, including TPU v1, v2 and future directions from Google and the broader market in this area. We talk through the machine learning toolchain, including some things that Googlers might take for granted, and where the recently announced Cloud AutoML fits in. We also discuss Google’s process for mapping problems across a variety of domains to deep learning, and much, much more. This was definitely one of my favorite conversations, and I'm pumped to be able to share it with you. The notes for this show can be found at twimlai.com/talk/124.</description>
      <pubDate>Mon, 02 Apr 2018 17:51:14 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>124</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6fdcd08c-ee98-11eb-9502-1f36186bae1d/image/artworks-000328887891-827hht-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode I’m joined by Jeff Dean, Google S…</itunes:subtitle>
      <itunes:summary>In this episode I’m joined by Jeff Dean, Google Senior Fellow and head of the company’s deep learning research team Google Brain, who I had a chance to sit down with last week at the Googleplex in Mountain View. As you’ll hear, I was very excited for this interview, because so many of Jeff’s contributions since he started at Google in ‘99 have touched my life and work. In our conversation, Jeff and I dig into a bunch of the core machine learning innovations we’ve seen from Google. Of course we discuss TensorFlow, and its origins and evolution at Google. We also explore AI acceleration hardware, including TPU v1, v2 and future directions from Google and the broader market in this area. We talk through the machine learning toolchain, including some things that Googlers might take for granted, and where the recently announced Cloud AutoML fits in. We also discuss Google’s process for mapping problems across a variety of domains to deep learning, and much, much more. This was definitely one of my favorite conversations, and I'm pumped to be able to share it with you. The notes for this show can be found at twimlai.com/talk/124.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode I’m joined by Jeff Dean, Google Senior Fellow and head of the company’s deep learning research team Google Brain, who I had a chance to sit down with last week at the Googleplex in Mountain View. As you’ll hear, I was very excited for this interview, because so many of Jeff’s contributions since he started at Google in ‘99 have touched my life and work. In our conversation, Jeff and I dig into a bunch of the core machine learning innovations we’ve seen from Google. Of course we discuss TensorFlow, and its origins and evolution at Google. We also explore AI acceleration hardware, including TPU v1, v2 and future directions from Google and the broader market in this area. We talk through the machine learning toolchain, including some things that Googlers might take for granted, and where the recently announced Cloud AutoML fits in. We also discuss Google’s process for mapping problems across a variety of domains to deep learning, and much, much more. This was definitely one of my favorite conversations, and I'm pumped to be able to share it with you. The notes for this show can be found at twimlai.com/talk/124.]]>
      </content:encoded>
      <itunes:duration>3271</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/423734781]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5904894205.mp3?updated=1629216900"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Semantic Segmentation of 3D Point Clouds with Lyne Tchapmi - TWiML Talk #123</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/421890321-twiml-twiml-talk-123-semantic-segmentation-3d-point-clouds-lyne-tchapmi.mp3</link>
      <description>In this episode, I’m joined by Lyne Tchapmi, PhD student in the Stanford Computational Vision and Geometry Lab, to discuss her paper, “SEGCloud: Semantic Segmentation of 3D Point Clouds.” SEGCloud is an end-to-end framework that performs 3D point-level segmentation, combining the advantages of neural networks, trilinear interpolation, and fully connected conditional random fields. In our conversation, Lyne and I cover the ins and outs of semantic segmentation, starting from the sensor data that we’re trying to segment, 2D vs. 3D representations of that data, and how we go about automatically identifying classes. Along the way we dig into some of the details, including how she obtained a finer-grained labeling of points from sensor data and the transition from point clouds to voxels. The notes for this show can be found at twimlai.com/talk/123.</description>
      <pubDate>Thu, 29 Mar 2018 16:11:19 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>123</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/6ffbff52-ee98-11eb-9502-4f6aca4435e0/image/artworks-000327017691-0gckjc-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode I’m joined by Lyne Tchapmi, PhD s…</itunes:subtitle>
      <itunes:summary>In this episode, I’m joined by Lyne Tchapmi, PhD student in the Stanford Computational Vision and Geometry Lab, to discuss her paper, “SEGCloud: Semantic Segmentation of 3D Point Clouds.” SEGCloud is an end-to-end framework that performs 3D point-level segmentation, combining the advantages of neural networks, trilinear interpolation, and fully connected conditional random fields. In our conversation, Lyne and I cover the ins and outs of semantic segmentation, starting from the sensor data that we’re trying to segment, 2D vs. 3D representations of that data, and how we go about automatically identifying classes. Along the way we dig into some of the details, including how she obtained a finer-grained labeling of points from sensor data and the transition from point clouds to voxels. The notes for this show can be found at twimlai.com/talk/123.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I’m joined by Lyne Tchapmi, PhD student in the Stanford Computational Vision and Geometry Lab, to discuss her paper, “SEGCloud: Semantic Segmentation of 3D Point Clouds.” SEGCloud is an end-to-end framework that performs 3D point-level segmentation, combining the advantages of neural networks, trilinear interpolation, and fully connected conditional random fields. In our conversation, Lyne and I cover the ins and outs of semantic segmentation, starting from the sensor data that we’re trying to segment, 2D vs. 3D representations of that data, and how we go about automatically identifying classes. Along the way we dig into some of the details, including how she obtained a finer-grained labeling of points from sensor data and the transition from point clouds to voxels. The notes for this show can be found at twimlai.com/talk/123.]]>
      </content:encoded>
      <itunes:duration>2161</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/421890321]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3798090557.mp3?updated=1629216883"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Predicting Cardiovascular Risk Factors from Eye Images with Ryan Poplin - TWiML Talk #122</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/420382632-twiml-twiml-talk-122-predicting-cardiovascular-risk-factors-eye-images-ryan-poplin.mp3</link>
      <description>In this episode, I'm joined by Google Research Scientist Ryan Poplin, who recently co-authored the paper “Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning.” In our conversation, Ryan details his work training a deep learning model to predict various patient risk factors for heart disease, including some surprising ones like age and gender. We also dive into some interesting findings he discovered with regard to multi-task learning, as well as his use of an attention mechanism to provide explainability. This was a really interesting discussion that I think you’ll really enjoy! The notes for this show can be found at twimlai.com/talk/122.</description>
      <pubDate>Mon, 26 Mar 2018 21:19:32 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>122</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/701b04ba-ee98-11eb-9502-8f3139b6e15d/image/artworks-000325469948-34a870-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I'm joined by Google Research Sc…</itunes:subtitle>
      <itunes:summary>In this episode, I'm joined by Google Research Scientist Ryan Poplin, who recently co-authored the paper “Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning.” In our conversation, Ryan details his work training a deep learning model to predict various patient risk factors for heart disease, including some surprising ones like age and gender. We also dive into some interesting findings he discovered with regard to multi-task learning, as well as his use of an attention mechanism to provide explainability. This was a really interesting discussion that I think you’ll really enjoy! The notes for this show can be found at twimlai.com/talk/122.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I'm joined by Google Research Scientist Ryan Poplin, who recently co-authored the paper “Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning.” In our conversation, Ryan details his work training a deep learning model to predict various patient risk factors for heart disease, including some surprising ones like age and gender. We also dive into some interesting findings he discovered with regard to multi-task learning, as well as his use of an attention mechanism to provide explainability. This was a really interesting discussion that I think you’ll really enjoy! The notes for this show can be found at twimlai.com/talk/122.]]>
      </content:encoded>
      <itunes:duration>2571</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/420382632]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4902399147.mp3?updated=1629216887"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Reproducibility and the Philosophy of Data with Clare Gollnick - TWiML Talk #121</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/417655376-twiml-twiml-talk-121-reproducibility-philosophy-data-clare-gollnick.mp3</link>
      <description>In this episode, I'm joined by Clare Gollnick, CTO of Terbium Labs, to discuss her thoughts on the “reproducibility crisis” currently haunting the scientific landscape. For a little background, a “Nature” survey in 2016 showed that "more than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments." Clare gives us her take on the situation, and how it applies to data science, along with some great nuggets about the philosophy of data and a few interesting use cases as well. We also cover her thoughts on Bayesian vs. frequentist techniques and, while we’re at it, the Vim vs. Emacs debate. No, actually I’m just kidding on that last one. But this was indeed a very fun conversation that I think you’ll enjoy! For the complete show notes, visit twimlai.com/talk/121.</description>
      <pubDate>Thu, 22 Mar 2018 16:42:24 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>121</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/703e2526-ee98-11eb-9502-d3870d45047c/image/artworks-000321842492-o21xob-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, i'm joined by Clare Gollnick, CT…</itunes:subtitle>
      <itunes:summary>In this episode, I'm joined by Clare Gollnick, CTO of Terbium Labs, to discuss her thoughts on the “reproducibility crisis” currently haunting the scientific landscape. For a little background, a “Nature” survey in 2016 showed that "more than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments." Clare gives us her take on the situation, and how it applies to data science, along with some great nuggets about the philosophy of data and a few interesting use cases as well. We also cover her thoughts on Bayesian vs. frequentist techniques and, while we’re at it, the Vim vs. Emacs debate. No, actually I’m just kidding on that last one. But this was indeed a very fun conversation that I think you’ll enjoy! For the complete show notes, visit twimlai.com/talk/121.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I'm joined by Clare Gollnick, CTO of Terbium Labs, to discuss her thoughts on the “reproducibility crisis” currently haunting the scientific landscape. For a little background, a “Nature” survey in 2016 showed that "more than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments." Clare gives us her take on the situation, and how it applies to data science, along with some great nuggets about the philosophy of data and a few interesting use cases as well. We also cover her thoughts on Bayesian vs. frequentist techniques and, while we’re at it, the Vim vs. Emacs debate. No, actually I’m just kidding on that last one. But this was indeed a very fun conversation that I think you’ll enjoy! For the complete show notes, visit twimlai.com/talk/121.]]>
      </content:encoded>
      <itunes:duration>2283</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/417655376]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8963154324.mp3?updated=1629216883"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Surveying the Connected Car Landscape with GK Senthil - TWiML Talk #120</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/416224467-twiml-twiml-talk-120-surveying-connected-car-landscape-gk-senthil.mp3</link>
      <description>In this episode, I’m joined by GK Senthil, director &amp; chief product owner for innovation at Toyota Connected. GK and I spoke about some of the potential opportunities and challenges for smart cars. We discussed Toyota’s recently announced partnership with Amazon to embed Alexa in vehicles, and more generally the approach they’re taking to get connected car technology up to par with smartphones and other intelligent devices we use on a daily basis. We cover in-car voice recognition and touch on the ways ML &amp; AI need to be developed to be useful in vehicles, as well as the approaches to getting there. The notes for this show can be found at twimlai.com/talk/120.</description>
      <pubDate>Mon, 19 Mar 2018 22:29:35 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>120</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/705f24d8-ee98-11eb-9502-fbd750aa604d/image/artworks-000320137644-rjn2h3-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I’m joined by GK Senthil, direct…</itunes:subtitle>
      <itunes:summary>In this episode, I’m joined by GK Senthil, director &amp; chief product owner for innovation at Toyota Connected. GK and I spoke about some of the potential opportunities and challenges for smart cars. We discussed Toyota’s recently announced partnership with Amazon to embed Alexa in vehicles, and more generally the approach they’re taking to get connected car technology up to par with smartphones and other intelligent devices we use on a daily basis. We cover in-car voice recognition and touch on the ways ML &amp; AI need to be developed to be useful in vehicles, as well as the approaches to getting there. The notes for this show can be found at twimlai.com/talk/120.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I’m joined by GK Senthil, director &amp; chief product owner for innovation at Toyota Connected. GK and I spoke about some of the potential opportunities and challenges for smart cars. We discussed Toyota’s recently announced partnership with Amazon to embed Alexa in vehicles, and more generally the approach they’re taking to get connected car technology up to par with smartphones and other intelligent devices we use on a daily basis. We cover in-car voice recognition and touch on the ways ML &amp; AI need to be developed to be useful in vehicles, as well as the approaches to getting there. The notes for this show can be found at twimlai.com/talk/120.]]>
      </content:encoded>
      <itunes:duration>1819</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/416224467]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9313157896.mp3?updated=1629216872"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Adversarial Attacks Against Reinforcement Learning Agents with Ian Goodfellow &amp; Sandy Huang - TWiML Talk #119</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/414102462-twiml-twiml-talk-119-adversarial-attacks-reinforcement-learning-agents-ian-goodfellow-sandy-huang.mp3</link>
      <description>In this episode, I’m joined by Ian Goodfellow, Staff Research Scientist at Google Brain, and Sandy Huang, PhD student in the EECS department at UC Berkeley, to discuss their work on the paper Adversarial Attacks on Neural Network Policies. If you’re a regular listener here you’ve probably heard of adversarial attacks, and have seen examples of deep learning based object detectors that can be fooled into thinking that, for example, a giraffe is actually a school bus, by injecting some imperceptible noise into the image. Well, Sandy and Ian’s paper sits at the intersection of adversarial attacks and reinforcement learning, another area we’ve discussed quite a bit on the podcast. In their paper, they describe how adversarial attacks can also be effective at targeting neural network policies in reinforcement learning. Sandy gives us an overview of the paper, including how changing a single pixel value can throw off the performance of a model trained to play Atari games. We also cover a lot of interesting topics relating to adversarial attacks and RL individually, and some related areas such as hierarchical reward functions and transfer learning. This was a great conversation that I’m really excited to bring to you! For complete show notes, head over to twimlai.com/talk/119.</description>
      <pubDate>Thu, 15 Mar 2018 16:27:41 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>119</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/707d7690-ee98-11eb-9502-fb7bdb356d69/image/artworks-000317101545-n5xawo-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I’m joined by Ian Goodfellow, St…</itunes:subtitle>
      <itunes:summary>In this episode, I’m joined by Ian Goodfellow, Staff Research Scientist at Google Brain, and Sandy Huang, PhD student in the EECS department at UC Berkeley, to discuss their work on the paper Adversarial Attacks on Neural Network Policies. If you’re a regular listener here you’ve probably heard of adversarial attacks, and have seen examples of deep learning based object detectors that can be fooled into thinking that, for example, a giraffe is actually a school bus, by injecting some imperceptible noise into the image. Well, Sandy and Ian’s paper sits at the intersection of adversarial attacks and reinforcement learning, another area we’ve discussed quite a bit on the podcast. In their paper, they describe how adversarial attacks can also be effective at targeting neural network policies in reinforcement learning. Sandy gives us an overview of the paper, including how changing a single pixel value can throw off the performance of a model trained to play Atari games. We also cover a lot of interesting topics relating to adversarial attacks and RL individually, and some related areas such as hierarchical reward functions and transfer learning. This was a great conversation that I’m really excited to bring to you! For complete show notes, head over to twimlai.com/talk/119.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I’m joined by Ian Goodfellow, Staff Research Scientist at Google Brain, and Sandy Huang, PhD student in the EECS department at UC Berkeley, to discuss their work on the paper Adversarial Attacks on Neural Network Policies. If you’re a regular listener here you’ve probably heard of adversarial attacks, and have seen examples of deep learning based object detectors that can be fooled into thinking that, for example, a giraffe is actually a school bus, by injecting some imperceptible noise into the image. Well, Sandy and Ian’s paper sits at the intersection of adversarial attacks and reinforcement learning, another area we’ve discussed quite a bit on the podcast. In their paper, they describe how adversarial attacks can also be effective at targeting neural network policies in reinforcement learning. Sandy gives us an overview of the paper, including how changing a single pixel value can throw off the performance of a model trained to play Atari games. We also cover a lot of interesting topics relating to adversarial attacks and RL individually, and some related areas such as hierarchical reward functions and transfer learning. This was a great conversation that I’m really excited to bring to you! For complete show notes, head over to twimlai.com/talk/119.]]>
      </content:encoded>
      <itunes:duration>2829</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/414102462]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7678074210.mp3?updated=1629216894"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Towards Abstract Robotic Understanding with Raja Chatila - TWiML Talk #118</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/412683609-twiml-twiml-talk-118-towards-abstract-robotic-understanding-raja-chatila.mp3</link>
      <description>In this episode, we're joined by Raja Chatila, director of Intelligent Systems and Robotics at Pierre and Marie Curie University in Paris, and executive committee chair of the IEEE global initiative on ethics of intelligent and autonomous systems. Raja and I had a great chat about his research, which deals with robotic perception and discovery. We discuss the relationship between learning and discovery, particularly as it applies to robots and their environments, and the connection between robotic perception and action. We also dig into the concepts of affordances, abstract teachings, meta-reasoning and self-awareness as they apply to intelligent systems. Finally, we touch on the issue of values and ethics of these systems. The notes for this show can be found at twimlai.com/talk/118.</description>
      <pubDate>Mon, 12 Mar 2018 20:18:28 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>118</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/70a4a774-ee98-11eb-9502-3f01ee75b364/image/artworks-000315312981-11hsu3-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, we're joined by Raja Chatila, di…</itunes:subtitle>
      <itunes:summary>In this episode, we're joined by Raja Chatila, director of Intelligent Systems and Robotics at Pierre and Marie Curie University in Paris, and executive committee chair of the IEEE global initiative on ethics of intelligent and autonomous systems. Raja and I had a great chat about his research, which deals with robotic perception and discovery. We discuss the relationship between learning and discovery, particularly as it applies to robots and their environments, and the connection between robotic perception and action. We also dig into the concepts of affordances, abstract teachings, meta-reasoning and self-awareness as they apply to intelligent systems. Finally, we touch on the issue of values and ethics of these systems. The notes for this show can be found at twimlai.com/talk/118.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, we're joined by Raja Chatila, director of Intelligent Systems and Robotics at Pierre and Marie Curie University in Paris, and executive committee chair of the IEEE Global Initiative on Ethics of Intelligent and Autonomous Systems. Raja and I had a great chat about his research, which deals with robotic perception and discovery. We discuss the relationship between learning and discovery, particularly as it applies to robots and their environments, and the connection between robotic perception and action. We also dig into the concepts of affordances, abstract teachings, meta-reasoning and self-awareness as they apply to intelligent systems. Finally, we touch on the issue of values and ethics of these systems. The notes for this show can be found at twimlai.com/talk/118.]]>
      </content:encoded>
      <itunes:duration>2858</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/412683609]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6404668813.mp3?updated=1629216888"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Discovering Exoplanets w/ Deep Learning with Chris Shallue - TWiML Talk #117</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/410558802-twiml-twiml-talk-117-discovering-exoplanets-deep-learning-chris-shallue.mp3</link>
      <description>Earlier this week, I had a chance to speak with Chris Shallue, Senior Software Engineer on the Google Brain Team, about his project and paper on “Exploring Exoplanets with Deep Learning.” This is a great story. Chris, inspired by a book he was reading, reached out on a whim to a Harvard astrophysics researcher, kicking off a collaboration and side project eventually leading to the discovery of two new planets outside our solar system. In our conversation, we walk through the entire process Chris followed to find these two exoplanets, including how he researched the domain as an outsider, how he sourced and processed his dataset, and how he built and evolved his models. Finally, we discuss the results of his project and his plans for future work in this area. This podcast is being published in parallel with Google’s release of the source code and data that Chris developed and used, which we’ll link to below, so if what you hear inspires you to dig into this area, you’ve got a nice head start. This was a really interesting conversation, and I'm excited to share it with you! The notes for this show can be found at twimlai.com/talk/117. The corresponding blog post for this project can be found at https://research.googleblog.com/2018/03/open-sourcing-hunt-for-exoplanets.html</description>
      <pubDate>Thu, 08 Mar 2018 19:02:46 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>117</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/70bf0de4-ee98-11eb-9502-3f5bc0a05c18/image/artworks-000312785637-qx99qg-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Earlier this week, I had a chance to speak with C…</itunes:subtitle>
      <itunes:summary>Earlier this week, I had a chance to speak with Chris Shallue, Senior Software Engineer on the Google Brain Team, about his project and paper on “Exploring Exoplanets with Deep Learning.” This is a great story. Chris, inspired by a book he was reading, reached out on a whim to a Harvard astrophysics researcher, kicking off a collaboration and side project eventually leading to the discovery of two new planets outside our solar system. In our conversation, we walk through the entire process Chris followed to find these two exoplanets, including how he researched the domain as an outsider, how he sourced and processed his dataset, and how he built and evolved his models. Finally, we discuss the results of his project and his plans for future work in this area. This podcast is being published in parallel with Google’s release of the source code and data that Chris developed and used, which we’ll link to below, so if what you hear inspires you to dig into this area, you’ve got a nice head start. This was a really interesting conversation, and I'm excited to share it with you! The notes for this show can be found at twimlai.com/talk/117. The corresponding blog post for this project can be found at https://research.googleblog.com/2018/03/open-sourcing-hunt-for-exoplanets.html</itunes:summary>
      <content:encoded>
        <![CDATA[Earlier this week, I had a chance to speak with Chris Shallue, Senior Software Engineer on the Google Brain Team, about his project and paper on “Exploring Exoplanets with Deep Learning.” This is a great story. Chris, inspired by a book he was reading, reached out on a whim to a Harvard astrophysics researcher, kicking off a collaboration and side project eventually leading to the discovery of two new planets outside our solar system. In our conversation, we walk through the entire process Chris followed to find these two exoplanets, including how he researched the domain as an outsider, how he sourced and processed his dataset, and how he built and evolved his models. Finally, we discuss the results of his project and his plans for future work in this area. This podcast is being published in parallel with Google’s release of the source code and data that Chris developed and used, which we’ll link to below, so if what you hear inspires you to dig into this area, you’ve got a nice head start. This was a really interesting conversation, and I'm excited to share it with you! The notes for this show can be found at twimlai.com/talk/117. The corresponding blog post for this project can be found at https://research.googleblog.com/2018/03/open-sourcing-hunt-for-exoplanets.html]]>
      </content:encoded>
      <itunes:duration>2725</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/410558802]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5223184784.mp3?updated=1629216896"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Learning Active Learning with Ksenia Konyushkova - TWiML Talk #116</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/409174488-twiml-twiml-talk-116-learning-active-learning-data-ksenia-konyushkova.mp3</link>
      <description>In this episode, I speak with Ksenia Konyushkova, Ph.D. student in the CVLab at Ecole Polytechnique Federale de Lausanne in Switzerland. Ksenia and I connected at NIPS in December to discuss her interesting research into ways we might apply machine learning to ease the challenge of creating labeled datasets for machine learning. The first paper we discuss is “Learning Active Learning from Data,” which suggests a data-driven approach to active learning that trains a secondary model to identify the unlabeled data points which, when labeled, would likely have the greatest impact on our primary model’s performance. We also discuss her paper “Learning Intelligent Dialogs for Bounding Box Annotation,” in which she trains an agent to guide the actions of a human annotator to more quickly produce bounding boxes. TWiML Online Meetup Update Join us Tuesday, March 13th for the March edition of the Online Meetup! Sean Devlin will be doing an in-depth review of reinforcement learning and presenting the Google DeepMind paper, "Playing Atari with Deep Reinforcement Learning." Head over to twimlai.com/meetup to learn more or register. Conference Update Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML. Early price ends February 2! The notes for this show can be found at https://twimlai.com/talk/116.</description>
      <pubDate>Mon, 05 Mar 2018 21:25:27 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>116</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/70e600fc-ee98-11eb-9502-0bc24f5ceaea/image/artworks-000311494551-3qw1sz-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I speak with Ksenia Konyushkova,…</itunes:subtitle>
      <itunes:summary>In this episode, I speak with Ksenia Konyushkova, Ph.D. student in the CVLab at Ecole Polytechnique Federale de Lausanne in Switzerland. Ksenia and I connected at NIPS in December to discuss her interesting research into ways we might apply machine learning to ease the challenge of creating labeled datasets for machine learning. The first paper we discuss is “Learning Active Learning from Data,” which suggests a data-driven approach to active learning that trains a secondary model to identify the unlabeled data points which, when labeled, would likely have the greatest impact on our primary model’s performance. We also discuss her paper “Learning Intelligent Dialogs for Bounding Box Annotation,” in which she trains an agent to guide the actions of a human annotator to more quickly produce bounding boxes. TWiML Online Meetup Update Join us Tuesday, March 13th for the March edition of the Online Meetup! Sean Devlin will be doing an in-depth review of reinforcement learning and presenting the Google DeepMind paper, "Playing Atari with Deep Reinforcement Learning." Head over to twimlai.com/meetup to learn more or register. Conference Update Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML. Early price ends February 2! The notes for this show can be found at https://twimlai.com/talk/116.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I speak with Ksenia Konyushkova, Ph.D. student in the CVLab at Ecole Polytechnique Federale de Lausanne in Switzerland. Ksenia and I connected at NIPS in December to discuss her interesting research into ways we might apply machine learning to ease the challenge of creating labeled datasets for machine learning. The first paper we discuss is “Learning Active Learning from Data,” which suggests a data-driven approach to active learning that trains a secondary model to identify the unlabeled data points which, when labeled, would likely have the greatest impact on our primary model’s performance. We also discuss her paper “Learning Intelligent Dialogs for Bounding Box Annotation,” in which she trains an agent to guide the actions of a human annotator to more quickly produce bounding boxes. TWiML Online Meetup Update Join us Tuesday, March 13th for the March edition of the Online Meetup! Sean Devlin will be doing an in-depth review of reinforcement learning and presenting the Google DeepMind paper, "Playing Atari with Deep Reinforcement Learning." Head over to twimlai.com/meetup to learn more or register. Conference Update Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML. Early price ends February 2! The notes for this show can be found at https://twimlai.com/talk/116.]]>
      </content:encoded>
      <itunes:duration>1913</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/409174488]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1093394171.mp3?updated=1629216874"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Machine Learning Platforms at Uber with Mike Del Balso - TWiML Talk #115</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/407159355-twiml-twiml-talk-115-scaling-machine-learning-uber-mike-del-balso.mp3</link>
      <description>In this episode, I speak with Mike Del Balso, Product Manager for Machine Learning Platforms at Uber. Mike and I sat down last fall at the Georgian Partners Portfolio conference to discuss his presentation “Finding success with machine learning in your company.” In our discussion, Mike shares some great advice for organizations looking to get value out of machine learning. He also details some of the pitfalls companies run into, such as not having proper infrastructure in place for maintenance and monitoring, not managing their expectations, and not putting the right tools in place for data science and development teams. On this last point, we touch on the Michelangelo platform, which Uber uses internally to build, deploy and maintain ML systems at scale, and the open source distributed TensorFlow system they’ve created, Horovod. This was a very insightful interview, so get your notepad ready! Vote on our #MyAI Contest! Over the past few weeks, you’ve heard us talk quite a bit about our #MyAI Contest, which explores the role we see for AI in our personal lives! We received some outstanding entries, and now it’s your turn to check them out and vote for a winner. Do this by visiting our contest page at https://twimlai.com/myai. Voting remains open until Sunday, March 4th at 11:59 PM Eastern time. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/115.</description>
      <pubDate>Thu, 01 Mar 2018 19:01:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>115</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/71050830-ee98-11eb-9502-9fb33af0fec7/image/artworks-000309683109-ucik41-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I speak with Mike Del Balso, Pro…</itunes:subtitle>
      <itunes:summary>In this episode, I speak with Mike Del Balso, Product Manager for Machine Learning Platforms at Uber. Mike and I sat down last fall at the Georgian Partners Portfolio conference to discuss his presentation “Finding success with machine learning in your company.” In our discussion, Mike shares some great advice for organizations looking to get value out of machine learning. He also details some of the pitfalls companies run into, such as not having proper infrastructure in place for maintenance and monitoring, not managing their expectations, and not putting the right tools in place for data science and development teams. On this last point, we touch on the Michelangelo platform, which Uber uses internally to build, deploy and maintain ML systems at scale, and the open source distributed TensorFlow system they’ve created, Horovod. This was a very insightful interview, so get your notepad ready! Vote on our #MyAI Contest! Over the past few weeks, you’ve heard us talk quite a bit about our #MyAI Contest, which explores the role we see for AI in our personal lives! We received some outstanding entries, and now it’s your turn to check them out and vote for a winner. Do this by visiting our contest page at https://twimlai.com/myai. Voting remains open until Sunday, March 4th at 11:59 PM Eastern time. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/115.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I speak with Mike Del Balso, Product Manager for Machine Learning Platforms at Uber. Mike and I sat down last fall at the Georgian Partners Portfolio conference to discuss his presentation “Finding success with machine learning in your company.” In our discussion, Mike shares some great advice for organizations looking to get value out of machine learning. He also details some of the pitfalls companies run into, such as not having proper infrastructure in place for maintenance and monitoring, not managing their expectations, and not putting the right tools in place for data science and development teams. On this last point, we touch on the Michelangelo platform, which Uber uses internally to build, deploy and maintain ML systems at scale, and the open source distributed TensorFlow system they’ve created, Horovod. This was a very insightful interview, so get your notepad ready! Vote on our #MyAI Contest! Over the past few weeks, you’ve heard us talk quite a bit about our #MyAI Contest, which explores the role we see for AI in our personal lives! We received some outstanding entries, and now it’s your turn to check them out and vote for a winner. Do this by visiting our contest page at https://twimlai.com/myai. Voting remains open until Sunday, March 4th at 11:59 PM Eastern time. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/115.]]>
      </content:encoded>
      <itunes:duration>2943</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/407159355]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2499039050.mp3?updated=1629216897"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Inverse Programming for Deeper AI with Zenna Tavares - TWiML Talk #114</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/405530181-twiml-twiml-talk-114-inverse-programming-deeper-ai-zenna-tavares.mp3</link>
      <description>For today’s show, the final episode of our Black in AI Series, I’m joined by Zenna Tavares, a PhD student in both the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Lab at MIT. I spent some time with Zenna after his talk at the Strange Loop conference titled “Running Programs in Reverse for Deeper AI.” Zenna shares some great insight into his work on program inversion, an idea which lies at the intersection of Bayesian modeling, deep learning, and computational logic. We set the stage with a discussion of inverse graphics and the similarities between graphic inversion and vision inversion. We then discuss the application of these techniques to intelligent systems, including the idea of parametric inversion. Last but not least, Zenna details how these techniques might be implemented, and discusses his work on ReverseFlow, a library to execute TensorFlow programs backwards, and Sigma.jl, a probabilistic programming environment implemented in the dynamic programming language Julia. This talk packs a punch, and I’m glad to share it with you. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/114. For complete contest details, visit twimlai.com/myai. For complete series details, visit twimlai.com/blackinai2018</description>
      <pubDate>Mon, 26 Feb 2018 18:29:49 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>114</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/71365782-ee98-11eb-9502-6b126140610a/image/artworks-000308146944-egteej-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>For today’s show, the final episode of our Black …</itunes:subtitle>
      <itunes:summary>For today’s show, the final episode of our Black in AI Series, I’m joined by Zenna Tavares, a PhD student in both the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Lab at MIT. I spent some time with Zenna after his talk at the Strange Loop conference titled “Running Programs in Reverse for Deeper AI.” Zenna shares some great insight into his work on program inversion, an idea which lies at the intersection of Bayesian modeling, deep learning, and computational logic. We set the stage with a discussion of inverse graphics and the similarities between graphic inversion and vision inversion. We then discuss the application of these techniques to intelligent systems, including the idea of parametric inversion. Last but not least, Zenna details how these techniques might be implemented, and discusses his work on ReverseFlow, a library to execute TensorFlow programs backwards, and Sigma.jl, a probabilistic programming environment implemented in the dynamic programming language Julia. This talk packs a punch, and I’m glad to share it with you. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/114. For complete contest details, visit twimlai.com/myai. For complete series details, visit twimlai.com/blackinai2018</itunes:summary>
      <content:encoded>
        <![CDATA[For today’s show, the final episode of our Black in AI Series, I’m joined by Zenna Tavares, a PhD student in both the Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Lab at MIT. I spent some time with Zenna after his talk at the Strange Loop conference titled “Running Programs in Reverse for Deeper AI.” Zenna shares some great insight into his work on program inversion, an idea which lies at the intersection of Bayesian modeling, deep learning, and computational logic. We set the stage with a discussion of inverse graphics and the similarities between graphic inversion and vision inversion. We then discuss the application of these techniques to intelligent systems, including the idea of parametric inversion. Last but not least, Zenna details how these techniques might be implemented, and discusses his work on ReverseFlow, a library to execute TensorFlow programs backwards, and Sigma.jl, a probabilistic programming environment implemented in the dynamic programming language Julia. This talk packs a punch, and I’m glad to share it with you. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/114. For complete contest details, visit twimlai.com/myai. For complete series details, visit twimlai.com/blackinai2018]]>
      </content:encoded>
      <itunes:duration>1708</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/405530181]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4008806622.mp3?updated=1629216867"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Statistical Relational Artificial Intelligence with Sriraam Natarajan - TWiML Talk #113</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/403829766-twiml-twiml-talk-113-statistical-relational-artificial-intelligence-sriraam-natarajan.mp3</link>
      <description>In this episode, I speak with Sriraam Natarajan, Associate Professor in the Department of Computer Science at UT Dallas. While at NIPS a few months back, Sriraam and I sat down to discuss his work on Statistical Relational Artificial Intelligence. StarAI is the combination of probabilistic &amp; statistical machine learning techniques with relational databases. We cover systems that learn on top of relational databases and make predictions with relational data, with quite a few examples from the healthcare field. Sriraam and his collaborators have also developed BoostSRL, a gradient-boosting-based approach to learning different types of statistical relational models. We briefly touch on this, along with other implementation approaches. Join the #MyAI Discussion! As a TWiML listener, you probably have an opinion on the role AI will play in our lives, and we want to hear your take. Sharing your thoughts takes two minutes, can be done from anywhere, and qualifies you to win some great prizes. So hit pause, and jump on over to twimlai.com/myai right now to share or learn more. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/113. For complete contest details, visit twimlai.com/myai.</description>
      <pubDate>Fri, 23 Feb 2018 02:14:16 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>113</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/71566554-ee98-11eb-9502-e73791326d2d/image/artworks-000306683154-tbwpiu-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I speak with Sriraam Natarajan, …</itunes:subtitle>
      <itunes:summary>In this episode, I speak with Sriraam Natarajan, Associate Professor in the Department of Computer Science at UT Dallas. While at NIPS a few months back, Sriraam and I sat down to discuss his work on Statistical Relational Artificial Intelligence. StarAI is the combination of probabilistic &amp; statistical machine learning techniques with relational databases. We cover systems that learn on top of relational databases and make predictions with relational data, with quite a few examples from the healthcare field. Sriraam and his collaborators have also developed BoostSRL, a gradient-boosting-based approach to learning different types of statistical relational models. We briefly touch on this, along with other implementation approaches. Join the #MyAI Discussion! As a TWiML listener, you probably have an opinion on the role AI will play in our lives, and we want to hear your take. Sharing your thoughts takes two minutes, can be done from anywhere, and qualifies you to win some great prizes. So hit pause, and jump on over to twimlai.com/myai right now to share or learn more. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/113. For complete contest details, visit twimlai.com/myai.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I speak with Sriraam Natarajan, Associate Professor in the Department of Computer Science at UT Dallas. While at NIPS a few months back, Sriraam and I sat down to discuss his work on Statistical Relational Artificial Intelligence. StarAI is the combination of probabilistic &amp; statistical machine learning techniques with relational databases. We cover systems that learn on top of relational databases and make predictions with relational data, with quite a few examples from the healthcare field. Sriraam and his collaborators have also developed BoostSRL, a gradient-boosting-based approach to learning different types of statistical relational models. We briefly touch on this, along with other implementation approaches. Join the #MyAI Discussion! As a TWiML listener, you probably have an opinion on the role AI will play in our lives, and we want to hear your take. Sharing your thoughts takes two minutes, can be done from anywhere, and qualifies you to win some great prizes. So hit pause, and jump on over to twimlai.com/myai right now to share or learn more. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/113. For complete contest details, visit twimlai.com/myai.]]>
      </content:encoded>
      <itunes:duration>2876</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/403829766]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4540354122.mp3?updated=1629216894"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Classical Machine Learning for Infant Medical Diagnosis with Charles Onu - TWiML Talk #112</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/402530979-twiml-twiml-talk-112-classical-machine-learning-infant-medical-diagnosis-charles-onu.mp3</link>
      <description>In this episode, part 4 in our Black in AI series, I'm joined by Charles Onu, PhD student at McGill University in Montreal &amp; Founder of Ubenwa, a startup tackling the problem of infant mortality due to asphyxia. Using SVMs and other techniques from the field of automatic speech recognition, Charles and his team have built a model that detects asphyxia based on the audible noises the child makes upon birth. We go into the process he used to collect his training data, including the specific methods they used to record samples, and how their samples will be used to maximize accuracy in the field. We also take a deep dive into some of the challenges of building and deploying the platform and mobile application. This is a really interesting use case, which I think you’ll enjoy. Join the #MyAI Discussion! As a TWiML listener, you probably have an opinion on the role AI will play in our lives, and we want to hear your take. Sharing your thoughts takes two minutes, can be done from anywhere, and qualifies you to win some great prizes. So hit pause, and jump on over to twimlai.com/myai right now to share or learn more. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/112. For complete contest details, visit twimlai.com/myai. For complete series details, visit twimlai.com/blackinai2018.</description>
      <pubDate>Tue, 20 Feb 2018 16:41:28 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>112</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7184a7ac-ee98-11eb-9502-1f9dc5d2b4db/image/artworks-000305511390-rngwft-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, part 4 in our Black in AI series…</itunes:subtitle>
      <itunes:summary>In this episode, part 4 in our Black in AI series, I'm joined by Charles Onu, PhD student at McGill University in Montreal &amp; Founder of Ubenwa, a startup tackling the problem of infant mortality due to asphyxia. Using SVMs and other techniques from the field of automatic speech recognition, Charles and his team have built a model that detects asphyxia based on the audible noises the child makes upon birth. We go into the process he used to collect his training data, including the specific methods they used to record samples, and how their samples will be used to maximize accuracy in the field. We also take a deep dive into some of the challenges of building and deploying the platform and mobile application. This is a really interesting use case, which I think you’ll enjoy. Join the #MyAI Discussion! As a TWiML listener, you probably have an opinion on the role AI will play in our lives, and we want to hear your take. Sharing your thoughts takes two minutes, can be done from anywhere, and qualifies you to win some great prizes. So hit pause, and jump on over to twimlai.com/myai right now to share or learn more. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/112. For complete contest details, visit twimlai.com/myai. For complete series details, visit twimlai.com/blackinai2018.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, part 4 in our Black in AI series, I'm joined by Charles Onu, PhD student at McGill University in Montreal & Founder of Ubenwa, a startup tackling the problem of infant mortality due to asphyxia. Using SVMs and other techniques from the field of automatic speech recognition, Charles and his team have built a model that detects asphyxia based on the audible noises the child makes upon birth. We go into the process he used to collect his training data, including the specific methods they used to record samples, and how their samples will be used to maximize accuracy in the field. We also take a deep dive into some of the challenges of building and deploying the platform and mobile application. This is a really interesting use case, which I think you’ll enjoy. Join the #MyAI Discussion! As a TWiML listener, you probably have an opinion on the role AI will play in our lives, and we want to hear your take. Sharing your thoughts takes two minutes, can be done from anywhere, and qualifies you to win some great prizes. So hit pause, and jump on over to twimlai.com/myai right now to share or learn more. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/112. For complete contest details, visit twimlai.com/myai. For complete series details, visit twimlai.com/blackinai2018.]]>
      </content:encoded>
      <itunes:duration>2895</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/402530979]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7912289644.mp3?updated=1629216898"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Learning "Common Sense" and Physical Concepts with Roland Memisevic - TWiML Talk #111</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/400102980-twiml-twiml-talk-111-learning-common-sense-physical-concepts-roland-memisevic.mp3</link>
      <description>In today’s episode, I’m joined by Roland Memisevic, co-founder, CEO, and chief scientist at Twenty Billion Neurons. Roland joined me at the RE•WORK Deep Learning Summit in Montreal to discuss the work his company is doing to train deep neural networks to understand physical actions. In our conversation, we dig into video analysis and understanding, including how data-rich video can help us develop what Roland calls comparative understanding, or AI “common sense”. We briefly touch on the implications of AI/ML systems having comparative understanding, and how Roland and his team are addressing problems like getting properly labeled training data. Enter Our #MyAI Contest! Are you looking forward to the role AI will play in your life, or in your children’s lives? Or, are you afraid of what’s to come, and the changes AI will bring? Or, maybe you’re skeptical, and don’t think we’ll ever really achieve enough with AI to make a difference? In any case, if you’re a TWiML listener, you probably have an opinion on the role AI will play in our lives, and we want to hear your take. Sharing your thoughts takes two minutes, can be done from anywhere, and qualifies you to win some great prizes. So hit pause, and jump on over to twimlai.com/myai right now to share or learn more. The notes for this show can be found at twimlai.com/talk/111.</description>
      <pubDate>Thu, 15 Feb 2018 17:54:05 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>111</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/71a7ff54-ee98-11eb-9502-87df96bb4d99/image/artworks-000302681379-d7y85b-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In today’s episode, I’m joined by Roland Memisevi…</itunes:subtitle>
      <itunes:summary>In today’s episode, I’m joined by Roland Memisevic, co-founder, CEO, and chief scientist at Twenty Billion Neurons. Roland joined me at the RE•WORK Deep Learning Summit in Montreal to discuss the work his company is doing to train deep neural networks to understand physical actions. In our conversation, we dig into video analysis and understanding, including how data-rich video can help us develop what Roland calls comparative understanding, or AI “common sense”. We briefly touch on the implications of AI/ML systems having comparative understanding, and how Roland and his team are addressing problems like getting properly labeled training data. Enter Our #MyAI Contest! Are you looking forward to the role AI will play in your life, or in your children’s lives? Or, are you afraid of what’s to come, and the changes AI will bring? Or, maybe you’re skeptical, and don’t think we’ll ever really achieve enough with AI to make a difference? In any case, if you’re a TWiML listener, you probably have an opinion on the role AI will play in our lives, and we want to hear your take. Sharing your thoughts takes two minutes, can be done from anywhere, and qualifies you to win some great prizes. So hit pause, and jump on over to twimlai.com/myai right now to share or learn more. The notes for this show can be found at twimlai.com/talk/111.</itunes:summary>
      <content:encoded>
        <![CDATA[In today’s episode, I’m joined by Roland Memisevic, co-founder, CEO, and chief scientist at Twenty Billion Neurons. Roland joined me at the RE•WORK Deep Learning Summit in Montreal to discuss the work his company is doing to train deep neural networks to understand physical actions. In our conversation, we dig into video analysis and understanding, including how data-rich video can help us develop what Roland calls comparative understanding, or AI “common sense”. We briefly touch on the implications of AI/ML systems having comparative understanding, and how Roland and his team are addressing problems like getting properly labeled training data. Enter Our #MyAI Contest! Are you looking forward to the role AI will play in your life, or in your children’s lives? Or, are you afraid of what’s to come, and the changes AI will bring? Or, maybe you’re skeptical, and don’t think we’ll ever really achieve enough with AI to make a difference? In any case, if you’re a TWiML listener, you probably have an opinion on the role AI will play in our lives, and we want to hear your take. Sharing your thoughts takes two minutes, can be done from anywhere, and qualifies you to win some great prizes. So hit pause, and jump on over to twimlai.com/myai right now to share or learn more. The notes for this show can be found at twimlai.com/talk/111.]]>
      </content:encoded>
      <itunes:duration>1977</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/400102980]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9571355282.mp3?updated=1629216882"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Trust in Human-Robot/AI Interactions with Ayanna Howard - TWiML Talk #110</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/398718669-twiml-twiml-talk-110-trust-human-robot-ai-interactions-ayanna-howard.mp3</link>
      <description>In this episode, the third in our Black in AI series, I speak with Ayanna Howard, Chair of the School of Interactive Computing at Georgia Tech. Ayanna joined me for a lively discussion about her work in the field of human-robot interaction. We dig deep into a couple of major areas she’s active in that have significant implications for the way we design and use artificial intelligence, namely pediatric robotics and human-robot trust. That latter bit is particularly interesting, and Ayanna provides a fascinating overview of a few of her experiments, including a simulation of an emergency situation, where, well, I don’t want to spoil it, but let’s just say as the actual intelligent beings, we need to make some better decisions. Enjoy! Are you looking forward to the role AI will play in your life, or in your children’s lives? Or, are you afraid of what’s to come, and the changes AI will bring? Or, maybe you’re skeptical, and don’t think we’ll ever really achieve enough with AI to make a difference? As a TWiML listener, you probably have an opinion on the role AI will play in our lives, and we want to hear your take. Sharing your thoughts takes two minutes, can be done from anywhere, and qualifies you to win some great prizes. So hit pause, and jump on over to twimlai.com/myai right now to share or learn more. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/110. For complete contest details, visit twimlai.com/myai. For complete series details, visit twimlai.com/blackinai2018.</description>
      <pubDate>Tue, 13 Feb 2018 00:38:31 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>110</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/71c512ec-ee98-11eb-9502-ab5acf1389d6/image/artworks-000301398966-vtv2dc-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, the third in our Black in AI ser…</itunes:subtitle>
      <itunes:summary>In this episode, the third in our Black in AI series, I speak with Ayanna Howard, Chair of the School of Interactive Computing at Georgia Tech. Ayanna joined me for a lively discussion about her work in the field of human-robot interaction. We dig deep into a couple of major areas she’s active in that have significant implications for the way we design and use artificial intelligence, namely pediatric robotics and human-robot trust. That latter bit is particularly interesting, and Ayanna provides a fascinating overview of a few of her experiments, including a simulation of an emergency situation, where, well, I don’t want to spoil it, but let’s just say as the actual intelligent beings, we need to make some better decisions. Enjoy! Are you looking forward to the role AI will play in your life, or in your children’s lives? Or, are you afraid of what’s to come, and the changes AI will bring? Or, maybe you’re skeptical, and don’t think we’ll ever really achieve enough with AI to make a difference? As a TWiML listener, you probably have an opinion on the role AI will play in our lives, and we want to hear your take. Sharing your thoughts takes two minutes, can be done from anywhere, and qualifies you to win some great prizes. So hit pause, and jump on over to twimlai.com/myai right now to share or learn more. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/110. For complete contest details, visit twimlai.com/myai. For complete series details, visit twimlai.com/blackinai2018.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, the third in our Black in AI series, I speak with Ayanna Howard, Chair of the School of Interactive Computing at Georgia Tech. Ayanna joined me for a lively discussion about her work in the field of human-robot interaction. We dig deep into a couple of major areas she’s active in that have significant implications for the way we design and use artificial intelligence, namely pediatric robotics and human-robot trust. That latter bit is particularly interesting, and Ayanna provides a fascinating overview of a few of her experiments, including a simulation of an emergency situation, where, well, I don’t want to spoil it, but let’s just say as the actual intelligent beings, we need to make some better decisions. Enjoy! Are you looking forward to the role AI will play in your life, or in your children’s lives? Or, are you afraid of what’s to come, and the changes AI will bring? Or, maybe you’re skeptical, and don’t think we’ll ever really achieve enough with AI to make a difference? As a TWiML listener, you probably have an opinion on the role AI will play in our lives, and we want to hear your take. Sharing your thoughts takes two minutes, can be done from anywhere, and qualifies you to win some great prizes. So hit pause, and jump on over to twimlai.com/myai right now to share or learn more. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/110. For complete contest details, visit twimlai.com/myai. For complete series details, visit twimlai.com/blackinai2018.]]>
      </content:encoded>
      <itunes:duration>2810</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/398718669]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5900937958.mp3?updated=1629216893"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Data Science for Poaching Prevention and Disease Treatment with Nyalleng Moorosi - TWiML Talk #109</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/396479022-twiml-twiml-talk-109-data-science-poaching-prevention-disease-treatment-nyalleng-moorosi.mp3</link>
      <description>For today’s show, I'm joined by Nyalleng Moorosi, Senior Data Science Researcher at The Council for Scientific &amp; Industrial Research, or CSIR, in Pretoria, South Africa. We discuss two major projects that Nyalleng is a part of at the CSIR: one, a predictive policing use case focused on understanding and preventing rhino poaching in Kruger National Park, and the other, a healthcare use case focused on understanding the effects of a drug treatment that was causing pancreatic cancer in South Africans. Along the way we talk about the challenges of data collection, data pipelines, and overcoming sparsity. This was a really interesting conversation that I’m sure you’ll enjoy. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/109. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/blackinai2018.</description>
      <pubDate>Thu, 08 Feb 2018 18:39:23 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>109</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/71ec3818-ee98-11eb-9502-4f76fb8cae5a/image/artworks-000299255463-vmgzmk-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>For today’s show, I'm joined by Nyalleng Moorosi,…</itunes:subtitle>
      <itunes:summary>For today’s show, I'm joined by Nyalleng Moorosi, Senior Data Science Researcher at The Council for Scientific &amp; Industrial Research, or CSIR, in Pretoria, South Africa. We discuss two major projects that Nyalleng is a part of at the CSIR: one, a predictive policing use case focused on understanding and preventing rhino poaching in Kruger National Park, and the other, a healthcare use case focused on understanding the effects of a drug treatment that was causing pancreatic cancer in South Africans. Along the way we talk about the challenges of data collection, data pipelines, and overcoming sparsity. This was a really interesting conversation that I’m sure you’ll enjoy. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/109. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/blackinai2018.</itunes:summary>
      <content:encoded>
        <![CDATA[For today’s show, I'm joined by Nyalleng Moorosi, Senior Data Science Researcher at The Council for Scientific & Industrial Research, or CSIR, in Pretoria, South Africa. We discuss two major projects that Nyalleng is a part of at the CSIR: one, a predictive policing use case focused on understanding and preventing rhino poaching in Kruger National Park, and the other, a healthcare use case focused on understanding the effects of a drug treatment that was causing pancreatic cancer in South Africans. Along the way we talk about the challenges of data collection, data pipelines, and overcoming sparsity. This was a really interesting conversation that I’m sure you’ll enjoy. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. The notes for this show can be found at twimlai.com/talk/109. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/blackinai2018.]]>
      </content:encoded>
      <itunes:duration>3178</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/396479022]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2148141181.mp3?updated=1629216899"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Security and Safety in AI: Adversarial Examples, Bias and Trust w/ Moustapha Cissé - TWiML Talk #108</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/395109459-twiml-twiml-talk-108-security-safety-ai-adversarial-examples-bias-trust-moustapha-cisse.mp3</link>
      <description>In this episode I’m joined by Moustapha Cissé, Research Scientist at Facebook AI Research Lab (or FAIR) Paris. Moustapha’s broad research interests include the security and safety of AI systems, and we spend some time discussing his work on adversarial examples and systems that are robust to adversarial attacks. More broadly, we discuss the role of bias in datasets, and explore his vision for models that can identify these biases and adjust the way they train themselves in order to avoid taking on those biases. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/108. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/blackinai2018.</description>
      <pubDate>Tue, 06 Feb 2018 00:54:32 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>108</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/720ed21a-ee98-11eb-9502-879bbd0d90a2/image/artworks-000297969969-ai5vxy-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode I’m joined by Moustapha Cissé, Re…</itunes:subtitle>
      <itunes:summary>In this episode I’m joined by Moustapha Cissé, Research Scientist at Facebook AI Research Lab (or FAIR) Paris. Moustapha’s broad research interests include the security and safety of AI systems, and we spend some time discussing his work on adversarial examples and systems that are robust to adversarial attacks. More broadly, we discuss the role of bias in datasets, and explore his vision for models that can identify these biases and adjust the way they train themselves in order to avoid taking on those biases. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/108. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/blackinai2018.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode I’m joined by Moustapha Cissé, Research Scientist at Facebook AI Research Lab (or FAIR) Paris. Moustapha’s broad research interests include the security and safety of AI systems, and we spend some time discussing his work on adversarial examples and systems that are robust to adversarial attacks. More broadly, we discuss the role of bias in datasets, and explore his vision for models that can identify these biases and adjust the way they train themselves in order to avoid taking on those biases. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/108. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/blackinai2018.]]>
      </content:encoded>
      <itunes:duration>3020</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/395109459]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4628026848.mp3?updated=1629216895"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Peering into the Home w/ Aerial.ai's Wifi Motion Analytics - TWiML Talk #107</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/393602724-twiml-twiml-talk-107-peering-home-w-aerials-wifi-motion-analytics-michel-allegue-negar-ghourchian.mp3</link>
      <description>In this episode I’m joined by Michel Allegue and Negar Ghourchian of Aerial.ai. Aerial is doing some really interesting things in the home automation space, by using wifi signal statistics to identify and understand what’s happening in our homes and office environments. Michel, the CTO, describes some of the capabilities of their platform, including its ability to detect not only people and pets within the home, but surprising characteristics like breathing rates and patterns. He also gives us a look into the data collection process, including the types of data needed, how they obtain it, and how it is parsed. Negar, a senior data scientist with Aerial, describes the types of models used, including semi-supervised, unsupervised and signal processing based models, and how they’ve scaled their platform, and provides us with some real-world use cases. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/107. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.</description>
      <pubDate>Fri, 02 Feb 2018 21:08:11 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>107</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/722dd6e2-ee98-11eb-9502-7333be0c43bb/image/artworks-000296516529-u3v9a8-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode I’m joined by Michel Allegue and …</itunes:subtitle>
      <itunes:summary>In this episode I’m joined by Michel Allegue and Negar Ghourchian of Aerial.ai. Aerial is doing some really interesting things in the home automation space, by using wifi signal statistics to identify and understand what’s happening in our homes and office environments. Michel, the CTO, describes some of the capabilities of their platform, including its ability to detect not only people and pets within the home, but surprising characteristics like breathing rates and patterns. He also gives us a look into the data collection process, including the types of data needed, how they obtain it, and how it is parsed. Negar, a senior data scientist with Aerial, describes the types of models used, including semi-supervised, unsupervised and signal processing based models, and how they’ve scaled their platform, and provides us with some real-world use cases. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/107. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode I’m joined by Michel Allegue and Negar Ghourchian of Aerial.ai. Aerial is doing some really interesting things in the home automation space, by using wifi signal statistics to identify and understand what’s happening in our homes and office environments. Michel, the CTO, describes some of the capabilities of their platform, including its ability to detect not only people and pets within the home, but surprising characteristics like breathing rates and patterns. He also gives us a look into the data collection process, including the types of data needed, how they obtain it, and how it is parsed. Negar, a senior data scientist with Aerial, describes the types of models used, including semi-supervised, unsupervised and signal processing based models, and how they’ve scaled their platform, and provides us with some real-world use cases. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/107. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.]]>
      </content:encoded>
      <itunes:duration>2447</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/393602724]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7124090932.mp3?updated=1629216888"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Physiology-Based Models for Fitness and Training w/ Firstbeat with Ilkka Korhonen - TWiML Talk #106</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/393538632-twiml-twiml-talk-106-physiology-based-models-fitness-training-w-firstbeat-ilkka-korhonen.mp3</link>
      <description>In this episode I'm joined by Ilkka Korhonen, Vice President of Technology at Firstbeat, a company whose algorithms are embedded in fitness watches from companies like Garmin and Suunto and which use your heartbeat data to offer personalized insights into stress, fitness, recovery and sleep patterns. We cover a ton about Firstbeat in the conversation, including how they transform the sensor readings into more actionable data, their use of a digital physiological model of the human body, how they use sensor data to identify and predict physiological changes within the body, and some of the opportunities that Firstbeat has to further apply ML in the future. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/106. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.</description>
      <pubDate>Fri, 02 Feb 2018 18:52:41 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>106</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/724f3f12-ee98-11eb-9502-efc9e652f205/image/artworks-000296454993-vjs1jb-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode I'm joined by Ilkka Korhonen, Vic…</itunes:subtitle>
      <itunes:summary>In this episode I'm joined by Ilkka Korhonen, Vice President of Technology at Firstbeat, a company whose algorithms are embedded in fitness watches from companies like Garmin and Suunto and which use your heartbeat data to offer personalized insights into stress, fitness, recovery and sleep patterns. We cover a ton about Firstbeat in the conversation, including how they transform the sensor readings into more actionable data, their use of a digital physiological model of the human body, how they use sensor data to identify and predict physiological changes within the body, and some of the opportunities that Firstbeat has to further apply ML in the future. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/106. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode I'm joined by Ilkka Korhonen, Vice President of Technology at Firstbeat, a company whose algorithms are embedded in fitness watches from companies like Garmin and Suunto and which use your heartbeat data to offer personalized insights into stress, fitness, recovery and sleep patterns. We cover a ton about Firstbeat in the conversation, including how they transform the sensor readings into more actionable data, their use of a digital physiological model of the human body, how they use sensor data to identify and predict physiological changes within the body, and some of the opportunities that Firstbeat has to further apply ML in the future. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/106. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.]]>
      </content:encoded>
      <itunes:duration>2140</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/393538632]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5228772662.mp3?updated=1629216883"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Machine Learning for Signal Processing Applications w/ Stuart Feffer &amp; Brady Tsai - TWiML Talk #105</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/393013443-twiml-twiml-talk-105-machine-learning-signal-processing-applications-stuart-feffer-brady-tsai.mp3</link>
      <description>In this episode, I'm joined by Stuart Feffer, co-founder and CEO of Reality AI, which provides tools and services for engineers working with sensors and signals, and Brady Tsai, Business Development Manager at Koito, which develops automotive lighting solutions for car manufacturers. Stuart and Brady joined me at CES a few weeks ago after they announced a partnership to bring Adaptive Driving Beam, or ADB, headlights to North America. Brady explains what exactly ADB technology is and how it works, while Stuart walks me through the technical aspects of not only this partnership, but of the Reality AI platform as a whole. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/105. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.</description>
      <pubDate>Thu, 01 Feb 2018 17:58:59 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>105</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/726d9be2-ee98-11eb-9502-8fa79199d0c6/image/artworks-000295491003-lyip4r-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I'm joined by Stuart Feffer, co-…</itunes:subtitle>
      <itunes:summary>In this episode, I'm joined by Stuart Feffer, co-founder and CEO of Reality AI, which provides tools and services for engineers working with sensors and signals, and Brady Tsai, Business Development Manager at Koito, which develops automotive lighting solutions for car manufacturers. Stuart and Brady joined me at CES a few weeks ago after they announced a partnership to bring Adaptive Driving Beam, or ADB, headlights to North America. Brady explains what exactly ADB technology is and how it works, while Stuart walks me through the technical aspects of not only this partnership, but of the Reality AI platform as a whole. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/105. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I'm joined by Stuart Feffer, co-founder and CEO of Reality AI, which provides tools and services for engineers working with sensors and signals, and Brady Tsai, Business Development Manager at Koito, which develops automotive lighting solutions for car manufacturers. Stuart and Brady joined me at CES a few weeks ago after they announced a partnership to bring Adaptive Driving Beam, or ADB, headlights to North America. Brady explains what exactly ADB technology is and how it works, while Stuart walks me through the technical aspects of not only this partnership, but of the Reality AI platform as a whole. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/105. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.]]>
      </content:encoded>
      <itunes:duration>2188</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/393013443]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1416567668.mp3?updated=1629216868"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Personalizing the Ferrari Challenge Experience w/ Intel AI - TWiML Talk #104</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/392461950-twiml-twiml-talk-104-personalizing-ferrari-challenge-experience-andy-keller-emile-chin-dickey.mp3</link>
      <description>In this episode, I'm joined by Andy Keller and Emile Chin-Dickey to discuss Intel's partnership with the Ferrari Challenge North American Series. Andy is a Deep Learning Data Scientist at Intel and Emile is Senior Manager of Marketing Partnerships at the company. In this show, Emile gives us a high-level overview of the Ferrari Challenge partnership and the goals of the collaboration. Andy &amp; I then dive into the AI aspects of the project, including how the training data was collected, the techniques they used to perform fine-grained object detection in the video streams, how they built the analytics platform, some of the remaining challenges with this project, and more! Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/104. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.</description>
      <pubDate>Wed, 31 Jan 2018 17:03:37 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>104</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/72877044-ee98-11eb-9502-7343537548c2/image/artworks-000294254250-ex3l3k-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I'm joined by Andy Keller and Em…</itunes:subtitle>
      <itunes:summary>In this episode, I'm joined by Andy Keller and Emile Chin-Dickey to discuss Intel's partnership with the Ferrari Challenge North American Series. Andy is a Deep Learning Data Scientist at Intel and Emile is Senior Manager of Marketing Partnerships at the company. In this show, Emile gives us a high-level overview of the Ferrari Challenge partnership and the goals of the collaboration. Andy &amp; I then dive into the AI aspects of the project, including how the training data was collected, the techniques they used to perform fine-grained object detection in the video streams, how they built the analytics platform, some of the remaining challenges with this project, and more! Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/104. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I'm joined by Andy Keller and Emile Chin-Dickey to discuss Intel's partnership with the Ferrari Challenge North American Series. Andy is a Deep Learning Data Scientist at Intel and Emile is Senior Manager of Marketing Partnerships at the company. In this show, Emile gives us a high-level overview of the Ferrari Challenge partnership and the goals of the collaboration. Andy &amp; I then dive into the AI aspects of the project, including how the training data was collected, the techniques they used to perform fine-grained object detection in the video streams, how they built the analytics platform, some of the remaining challenges with this project, and more! Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/104. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.]]>
      </content:encoded>
      <itunes:duration>2260</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/392461950]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8519641982.mp3?updated=1629216885"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Learning for 3D Sensors and Cameras in Lighthouse with Alex Teichman - TWiML Talk #103</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/391964598-twiml-twiml-talk-103-deep-learning-3d-sensors-cameras-lighthouse-alex-teichman.mp3</link>
      <description>In this episode, I sit down with Alex Teichman, CEO and Co-Founder of Lighthouse, a company taking a new approach to the in-home smart camera. Alex and I dig into what exactly the Lighthouse product is, and all the interesting stuff inside, including its combination of 3D sensing, computer vision, and NLP. We also talk about Alex’s process for building the Lighthouse network architecture, the tech stack the product is based on, and some things that surprised him in their efforts to get AI into a consumer product. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/103. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.</description>
      <pubDate>Tue, 30 Jan 2018 18:58:47 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>103</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/72abb008-ee98-11eb-9502-7fb8be2e5575/image/artworks-000293605968-si58uj-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I sit down with Alex Teichman, C…</itunes:subtitle>
      <itunes:summary>In this episode, I sit down with Alex Teichman, CEO and Co-Founder of Lighthouse, a company taking a new approach to the in-home smart camera. Alex and I dig into what exactly the Lighthouse product is, and all the interesting stuff inside, including its combination of 3D sensing, computer vision, and NLP. We also talk about Alex’s process for building the Lighthouse network architecture, the tech stack the product is based on, and some things that surprised him in their efforts to get AI into a consumer product. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/103. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I sit down with Alex Teichman, CEO and Co-Founder of Lighthouse, a company taking a new approach to the in-home smart camera. Alex and I dig into what exactly the Lighthouse product is, and all the interesting stuff inside, including its combination of 3D sensing, computer vision, and NLP. We also talk about Alex’s process for building the Lighthouse network architecture, the tech stack the product is based on, and some things that surprised him in their efforts to get AI into a consumer product. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/103. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.]]>
      </content:encoded>
      <itunes:duration>2527</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/391964598]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8951998058.mp3?updated=1629216889"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Computer Vision for Cozmo, the Cutest Toy Robot Everrrrr! with Andrew Stein - TWiML Talk #102</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/391506162-twiml-twiml-talk-102-computer-vision-cozmo-cutest-toy-robot-everrrrr-andrew-stein.mp3</link>
      <description>In this episode, I'm joined by Andrew Stein, computer vision engineer at consumer robotics company Anki, and his partner in crime Cozmo, a toy robot with tons of personality. Andrew joined me during the hustle and bustle of CES a few weeks ago to give me some insight into how Cozmo works, plays, and learns, and how he’s different from other consumer robots you may know, such as the Roomba. We discuss the types of algorithms that help power Cozmo, such as facial detection and recognition, 3D pose recognition, reasoning, and even some simple emotional AI. We also cover Cozmo’s functionality and programmability, including a cool feature called Code Lab. This was a really fun interview, and you’ll be happy to know there’s a companion video starring Cozmo himself right here: https://youtu.be/jUkacU1I0QI. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/102. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.</description>
      <pubDate>Tue, 30 Jan 2018 01:23:16 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>102</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/72c94424-ee98-11eb-9502-0bd803f72795/image/artworks-000293153013-70deb5-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I'm joined by Andrew Stein, comp…</itunes:subtitle>
      <itunes:summary>In this episode, I'm joined by Andrew Stein, computer vision engineer at consumer robotics company Anki, and his partner in crime Cozmo, a toy robot with tons of personality. Andrew joined me during the hustle and bustle of CES a few weeks ago to give me some insight into how Cozmo works, plays, and learns, and how he’s different from other consumer robots you may know, such as the Roomba. We discuss the types of algorithms that help power Cozmo, such as facial detection and recognition, 3D pose recognition, reasoning, and even some simple emotional AI. We also cover Cozmo’s functionality and programmability, including a cool feature called Code Lab. This was a really fun interview, and you’ll be happy to know there’s a companion video starring Cozmo himself right here: https://youtu.be/jUkacU1I0QI. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/102. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I'm joined by Andrew Stein, computer vision engineer at consumer robotics company Anki, and his partner in crime Cozmo, a toy robot with tons of personality. Andrew joined me during the hustle and bustle of CES a few weeks ago to give me some insight into how Cozmo works, plays, and learns, and how he’s different from other consumer robots you may know, such as the Roomba. We discuss the types of algorithms that help power Cozmo, such as facial detection and recognition, 3D pose recognition, reasoning, and even some simple emotional AI. We also cover Cozmo’s functionality and programmability, including a cool feature called Code Lab. This was a really fun interview, and you’ll be happy to know there’s a companion video starring Cozmo himself right here: https://youtu.be/jUkacU1I0QI. Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early price ends February 2! The notes for this show can be found at twimlai.com/talk/102. For complete contest details, visit twimlai.com/myaicontest. For complete series details, visit twimlai.com/aiathome.]]>
      </content:encoded>
      <itunes:duration>2631</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/391506162]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5874004262.mp3?updated=1629216890"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Expectation Maximization, Gaussian Mixtures &amp; Belief Propagation, OH MY! w/ Inmar Givoni - Talk #101</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/388445721-twiml-twiml-talk-101-expectation-maximization-gaussian-mixtures-belief-propagation-oh-inmar-givoni.mp3</link>
<description>In this episode I'm joined by Inmar Givoni, Autonomy Engineering Manager at Uber ATG, to discuss her work on the paper Min-Max Propagation, which was presented at NIPS last month in Long Beach. Inmar and I get into a meaty discussion about graphical models, including what they are and how they’re used, some of the challenges they present for both training and inference, and how and where they can be best applied. Then we jump into an in-depth look at the key ideas behind the Min-Max Propagation paper itself, including the relationship to the broader domain of belief propagation and ideas like affinity propagation, and how all these can be applied to a use case example like the makespan problem. This was a really fun conversation! Enjoy! Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML. Visit twimlai.com/ainy2018 for registration details. Early price ends February 2!</description>
      <pubDate>Fri, 26 Jan 2018 17:23:59 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/72ea4f0c-ee98-11eb-9502-bb10a672ac08/image/artworks-000291137007-7p78t0-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
<itunes:subtitle>In this episode I'm joined by Inmar Givoni, Auton…</itunes:subtitle>
<itunes:summary>In this episode I'm joined by Inmar Givoni, Autonomy Engineering Manager at Uber ATG, to discuss her work on the paper Min-Max Propagation, which was presented at NIPS last month in Long Beach. Inmar and I get into a meaty discussion about graphical models, including what they are and how they’re used, some of the challenges they present for both training and inference, and how and where they can be best applied. Then we jump into an in-depth look at the key ideas behind the Min-Max Propagation paper itself, including the relationship to the broader domain of belief propagation and ideas like affinity propagation, and how all these can be applied to a use case example like the makespan problem. This was a really fun conversation! Enjoy! Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML. Visit twimlai.com/ainy2018 for registration details. Early price ends February 2!</itunes:summary>
      <content:encoded>
<![CDATA[In this episode I'm joined by Inmar Givoni, Autonomy Engineering Manager at Uber ATG, to discuss her work on the paper Min-Max Propagation, which was presented at NIPS last month in Long Beach. Inmar and I get into a meaty discussion about graphical models, including what they are and how they’re used, some of the challenges they present for both training and inference, and how and where they can be best applied. Then we jump into an in-depth look at the key ideas behind the Min-Max Propagation paper itself, including the relationship to the broader domain of belief propagation and ideas like affinity propagation, and how all these can be applied to a use case example like the makespan problem. This was a really fun conversation! Enjoy! Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you'll join the leading minds in AI, Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI's latest developments, separate what's hype and what's really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML. Visit twimlai.com/ainy2018 for registration details. Early price ends February 2!]]>
      </content:encoded>
      <itunes:duration>2937</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/388445721]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9186936399.mp3?updated=1627362853"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>A Linear-Time Kernel Goodness-of-Fit Test - NIPS Best Paper '17 - TWiML Talk #100</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/388292339-twiml-twiml-talk-100-linear-time-kernel-goodness-fit-test-wittawat-jitkrittum-zoltan-szabo-kenji-fukumizu-arthur-gretton-nips-best-paper-17.mp3</link>
<description>In this episode, I speak with Arthur Gretton, Wittawat Jitkrittum, Zoltan Szabo and Kenji Fukumizu, who, alongside Wenkai Xu, authored the 2017 NIPS Best Paper Award winner “A Linear-Time Kernel Goodness-of-Fit Test.” In our discussion, we cover what exactly a “goodness of fit” test is, and how it can be used to determine how well a statistical model applies to a given real-world scenario. The group and I then discuss this particular test, the applications of this work, as well as how this work fits in with other research the group has recently published. Enjoy! This is your last chance to register for the RE•WORK Deep Learning and AI Assistant Summits in San Francisco, which are this Thursday and Friday, January 25th and 26th. These events feature leading researchers and technologists like the ones you heard in our Deep Learning Summit series last week. The San Francisco event is headlined by Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration. The notes for this show can be found at twimlai.com/talk/100.</description>
      <pubDate>Wed, 24 Jan 2018 17:08:29 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>100</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/730d8c92-ee98-11eb-9502-8324815d6908/image/artworks-000289902914-906ghf-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I speak with Arthur Gretton, Wit…</itunes:subtitle>
<itunes:summary>In this episode, I speak with Arthur Gretton, Wittawat Jitkrittum, Zoltan Szabo and Kenji Fukumizu, who, alongside Wenkai Xu, authored the 2017 NIPS Best Paper Award winner “A Linear-Time Kernel Goodness-of-Fit Test.” In our discussion, we cover what exactly a “goodness of fit” test is, and how it can be used to determine how well a statistical model applies to a given real-world scenario. The group and I then discuss this particular test, the applications of this work, as well as how this work fits in with other research the group has recently published. Enjoy! This is your last chance to register for the RE•WORK Deep Learning and AI Assistant Summits in San Francisco, which are this Thursday and Friday, January 25th and 26th. These events feature leading researchers and technologists like the ones you heard in our Deep Learning Summit series last week. The San Francisco event is headlined by Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration. The notes for this show can be found at twimlai.com/talk/100.</itunes:summary>
      <content:encoded>
<![CDATA[In this episode, I speak with Arthur Gretton, Wittawat Jitkrittum, Zoltan Szabo and Kenji Fukumizu, who, alongside Wenkai Xu, authored the 2017 NIPS Best Paper Award winner “A Linear-Time Kernel Goodness-of-Fit Test.” In our discussion, we cover what exactly a “goodness of fit” test is, and how it can be used to determine how well a statistical model applies to a given real-world scenario. The group and I then discuss this particular test, the applications of this work, as well as how this work fits in with other research the group has recently published. Enjoy! This is your last chance to register for the RE•WORK Deep Learning and AI Assistant Summits in San Francisco, which are this Thursday and Friday, January 25th and 26th. These events feature leading researchers and technologists like the ones you heard in our Deep Learning Summit series last week. The San Francisco event is headlined by Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration. The notes for this show can be found at twimlai.com/talk/100.]]>
      </content:encoded>
      <itunes:duration>1349</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/388292339]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8531827198.mp3?updated=1629216853"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Solving Imperfect-Information Games with Tuomas Sandholm - NIPS ’17 Best Paper - TWiML Talk #99</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/387424580-twiml-twiml-talk-99-nips-best-paper-safe-nested-subgame-solving-imperfect-information-games-tuomas-sandholm.mp3</link>
<description>In this episode I speak with Tuomas Sandholm, Carnegie Mellon University Professor and Founder and CEO of startups Optimized Markets and Strategic Machine. Tuomas, along with his PhD student Noam Brown, won a 2017 NIPS Best Paper award for their paper “Safe and Nested Subgame Solving for Imperfect-Information Games.” Tuomas and I dig into the significance of the paper, including a breakdown of perfect vs imperfect information games, the role of abstractions in game solving, and how the concept of safety applies to gameplay. We discuss how all these elements and techniques are applied to poker, and how the algorithm described in this paper was used by Noam and Tuomas to create Libratus, the first AI to beat top human pros in No Limit Texas Hold’em, a particularly difficult game to beat due to its large state space. This was a fascinating interview that I'm really excited to share with you all. Enjoy! This is your last chance to register for the RE•WORK Deep Learning and AI Assistant Summits in San Francisco, which are this Thursday and Friday, January 25th and 26th. These events feature leading researchers and technologists like the ones you heard in our Deep Learning Summit series last week. The San Francisco event is headlined by Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration. The notes for this show can be found at twimlai.com/talk/99.</description>
      <pubDate>Mon, 22 Jan 2018 17:38:56 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>99</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/732b36de-ee98-11eb-9502-e712fc55fd45/image/artworks-000289014224-wr6xfv-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode I speak with Tuomas Sandholm, Car…</itunes:subtitle>
<itunes:summary>In this episode I speak with Tuomas Sandholm, Carnegie Mellon University Professor and Founder and CEO of startups Optimized Markets and Strategic Machine. Tuomas, along with his PhD student Noam Brown, won a 2017 NIPS Best Paper award for their paper “Safe and Nested Subgame Solving for Imperfect-Information Games.” Tuomas and I dig into the significance of the paper, including a breakdown of perfect vs imperfect information games, the role of abstractions in game solving, and how the concept of safety applies to gameplay. We discuss how all these elements and techniques are applied to poker, and how the algorithm described in this paper was used by Noam and Tuomas to create Libratus, the first AI to beat top human pros in No Limit Texas Hold’em, a particularly difficult game to beat due to its large state space. This was a fascinating interview that I'm really excited to share with you all. Enjoy! This is your last chance to register for the RE•WORK Deep Learning and AI Assistant Summits in San Francisco, which are this Thursday and Friday, January 25th and 26th. These events feature leading researchers and technologists like the ones you heard in our Deep Learning Summit series last week. The San Francisco event is headlined by Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration. The notes for this show can be found at twimlai.com/talk/99.</itunes:summary>
      <content:encoded>
<![CDATA[In this episode I speak with Tuomas Sandholm, Carnegie Mellon University Professor and Founder and CEO of startups Optimized Markets and Strategic Machine. Tuomas, along with his PhD student Noam Brown, won a 2017 NIPS Best Paper award for their paper “Safe and Nested Subgame Solving for Imperfect-Information Games.” Tuomas and I dig into the significance of the paper, including a breakdown of perfect vs imperfect information games, the role of abstractions in game solving, and how the concept of safety applies to gameplay. We discuss how all these elements and techniques are applied to poker, and how the algorithm described in this paper was used by Noam and Tuomas to create Libratus, the first AI to beat top human pros in No Limit Texas Hold’em, a particularly difficult game to beat due to its large state space. This was a fascinating interview that I'm really excited to share with you all. Enjoy! This is your last chance to register for the RE•WORK Deep Learning and AI Assistant Summits in San Francisco, which are this Thursday and Friday, January 25th and 26th. These events feature leading researchers and technologists like the ones you heard in our Deep Learning Summit series last week. The San Francisco event is headlined by Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration. The notes for this show can be found at twimlai.com/talk/99.]]>
      </content:encoded>
      <itunes:duration>1669</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/387424580]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6608964509.mp3?updated=1629216865"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Separating Vocals in Recorded Music at Spotify with Eric Humphrey - TWiML Talk #98</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/386021459-twiml-twiml-talk-98-separating-vocals-recorded-music-spotify-eric-humphrey.mp3</link>
      <description>In today’s show, I sit down with Eric Humphrey, Research Scientist in the music understanding group at Spotify. Eric was at the Deep Learning Summit to give a talk on Advances in Deep Architectures and Methods for Separating Vocals in Recorded Music. We discuss his talk, including how Spotify's large music catalog enables such an experiment to even take place, the methods they use to train algorithms to isolate and remove vocals from music, and how architectures like U-Net and Pix2Pix come into play when building his algorithms. We also hit on the idea of “creative AI,” Spotify’s attempt at understanding music content at scale, optical music recognition, and more. This show is part of a series of shows recorded at the RE•WORK Deep Learning Summit in Montreal back in October. This was a great event and, in fact, their next event, the Deep Learning Summit San Francisco is right around the corner on January 25th and 26th, and will feature more leading researchers and technologists like the ones you’ll hear here on the show this week, including Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration. The notes for this show can be found at twimlai.com/talk/98</description>
      <pubDate>Fri, 19 Jan 2018 16:07:51 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>98</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/73576ce0-ee98-11eb-9502-7fe70452ff7f/image/artworks-000287626994-zkwatb-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In today’s show, I sit down with Eric Humphrey, R…</itunes:subtitle>
      <itunes:summary>In today’s show, I sit down with Eric Humphrey, Research Scientist in the music understanding group at Spotify. Eric was at the Deep Learning Summit to give a talk on Advances in Deep Architectures and Methods for Separating Vocals in Recorded Music. We discuss his talk, including how Spotify's large music catalog enables such an experiment to even take place, the methods they use to train algorithms to isolate and remove vocals from music, and how architectures like U-Net and Pix2Pix come into play when building his algorithms. We also hit on the idea of “creative AI,” Spotify’s attempt at understanding music content at scale, optical music recognition, and more. This show is part of a series of shows recorded at the RE•WORK Deep Learning Summit in Montreal back in October. This was a great event and, in fact, their next event, the Deep Learning Summit San Francisco is right around the corner on January 25th and 26th, and will feature more leading researchers and technologists like the ones you’ll hear here on the show this week, including Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration. The notes for this show can be found at twimlai.com/talk/98</itunes:summary>
      <content:encoded>
        <![CDATA[In today’s show, I sit down with Eric Humphrey, Research Scientist in the music understanding group at Spotify. Eric was at the Deep Learning Summit to give a talk on Advances in Deep Architectures and Methods for Separating Vocals in Recorded Music. We discuss his talk, including how Spotify's large music catalog enables such an experiment to even take place, the methods they use to train algorithms to isolate and remove vocals from music, and how architectures like U-Net and Pix2Pix come into play when building his algorithms. We also hit on the idea of “creative AI,” Spotify’s attempt at understanding music content at scale, optical music recognition, and more. This show is part of a series of shows recorded at the RE•WORK Deep Learning Summit in Montreal back in October. This was a great event and, in fact, their next event, the Deep Learning Summit San Francisco is right around the corner on January 25th and 26th, and will feature more leading researchers and technologists like the ones you’ll hear here on the show this week, including Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration. The notes for this show can be found at twimlai.com/talk/98]]>
      </content:encoded>
      <itunes:duration>1635</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/386021459]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3313445537.mp3?updated=1629216862"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Accelerating Deep Learning with Mixed Precision Arithmetic with Greg Diamos - TWiML Talk #97</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/385174106-twiml-twiml-talk-97-accelerating-deep-learning-mixed-precision-arithmetic-greg-diamos.mp3</link>
      <description>In this show I speak with Greg Diamos, senior computer systems researcher at Baidu. Greg joined me before his talk at the Deep Learning Summit, where he spoke on “The Next Generation of AI Chips.” Greg’s talk focused on some work his team was involved in that accelerates deep learning training by using mixed 16-bit and 32-bit floating point arithmetic. We cover a ton of interesting ground in this conversation, and if you’re interested in systems level thinking around scaling and accelerating deep learning, you’re really going to like this one. And of course, if you like this one, you’re also going to like TWiML Talk #14 with Greg’s former colleague, Shubho Sengupta, which covers a bunch of related topics. This show is part of a series of shows recorded at the RE•WORK Deep Learning Summit in Montreal back in October. This was a great event and, in fact, their next event, the Deep Learning Summit San Francisco is right around the corner on January 25th and 26th, and will feature more leading researchers and technologists like the ones you’ll hear here on the show this week, including Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration.</description>
      <pubDate>Wed, 17 Jan 2018 22:19:25 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>97</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/73749c48-ee98-11eb-9502-6be87305de4d/image/artworks-000286736864-vnm6uo-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this show I speak with Greg Diamos, senior com…</itunes:subtitle>
      <itunes:summary>In this show I speak with Greg Diamos, senior computer systems researcher at Baidu. Greg joined me before his talk at the Deep Learning Summit, where he spoke on “The Next Generation of AI Chips.” Greg’s talk focused on some work his team was involved in that accelerates deep learning training by using mixed 16-bit and 32-bit floating point arithmetic. We cover a ton of interesting ground in this conversation, and if you’re interested in systems level thinking around scaling and accelerating deep learning, you’re really going to like this one. And of course, if you like this one, you’re also going to like TWiML Talk #14 with Greg’s former colleague, Shubho Sengupta, which covers a bunch of related topics. This show is part of a series of shows recorded at the RE•WORK Deep Learning Summit in Montreal back in October. This was a great event and, in fact, their next event, the Deep Learning Summit San Francisco is right around the corner on January 25th and 26th, and will feature more leading researchers and technologists like the ones you’ll hear here on the show this week, including Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration.</itunes:summary>
      <content:encoded>
        <![CDATA[In this show I speak with Greg Diamos, senior computer systems researcher at Baidu. Greg joined me before his talk at the Deep Learning Summit, where he spoke on “The Next Generation of AI Chips.” Greg’s talk focused on some work his team was involved in that accelerates deep learning training by using mixed 16-bit and 32-bit floating point arithmetic. We cover a ton of interesting ground in this conversation, and if you’re interested in systems level thinking around scaling and accelerating deep learning, you’re really going to like this one. And of course, if you like this one, you’re also going to like TWiML Talk #14 with Greg’s former colleague, Shubho Sengupta, which covers a bunch of related topics. This show is part of a series of shows recorded at the RE•WORK Deep Learning Summit in Montreal back in October. This was a great event and, in fact, their next event, the Deep Learning Summit San Francisco is right around the corner on January 25th and 26th, and will feature more leading researchers and technologists like the ones you’ll hear here on the show this week, including Ian Goodfellow of Google Brain, Daphne Koller of Calico Labs, and more! Definitely check it out and use the code TWIMLAI for 20% off of registration.]]>
      </content:encoded>
      <itunes:duration>2359</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/385174106]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8071529706.mp3?updated=1629216884"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Composing Graphical Models With Neural Networks with David Duvenaud - TWiML Talk #96</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/384169817-twiml-twiml-talk-96-composing-graphical-models-neural-networks-david-duvenaud.mp3</link>
<description>In this episode, we hear from David Duvenaud, assistant professor in the Computer Science and Statistics departments at the University of Toronto. David joined me after his talk at the Deep Learning Summit on “Composing Graphical Models With Neural Networks for Structured Representations and Fast Inference.” In our conversation, we discuss the generalized modeling and inference framework that David and his team have created, which combines the strengths of both probabilistic graphical models and deep learning methods. He gives us a walkthrough of his use case, which is to automatically segment and categorize mouse behavior from raw video, and we discuss how the framework is applied here and for other use cases. We also discuss some of the differences between the frequentist and Bayesian statistical approaches. The notes for this show can be found at twimlai.com/talk/96.</description>
      <pubDate>Mon, 15 Jan 2018 23:22:56 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>96</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/739228a8-ee98-11eb-9502-17d3d8ffcd1f/image/artworks-000286681292-04dixg-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, we hear from David Duvenaud, ass…</itunes:subtitle>
<itunes:summary>In this episode, we hear from David Duvenaud, assistant professor in the Computer Science and Statistics departments at the University of Toronto. David joined me after his talk at the Deep Learning Summit on “Composing Graphical Models With Neural Networks for Structured Representations and Fast Inference.” In our conversation, we discuss the generalized modeling and inference framework that David and his team have created, which combines the strengths of both probabilistic graphical models and deep learning methods. He gives us a walkthrough of his use case, which is to automatically segment and categorize mouse behavior from raw video, and we discuss how the framework is applied here and for other use cases. We also discuss some of the differences between the frequentist and Bayesian statistical approaches. The notes for this show can be found at twimlai.com/talk/96.</itunes:summary>
      <content:encoded>
<![CDATA[In this episode, we hear from David Duvenaud, assistant professor in the Computer Science and Statistics departments at the University of Toronto. David joined me after his talk at the Deep Learning Summit on “Composing Graphical Models With Neural Networks for Structured Representations and Fast Inference.” In our conversation, we discuss the generalized modeling and inference framework that David and his team have created, which combines the strengths of both probabilistic graphical models and deep learning methods. He gives us a walkthrough of his use case, which is to automatically segment and categorize mouse behavior from raw video, and we discuss how the framework is applied here and for other use cases. We also discuss some of the differences between the frequentist and Bayesian statistical approaches. The notes for this show can be found at twimlai.com/talk/96.]]>
      </content:encoded>
      <itunes:duration>2117</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/384169817]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4001837047.mp3?updated=1629216879"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Embedded Deep Learning at Deep Vision with Siddha Ganju - TWiML Talk #95</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/382620071-twiml-twiml-talk-95-embedded-deep-learning-deep-vision-siddha-ganju.mp3</link>
      <description>In this episode we hear from Siddha Ganju, data scientist at computer vision startup Deep Vision. Siddha joined me at the AI Conference a while back to chat about the challenges of developing deep learning applications “at the edge,” i.e. those targeting compute- and power-constrained environments. In our conversation, Siddha provides an overview of Deep Vision’s embedded processor, which is optimized for ultra-low power requirements, and we dig into the data processing pipeline and network architecture process she uses to support sophisticated models in embedded devices. We explore the specific hardware and software capabilities and restrictions typical of edge devices and how she utilizes techniques like model pruning and compression to create embedded models that deliver the needed performance levels in resource-constrained environments, and discuss use cases such as facial recognition, scene description and activity recognition. Siddha's research interests also include natural language processing and visual question answering, and we spend some time discussing the latter as well.</description>
      <pubDate>Fri, 12 Jan 2018 18:25:23 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>95</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/73aefae6-ee98-11eb-9502-53a16d7b51eb/image/artworks-000284225045-66b4o7-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode we hear from Siddha Ganju, data s…</itunes:subtitle>
      <itunes:summary>In this episode we hear from Siddha Ganju, data scientist at computer vision startup Deep Vision. Siddha joined me at the AI Conference a while back to chat about the challenges of developing deep learning applications “at the edge,” i.e. those targeting compute- and power-constrained environments. In our conversation, Siddha provides an overview of Deep Vision’s embedded processor, which is optimized for ultra-low power requirements, and we dig into the data processing pipeline and network architecture process she uses to support sophisticated models in embedded devices. We explore the specific hardware and software capabilities and restrictions typical of edge devices and how she utilizes techniques like model pruning and compression to create embedded models that deliver the needed performance levels in resource-constrained environments, and discuss use cases such as facial recognition, scene description and activity recognition. Siddha's research interests also include natural language processing and visual question answering, and we spend some time discussing the latter as well.</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode we hear from Siddha Ganju, data scientist at computer vision startup Deep Vision. Siddha joined me at the AI Conference a while back to chat about the challenges of developing deep learning applications “at the edge,” i.e. those targeting compute- and power-constrained environments. In our conversation, Siddha provides an overview of Deep Vision’s embedded processor, which is optimized for ultra-low power requirements, and we dig into the data processing pipeline and network architecture process she uses to support sophisticated models in embedded devices. We explore the specific hardware and software capabilities and restrictions typical of edge devices and how she utilizes techniques like model pruning and compression to create embedded models that deliver the needed performance levels in resource-constrained environments, and discuss use cases such as facial recognition, scene description and activity recognition. Siddha's research interests also include natural language processing and visual question answering, and we spend some time discussing the latter as well.]]>
      </content:encoded>
      <itunes:duration>2060</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/382620071]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4434231465.mp3?updated=1629216874"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Neuroevolution: Evolving Novel Neural Network Architectures with Kenneth Stanley - TWiML Talk #94</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/381801650-twiml-twiml-talk-94-neuroevolution-evolving-novel-neural-network-architectures-kenneth-stanley.mp3</link>
      <description>Today, I'm joined by Kenneth Stanley, Professor in the Department of Computer Science at the University of Central Florida and senior research scientist at Uber AI Labs. Kenneth studied under TWiML Talk #47 guest Risto Miikkulainen at UT Austin, and joined Uber AI Labs after Geometric Intelligence, the company he co-founded with Gary Marcus and others, was acquired in late 2016. Kenneth’s research focus is what he calls Neuroevolution, which applies the idea of genetic algorithms to the challenge of evolving neural network architectures. In this conversation, we discuss the Neuroevolution of Augmenting Topologies (or NEAT) paper that Kenneth authored along with Risto, which won the 2017 International Society for Artificial Life’s Award for Outstanding Paper of the Decade 2002 - 2012. We also cover some of the extensions to that approach he’s created since, including HyperNEAT, which can efficiently evolve very large networks with connectivity patterns that look more like those of the human brain and that are generally much larger than what prior approaches to neural learning could produce, and novelty search, an approach which, unlike most evolutionary algorithms, has no defined objective, but rather simply searches for novel behaviors. We also cover concepts like “Complexification” and “Deception,” biology vs. computation, including differences and similarities, and some of his other work, including his book and NERO, a video game complete with Real-time Neuroevolution. This is a meaty “Nerd Alert” interview that I think you’ll really enjoy.</description>
      <pubDate>Thu, 11 Jan 2018 01:08:58 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>94</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/73eeb604-ee98-11eb-9502-8bfe129d8c44/image/artworks-000283419833-b6vh4d-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today, I'm joined by Kenneth Stanley, Professor i…</itunes:subtitle>
      <itunes:summary>Today, I'm joined by Kenneth Stanley, Professor in the Department of Computer Science at the University of Central Florida and senior research scientist at Uber AI Labs. Kenneth studied under TWiML Talk #47 guest Risto Miikkulainen at UT Austin, and joined Uber AI Labs after Geometric Intelligence, the company he co-founded with Gary Marcus and others, was acquired in late 2016. Kenneth’s research focus is what he calls Neuroevolution, which applies the idea of genetic algorithms to the challenge of evolving neural network architectures. In this conversation, we discuss the Neuroevolution of Augmenting Topologies (or NEAT) paper that Kenneth authored along with Risto, which won the 2017 International Society for Artificial Life’s Award for Outstanding Paper of the Decade 2002 - 2012. We also cover some of the extensions to that approach he’s created since, including HyperNEAT, which can efficiently evolve very large networks with connectivity patterns that look more like those of the human brain and that are generally much larger than what prior approaches to neural learning could produce, and novelty search, an approach which, unlike most evolutionary algorithms, has no defined objective, but rather simply searches for novel behaviors. We also cover concepts like “Complexification” and “Deception,” biology vs. computation, including differences and similarities, and some of his other work, including his book and NERO, a video game complete with Real-time Neuroevolution. This is a meaty “Nerd Alert” interview that I think you’ll really enjoy.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>Today, I'm joined by Kenneth Stanley, Professor in the Department of Computer Science at the University of Central Florida and senior research scientist at Uber AI Labs. Kenneth studied under TWiML Talk #47 guest Risto Miikkulainen at UT Austin, and joined Uber AI Labs after Geometric Intelligence, the company he co-founded with Gary Marcus and others, was acquired in late 2016. Kenneth’s research focus is what he calls Neuroevolution, which applies the idea of genetic algorithms to the challenge of evolving neural network architectures. In this conversation, we discuss the Neuroevolution of Augmenting Topologies (or NEAT) paper that Kenneth authored along with Risto, which won the 2017 International Society for Artificial Life’s Award for Outstanding Paper of the Decade 2002 - 2012. We also cover some of the extensions to that approach he’s created since, including HyperNEAT, which can efficiently evolve very large networks with connectivity patterns that look more like those of the human brain and that are generally much larger than what prior approaches to neural learning could produce, and novelty search, an approach which, unlike most evolutionary algorithms, has no defined objective, but rather simply searches for novel behaviors. We also cover concepts like “Complexification” and “Deception,” biology vs. computation, including differences and similarities, and some of his other work, including his book and NERO, a video game complete with Real-time Neuroevolution. This is a meaty “Nerd Alert” interview that I think you’ll really enjoy.</p>]]>
      </content:encoded>
      <itunes:duration>2739</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/381801650]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4656831196.mp3?updated=1629216891"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>A Quantum Computing Primer and Implications for AI with Davide Venturelli - TWiML Talk #93</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/380607002-twiml-twiml-talk-93-quantum-computing-primer-implications-ai-davide-venturelli.mp3</link>
      <description>Today, I'm joined by Davide Venturelli, science operations manager and quantum computing team lead for the Universities Space Research Association’s Institute for Advanced Computer Science at NASA Ames. Davide joined me backstage at the NYU Future Labs AI Summit a while back to give me some insight into a topic that I’ve been curious about for some time now, quantum computing. We kick off our discussion with the core ideas behind quantum computing, including what it is, how it’s applied, and the ways it relates to computing as we know it today. We discuss the practical state of quantum computers, what their capabilities are, and the kinds of things you can do with them. And of course, we explore the intersection between AI and quantum computing, how quantum computing may one day accelerate machine learning, and how interested listeners can get started down the quantum rabbit hole. The notes for this show can be found at twimlai.com/talk/93</description>
      <pubDate>Mon, 08 Jan 2018 18:00:04 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>93</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/740b81e4-ee98-11eb-9502-9b836220cca1/image/artworks-000282281828-o82hmd-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today, I'm joined by Davide Venturelli, science o…</itunes:subtitle>
      <itunes:summary>Today, I'm joined by Davide Venturelli, science operations manager and quantum computing team lead for the Universities Space Research Association’s Institute for Advanced Computer Science at NASA Ames. Davide joined me backstage at the NYU Future Labs AI Summit a while back to give me some insight into a topic that I’ve been curious about for some time now, quantum computing. We kick off our discussion with the core ideas behind quantum computing, including what it is, how it’s applied, and the ways it relates to computing as we know it today. We discuss the practical state of quantum computers, what their capabilities are, and the kinds of things you can do with them. And of course, we explore the intersection between AI and quantum computing, how quantum computing may one day accelerate machine learning, and how interested listeners can get started down the quantum rabbit hole. The notes for this show can be found at twimlai.com/talk/93</itunes:summary>
      <content:encoded>
        <![CDATA[Today, I'm joined by Davide Venturelli, science operations manager and quantum computing team lead for the Universities Space Research Association’s Institute for Advanced Computer Science at NASA Ames. Davide joined me backstage at the NYU Future Labs AI Summit a while back to give me some insight into a topic that I’ve been curious about for some time now, quantum computing. We kick off our discussion with the core ideas behind quantum computing, including what it is, how it’s applied, and the ways it relates to computing as we know it today. We discuss the practical state of quantum computers, what their capabilities are, and the kinds of things you can do with them. And of course, we explore the intersection between AI and quantum computing, how quantum computing may one day accelerate machine learning, and how interested listeners can get started down the quantum rabbit hole. The notes for this show can be found at twimlai.com/talk/93]]>
      </content:encoded>
      <itunes:duration>2051</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/380607002]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1797790827.mp3?updated=1629216871"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Learning State Representations with Yael Niv - TWiML Talk #92</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/372612527-twiml-twiml-talk-92-learning-state-representations-yael-niv.mp3</link>
      <description>This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I speak with Yael Niv, professor of neuroscience and psychology at Princeton University. Yael joined me after her invited talk on “Learning State Representations.” In this interview Yael and I explore the relationship between neuroscience and machine learning. In particular, we discuss the importance of state representations in human learning, some of her experimental results in this area, and how a better understanding of representation learning can lead to insights into machine learning problems such as reinforcement and transfer learning. Did I mention this was a nerd alert show? I really enjoyed this interview and I know you will too. Be sure to send over any thoughts or feedback via the show notes page at twimlai.com/talk/92.</description>
      <pubDate>Fri, 22 Dec 2017 16:29:46 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>92</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7436131e-ee98-11eb-9502-8b1435332078/image/artworks-000273821024-ui38mz-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week on the podcast we’re featuring a series…</itunes:subtitle>
      <itunes:summary>This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I speak with Yael Niv, professor of neuroscience and psychology at Princeton University. Yael joined me after her invited talk on “Learning State Representations.” In this interview Yael and I explore the relationship between neuroscience and machine learning. In particular, we discuss the importance of state representations in human learning, some of her experimental results in this area, and how a better understanding of representation learning can lead to insights into machine learning problems such as reinforcement and transfer learning. Did I mention this was a nerd alert show? I really enjoyed this interview and I know you will too. Be sure to send over any thoughts or feedback via the show notes page at twimlai.com/talk/92.</itunes:summary>
      <content:encoded>
        <![CDATA[This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I speak with Yael Niv, professor of neuroscience and psychology at Princeton University. Yael joined me after her invited talk on “Learning State Representations.” In this interview Yael and I explore the relationship between neuroscience and machine learning. In particular, we discuss the importance of state representations in human learning, some of her experimental results in this area, and how a better understanding of representation learning can lead to insights into machine learning problems such as reinforcement and transfer learning. Did I mention this was a nerd alert show? I really enjoyed this interview and I know you will too. Be sure to send over any thoughts or feedback via the show notes page at twimlai.com/talk/92.]]>
      </content:encoded>
      <itunes:duration>2829</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/372612527]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5084023010.mp3?updated=1629216888"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Philosophy of Intelligence with Matthew Crosby - TWiML Talk #91</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/372612554-twiml-twiml-talk-91-philosophy-intelligence-matthew-crosby.mp3</link>
      <description>This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. This time around I'm joined by Matthew Crosby, a researcher at Imperial College London, working on the Kinds of Intelligence Project. Matthew joined me after the NIPS Symposium of the same name, an event that brought researchers from a variety of disciplines together towards three aims: a broader perspective of the possible types of intelligence beyond human intelligence, better measurements of intelligence, and a more purposeful analysis of where progress should be made in AI to best benefit society. Matthew’s research explores intelligence from a philosophical perspective, exploring ideas like predictive processing and controlled hallucination, and how these theories of intelligence impact the way we approach creating artificial intelligence. This was a very interesting conversation that I'm sure you’ll enjoy.</description>
      <pubDate>Thu, 21 Dec 2017 15:00:26 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>91</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/745c2e46-ee98-11eb-9502-c3ccc0fd7975/image/artworks-000273322847-snpgyl-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week on the podcast we’re featuring a series…</itunes:subtitle>
      <itunes:summary>This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. This time around I'm joined by Matthew Crosby, a researcher at Imperial College London, working on the Kinds of Intelligence Project. Matthew joined me after the NIPS Symposium of the same name, an event that brought researchers from a variety of disciplines together towards three aims: a broader perspective of the possible types of intelligence beyond human intelligence, better measurements of intelligence, and a more purposeful analysis of where progress should be made in AI to best benefit society. Matthew’s research explores intelligence from a philosophical perspective, exploring ideas like predictive processing and controlled hallucination, and how these theories of intelligence impact the way we approach creating artificial intelligence. This was a very interesting conversation that I'm sure you’ll enjoy.</itunes:summary>
      <content:encoded>
        <![CDATA[This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. This time around I'm joined by Matthew Crosby, a researcher at Imperial College London, working on the Kinds of Intelligence Project. Matthew joined me after the NIPS Symposium of the same name, an event that brought researchers from a variety of disciplines together towards three aims: a broader perspective of the possible types of intelligence beyond human intelligence, better measurements of intelligence, and a more purposeful analysis of where progress should be made in AI to best benefit society. Matthew’s research explores intelligence from a philosophical perspective, exploring ideas like predictive processing and controlled hallucination, and how these theories of intelligence impact the way we approach creating artificial intelligence. This was a very interesting conversation that I'm sure you’ll enjoy.]]>
      </content:encoded>
      <itunes:duration>1782</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/372612554]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7931501139.mp3?updated=1629216868"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Geometric Deep Learning with Joan Bruna &amp; Michael Bronstein - TWiML Talk #90</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/372157817-twiml-twiml-talk-90-geometric-deep-learning-joan-bruna-michael-bronstein.mp3</link>
      <description>This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. This time around I'm joined by Joan Bruna, Assistant Professor at the Courant Institute of Mathematical Sciences and the Center for Data Science at NYU, and Michael Bronstein, associate professor at Università della Svizzera italiana (Switzerland) and Tel Aviv University. Joan and Michael join me after their tutorial on Geometric Deep Learning on Graphs and Manifolds. In our conversation we dig pretty deeply into the ideas behind geometric deep learning and how we can use it in applications like 3D vision, sensor networks, drug design, biomedicine, and recommendation systems. This is definitely a Nerd Alert show, and one that will get your multi-dimensional neurons firing. Enjoy!</description>
      <pubDate>Wed, 20 Dec 2017 15:48:17 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>90</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/748142bc-ee98-11eb-9502-ef7acbdfe23e/image/artworks-000272556662-9dgajv-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week on the podcast we’re featuring a series…</itunes:subtitle>
      <itunes:summary>This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. This time around I'm joined by Joan Bruna, Assistant Professor at the Courant Institute of Mathematical Sciences and the Center for Data Science at NYU, and Michael Bronstein, associate professor at Università della Svizzera italiana (Switzerland) and Tel Aviv University. Joan and Michael join me after their tutorial on Geometric Deep Learning on Graphs and Manifolds. In our conversation we dig pretty deeply into the ideas behind geometric deep learning and how we can use it in applications like 3D vision, sensor networks, drug design, biomedicine, and recommendation systems. This is definitely a Nerd Alert show, and one that will get your multi-dimensional neurons firing. Enjoy!</itunes:summary>
      <content:encoded>
        <![CDATA[This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. This time around I'm joined by Joan Bruna, Assistant Professor at the Courant Institute of Mathematical Sciences and the Center for Data Science at NYU, and Michael Bronstein, associate professor at Università della Svizzera italiana (Switzerland) and Tel Aviv University. Joan and Michael join me after their tutorial on Geometric Deep Learning on Graphs and Manifolds. In our conversation we dig pretty deeply into the ideas behind geometric deep learning and how we can use it in applications like 3D vision, sensor networks, drug design, biomedicine, and recommendation systems. This is definitely a Nerd Alert show, and one that will get your multi-dimensional neurons firing. Enjoy!]]>
      </content:encoded>
      <itunes:duration>2420</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/372157817]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2092770671.mp3?updated=1629216880"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI at the NASA Frontier Development Lab with Sara Jennings, Timothy Seabrook and Andres Rodriguez</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/371738543-twiml-twiml-talk-89-ai-nasa-frontier-development-lab-sara-jennings-timothy-seabrook-andres-rodriguez.mp3</link>
      <description>This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I'm joined by Sara Jennings, Timothy Seabrook and Andres Rodriguez to discuss NASA’s Frontier Development Lab, or FDL. The FDL is an intense 8-week applied AI research accelerator, focused on tackling knowledge gaps useful to the space program. In our discussion, Sara, producer at the FDL, provides some insight into its goals and structure. Timothy, a researcher at FDL, describes his involvement with the program, including some of the projects he worked on while on-site. He also provides a look into some of this year’s FDL projects, including Planetary Defense, Solar Storm Prediction, and Lunar Water Location. Last but not least, Andres, Sr. Principal Engineer at Intel's AIPG, joins us to detail Intel’s support of the FDL, and how the various elements of the Intel AI stack supported the FDL research. This is a jam-packed conversation, so be sure to check the show notes page at twimlai.com/talk/89 for all the links and tidbits from this episode.</description>
      <pubDate>Tue, 19 Dec 2017 17:37:36 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>89</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/74a1bcc2-ee98-11eb-9502-4b9b315b13c3/image/artworks-000272111531-65ni2v-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week on the podcast we’re featuring a series…</itunes:subtitle>
      <itunes:summary>This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I'm joined by Sara Jennings, Timothy Seabrook and Andres Rodriguez to discuss NASA’s Frontier Development Lab, or FDL. The FDL is an intense 8-week applied AI research accelerator, focused on tackling knowledge gaps useful to the space program. In our discussion, Sara, producer at the FDL, provides some insight into its goals and structure. Timothy, a researcher at FDL, describes his involvement with the program, including some of the projects he worked on while on-site. He also provides a look into some of this year’s FDL projects, including Planetary Defense, Solar Storm Prediction, and Lunar Water Location. Last but not least, Andres, Sr. Principal Engineer at Intel's AIPG, joins us to detail Intel’s support of the FDL, and how the various elements of the Intel AI stack supported the FDL research. This is a jam-packed conversation, so be sure to check the show notes page at twimlai.com/talk/89 for all the links and tidbits from this episode.</itunes:summary>
      <content:encoded>
        <![CDATA[This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I'm joined by Sara Jennings, Timothy Seabrook and Andres Rodriguez to discuss NASA’s Frontier Development Lab, or FDL. The FDL is an intense 8-week applied AI research accelerator focused on tackling knowledge gaps useful to the space program. In our discussion, Sara, producer at the FDL, provides some insight into its goals and structure. Timothy, a researcher at FDL, describes his involvement with the program, including some of the projects he worked on while on-site. He also provides a look into some of this year’s FDL projects, including Planetary Defense, Solar Storm Prediction, and Lunar Water Location. Last but not least, Andres, Sr. Principal Engineer at Intel's AIPG, joins us to detail Intel’s support of the FDL and how the various elements of the Intel AI stack supported the FDL research. This is a jam-packed conversation, so be sure to check the show notes page at twimlai.com/talk/89 for all the links and tidbits from this episode.]]>
      </content:encoded>
      <itunes:duration>2198</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/371738543]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7914755032.mp3?updated=1629216876"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Using Deep Learning and Google Street View to Estimate Demographics with Timnit Gebru - TWiML Talk #88</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/371451194-twiml-twiml-talk-88-using-deep-learning-google-street-view-estimate-demographics-timnit-gebru.mp3</link>
      <description>This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I sit down with Timnit Gebru, postdoctoral researcher at Microsoft Research in the Fairness, Accountability, Transparency and Ethics in AI, or FATE, group. Timnit is also one of the organizers behind the Black in AI group, which held a very interesting symposium and poster session at NIPS. I’ll link to the group’s page in the show notes. I’ve been following Timnit’s work for a while now and was really excited to get a chance to sit down with her and pick her brain. We packed a ton into this conversation, especially keying in on her recently released paper “Using Deep Learning and Google Street View to Estimate the Demographic Makeup of the US”. Timnit describes the pipeline she developed for this research, and some of the challenges she faced building an end-to-end model based on Google Street View images, census data and commercial car vendor data. We also discuss the role of social awareness in her work, including an explanation of how domain adaptation and fairness are related and her view of the major research directions in the domain of fairness. The notes for this show can be found at twimlai.com/talk/88. For series information, visit twimlai.com/nips2017.</description>
      <pubDate>Tue, 19 Dec 2017 00:54:46 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>88</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/74bf84aa-ee98-11eb-9502-0b74d2cd2ffc/image/artworks-000273326888-2u5vfj-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week on the podcast we’re featuring a series…</itunes:subtitle>
      <itunes:summary>This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I sit down with Timnit Gebru, postdoctoral researcher at Microsoft Research in the Fairness, Accountability, Transparency and Ethics in AI, or FATE, group. Timnit is also one of the organizers behind the Black in AI group, which held a very interesting symposium and poster session at NIPS. I’ll link to the group’s page in the show notes. I’ve been following Timnit’s work for a while now and was really excited to get a chance to sit down with her and pick her brain. We packed a ton into this conversation, especially keying in on her recently released paper “Using Deep Learning and Google Street View to Estimate the Demographic Makeup of the US”. Timnit describes the pipeline she developed for this research, and some of the challenges she faced building an end-to-end model based on Google Street View images, census data and commercial car vendor data. We also discuss the role of social awareness in her work, including an explanation of how domain adaptation and fairness are related and her view of the major research directions in the domain of fairness. The notes for this show can be found at twimlai.com/talk/88. For series information, visit twimlai.com/nips2017.</itunes:summary>
      <content:encoded>
        <![CDATA[This week on the podcast we’re featuring a series of conversations from the NIPS conference in Long Beach, California. I attended a bunch of talks and learned a ton, organized an impromptu roundtable on Building AI Products, and met a bunch of great people, including some former TWiML Talk guests. In this episode I sit down with Timnit Gebru, postdoctoral researcher at Microsoft Research in the Fairness, Accountability, Transparency and Ethics in AI, or FATE, group. Timnit is also one of the organizers behind the Black in AI group, which held a very interesting symposium and poster session at NIPS. I’ll link to the group’s page in the show notes. I’ve been following Timnit’s work for a while now and was really excited to get a chance to sit down with her and pick her brain. We packed a ton into this conversation, especially keying in on her recently released paper “Using Deep Learning and Google Street View to Estimate the Demographic Makeup of the US”. Timnit describes the pipeline she developed for this research, and some of the challenges she faced building an end-to-end model based on Google Street View images, census data and commercial car vendor data. We also discuss the role of social awareness in her work, including an explanation of how domain adaptation and fairness are related and her view of the major research directions in the domain of fairness. The notes for this show can be found at twimlai.com/talk/88. For series information, visit twimlai.com/nips2017.]]>
      </content:encoded>
      <itunes:duration>1933</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/371451194]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7053816091.mp3?updated=1627362856"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Integrative Learning for Robotic Systems with Aaron Ames - TWiML Talk #87</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/369522413-twiml-twiml-talk-87-integrative-learning-robotic-systems-aaron-ames.mp3</link>
      <description>This week on the podcast we’re featuring a series of conversations from the AWS re:Invent conference in Las Vegas. I had a great time at this event getting caught up on the latest and greatest machine learning and AI products and services announced by AWS and its partners. Today we’re joined by Aaron Ames, Professor of Mechanical &amp; Civil Engineering at Caltech. Aaron joined me before his talk at the Deep Learning Summit, “Eye, Robot: Computer Vision and Autonomous Robotics,” and I had a ton of questions for him. While he considers himself a “hardware guy”, we got into a great discussion centered around the intersection of robotics and ML inference. We cover a range of topics, including Boston Dynamics’ backflipping robot (if you haven't seen it, check out the show notes), humanoid robotics, and his work on motion primitives and transitions. He even gives us a few predictions on the future of robotics.</description>
      <pubDate>Fri, 15 Dec 2017 18:36:28 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>87</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/74e48a7a-ee98-11eb-9502-af031f10c186/image/artworks-000269802506-ufrlct-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week on the podcast we’re featuring a series…</itunes:subtitle>
      <itunes:summary>This week on the podcast we’re featuring a series of conversations from the AWS re:Invent conference in Las Vegas. I had a great time at this event getting caught up on the latest and greatest machine learning and AI products and services announced by AWS and its partners. Today we’re joined by Aaron Ames, Professor of Mechanical &amp; Civil Engineering at Caltech. Aaron joined me before his talk at the Deep Learning Summit, “Eye, Robot: Computer Vision and Autonomous Robotics,” and I had a ton of questions for him. While he considers himself a “hardware guy”, we got into a great discussion centered around the intersection of robotics and ML inference. We cover a range of topics, including Boston Dynamics’ backflipping robot (if you haven't seen it, check out the show notes), humanoid robotics, and his work on motion primitives and transitions. He even gives us a few predictions on the future of robotics.</itunes:summary>
      <content:encoded>
        <![CDATA[This week on the podcast we’re featuring a series of conversations from the AWS re:Invent conference in Las Vegas. I had a great time at this event getting caught up on the latest and greatest machine learning and AI products and services announced by AWS and its partners. Today we’re joined by Aaron Ames, Professor of Mechanical &amp; Civil Engineering at Caltech. Aaron joined me before his talk at the Deep Learning Summit, “Eye, Robot: Computer Vision and Autonomous Robotics,” and I had a ton of questions for him. While he considers himself a “hardware guy”, we got into a great discussion centered around the intersection of robotics and ML inference. We cover a range of topics, including Boston Dynamics’ backflipping robot (if you haven't seen it, check out the show notes), humanoid robotics, and his work on motion primitives and transitions. He even gives us a few predictions on the future of robotics.]]>
      </content:encoded>
      <itunes:duration>2843</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/369522413]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4838369480.mp3?updated=1629216886"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Visual Recognition in the Cloud for Law Enforcement with Chris Adzima - TWiML Talk #86</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/369522035-twiml-twiml-talk-86-visual-recognition-cloud-law-enforcement-chris-adzima.mp3</link>
      <description>This week on the podcast we’re featuring a series of conversations from the AWS re:Invent conference in Las Vegas. I had a great time at this event getting caught up on the latest and greatest machine learning and AI products and services announced by AWS and its partners. In this episode we’re joined by Chris Adzima, Senior Information Analyst for the Washington County Sheriff’s Department. While Chris is not a traditional data scientist, he comes to us with a very interesting use case using AWS’s Rekognition. Chris is using Rekognition to identify suspects in the Portland area by running their mugshots through the software. In our conversation, he details how he is using Rekognition, giving us a few use cases along the way. We discuss how bias affects the work he is doing and how they try to remove it from their process, not only from a software developer standpoint but also from a law enforcement standpoint, and what his next steps are with the Rekognition software. This was a pretty interesting discussion, and I'm sure you'll enjoy it!</description>
      <pubDate>Thu, 14 Dec 2017 18:02:44 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>86</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/75014d9a-ee98-11eb-9502-439ef71c6203/image/artworks-000269802200-4vp8xn-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week on the podcast we’re featuring a series…</itunes:subtitle>
      <itunes:summary>This week on the podcast we’re featuring a series of conversations from the AWS re:Invent conference in Las Vegas. I had a great time at this event getting caught up on the latest and greatest machine learning and AI products and services announced by AWS and its partners. In this episode we’re joined by Chris Adzima, Senior Information Analyst for the Washington County Sheriff’s Department. While Chris is not a traditional data scientist, he comes to us with a very interesting use case using AWS’s Rekognition. Chris is using Rekognition to identify suspects in the Portland area by running their mugshots through the software. In our conversation, he details how he is using Rekognition, giving us a few use cases along the way. We discuss how bias affects the work he is doing and how they try to remove it from their process, not only from a software developer standpoint but also from a law enforcement standpoint, and what his next steps are with the Rekognition software. This was a pretty interesting discussion, and I'm sure you'll enjoy it!</itunes:summary>
      <content:encoded>
        <![CDATA[This week on the podcast we’re featuring a series of conversations from the AWS re:Invent conference in Las Vegas. I had a great time at this event getting caught up on the latest and greatest machine learning and AI products and services announced by AWS and its partners. In this episode we’re joined by Chris Adzima, Senior Information Analyst for the Washington County Sheriff’s Department. While Chris is not a traditional data scientist, he comes to us with a very interesting use case using AWS’s Rekognition. Chris is using Rekognition to identify suspects in the Portland area by running their mugshots through the software. In our conversation, he details how he is using Rekognition, giving us a few use cases along the way. We discuss how bias affects the work he is doing and how they try to remove it from their process, not only from a software developer standpoint but also from a law enforcement standpoint, and what his next steps are with the Rekognition software. This was a pretty interesting discussion, and I'm sure you'll enjoy it!</description>]]>
      </content:encoded>
      <itunes:duration>2143</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/369522035]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8176842154.mp3?updated=1629216876"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Embodied Visual Learning with Kristen Grauman - TWiML Talk #85</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/369124187-twiml-twiml-talk-85-embodied-visual-learning-kristen-grauman.mp3</link>
      <description>This week on the podcast we’re featuring a series of conversations from the AWS re:Invent conference in Las Vegas. I had a great time at this event getting caught up on the latest and greatest machine learning and AI products and services announced by AWS and its partners. This time around we’re joined by Kristen Grauman, a professor in the Department of Computer Science at UT Austin. Kristen specializes in computer vision and joined me leading up to her talk at the Deep Learning Summit, “Learning where to look in video”. Kristen &amp; I cover the details from her talk, like exploring how a vision system can learn how to move and where to look. Kristen considers how an embodied vision system can internalize the link between “how I move” and “what I see”, explore policies for learning to look around actively, and learn to mimic human videographer tendencies, automatically deciding where to look in unedited 360-degree video. The notes for this show can be found at twimlai.com/talk/85. For series details, visit twimlai.com/reinvent.</description>
      <pubDate>Wed, 13 Dec 2017 21:18:18 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>85</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7525bc2a-ee98-11eb-9502-ab7b99bc00b0/image/artworks-000269360807-qj6i54-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week on the podcast we’re featuring a series…</itunes:subtitle>
      <itunes:summary>This week on the podcast we’re featuring a series of conversations from the AWS re:Invent conference in Las Vegas. I had a great time at this event getting caught up on the latest and greatest machine learning and AI products and services announced by AWS and its partners. This time around we’re joined by Kristen Grauman, a professor in the Department of Computer Science at UT Austin. Kristen specializes in computer vision and joined me leading up to her talk at the Deep Learning Summit, “Learning where to look in video”. Kristen &amp; I cover the details from her talk, like exploring how a vision system can learn how to move and where to look. Kristen considers how an embodied vision system can internalize the link between “how I move” and “what I see”, explore policies for learning to look around actively, and learn to mimic human videographer tendencies, automatically deciding where to look in unedited 360-degree video. The notes for this show can be found at twimlai.com/talk/85. For series details, visit twimlai.com/reinvent.</itunes:summary>
      <content:encoded>
        <![CDATA[This week on the podcast we’re featuring a series of conversations from the AWS re:Invent conference in Las Vegas. I had a great time at this event getting caught up on the latest and greatest machine learning and AI products and services announced by AWS and its partners. This time around we’re joined by Kristen Grauman, a professor in the Department of Computer Science at UT Austin. Kristen specializes in computer vision and joined me leading up to her talk at the Deep Learning Summit, “Learning where to look in video”. Kristen &amp; I cover the details from her talk, like exploring how a vision system can learn how to move and where to look. Kristen considers how an embodied vision system can internalize the link between “how I move” and “what I see”, explore policies for learning to look around actively, and learn to mimic human videographer tendencies, automatically deciding where to look in unedited 360-degree video. The notes for this show can be found at twimlai.com/talk/85. For series details, visit twimlai.com/reinvent.]]>
      </content:encoded>
      <itunes:duration>2369</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/369124187]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7763058491.mp3?updated=1629216877"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Real-Time Machine Learning in the Database with Nikita Shamgunov - TWiML Talk #84</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/368542880-twiml-twiml-talk-84-real-time-machine-learning-database-nikita-shamgunov.mp3</link>
      <description>This week on the podcast we’re featuring a series of conversations from the AWS re:Invent conference in Las Vegas. I had a great time at this event getting caught up on the latest and greatest machine learning and AI products and services announced by AWS and its partners. In this episode, I’ll be speaking with Nikita Shamgunov, co-founder and CEO of MemSQL, a company offering a distributed, memory-optimized data warehouse of the same name. Nikita and I take a deep dive into some of the features of their recently released 6.0 version, which supports built-in vector operations like dot product and Euclidean distance to enable machine learning use cases like real-time image recognition, visual search and predictive analytics for IoT. We also discuss how to architect enterprise machine learning solutions around the data warehouse by including components like data lakes and Spark. Finally, we touch on some of the performance advantages MemSQL has seen by implementing vector operations using Intel’s latest AVX2 and AVX512 instruction sets. Make sure you check out the show notes at twimlai.com/talk/84.</description>
      <pubDate>Tue, 12 Dec 2017 20:43:17 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>84</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7548c9fe-ee98-11eb-9502-57c21ff03238/image/artworks-000268766867-xahwxh-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week on the podcast we’re featuring a series…</itunes:subtitle>
      <itunes:summary>This week on the podcast we’re featuring a series of conversations from the AWS re:Invent conference in Las Vegas. I had a great time at this event getting caught up on the latest and greatest machine learning and AI products and services announced by AWS and its partners. In this episode, I’ll be speaking with Nikita Shamgunov, co-founder and CEO of MemSQL, a company offering a distributed, memory-optimized data warehouse of the same name. Nikita and I take a deep dive into some of the features of their recently released 6.0 version, which supports built-in vector operations like dot product and Euclidean distance to enable machine learning use cases like real-time image recognition, visual search and predictive analytics for IoT. We also discuss how to architect enterprise machine learning solutions around the data warehouse by including components like data lakes and Spark. Finally, we touch on some of the performance advantages MemSQL has seen by implementing vector operations using Intel’s latest AVX2 and AVX512 instruction sets. Make sure you check out the show notes at twimlai.com/talk/84.</itunes:summary>
      <content:encoded>
        <![CDATA[This week on the podcast we’re featuring a series of conversations from the AWS re:Invent conference in Las Vegas. I had a great time at this event getting caught up on the latest and greatest machine learning and AI products and services announced by AWS and its partners. In this episode, I’ll be speaking with Nikita Shamgunov, co-founder and CEO of MemSQL, a company offering a distributed, memory-optimized data warehouse of the same name. Nikita and I take a deep dive into some of the features of their recently released 6.0 version, which supports built-in vector operations like dot product and Euclidean distance to enable machine learning use cases like real-time image recognition, visual search and predictive analytics for IoT. We also discuss how to architect enterprise machine learning solutions around the data warehouse by including components like data lakes and Spark. Finally, we touch on some of the performance advantages MemSQL has seen by implementing vector operations using Intel’s latest AVX2 and AVX512 instruction sets. Make sure you check out the show notes at twimlai.com/talk/84.]]>
      </content:encoded>
      <itunes:duration>2389</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/368542880]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2648858897.mp3?updated=1629216877"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>re:Invent Roundup Roundtable - TWiML Talk #83</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/368077676-twiml-twiml-talk-83-reinvent-roundup-roundtable.mp3</link>
      <description>This week on the podcast we’re featuring a series of conversations from the AWS re:Invent conference in Las Vegas. I had a great time at this event getting caught up on the latest and greatest machine learning and AI products and services announced by AWS and its partners. If you missed the news coming out of re:Invent and want to know more about what one of the biggest AI platform providers is up to, you’ll want to stay tuned, because we’ll discuss many of their new offerings in this episode, a roundtable discussion I held with Dave McCrory, VP of Software Engineering at Wise.io at GE Digital, and Lawrence Chung, engagement lead at ThingLogix. We cover all of AWS’ most important news, including the new SageMaker and DeepLens, their Rekognition and Transcription services, Alexa for Business, Greengrass ML and more. This kind of discussion is something a little new for the show, and is a bit reminiscent of my days covering news here on the podcast, so I hope you enjoy it!</description>
      <pubDate>Mon, 11 Dec 2017 18:01:43 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>83</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/75965a84-ee98-11eb-9502-c3b3825fdd2a/image/artworks-000268305254-d2jjts-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week on the podcast we’re featuring a series…</itunes:subtitle>
      <itunes:summary>This week on the podcast we’re featuring a series of conversations from the AWS re:Invent conference in Las Vegas. I had a great time at this event getting caught up on the latest and greatest machine learning and AI products and services announced by AWS and its partners. If you missed the news coming out of re:Invent and want to know more about what one of the biggest AI platform providers is up to, you’ll want to stay tuned, because we’ll discuss many of their new offerings in this episode, a roundtable discussion I held with Dave McCrory, VP of Software Engineering at Wise.io at GE Digital, and Lawrence Chung, engagement lead at ThingLogix. We cover all of AWS’ most important news, including the new SageMaker and DeepLens, their Rekognition and Transcription services, Alexa for Business, Greengrass ML and more. This kind of discussion is something a little new for the show, and is a bit reminiscent of my days covering news here on the podcast, so I hope you enjoy it!</itunes:summary>
      <content:encoded>
        <![CDATA[This week on the podcast we’re featuring a series of conversations from the AWS re:Invent conference in Las Vegas. I had a great time at this event getting caught up on the latest and greatest machine learning and AI products and services announced by AWS and its partners. If you missed the news coming out of re:Invent and want to know more about what one of the biggest AI platform providers is up to, you’ll want to stay tuned, because we’ll discuss many of their new offerings in this episode, a roundtable discussion I held with Dave McCrory, VP of Software Engineering at Wise.io at GE Digital, and Lawrence Chung, engagement lead at ThingLogix. We cover all of AWS’ most important news, including the new SageMaker and DeepLens, their Rekognition and Transcription services, Alexa for Business, Greengrass ML and more. This kind of discussion is something a little new for the show, and is a bit reminiscent of my days covering news here on the podcast, so I hope you enjoy it!]]>
      </content:encoded>
      <itunes:duration>3977</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/368077676]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8071653060.mp3?updated=1629216907"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Driving Customer Loyalty with Predictive and Conversational AI with Sherif Mityas - TWiML Talk #82</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/366890267-twiml-twiml-talk-82-driving-customer-loyalty-predictive-conversational-ai-sherif-mityas.mp3</link>
      <description>This week on the podcast we’re running a series of shows consisting of conversations with some of the impressive speakers from an event called the AI Summit in New York City. The theme of the conference, and the series, is AI in the Enterprise, and I think you’ll find it really interesting in that it includes a mix of both technical and case-study-oriented discussions. To close out our AI Summit New York Series, I speak with Sherif Mityas, head of Technology, Digital and Strategy at restaurant chain TGI Fridays. Sherif joins us to discuss how Fridays is utilizing conversational AI to enhance customer loyalty. Sherif wants Fridays to be known as a tech company that happens to sell burgers and beer, and in this conversation we get an in-depth look at the technology landscape they’ve put in place to move the company in this direction. Sherif also shares some of the things on the horizon for Fridays, as well as some of what they’ve learned along the way. Be sure to share your feedback or questions on the show notes page, which you’ll find at twimlai.com/talk/82.</description>
      <pubDate>Fri, 08 Dec 2017 21:55:07 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>82</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/75b513c0-ee98-11eb-9502-8f8171e84714/image/artworks-000267053735-vdi58e-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week on the podcast we’re running a series o…</itunes:subtitle>
      <itunes:summary>This week on the podcast we’re running a series of shows consisting of conversations with some of the impressive speakers from an event called the AI Summit in New York City. The theme of the conference, and the series, is AI in the Enterprise, and I think you’ll find it really interesting in that it includes a mix of both technical and case-study-oriented discussions. To close out our AI Summit New York Series, I speak with Sherif Mityas, head of Technology, Digital and Strategy at restaurant chain TGI Fridays. Sherif joins us to discuss how Fridays is utilizing conversational AI to enhance customer loyalty. Sherif wants Fridays to be known as a tech company that happens to sell burgers and beer, and in this conversation we get an in-depth look at the technology landscape they’ve put in place to move the company in this direction. Sherif also shares some of the things on the horizon for Fridays, as well as some of what they’ve learned along the way. Be sure to share your feedback or questions on the show notes page, which you’ll find at twimlai.com/talk/82.</itunes:summary>
      <content:encoded>
        <![CDATA[This week on the podcast we’re running a series of shows consisting of conversations with some of the impressive speakers from an event called the AI Summit in New York City. The theme of the conference, and the series, is AI in the Enterprise, and I think you’ll find it really interesting in that it includes a mix of both technical and case-study-oriented discussions. To close out our AI Summit New York Series, I speak with Sherif Mityas, head of Technology, Digital and Strategy at restaurant chain TGI Fridays. Sherif joins us to discuss how Fridays is utilizing conversational AI to enhance customer loyalty. Sherif wants Fridays to be known as a tech company that happens to sell burgers and beer, and in this conversation we get an in-depth look at the technology landscape they’ve put in place to move the company in this direction. Sherif also shares some of the things on the horizon for Fridays, as well as some of what they’ve learned along the way. Be sure to share your feedback or questions on the show notes page, which you’ll find at twimlai.com/talk/82.]]>
      </content:encoded>
      <itunes:duration>2166</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/366890267]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1513337597.mp3?updated=1629216867"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Innovation Factories for AI in Financial Services with Thierry Derungs - TWiML Talk #81</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/366339158-twiml-twiml-talk-81-innovations-factories-ai-financial-services-thierry-derungs.mp3</link>
      <description>This week on the podcast we’re running a series of shows consisting of conversations with some of the impressive speakers from an event called the AI Summit in New York City. The theme of the conference, and the series, is AI in the Enterprise, and I think you’ll find it really interesting in that it includes a mix of both technical and case-study-oriented discussions. Today’s show continues our discussion of enterprise AI, with a conversation with Thierry Derungs, Chief Digital Officer at BNP Paribas, a multinational bank headquartered in Paris. Thierry joined me to discuss how BNP uses AI and some of the opportunities that have arisen with the changing AI landscape. We also discuss the innovation process that BNP has used to introduce AI to the bank, via what they call innovation incubators or “factories”. The notes for this show can be found at twimlai.com/talk/81.</description>
      <pubDate>Thu, 07 Dec 2017 23:35:12 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>81</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/75e73df0-ee98-11eb-9502-af6722dfe990/image/artworks-000266360222-2hq1s4-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week on the podcast we’re running a series o…</itunes:subtitle>
      <itunes:summary>This week on the podcast we’re running a series of shows consisting of conversations with some of the impressive speakers from an event called the AI Summit in New York City. The theme of the conference, and the series, is AI in the Enterprise, and I think you’ll find it really interesting in that it includes a mix of both technical and case-study-oriented discussions. Today’s show continues our discussion of enterprise AI, with a conversation with Thierry Derungs, Chief Digital Officer at BNP Paribas, a multinational bank headquartered in Paris. Thierry joined me to discuss how BNP uses AI and some of the opportunities that have arisen with the changing AI landscape. We also discuss the innovation process that BNP has used to introduce AI to the bank, via what they call innovation incubators or “factories”. The notes for this show can be found at twimlai.com/talk/81.</itunes:summary>
      <content:encoded>
        <![CDATA[This week on the podcast we’re running a series of shows consisting of conversations with some of the impressive speakers from an event called the AI Summit in New York City. The theme of the conference, and the series, is AI in the Enterprise, and I think you’ll find it really interesting in that it includes a mix of both technical and case-study-oriented discussions. Today’s show continues our discussion of enterprise AI, with a conversation with Thierry Derungs, Chief Digital Officer at BNP Paribas, a multinational bank headquartered in Paris. Thierry joined me to discuss how BNP uses AI and some of the opportunities that have arisen with the changing AI landscape. We also discuss the innovation process that BNP has used to introduce AI to the bank, via what they call innovation incubators or “factories”. The notes for this show can be found at twimlai.com/talk/81.]]>
      </content:encoded>
      <itunes:duration>2456</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/366339158]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3208855124.mp3?updated=1627362858"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Block-Sparse Kernels for Deep Neural Networks with Durk Kingma - TWiML Talk #80</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/366333827-twiml-twiml-talk-80-block-sparse-kernels-deep-neural-networks-durk-kingma.mp3</link>
      <description>The show is part of a series that I’m really excited about, in part because I’ve been working to bring them to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman and others. This episode features Durk Kingma, a Research Scientist at OpenAI. Although Durk is probably best known for his pioneering work on variational autoencoders, he joined me this time to talk through his latest project on block sparse kernels, which OpenAI just published this week. Block sparsity is a property of certain neural network representations, and OpenAI’s work on developing block sparse kernels helps make it more computationally efficient to take advantage of them. In addition to covering block sparse kernels themselves and the background required to understand them, we also discuss why they’re important and walk through some examples of how they can be used. I’m happy to present another fine Nerd Alert show to close out this OpenAI Series, and I know you’ll enjoy it! To find the notes for this show, visit twimlai.com/talk/80. For more info on this series, visit twimlai.com/openai</description>
      <pubDate>Thu, 07 Dec 2017 18:18:10 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>80</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7614a128-ee98-11eb-9502-134d9502f2a9/image/artworks-000266355146-hrq9se-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The show is part of a series that I’m really exci…</itunes:subtitle>
      <itunes:summary>The show is part of a series that I’m really excited about, in part because I’ve been working to bring them to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman and others. This episode features Durk Kingma, a Research Scientist at OpenAI. Although Durk is probably best known for his pioneering work on variational autoencoders, he joined me this time to talk through his latest project on block sparse kernels, which OpenAI just published this week. Block sparsity is a property of certain neural network representations, and OpenAI’s work on developing block sparse kernels helps make it more computationally efficient to take advantage of them. In addition to covering block sparse kernels themselves and the background required to understand them, we also discuss why they’re important and walk through some examples of how they can be used. I’m happy to present another fine Nerd Alert show to close out this OpenAI Series, and I know you’ll enjoy it! To find the notes for this show, visit twimlai.com/talk/80. For more info on this series, visit twimlai.com/openai</itunes:summary>
      <content:encoded>
        <![CDATA[The show is part of a series that I’m really excited about, in part because I’ve been working to bring them to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman and others. This episode features Durk Kingma, a Research Scientist at OpenAI. Although Durk is probably best known for his pioneering work on variational autoencoders, he joined me this time to talk through his latest project on block sparse kernels, which OpenAI just published this week. Block sparsity is a property of certain neural network representations, and OpenAI’s work on developing block sparse kernels helps make it more computationally efficient to take advantage of them. In addition to covering block sparse kernels themselves and the background required to understand them, we also discuss why they’re important and walk through some examples of how they can be used. I’m happy to present another fine Nerd Alert show to close out this OpenAI Series, and I know you’ll enjoy it! To find the notes for this show, visit twimlai.com/talk/80. For more info on this series, visit twimlai.com/openai]]>
      </content:encoded>
      <itunes:duration>2662</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/366333827]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7015549218.mp3?updated=1629216895"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI for Customer Service and Marketing at Aeromexico with Brian Gross - TWiML Talk #79</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/365928317-twiml-twiml-talk-79-ai-customer-service-marketing-aeromexico-brian-gross.mp3</link>
      <description>This week on the podcast we’re running a series of shows consisting of conversations with some of the impressive speakers from an event called the AI Summit in New York City. The theme of the conference, and the series, is AI in the Enterprise, and I think you’ll find it really interesting in that it includes a mix of both technical and case-study-oriented discussions. Today I'm joined by Brian Gross, Head of Digital Innovation for the Mexico City-based airline AeroMexico. AeroMexico is using AI techniques like neural nets to build a chatbot that responds to its customers’ inquiries. In our conversation, Brian describes how he views the chatbot landscape, shares his thoughts on the platform requirements that established enterprises like AeroMexico have for chatbots, and describes how AeroMexico plans to stay ahead of the curve. Be sure to post any feedback or questions you may have to the show notes page, which you’ll find at twimlai.com/talk/79. For more info on this series, visit twimlai.com/aisummit.</description>
      <pubDate>Wed, 06 Dec 2017 20:45:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>79</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7630e2a2-ee98-11eb-9502-af9804990227/image/artworks-000265961273-dd125g-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week on the podcast we’re running a series o…</itunes:subtitle>
      <itunes:summary>This week on the podcast we’re running a series of shows consisting of conversations with some of the impressive speakers from an event called the AI Summit in New York City. The theme of the conference, and the series, is AI in the Enterprise, and I think you’ll find it really interesting in that it includes a mix of both technical and case-study-oriented discussions. Today I'm joined by Brian Gross, Head of Digital Innovation for the Mexico City-based airline AeroMexico. AeroMexico is using AI techniques like neural nets to build a chatbot that responds to its customers’ inquiries. In our conversation, Brian describes how he views the chatbot landscape, shares his thoughts on the platform requirements that established enterprises like AeroMexico have for chatbots, and describes how AeroMexico plans to stay ahead of the curve. Be sure to post any feedback or questions you may have to the show notes page, which you’ll find at twimlai.com/talk/79. For more info on this series, visit twimlai.com/aisummit.</itunes:summary>
      <content:encoded>
        <![CDATA[This week on the podcast we’re running a series of shows consisting of conversations with some of the impressive speakers from an event called the AI Summit in New York City. The theme of the conference, and the series, is AI in the Enterprise, and I think you’ll find it really interesting in that it includes a mix of both technical and case-study-oriented discussions. Today I'm joined by Brian Gross, Head of Digital Innovation for the Mexico City-based airline AeroMexico. AeroMexico is using AI techniques like neural nets to build a chatbot that responds to its customers’ inquiries. In our conversation, Brian describes how he views the chatbot landscape, shares his thoughts on the platform requirements that established enterprises like AeroMexico have for chatbots, and describes how AeroMexico plans to stay ahead of the curve. Be sure to post any feedback or questions you may have to the show notes page, which you’ll find at twimlai.com/talk/79. For more info on this series, visit twimlai.com/aisummit.]]>
      </content:encoded>
      <itunes:duration>1744</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/365928317]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9751942546.mp3?updated=1629216863"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scaling AI for the Enterprise with Mazin Gilbert - TWiML Talk #78</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/365335319-twiml-twiml-talk-78-scaling-ai-enterprise-mazin-gilbert.mp3</link>
      <description>This week on the podcast we’re running a series of shows consisting of conversations with some of the impressive speakers from an event called the AI Summit in New York City. The theme of the conference, and the series, is AI in the Enterprise, and I think you’ll find it really interesting in that it includes a mix of both technical and case-study-oriented discussions. My guest this time around is Mazin Gilbert, vice president of advanced technology &amp; architecture with AT&amp;T. Mazin and I have a really interesting discussion on what’s really required to scale AI in the enterprise, and you’ll learn about a new open source project that AT&amp;T is working on to allow any enterprise to do this. You already know by now that I geek out when it comes to talking about the intersection of machine learning and cloud computing, and this conversation is no exception. Be sure to let us know what you think by posting your comments or questions to the show notes page at twimlai.com/talk/78. For more info on this series, visit twimlai.com/aisummit</description>
      <pubDate>Tue, 05 Dec 2017 15:49:30 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>78</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/765487a2-ee98-11eb-9502-a3cb7aab170f/image/artworks-000265416443-i17tlv-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week on the podcast we’re running a series o…</itunes:subtitle>
      <itunes:summary>This week on the podcast we’re running a series of shows consisting of conversations with some of the impressive speakers from an event called the AI Summit in New York City. The theme of the conference, and the series, is AI in the Enterprise, and I think you’ll find it really interesting in that it includes a mix of both technical and case-study-oriented discussions. My guest this time around is Mazin Gilbert, vice president of advanced technology &amp; architecture with AT&amp;T. Mazin and I have a really interesting discussion on what’s really required to scale AI in the enterprise, and you’ll learn about a new open source project that AT&amp;T is working on to allow any enterprise to do this. You already know by now that I geek out when it comes to talking about the intersection of machine learning and cloud computing, and this conversation is no exception. Be sure to let us know what you think by posting your comments or questions to the show notes page at twimlai.com/talk/78. For more info on this series, visit twimlai.com/aisummit</itunes:summary>
      <content:encoded>
        <![CDATA[This week on the podcast we’re running a series of shows consisting of conversations with some of the impressive speakers from an event called the AI Summit in New York City. The theme of the conference, and the series, is AI in the Enterprise, and I think you’ll find it really interesting in that it includes a mix of both technical and case-study-oriented discussions. My guest this time around is Mazin Gilbert, vice president of advanced technology &amp; architecture with AT&amp;T. Mazin and I have a really interesting discussion on what’s really required to scale AI in the enterprise, and you’ll learn about a new open source project that AT&amp;T is working on to allow any enterprise to do this. You already know by now that I geek out when it comes to talking about the intersection of machine learning and cloud computing, and this conversation is no exception. Be sure to let us know what you think by posting your comments or questions to the show notes page at twimlai.com/talk/78. For more info on this series, visit twimlai.com/aisummit]]>
      </content:encoded>
      <itunes:duration>2945</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/365335319]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7895560767.mp3?updated=1629216883"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scalable Distributed Deep Learning with Hillery Hunter - TWiML Talk #77</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/364931837-twiml-twiml-talk-77-scaleable-distributed-deep-learning-hillery-hunter.mp3</link>
      <description>This week on the podcast we’re running a series of shows consisting of conversations with some of the impressive speakers from an event called the AI Summit in New York City. The theme of the conference, and the series, is AI in the Enterprise, and I think you’ll find it really interesting in that it includes a mix of both technical and case-study-oriented discussions. My guest for this first show in the series is Hillery Hunter, IBM Fellow &amp; Director of the Accelerated Cognitive Infrastructure group at IBM’s T.J. Watson Research Center. Hillery and I met a few weeks back in New York and I'm really glad that we were able to get her on the show. Hillery joins us to discuss her team's research into distributed deep learning, which was recently released as the PowerAI Distributed Deep Learning Communication Library, or DDL. In my conversation with Hillery, we discuss the purpose and technical architecture of the DDL, its ability to offer fully synchronous distributed training of deep learning models, the advantages of its Multi-Ring Topology, and much more. This is for sure a nerd alert pod, especially for the performance and hardware geeks among us. Be sure to post any feedback or questions you may have to the show notes page, which you’ll find at twimlai.com/talk/77. For more info on this series, visit twimlai.com/aisummit</description>
      <pubDate>Mon, 04 Dec 2017 19:34:01 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>77</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/76b28582-ee98-11eb-9502-3fe9939e790d/image/artworks-000265016483-6h0kky-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week on the podcast we’re running a series o…</itunes:subtitle>
      <itunes:summary>This week on the podcast we’re running a series of shows consisting of conversations with some of the impressive speakers from an event called the AI Summit in New York City. The theme of the conference, and the series, is AI in the Enterprise, and I think you’ll find it really interesting in that it includes a mix of both technical and case-study-oriented discussions. My guest for this first show in the series is Hillery Hunter, IBM Fellow &amp; Director of the Accelerated Cognitive Infrastructure group at IBM’s T.J. Watson Research Center. Hillery and I met a few weeks back in New York and I'm really glad that we were able to get her on the show. Hillery joins us to discuss her team's research into distributed deep learning, which was recently released as the PowerAI Distributed Deep Learning Communication Library, or DDL. In my conversation with Hillery, we discuss the purpose and technical architecture of the DDL, its ability to offer fully synchronous distributed training of deep learning models, the advantages of its Multi-Ring Topology, and much more. This is for sure a nerd alert pod, especially for the performance and hardware geeks among us. Be sure to post any feedback or questions you may have to the show notes page, which you’ll find at twimlai.com/talk/77. For more info on this series, visit twimlai.com/aisummit</itunes:summary>
      <content:encoded>
        <![CDATA[<p>This week on the podcast we’re running a series of shows consisting of conversations with some of the impressive speakers from an event called the AI Summit in New York City. The theme of the conference, and the series, is AI in the Enterprise, and I think you’ll find it really interesting in that it includes a mix of both technical and case-study-oriented discussions. My guest for this first show in the series is Hillery Hunter, IBM Fellow &amp; Director of the Accelerated Cognitive Infrastructure group at IBM’s T.J. Watson Research Center. Hillery and I met a few weeks back in New York and I'm really glad that we were able to get her on the show. Hillery joins us to discuss her team's research into distributed deep learning, which was recently released as the PowerAI Distributed Deep Learning Communication Library, or DDL. In my conversation with Hillery, we discuss the purpose and technical architecture of the DDL, its ability to offer fully synchronous distributed training of deep learning models, the advantages of its Multi-Ring Topology, and much more. This is for sure a nerd alert pod, especially for the performance and hardware geeks among us. Be sure to post any feedback or questions you may have to the show notes page, which you’ll find at twimlai.com/talk/77. For more info on this series, visit twimlai.com/aisummit</p>]]>
      </content:encoded>
      <itunes:duration>2293</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/364931837]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1806206137.mp3?updated=1629216861"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Robotics at OpenAI with Jonas Schneider - TWiML Talk #76</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/363577697-twiml-twiml-talk-76-robotics-openai-jonas-schneider.mp3</link>
      <description>The show is part of a series that I’m really excited about, in part because I’ve been working to bring them to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman and others. In this show I’m joined by Jonas Schneider, Robotics Technical Team Lead at OpenAI. While in San Francisco a few months ago, I spent some time with Jonas at the OpenAI office, during which we covered a lot of interesting ground around OpenAI’s work in robotics. We discuss OpenAI Gym, which was the first project he worked on at OpenAI, as well as how they approach setting up the infrastructure for their experimental work, including how they’ve set up a Robots-as-a-Service environment for their researchers and how they use the open source Kubernetes project to manage their compute environment. Check it out and let us know what you think! To find the notes for this show, visit twimlai.com/talk/76. For more info on this series, visit twimlai.com/openai</description>
      <pubDate>Fri, 01 Dec 2017 17:47:45 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>76</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/76d12834-ee98-11eb-9502-9b409ae25766/image/artworks-000263794037-ozrqe2-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The show is part of a series that I’m really exci…</itunes:subtitle>
      <itunes:summary>The show is part of a series that I’m really excited about, in part because I’ve been working to bring them to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman and others. In this show I’m joined by Jonas Schneider, Robotics Technical Team Lead at OpenAI. While in San Francisco a few months ago, I spent some time with Jonas at the OpenAI office, during which we covered a lot of interesting ground around OpenAI’s work in robotics. We discuss OpenAI Gym, which was the first project he worked on at OpenAI, as well as how they approach setting up the infrastructure for their experimental work, including how they’ve set up a Robots-as-a-Service environment for their researchers and how they use the open source Kubernetes project to manage their compute environment. Check it out and let us know what you think! To find the notes for this show, visit twimlai.com/talk/76. For more info on this series, visit twimlai.com/openai</itunes:summary>
      <content:encoded>
        <![CDATA[The show is part of a series that I’m really excited about, in part because I’ve been working to bring them to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman and others. In this show I’m joined by Jonas Schneider, Robotics Technical Team Lead at OpenAI. While in San Francisco a few months ago, I spent some time with Jonas at the OpenAI office, during which we covered a lot of interesting ground around OpenAI’s work in robotics. We discuss OpenAI Gym, which was the first project he worked on at OpenAI, as well as how they approach setting up the infrastructure for their experimental work, including how they’ve set up a Robots-as-a-Service environment for their researchers and how they use the open source Kubernetes project to manage their compute environment. Check it out and let us know what you think! To find the notes for this show, visit twimlai.com/talk/76. For more info on this series, visit twimlai.com/openai]]>
      </content:encoded>
      <itunes:duration>2722</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/363577697]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4761605786.mp3?updated=1629216891"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Robustness and Safety with Dario Amodei - TWiML Talk #75</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/363182024-twiml-twiml-talk-75-ai-robustness-safety-dario-amodei.mp3</link>
      <description>The show is part of a series that I’m really excited about, in part because I’ve been working to bring them to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman and others. In this episode I’m joined by Dario Amodei, Team Lead for Safety Research at OpenAI. While in San Francisco a few months ago, I spent some time at the OpenAI office, during which I sat down with Dario to chat about the work happening at OpenAI around AI safety. Dario and I dive into the two areas of AI safety that he and his team are focused on: robustness and alignment. We also touch on his research with the Google DeepMind team, the OpenAI Universe tool, and how human interactions can be incorporated into reinforcement learning models. This was a great conversation, and along with the other shows in this series, this is a nerd alert show! To find the notes for this show, visit twimlai.com/talk/75 For more info on this series, visit twimlai.com/openai</description>
      <pubDate>Thu, 30 Nov 2017 21:14:51 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>75</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/76f180b6-ee98-11eb-9502-cbe67d9eb0fd/image/artworks-000263414303-wcnk2t-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The show is part of a series that I’m really exci…</itunes:subtitle>
      <itunes:summary>The show is part of a series that I’m really excited about, in part because I’ve been working to bring them to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman and others. In this episode I’m joined by Dario Amodei, Team Lead for Safety Research at OpenAI. While in San Francisco a few months ago, I spent some time at the OpenAI office, during which I sat down with Dario to chat about the work happening at OpenAI around AI safety. Dario and I dive into the two areas of AI safety that he and his team are focused on: robustness and alignment. We also touch on his research with the Google DeepMind team, the OpenAI Universe tool, and how human interactions can be incorporated into reinforcement learning models. This was a great conversation, and along with the other shows in this series, this is a nerd alert show! To find the notes for this show, visit twimlai.com/talk/75 For more info on this series, visit twimlai.com/openai</itunes:summary>
      <content:encoded>
        <![CDATA[The show is part of a series that I’m really excited about, in part because I’ve been working to bring them to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman and others. In this episode I’m joined by Dario Amodei, Team Lead for Safety Research at OpenAI. While in San Francisco a few months ago, I spent some time at the OpenAI office, during which I sat down with Dario to chat about the work happening at OpenAI around AI safety. Dario and I dive into the two areas of AI safety that he and his team are focused on: robustness and alignment. We also touch on his research with the Google DeepMind team, the OpenAI Universe tool, and how human interactions can be incorporated into reinforcement learning models. This was a great conversation, and along with the other shows in this series, this is a nerd alert show! To find the notes for this show, visit twimlai.com/talk/75 For more info on this series, visit twimlai.com/openai]]>
      </content:encoded>
      <itunes:duration>2203</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/363182024]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1804163574.mp3?updated=1629216883"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Towards Artificial General Intelligence with Greg Brockman - TWiML Talk #74</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/361903115-twiml-twiml-talk-74-towards-artificial-general-intelligence-greg-brockman.mp3</link>
      <description>The show is part of a series that I’m really excited about, in part because I’ve been working to bring them to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman and others. In this episode, I’m joined by Greg Brockman, OpenAI Co-Founder and CTO. Greg and I touch on a bunch of topics in the show. We start with the founding and goals of OpenAI, before diving into a discussion on Artificial General Intelligence, what it means to achieve it, and how we go about doing so safely and without bias. We also touch on how to massively scale neural networks and their training, and the evolution of computational frameworks for AI. This conversation is not only informative and nerd alert worthy, but we cover some very important topics, so please take it all in, enjoy, and send along your feedback! To find the notes for this show, visit twimlai.com/talk/74 For more info on this series, visit twimlai.com/openai</description>
      <pubDate>Tue, 28 Nov 2017 05:54:24 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>74</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/770f490c-ee98-11eb-9502-33356efea82d/image/artworks-000262163129-z7xa7o-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The show is part of a series that I’m really exci…</itunes:subtitle>
      <itunes:summary>The show is part of a series that I’m really excited about, in part because I’ve been working to bring them to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman and others. In this episode, I’m joined by Greg Brockman, OpenAI Co-Founder and CTO. Greg and I touch on a bunch of topics in the show. We start with the founding and goals of OpenAI, before diving into a discussion on Artificial General Intelligence, what it means to achieve it, and how we go about doing so safely and without bias. We also touch on how to massively scale neural networks and their training, and the evolution of computational frameworks for AI. This conversation is not only informative and nerd alert worthy, but we cover some very important topics, so please take it all in, enjoy, and send along your feedback! To find the notes for this show, visit twimlai.com/talk/74 For more info on this series, visit twimlai.com/openai</itunes:summary>
      <content:encoded>
        <![CDATA[The show is part of a series that I’m really excited about, in part because I’ve been working to bring them to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman and others. In this episode, I’m joined by Greg Brockman, OpenAI Co-Founder and CTO. Greg and I touch on a bunch of topics in the show. We start with the founding and goals of OpenAI, before diving into a discussion on Artificial General Intelligence, what it means to achieve it, and how we go about doing so safely and without bias. We also touch on how to massively scale neural networks and their training, and the evolution of computational frameworks for AI. This conversation is not only informative and nerd alert worthy, but we cover some very important topics, so please take it all in, enjoy, and send along your feedback! To find the notes for this show, visit twimlai.com/talk/74 For more info on this series, visit twimlai.com/openai]]>
      </content:encoded>
      <itunes:duration>3358</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/361903115]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1497387557.mp3?updated=1629216895"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Explaining Black Box Predictions with Sam Ritchie - TWiML Talk #73</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/360825269-twiml-twiml-talk-73-exploring-black-box-predictions-sam-ritchie.mp3</link>
      <description>This week, we’ll be featuring a series of shows recorded from Strange Loop, a great developer-focused conference that takes place every year right in my backyard! The conference is a multi-disciplinary melting pot of developers and thinkers across a variety of fields, and we’re happy to be able to bring a bit of it to those of you who couldn’t make it in person! In this episode, I speak with Sam Ritchie, a software engineer at Stripe. I caught up with Sam right after his talk at the conference, where he covered his team’s work on explaining black box predictions. In our conversation, we discuss how Stripe uses black box predictions for fraud detection, and he gives a few use case scenarios. We discuss Stripe’s approach for explaining those predictions as well as other approaches, and briefly mention Carlos Guestrin’s work on the LIME paper, which he and I discuss in TWiML Talk #7. The notes for this show can be found at twimlai.com/talk/73 For more series info, visit twimlai.com/STLoop</description>
      <pubDate>Sat, 25 Nov 2017 19:26:10 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>73</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/77349400-ee98-11eb-9502-476347577c0c/image/artworks-000261153086-rnehiw-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week, we’ll be featuring a series of shows r…</itunes:subtitle>
      <itunes:summary>This week, we’ll be featuring a series of shows recorded from Strange Loop, a great developer-focused conference that takes place every year right in my backyard! The conference is a multi-disciplinary melting pot of developers and thinkers across a variety of fields, and we’re happy to be able to bring a bit of it to those of you who couldn’t make it in person! In this episode, I speak with Sam Ritchie, a software engineer at Stripe. I caught up with Sam right after his talk at the conference, where he covered his team’s work on explaining black box predictions. In our conversation, we discuss how Stripe uses black box predictions for fraud detection, and he gives a few use case scenarios. We discuss Stripe’s approach for explaining those predictions as well as other approaches, and briefly mention Carlos Guestrin’s work on the LIME paper, which he and I discuss in TWiML Talk #7. The notes for this show can be found at twimlai.com/talk/73 For more series info, visit twimlai.com/STLoop</itunes:summary>
      <content:encoded>
        <![CDATA[<p>This week, we’ll be featuring a series of shows recorded from Strange Loop, a great developer-focused conference that takes place every year right in my backyard! The conference is a multi-disciplinary melting pot of developers and thinkers across a variety of fields, and we’re happy to be able to bring a bit of it to those of you who couldn’t make it in person! In this episode, I speak with Sam Ritchie, a software engineer at Stripe. I caught up with Sam right after his talk at the conference, where he covered his team’s work on explaining black box predictions. In our conversation, we discuss how Stripe uses black box predictions for fraud detection, and he gives a few use case scenarios. We discuss Stripe’s approach for explaining those predictions as well as other approaches, and briefly mention Carlos Guestrin’s work on the LIME paper, which he and I discuss in TWiML Talk #7. The notes for this show can be found at twimlai.com/talk/73 For more series info, visit twimlai.com/STLoop</p>]]>
      </content:encoded>
      <itunes:duration>2301</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/360825269]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7734178125.mp3?updated=1629216868"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Experimental Creative Writing with the Vectorized Word - Allison Parrish - TWiML Talk #72</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/360372581-twiml-twiml-talk-72-experimental-creative-writing-vectorized-word-allison-parrish.mp3</link>
      <description>This week, we’ll be featuring a series of shows recorded from Strange Loop, a great developer-focused conference that takes place every year right in my backyard! The conference is a multi-disciplinary melting pot of developers and thinkers across a variety of fields, and we’re happy to be able to bring a bit of it to those of you who couldn’t make it in person! In this episode, I speak with Allison Parrish, Poet and Professor at NYU in the Interactive Telecommunications dept. Allison’s work centers around generated poetry, via artificial intelligence and machine learning. She joins me prior to her conference talk on “Experimental Creative Writing with the Vectorized Word”. In our time together, we discuss some of her research into computational poetry generation, actually performing AI-produced poetry, and some of the methods and processes she uses for generating her work. The notes for this show can be found at twimlai.com/talk/72 For more series info, visit twimlai.com/STLoop</description>
      <pubDate>Fri, 24 Nov 2017 17:00:51 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>72</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7754de18-ee98-11eb-9502-0373f18bf5be/image/artworks-000260745653-8j8y4s-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week, we’ll be featuring a series of shows r…</itunes:subtitle>
      <itunes:summary>This week, we’ll be featuring a series of shows recorded from Strange Loop, a great developer-focused conference that takes place every year right in my backyard! The conference is a multi-disciplinary melting pot of developers and thinkers across a variety of fields, and we’re happy to be able to bring a bit of it to those of you who couldn’t make it in person! In this episode, I speak with Allison Parrish, Poet and Professor at NYU in the Interactive Telecommunications dept. Allison’s work centers around generated poetry, via artificial intelligence and machine learning. She joins me prior to her conference talk on “Experimental Creative Writing with the Vectorized Word”. In our time together, we discuss some of her research into computational poetry generation, actually performing AI-produced poetry, and some of the methods and processes she uses for generating her work. The notes for this show can be found at twimlai.com/talk/72 For more series info, visit twimlai.com/STLoop</itunes:summary>
      <content:encoded>
        <![CDATA[This week, we’ll be featuring a series of shows recorded from Strange Loop, a great developer-focused conference that takes place every year right in my backyard! The conference is a multi-disciplinary melting pot of developers and thinkers across a variety of fields, and we’re happy to be able to bring a bit of it to those of you who couldn’t make it in person! In this episode, I speak with Allison Parrish, Poet and Professor at NYU in the Interactive Telecommunications dept. Allison’s work centers around generated poetry, via artificial intelligence and machine learning. She joins me prior to her conference talk on “Experimental Creative Writing with the Vectorized Word”. In our time together, we discuss some of her research into computational poetry generation, actually performing AI-produced poetry, and some of the methods and processes she uses for generating her work. The notes for this show can be found at twimlai.com/talk/72 For more series info, visit twimlai.com/STLoop]]>
      </content:encoded>
      <itunes:duration>1685</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/360372581]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2528777754.mp3?updated=1629216863"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Biological Path Towards Strong AI - Matthew Taylor - TWiML Talk #71</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/359612942-twiml-twiml-talk-71-biological-path-towards-strong-ai-matthew-taylor.mp3</link>
      <description>This week, we’ll be featuring a series of shows recorded from Strange Loop, a great developer-focused conference that takes place every year right in my backyard! The conference is a multi-disciplinary melting pot of developers and thinkers across a variety of fields, and we’re happy to be able to bring a bit of it to those of you who couldn’t make it in person! In this episode, I speak with Matthew Taylor, Open Source Manager at Numenta. You might remember hearing a bit about Numenta from an interview I did with Francisco Weber of Cortical.io, for TWiML Talk #10, a show which remains the most popular show on the podcast. Numenta is basically trying to reverse-engineer the neocortex, and use what they learn to develop a neocortical theory for biological and machine intelligence called Hierarchical Temporal Memory. Matt joined me at the conference to discuss his talk “The Biological Path Towards Strong AI”. In our conversation, we discuss the basics of HTM, its biological inspiration, and how it differs from traditional neural network models including deep learning. This is a Nerd Alert show, and after you listen I would encourage you to check out the conversation with Francisco which we’ll link to in the show notes. The notes for this show can be found at twimlai.com/talk/71 For series information, visit twimlai.com/stloop</description>
      <pubDate>Wed, 22 Nov 2017 22:43:27 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>71</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/777699b8-ee98-11eb-9502-0f02afb97489/image/artworks-000260041616-k80uv5-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week, we’ll be featuring a series of shows r…</itunes:subtitle>
      <itunes:summary>This week, we’ll be featuring a series of shows recorded from Strange Loop, a great developer-focused conference that takes place every year right in my backyard! The conference is a multi-disciplinary melting pot of developers and thinkers across a variety of fields, and we’re happy to be able to bring a bit of it to those of you who couldn’t make it in person! In this episode, I speak with Matthew Taylor, Open Source Manager at Numenta. You might remember hearing a bit about Numenta from an interview I did with Francisco Weber of Cortical.io, for TWiML Talk #10, a show which remains the most popular show on the podcast. Numenta is basically trying to reverse-engineer the neocortex, and use what they learn to develop a neocortical theory for biological and machine intelligence called Hierarchical Temporal Memory. Matt joined me at the conference to discuss his talk “The Biological Path Towards Strong AI”. In our conversation, we discuss the basics of HTM, its biological inspiration, and how it differs from traditional neural network models including deep learning. This is a Nerd Alert show, and after you listen I would encourage you to check out the conversation with Francisco which we’ll link to in the show notes. The notes for this show can be found at twimlai.com/talk/71 For series information, visit twimlai.com/stloop</itunes:summary>
      <content:encoded>
        <![CDATA[This week, we’ll be featuring a series of shows recorded from Strange Loop, a great developer-focused conference that takes place every year right in my backyard! The conference is a multi-disciplinary melting pot of developers and thinkers across a variety of fields, and we’re happy to be able to bring a bit of it to those of you who couldn’t make it in person! In this episode, I speak with Matthew Taylor, Open Source Manager at Numenta. You might remember hearing a bit about Numenta from an interview I did with Francisco Weber of Cortical.io, for TWiML Talk #10, a show which remains the most popular show on the podcast. Numenta is basically trying to reverse-engineer the neocortex, and use what they learn to develop a neocortical theory for biological and machine intelligence called Hierarchical Temporal Memory. Matt joined me at the conference to discuss his talk “The Biological Path Towards Strong AI”. In our conversation, we discuss the basics of HTM, its biological inspiration, and how it differs from traditional neural network models including deep learning. This is a Nerd Alert show, and after you listen I would encourage you to check out the conversation with Francisco which we’ll link to in the show notes. The notes for this show can be found at twimlai.com/talk/71 For series information, visit twimlai.com/stloop]]>
      </content:encoded>
      <itunes:duration>2279</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/359612942]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2086096856.mp3?updated=1629216882"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>PyTorch: Fast Differentiable Dynamic Graphs in Python with Soumith Chintala - TWiML Talk #70</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/358982798-twiml-twiml-talk-70-pytorch-fast-differentiable-dynamic-graphs-python-soumith-chintala.mp3</link>
      <description>This week, we’ll be featuring a series of shows recorded from Strange Loop, a great developer-focused conference that takes place every year right in my backyard! The conference is a multi-disciplinary melting pot of developers and thinkers across a variety of fields, and we’re happy to be able to bring a bit of it to those of you who couldn’t make it in person! In this show I speak with Soumith Chintala, a Research Engineer in the Facebook AI Research Lab (FAIR). Soumith joined me at Strange Loop before his talk on PyTorch, the deep learning framework. In this talk we discuss the market evolution of deep learning frameworks and tools, different approaches to programming deep learning frameworks, Facebook’s motivation for investing in PyTorch, and much more. This was a fun interview, I hope you enjoy! The notes for this show can be found at twimlai.com/talk/70 For series information, visit twimlai.com/stloop</description>
      <pubDate>Tue, 21 Nov 2017 18:15:29 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>70</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/77949daa-ee98-11eb-9502-136546e4d428/image/artworks-000259416623-b52h28-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week, we’ll be featuring a series of shows r…</itunes:subtitle>
      <itunes:summary>This week, we’ll be featuring a series of shows recorded from Strange Loop, a great developer-focused conference that takes place every year right in my backyard! The conference is a multi-disciplinary melting pot of developers and thinkers across a variety of fields, and we’re happy to be able to bring a bit of it to those of you who couldn’t make it in person! In this show I speak with Soumith Chintala, a Research Engineer in the Facebook AI Research Lab (FAIR). Soumith joined me at Strange Loop before his talk on PyTorch, the deep learning framework. In this talk we discuss the market evolution of deep learning frameworks and tools, different approaches to programming deep learning frameworks, Facebook’s motivation for investing in PyTorch, and much more. This was a fun interview, I hope you enjoy! The notes for this show can be found at twimlai.com/talk/70 For series information, visit twimlai.com/stloop</itunes:summary>
      <content:encoded>
        <![CDATA[This week, we’ll be featuring a series of shows recorded from Strange Loop, a great developer-focused conference that takes place every year right in my backyard! The conference is a multi-disciplinary melting pot of developers and thinkers across a variety of fields, and we’re happy to be able to bring a bit of it to those of you who couldn’t make it in person! In this show I speak with Soumith Chintala, a Research Engineer in the Facebook AI Research Lab (FAIR). Soumith joined me at Strange Loop before his talk on PyTorch, the deep learning framework. In this talk we discuss the market evolution of deep learning frameworks and tools, different approaches to programming deep learning frameworks, Facebook’s motivation for investing in PyTorch, and much more. This was a fun interview, I hope you enjoy! The notes for this show can be found at twimlai.com/talk/70 For series information, visit twimlai.com/stloop]]>
      </content:encoded>
      <itunes:duration>2563</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/358982798]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8698423294.mp3?updated=1629216889"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Accessible Machine Learning for the Enterprise Developer with Ryan Sevey &amp; Jason Montgomery</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/358593131-twiml-twiml-talk-69-accessible-machine-learning-enterprise-developer-ryan-sevey-jason-montgomery.mp3</link>
      <description>This week, we’ll be featuring a series of shows recorded from Strange Loop, a great developer-focused conference that takes place every year right in my backyard! The conference is a multi-disciplinary melting pot of developers and thinkers across a variety of fields, and we’re happy to be able to bring a bit of it to those of you who couldn’t make it in person! In this show you'll hear from Nexosis founders Ryan Sevey and Jason Montgomery. Ryan, Jason and I discuss how they got their start by applying ML to identify cheaters in video games, the application of ML for time-series data analysis, and of course the Nexosis Machine Learning API. If you like what you hear, they invite you to get your free Nexosis API key and discover what they can bring to your next project at nexosis.com/twiml. The notes for this show can be found at twimlai.com/talk/69 For series information, visit twimlai.com/stloop</description>
      <pubDate>Mon, 20 Nov 2017 21:03:20 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>69</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/77ae1e2e-ee98-11eb-9502-9b931fc25cc9/image/artworks-000258984557-p0jxsl-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week, we’ll be featuring a series of shows r…</itunes:subtitle>
      <itunes:summary>This week, we’ll be featuring a series of shows recorded from Strange Loop, a great developer-focused conference that takes place every year right in my backyard! The conference is a multi-disciplinary melting pot of developers and thinkers across a variety of fields, and we’re happy to be able to bring a bit of it to those of you who couldn’t make it in person! In this show you'll hear from Nexosis founders Ryan Sevey and Jason Montgomery. Ryan, Jason and I discuss how they got their start by applying ML to identify cheaters in video games, the application of ML for time-series data analysis, and of course the Nexosis Machine Learning API. If you like what you hear, they invite you to get your free Nexosis API key and discover what they can bring to your next project at nexosis.com/twiml. The notes for this show can be found at twimlai.com/talk/69 For series information, visit twimlai.com/stloop</itunes:summary>
      <content:encoded>
        <![CDATA[This week, we’ll be featuring a series of shows recorded at Strange Loop, a great developer-focused conference that takes place every year right in my backyard! The conference is a multi-disciplinary melting pot of developers and thinkers across a variety of fields, and we’re happy to bring a bit of it to those of you who couldn’t make it in person! In this show you'll hear from Nexosis founders Ryan Sevey and Jason Montgomery. Ryan, Jason and I discuss how they got their start by applying ML to identify cheaters in video games, the application of ML to time-series data analysis, and of course the Nexosis Machine Learning API. If you like what you hear, they invite you to get your free Nexosis API key and discover what Nexosis can bring to your next project at nexosis.com/twiml. The notes for this show can be found at twimlai.com/talk/69. For series information, visit twimlai.com/stloop.]]>
      </content:encoded>
      <itunes:duration>2715</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/358593131]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5141541980.mp3?updated=1629216888"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Bridging the Gap Between Academic and Industry Careers with Ross Fadely - TWiML Talk #68</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/355168043-twiml-twiml-talk-68-bridging-gap-academic-industry-careers-ross-fadely.mp3</link>
      <description>We close out our NYU Future Labs AI Summit interview series with Ross Fadely, a New York-based AI lead with Insight Data Science. Insight is an interesting company offering a free seven-week postdoctoral training fellowship that helps individuals bridge the gap between academia and careers in data science, data engineering and AI. Ross joined me backstage at the Future Labs Summit after leading a Machine Learning Primer for attendees. Our conversation explores some of the knowledge gaps that Insight has identified in folks coming out of academia, and how they structure their program to address them. If you find yourself looking to make this transition, you’ll definitely want to check out this episode. The notes for this show can be found at twimlai.com/talk/68. For series information, visit twimlai.com/ainexuslab2.</description>
      <pubDate>Thu, 16 Nov 2017 18:55:27 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>68</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/77ceeb2c-ee98-11eb-9502-1b995f536105/image/artworks-000255485084-5wwaxn-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>We close out our NYU Future Labs AI Summit interv…</itunes:subtitle>
      <itunes:summary>We close out our NYU Future Labs AI Summit interview series with Ross Fadely, a New York-based AI lead with Insight Data Science. Insight is an interesting company offering a free seven-week postdoctoral training fellowship that helps individuals bridge the gap between academia and careers in data science, data engineering and AI. Ross joined me backstage at the Future Labs Summit after leading a Machine Learning Primer for attendees. Our conversation explores some of the knowledge gaps that Insight has identified in folks coming out of academia, and how they structure their program to address them. If you find yourself looking to make this transition, you’ll definitely want to check out this episode. The notes for this show can be found at twimlai.com/talk/68. For series information, visit twimlai.com/ainexuslab2.</itunes:summary>
      <content:encoded>
        <![CDATA[We close out our NYU Future Labs AI Summit interview series with Ross Fadely, a New York-based AI lead with Insight Data Science. Insight is an interesting company offering a free seven-week postdoctoral training fellowship that helps individuals bridge the gap between academia and careers in data science, data engineering and AI. Ross joined me backstage at the Future Labs Summit after leading a Machine Learning Primer for attendees. Our conversation explores some of the knowledge gaps that Insight has identified in folks coming out of academia, and how they structure their program to address them. If you find yourself looking to make this transition, you’ll definitely want to check out this episode. The notes for this show can be found at twimlai.com/talk/68. For series information, visit twimlai.com/ainexuslab2.]]>
      </content:encoded>
      <itunes:duration>1159</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/355168043]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4345529651.mp3?updated=1629216851"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Limitations of Human-in-the-Loop AI with Dennis Mortensen - TWiML Talk #67</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/355168010-twiml-twiml-talk-67-limitations-human-loop-ai-dennis-mortensen.mp3</link>
      <description>We continue our NYU Future Labs AI Summit interview series with Dennis Mortensen, founder and CEO of X.ai, a company whose AI-based personal assistant Amy helps users with scheduling meetings. I caught up with Dennis backstage at the Future Labs event a few weeks ago, right before he went on stage to talk about “Investing in AI from the Startup POV.” Dennis shares some great insight into building an AI-first company, not to mention his vision for the future of scheduling, something no one actually enjoys doing, and his thoughts on the future of human-AI interaction. This was a fun interview, which I’m sure you’ll enjoy. A quick warning though… This might not be a show to listen to in the car with the kiddos, as this episode does contain a few expletives. The notes for this show can be found at twimlai.com/talk/67. For series information, visit twimlai.com/ainexuslab2.</description>
      <pubDate>Mon, 13 Nov 2017 17:59:57 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>67</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/77eeebf2-ee98-11eb-9502-23e87b998e28/image/artworks-000255485057-71m4ol-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>We continue our NYU Future Labs AI Summit intervi…</itunes:subtitle>
      <itunes:summary>We continue our NYU Future Labs AI Summit interview series with Dennis Mortensen, founder and CEO of X.ai, a company whose AI-based personal assistant Amy helps users with scheduling meetings. I caught up with Dennis backstage at the Future Labs event a few weeks ago, right before he went on stage to talk about “Investing in AI from the Startup POV.” Dennis shares some great insight into building an AI-first company, not to mention his vision for the future of scheduling, something no one actually enjoys doing, and his thoughts on the future of human-AI interaction. This was a fun interview, which I’m sure you’ll enjoy. A quick warning though… This might not be a show to listen to in the car with the kiddos, as this episode does contain a few expletives. The notes for this show can be found at twimlai.com/talk/67. For series information, visit twimlai.com/ainexuslab2.</itunes:summary>
      <content:encoded>
        <![CDATA[We continue our NYU Future Labs AI Summit interview series with Dennis Mortensen, founder and CEO of X.ai, a company whose AI-based personal assistant Amy helps users with scheduling meetings. I caught up with Dennis backstage at the Future Labs event a few weeks ago, right before he went on stage to talk about “Investing in AI from the Startup POV.” Dennis shares some great insight into building an AI-first company, not to mention his vision for the future of scheduling, something no one actually enjoys doing, and his thoughts on the future of human-AI interaction. This was a fun interview, which I’m sure you’ll enjoy. A quick warning though… This might not be a show to listen to in the car with the kiddos, as this episode does contain a few expletives. The notes for this show can be found at twimlai.com/talk/67. For series information, visit twimlai.com/ainexuslab2.]]>
      </content:encoded>
      <itunes:duration>2142</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/355168010]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3986211831.mp3?updated=1629216862"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Nexus Lab Cohort 2 - Second Mind - TWiML Talk #66</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/353376644-twiml-twiml-talk-66-nyu-nexus-labs-cohort-2-second-mind.mp3</link>
      <description>The podcast you’re about to hear is the fourth of a series of shows recorded at the NYU Future Labs AI Summit last week in New York City. In this show, I speak with Kul Singh, CEO and Founder of Second Mind. Second Mind is building an integration platform for businesses that allows them to bring augmented intelligence to voice conversations. We talk to Kul about the concept behind Second Mind, and how the company combines ambient listening with a low-latency matching system to help users eliminate an estimated 2.5 hours of manual searches per day! The notes for this show can be found at twimlai.com/talk/66 For series information, visit twimlai.com/ainexuslab2</description>
      <pubDate>Thu, 09 Nov 2017 16:35:46 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>66</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/78073978-ee98-11eb-9502-cb5785f37b9e/image/artworks-000253502498-4gmr3g-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The podcast you’re about to hear is the fourth of…</itunes:subtitle>
      <itunes:summary>The podcast you’re about to hear is the fourth of a series of shows recorded at the NYU Future Labs AI Summit last week in New York City. In this show, I speak with Kul Singh, CEO and Founder of Second Mind. Second Mind is building an integration platform for businesses that allows them to bring augmented intelligence to voice conversations. We talk to Kul about the concept behind Second Mind, and how the company combines ambient listening with a low-latency matching system to help users eliminate an estimated 2.5 hours of manual searches per day! The notes for this show can be found at twimlai.com/talk/66 For series information, visit twimlai.com/ainexuslab2</itunes:summary>
      <content:encoded>
        <![CDATA[The podcast you’re about to hear is the fourth of a series of shows recorded at the NYU Future Labs AI Summit last week in New York City. In this show, I speak with Kul Singh, CEO and Founder of Second Mind. Second Mind is building an integration platform for businesses that allows them to bring augmented intelligence to voice conversations. We talk to Kul about the concept behind Second Mind, and how the company combines ambient listening with a low-latency matching system to help users eliminate an estimated 2.5 hours of manual searches per day! The notes for this show can be found at twimlai.com/talk/66 For series information, visit twimlai.com/ainexuslab2]]>
      </content:encoded>
      <itunes:duration>1308</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/353376644]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9386962691.mp3?updated=1629216851"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Nexus Lab Cohort 2 - Bite.ai - TWiML Talk #65</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/353072600-twiml-twiml-talk-65-nyu-nexus-labs-cohort-2-bite-ai.mp3</link>
      <description>The podcast you’re about to hear is the third of a series of shows recorded at the NYU Future Labs AI Summit last week in New York City. In this episode, you’ll hear from Bite.ai, a startup founded by Vinay Anantharaman and Michal Wolski, who met while working at Clarifai, another NYU Future Labs alumnus, whose CEO Matt Zeiler I interviewed on TWiML Talk #22 (link on the show notes page). Bite is using convolutional neural networks and other machine learning techniques to help computers understand and reason about food. Their product is the app Bitesnap, which provides users with detailed nutritional information about the food they’re about to eat using just a photo and a serving size. We dive into the details of their app and service, the machine learning models and pipeline that enable it, how they plan to compete with other apps targeting dieters, and more! The notes for this show can be found at twimlai.com/talk/65. For series information, visit twimlai.com/ainexuslab2.</description>
      <pubDate>Wed, 08 Nov 2017 22:59:23 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>65</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/783c97e4-ee98-11eb-9502-036e8f2146d3/image/artworks-000253181852-dli89z-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The podcast you’re about to hear is the second of…</itunes:subtitle>
      <itunes:summary>The podcast you’re about to hear is the third of a series of shows recorded at the NYU Future Labs AI Summit last week in New York City. In this episode, you’ll hear from Bite.ai, a startup founded by Vinay Anantharaman and Michal Wolski, who met while working at Clarifai, another NYU Future Labs alumnus, whose CEO Matt Zeiler I interviewed on TWiML Talk #22 (link on the show notes page). Bite is using convolutional neural networks and other machine learning techniques to help computers understand and reason about food. Their product is the app Bitesnap, which provides users with detailed nutritional information about the food they’re about to eat using just a photo and a serving size. We dive into the details of their app and service, the machine learning models and pipeline that enable it, how they plan to compete with other apps targeting dieters, and more! The notes for this show can be found at twimlai.com/talk/65. For series information, visit twimlai.com/ainexuslab2.</itunes:summary>
      <content:encoded>
        <![CDATA[The podcast you’re about to hear is the third of a series of shows recorded at the NYU Future Labs AI Summit last week in New York City. In this episode, you’ll hear from Bite.ai, a startup founded by Vinay Anantharaman and Michal Wolski, who met while working at Clarifai, another NYU Future Labs alumnus, whose CEO Matt Zeiler I interviewed on TWiML Talk #22 (link on the show notes page). Bite is using convolutional neural networks and other machine learning techniques to help computers understand and reason about food. Their product is the app Bitesnap, which provides users with detailed nutritional information about the food they’re about to eat using just a photo and a serving size. We dive into the details of their app and service, the machine learning models and pipeline that enable it, how they plan to compete with other apps targeting dieters, and more! The notes for this show can be found at twimlai.com/talk/65. For series information, visit twimlai.com/ainexuslab2.]]>
      </content:encoded>
      <itunes:duration>1619</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/353072600]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8599401385.mp3?updated=1629216857"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Nexus Lab Cohort 2 - Bowtie - TWiML Talk #64</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/352617113-twiml-twiml-talk-064-nyu-nexus-labs-cohort-2-bowtie.mp3</link>
      <description>The podcast you’re about to hear is the second of a series of shows recorded at the NYU Future Labs AI Summit last week in New York City. In this episode, I speak with Ron Fisher and Mike Wang, who, along with Vivek Sudarsan, founded Bowtie Labs, which offers a 24/7 AI-based receptionist designed to help businesses in the beauty, wellness, and fitness industries increase retail conversion rates. I’ve talked with a few startups in the conversational space recently, and one common theme seems to be quickly outgrowing commercial conversational platforms. Ron and Mike shared their own experiences with this decision, some of the challenges they’re trying to overcome with their ML models, and some of the techniques they use to make their system as responsive as possible. The notes for this show can be found at twimlai.com/talk/64. For series information, visit twimlai.com/ainexuslab2.</description>
      <pubDate>Tue, 07 Nov 2017 23:54:51 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>64</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7859853e-ee98-11eb-9502-aba7bdc99476/image/artworks-000252695759-yf4989-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The podcast you’re about to hear is the second of…</itunes:subtitle>
      <itunes:summary>The podcast you’re about to hear is the second of a series of shows recorded at the NYU Future Labs AI Summit last week in New York City. In this episode, I speak with Ron Fisher and Mike Wang, who, along with Vivek Sudarsan, founded Bowtie Labs, which offers a 24/7 AI-based receptionist designed to help businesses in the beauty, wellness, and fitness industries increase retail conversion rates. I’ve talked with a few startups in the conversational space recently, and one common theme seems to be quickly outgrowing commercial conversational platforms. Ron and Mike shared their own experiences with this decision, some of the challenges they’re trying to overcome with their ML models, and some of the techniques they use to make their system as responsive as possible. The notes for this show can be found at twimlai.com/talk/64. For series information, visit twimlai.com/ainexuslab2.</itunes:summary>
      <content:encoded>
        <![CDATA[The podcast you’re about to hear is the second of a series of shows recorded at the NYU Future Labs AI Summit last week in New York City. In this episode, I speak with Ron Fisher and Mike Wang, who, along with Vivek Sudarsan, founded Bowtie Labs, which offers a 24/7 AI-based receptionist designed to help businesses in the beauty, wellness, and fitness industries increase retail conversion rates. I’ve talked with a few startups in the conversational space recently, and one common theme seems to be quickly outgrowing commercial conversational platforms. Ron and Mike shared their own experiences with this decision, some of the challenges they’re trying to overcome with their ML models, and some of the techniques they use to make their system as responsive as possible. The notes for this show can be found at twimlai.com/talk/64. For series information, visit twimlai.com/ainexuslab2.]]>
      </content:encoded>
      <itunes:duration>1515</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/352617113]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8471530239.mp3?updated=1629216855"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI Nexus Lab Cohort 2 - Mt. Cleverest - TWiML Talk #63</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/352065689-twiml-twiml-talk-063-nyu-nexus-labs-cohort-2-mt-cleverest.mp3</link>
      <description>The podcast you’re about to hear is the first of a series of shows recorded at the NYU Future Labs AI Summit last week in New York City. My guests this time around are James Villarrubia and Bernie Prat, CEO and COO respectively, of Mt. Cleverest, an online service for teachers and students that can take any text from the web and generate a quiz, along with answers, based on the content supplied. To do this, Bernie and James employ a pretty sophisticated natural language understanding pipeline, which we discuss in this interview. We also touch on the challenges they face in generating correct question answers, how they fine-tune their ML models to improve those answers over time, and more. The notes for this show can be found at twimlai.com/talk/63. For series information, visit twimlai.com/nexuslabs2.</description>
      <pubDate>Mon, 06 Nov 2017 22:09:09 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>63</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7894e732-ee98-11eb-9502-d7c703c23e01/image/artworks-000252161114-wlknt7-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The podcast you’re about to hear is the first of …</itunes:subtitle>
      <itunes:summary>The podcast you’re about to hear is the first of a series of shows recorded at the NYU Future Labs AI Summit last week in New York City. My guests this time around are James Villarrubia and Bernie Prat, CEO and COO respectively, of Mt. Cleverest, an online service for teachers and students that can take any text from the web and generate a quiz, along with answers, based on the content supplied. To do this, Bernie and James employ a pretty sophisticated natural language understanding pipeline, which we discuss in this interview. We also touch on the challenges they face in generating correct question answers, how they fine-tune their ML models to improve those answers over time, and more. The notes for this show can be found at twimlai.com/talk/63. For series information, visit twimlai.com/nexuslabs2.</itunes:summary>
      <content:encoded>
        <![CDATA[The podcast you’re about to hear is the first of a series of shows recorded at the NYU Future Labs AI Summit last week in New York City. My guests this time around are James Villarrubia and Bernie Prat, CEO and COO respectively, of Mt. Cleverest, an online service for teachers and students that can take any text from the web and generate a quiz, along with answers, based on the content supplied. To do this, Bernie and James employ a pretty sophisticated natural language understanding pipeline, which we discuss in this interview. We also touch on the challenges they face in generating correct question answers, how they fine-tune their ML models to improve those answers over time, and more. The notes for this show can be found at twimlai.com/talk/63. For series information, visit twimlai.com/nexuslabs2.]]>
      </content:encoded>
      <itunes:duration>1929</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/352065689]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3844122334.mp3?updated=1629216854"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Learning to Learn, and other Opportunities in Machine Learning with Graham Taylor - TWiML Talk #62</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/349502870-twiml-twiml-talk-062-learning-learn-opportunities-machine-learning-graham-taylor.mp3</link>
      <description>The podcast you’re about to hear is the third of a series of shows recorded at the Georgian Partners Portfolio Conference last week in Toronto. My guest this time is Graham Taylor, professor of engineering at the University of Guelph, who keynoted day two of the conference. Graham leads the Machine Learning Research Group at Guelph, and is affiliated with Toronto’s recently formed Vector Institute for Artificial Intelligence. Graham and I discussed a number of the most important trends and challenges in artificial intelligence, including the move from predictive to creative systems, the rise of human-in-the-loop AI, and how modern AI is accelerating with our ability to teach computers how to learn-to-learn. The notes for this show can be found at twimlai.com/talk/62. For series info, visit twimlai.com/GPPC2017</description>
      <pubDate>Fri, 03 Nov 2017 15:48:27 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>62</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/78b18bd0-ee98-11eb-9502-977afac0c3df/image/artworks-000249720948-cdz8ap-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The podcast you’re about to hear is the third of …</itunes:subtitle>
      <itunes:summary>The podcast you’re about to hear is the third of a series of shows recorded at the Georgian Partners Portfolio Conference last week in Toronto. My guest this time is Graham Taylor, professor of engineering at the University of Guelph, who keynoted day two of the conference. Graham leads the Machine Learning Research Group at Guelph, and is affiliated with Toronto’s recently formed Vector Institute for Artificial Intelligence. Graham and I discussed a number of the most important trends and challenges in artificial intelligence, including the move from predictive to creative systems, the rise of human-in-the-loop AI, and how modern AI is accelerating with our ability to teach computers how to learn-to-learn. The notes for this show can be found at twimlai.com/talk/62. For series info, visit twimlai.com/GPPC2017</itunes:summary>
      <content:encoded>
        <![CDATA[The podcast you’re about to hear is the third of a series of shows recorded at the Georgian Partners Portfolio Conference last week in Toronto. My guest this time is Graham Taylor, professor of engineering at the University of Guelph, who keynoted day two of the conference. Graham leads the Machine Learning Research Group at Guelph, and is affiliated with Toronto’s recently formed Vector Institute for Artificial Intelligence. Graham and I discussed a number of the most important trends and challenges in artificial intelligence, including the move from predictive to creative systems, the rise of human-in-the-loop AI, and how modern AI is accelerating with our ability to teach computers how to learn-to-learn. The notes for this show can be found at twimlai.com/talk/62. For series info, visit twimlai.com/GPPC2017]]>
      </content:encoded>
      <itunes:duration>2241</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/349502870]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1442900773.mp3?updated=1629216868"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Building Conversational Application for Financial Services with Kenneth Conroy - TWiML Talk #61</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/349501328-twiml-twiml-talk-61-building-conversational-application-financial-services-kenneth-conroy.mp3</link>
      <description>The podcast you’re about to hear is the second of a series of shows recorded at the Georgian Partners Portfolio Conference last week in Toronto. My guest for this interview is Kenneth Conroy, VP of data science at Vancouver, Canada-based Finn.ai, a company building a chatbot system for banks. Kenneth and I spoke about how Finn.AI built its core conversational platform. We spoke in depth about the requirements and challenges of conversational applications, and how and why they transitioned off of a commercial chatbot platform--in their case API.ai--and built their own custom platform based on deep learning, word2vec and other natural language understanding technologies. The notes for this show can be found at https://twimlai.com/talk/61</description>
      <pubDate>Wed, 01 Nov 2017 14:28:30 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>61</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/78d7750c-ee98-11eb-9502-2b2b984f0d0e/image/artworks-000249716180-11undv-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The podcast you’re about to hear is the second of…</itunes:subtitle>
      <itunes:summary>The podcast you’re about to hear is the second of a series of shows recorded at the Georgian Partners Portfolio Conference last week in Toronto. My guest for this interview is Kenneth Conroy, VP of data science at Vancouver, Canada-based Finn.ai, a company building a chatbot system for banks. Kenneth and I spoke about how Finn.AI built its core conversational platform. We spoke in depth about the requirements and challenges of conversational applications, and how and why they transitioned off of a commercial chatbot platform--in their case API.ai--and built their own custom platform based on deep learning, word2vec and other natural language understanding technologies. The notes for this show can be found at https://twimlai.com/talk/61</itunes:summary>
      <content:encoded>
        <![CDATA[The podcast you’re about to hear is the second of a series of shows recorded at the Georgian Partners Portfolio Conference last week in Toronto. My guest for this interview is Kenneth Conroy, VP of data science at Vancouver, Canada-based Finn.ai, a company building a chatbot system for banks. Kenneth and I spoke about how Finn.ai built its core conversational platform. We spoke in depth about the requirements and challenges of conversational applications, and how and why they transitioned off a commercial chatbot platform--in their case API.ai--and built their own custom platform based on deep learning, word2vec and other natural language understanding technologies. The notes for this show can be found at https://twimlai.com/talk/61]]>
      </content:encoded>
      <itunes:duration>2252</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/349501328]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4892400665.mp3?updated=1629216880"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Fighting Fraud with Machine Learning at Shopify with Solmaz Shahalizadeh - TWiML Talk #60</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/349309151-twiml-twiml-talk-60-fighting-fraud-machine-learning-shopify-solmaz-shahalizadeh.mp3</link>
      <description>The podcast you’re about to hear is the first of a series of shows recorded at the Georgian Partners Portfolio Conference last week in Toronto. My guest for this show is Solmaz Shahalizadeh, Director of Merchant Services Algorithms at Shopify. Solmaz gave a great talk at the GPPC focused on her team’s experiences applying machine learning to fight fraud and improve merchant satisfaction. Solmaz and I dig into, step by step, the process they used to transition from a legacy, rules-based fraud detection system to a more scalable, flexible one based on machine learning models. We discuss the importance of well-defined project scope; tips and traps when selecting features to train your models; the various models, transformations and pipelines the Shopify team selected; and how they use PMML to make their Python models available to their Ruby-on-Rails web application. The notes for this show can be found at twimlai.com/talk/60 For Series info, visit twimlai.com/GPPC2017</description>
      <pubDate>Mon, 30 Oct 2017 19:54:19 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>60</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/78f9e9e8-ee98-11eb-9502-3b9f68c3fd99/image/artworks-000249513885-4sk4a2-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The podcast you’re about to hear is the first of …</itunes:subtitle>
      <itunes:summary>The podcast you’re about to hear is the first of a series of shows recorded at the Georgian Partners Portfolio Conference last week in Toronto. My guest for this show is Solmaz Shahalizadeh, Director of Merchant Services Algorithms at Shopify. Solmaz gave a great talk at the GPPC focused on her team’s experiences applying machine learning to fight fraud and improve merchant satisfaction. Solmaz and I dig into, step by step, the process they used to transition from a legacy, rules-based fraud detection system to a more scalable, flexible one based on machine learning models. We discuss the importance of well-defined project scope; tips and traps when selecting features to train your models; the various models, transformations and pipelines the Shopify team selected; and how they use PMML to make their Python models available to their Ruby-on-Rails web application. The notes for this show can be found at twimlai.com/talk/60 For Series info, visit twimlai.com/GPPC2017</itunes:summary>
      <content:encoded>
        <![CDATA[The podcast you’re about to hear is the first of a series of shows recorded at the Georgian Partners Portfolio Conference last week in Toronto. My guest for this show is Solmaz Shahalizadeh, Director of Merchant Services Algorithms at Shopify. Solmaz gave a great talk at the GPPC focused on her team’s experiences applying machine learning to fight fraud and improve merchant satisfaction. Solmaz and I dig into, step by step, the process they used to transition from a legacy, rules-based fraud detection system to a more scalable, flexible one based on machine learning models. We discuss the importance of well-defined project scope; tips and traps when selecting features to train your models; the various models, transformations and pipelines the Shopify team selected; and how they use PMML to make their Python models available to their Ruby-on-Rails web application. The notes for this show can be found at twimlai.com/talk/60 For Series info, visit twimlai.com/GPPC2017]]>
      </content:encoded>
      <itunes:duration>2149</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/349309151]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5835145872.mp3?updated=1629216876"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Modeling Human Drivers for Autonomous Vehicles with Katie Driggs-Campbell - TWiML Talk #59</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/348908293-twiml-twiml-talk-059-modeling-human-drivers-autonomous-driving-katie-driggs-campbell.mp3</link>
      <description>We are back with our third show this week, episode 3 of our Autonomous Vehicles Series. My guest this time is Katie Driggs-Campbell, PostDoc in the Intelligent Systems Lab at Stanford University’s Department of Aeronautics and Astronautics. Katie joins us to discuss her research into human behavioral modeling and control systems for self-driving vehicles. Katie also gives us some insight into her process for collecting training data, how social nuances come into play for self-driving cars, and more. The notes for this show can be found at twimlai.com/talk/59 For Series info, visit twimlai.com/av2017</description>
      <pubDate>Fri, 27 Oct 2017 20:22:06 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>59</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/79195094-ee98-11eb-9502-73d7b5a2326c/image/artworks-000249152617-25539j-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>We are back with our third show this week, episod…</itunes:subtitle>
      <itunes:summary>We are back with our third show this week, episode 3 of our Autonomous Vehicles Series. My guest this time is Katie Driggs-Campbell, PostDoc in the Intelligent Systems Lab at Stanford University’s Department of Aeronautics and Astronautics. Katie joins us to discuss her research into human behavioral modeling and control systems for self-driving vehicles. Katie also gives us some insight into her process for collecting training data, how social nuances come into play for self-driving cars, and more. The notes for this show can be found at twimlai.com/talk/59 For Series info, visit twimlai.com/av2017</itunes:summary>
      <content:encoded>
        <![CDATA[We are back with our third show this week, episode 3 of our Autonomous Vehicles Series. My guest this time is Katie Driggs-Campbell, PostDoc in the Intelligent Systems Lab at Stanford University’s Department of Aeronautics and Astronautics. Katie joins us to discuss her research into human behavioral modeling and control systems for self-driving vehicles. Katie also gives us some insight into her process for collecting training data, how social nuances come into play for self-driving cars, and more. The notes for this show can be found at twimlai.com/talk/59 For Series info, visit twimlai.com/av2017]]>
      </content:encoded>
      <itunes:duration>2010</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/348908293]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1099183007.mp3?updated=1629216856"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Perception Models for Self-Driving Cars with Jianxiong Xiao - TWiML Talk #58</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/348573768-twiml-twiml-talk-058-perception-models-self-driving-cars-jianxiong-xiao.mp3</link>
      <description>We are back with our second show this week, episode 2 of our Autonomous Vehicles Series. This time around we are joined by Jianxiong Xiao of AutoX, a company building computer vision centric solutions for autonomous vehicles. Jianxiong, a PhD graduate of MIT’s CSAIL Lab, joins me to discuss the different layers of the autonomous vehicle stack and the models for machine perception currently used in self-driving cars. If you’re new to the autonomous vehicles space I’m confident you’ll learn a ton, and even if you know the space in general, you’ll get a glimpse into why Jianxiong thinks AutoX’s direct perception approach is superior to end-to-end processing or mediated perception. The notes for this show can be found at twimlai.com/talk/58 For Series info, visit twimlai.com/av2017</description>
      <pubDate>Wed, 25 Oct 2017 19:43:05 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>58</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/793a5208-ee98-11eb-9502-d3fb7c21c0a8/image/artworks-000248836297-6ngzch-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>We are back with our second show this week, episo…</itunes:subtitle>
      <itunes:summary>We are back with our second show this week, episode 2 of our Autonomous Vehicles Series. This time around we are joined by Jianxiong Xiao of AutoX, a company building computer vision centric solutions for autonomous vehicles. Jianxiong, a PhD graduate of MIT’s CSAIL Lab, joins me to discuss the different layers of the autonomous vehicle stack and the models for machine perception currently used in self-driving cars. If you’re new to the autonomous vehicles space I’m confident you’ll learn a ton, and even if you know the space in general, you’ll get a glimpse into why Jianxiong thinks AutoX’s direct perception approach is superior to end-to-end processing or mediated perception. The notes for this show can be found at twimlai.com/talk/58 For Series info, visit twimlai.com/av2017</itunes:summary>
      <content:encoded>
        <![CDATA[We are back with our second show this week, episode 2 of our Autonomous Vehicles Series. This time around we are joined by Jianxiong Xiao of AutoX, a company building computer vision centric solutions for autonomous vehicles. Jianxiong, a PhD graduate of MIT’s CSAIL Lab, joins me to discuss the different layers of the autonomous vehicle stack and the models for machine perception currently used in self-driving cars. If you’re new to the autonomous vehicles space I’m confident you’ll learn a ton, and even if you know the space in general, you’ll get a glimpse into why Jianxiong thinks AutoX’s direct perception approach is superior to end-to-end processing or mediated perception. The notes for this show can be found at twimlai.com/talk/58 For Series info, visit twimlai.com/av2017]]>
      </content:encoded>
      <itunes:duration>2499</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/348573768]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7569671065.mp3?updated=1629216887"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Training Data for Autonomous Vehicles - Daryn Nakhuda - TWiML Talk #57</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/348253938-twiml-twiml-talk-057-training-data-autonomous-vehicles-daryn-nakhuda.mp3</link>
      <description>The episode you are about to hear is the first of a new series of shows on Autonomous Vehicles. We all know that self-driving cars are one of the hottest topics in ML &amp; AI, so we had to dig a little deeper into the space. To get us started on this journey, I’m excited to present this interview with Daryn Nakhuda, CEO and Co-Founder of MightyAI. Daryn and I discuss the many challenges of collecting training data for autonomous vehicles, along with some thoughts on human-powered insights and annotation, semantic segmentation, and a ton more great stuff. For the notes for this show, visit twimlai.com/talk/57. For series info, visit twimlai.com/AV2017</description>
      <pubDate>Mon, 23 Oct 2017 20:24:40 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>57</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/79589556-ee98-11eb-9502-afd6f41cfb95/image/artworks-000248517178-qh3pgd-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The episode you are about to hear is the first of…</itunes:subtitle>
      <itunes:summary>The episode you are about to hear is the first of a new series of shows on Autonomous Vehicles. We all know that self-driving cars are one of the hottest topics in ML &amp; AI, so we had to dig a little deeper into the space. To get us started on this journey, I’m excited to present this interview with Daryn Nakhuda, CEO and Co-Founder of MightyAI. Daryn and I discuss the many challenges of collecting training data for autonomous vehicles, along with some thoughts on human-powered insights and annotation, semantic segmentation, and a ton more great stuff. For the notes for this show, visit twimlai.com/talk/57. For series info, visit twimlai.com/AV2017</itunes:summary>
      <content:encoded>
        <![CDATA[The episode you are about to hear is the first of a new series of shows on Autonomous Vehicles. We all know that self-driving cars are one of the hottest topics in ML &amp; AI, so we had to dig a little deeper into the space. To get us started on this journey, I’m excited to present this interview with Daryn Nakhuda, CEO and Co-Founder of MightyAI. Daryn and I discuss the many challenges of collecting training data for autonomous vehicles, along with some thoughts on human-powered insights and annotation, semantic segmentation, and a ton more great stuff. For the notes for this show, visit twimlai.com/talk/57. For series info, visit twimlai.com/AV2017]]>
      </content:encoded>
      <itunes:duration>2825</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/348253938]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1658443621.mp3?updated=1629216890"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Human Factors in Machine Intelligence with James Guszcza - TWiML Talk #56</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/347170510-twiml-twiml-talk-056-human-factors-in-machine-intelligence-with-james-guszcza.mp3</link>
      <description>As you all know, a few weeks ago, I spent some time in SF at the Artificial Intelligence Conference. I sat down with James Guszcza, US Chief Data Scientist at Deloitte Consulting, to talk about human factors in machine intelligence. James was in San Francisco to give a talk at the O’Reilly AI Conference on “Why AI needs human-centered design.” We had an amazing chat, in which we explored the many reasons why the human element is so important in ML and AI, along with useful ways to build algorithms and models that reflect this human element while avoiding problems like groupthink and bias. This was a very interesting conversation. I enjoyed it a ton, and I’m sure you will too! The notes for this episode can be found at twimlai.com/talk/56</description>
      <pubDate>Mon, 16 Oct 2017 18:04:44 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>56</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/797f5222-ee98-11eb-9502-13039864134a/image/artworks-000247447617-e8dmm9-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>As you all know, a few weeks ago, I spent some ti…</itunes:subtitle>
      <itunes:summary>As you all know, a few weeks ago, I spent some time in SF at the Artificial Intelligence Conference. I sat down with James Guszcza, US Chief Data Scientist at Deloitte Consulting, to talk about human factors in machine intelligence. James was in San Francisco to give a talk at the O’Reilly AI Conference on “Why AI needs human-centered design.” We had an amazing chat, in which we explored the many reasons why the human element is so important in ML and AI, along with useful ways to build algorithms and models that reflect this human element while avoiding problems like groupthink and bias. This was a very interesting conversation. I enjoyed it a ton, and I’m sure you will too! The notes for this episode can be found at twimlai.com/talk/56</itunes:summary>
      <content:encoded>
        <![CDATA[As you all know, a few weeks ago, I spent some time in SF at the Artificial Intelligence Conference. I sat down with James Guszcza, US Chief Data Scientist at Deloitte Consulting, to talk about human factors in machine intelligence. James was in San Francisco to give a talk at the O’Reilly AI Conference on “Why AI needs human-centered design.” We had an amazing chat, in which we explored the many reasons why the human element is so important in ML and AI, along with useful ways to build algorithms and models that reflect this human element while avoiding problems like groupthink and bias. This was a very interesting conversation. I enjoyed it a ton, and I’m sure you will too! The notes for this episode can be found at twimlai.com/talk/56]]>
      </content:encoded>
      <itunes:duration>2574</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/347170510]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6973721211.mp3?updated=1629216889"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>AI-Powered Conversational Interfaces with Paul Tepper - TWiML Talk #52</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/346117345-twiml-twiml-talk-052-ai-powered-conversational-interfaces-paul-tepper.mp3</link>
      <description>The show you’re about to hear is part of a series of shows recorded in San Francisco at the Artificial Intelligence Conference. My guest for this show is Paul Tepper, worldwide head of cognitive innovation and product manager for machine learning &amp; AI at Nuance Communications. Paul gave a talk at the conference on critical factors in building successful AI-powered conversational interfaces. We covered a bunch of topics, like voice UI design, behavioral biometrics and a ton of other interesting things that Nuance has in the works. The notes for this show can be found at twimlai.com/talk/52</description>
      <pubDate>Fri, 06 Oct 2017 00:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>52</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/79c70676-ee98-11eb-9502-4bf1bea244c2/image/artworks-000246301613-wem3k1-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The show you’re about to hear is part of a series…</itunes:subtitle>
      <itunes:summary>The show you’re about to hear is part of a series of shows recorded in San Francisco at the Artificial Intelligence Conference. My guest for this show is Paul Tepper, worldwide head of cognitive innovation and product manager for machine learning &amp; AI at Nuance Communications. Paul gave a talk at the conference on critical factors in building successful AI-powered conversational interfaces. We covered a bunch of topics, like voice UI design, behavioral biometrics and a ton of other interesting things that Nuance has in the works. The notes for this show can be found at twimlai.com/talk/52</itunes:summary>
      <content:encoded>
        <![CDATA[The show you’re about to hear is part of a series of shows recorded in San Francisco at the Artificial Intelligence Conference. My guest for this show is Paul Tepper, worldwide head of cognitive innovation and product manager for machine learning &amp; AI at Nuance Communications. Paul gave a talk at the conference on critical factors in building successful AI-powered conversational interfaces. We covered a bunch of topics, like voice UI design, behavioral biometrics and a ton of other interesting things that Nuance has in the works. The notes for this show can be found at twimlai.com/talk/52]]>
      </content:encoded>
      <itunes:duration>2210</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/346117345]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9990006429.mp3?updated=1629216872"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>ML Use Cases at Think Big Analytics with Mo Patel and Laura Frølich - TWiML Talk #54</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/346117337-twiml-twiml-talk-054-ml-use-cases-think-big-analytics-mo-patel-laura-frolich.mp3</link>
      <description>The show you’re about to hear is part of a series of shows recorded in San Francisco at the Artificial Intelligence Conference. This time around, I speak with Mo Patel, practice director of AI &amp; deep learning, and Laura Frølich, data scientist, both of Think Big Analytics. Mo and Laura joined me at the AI conference after their session on “Training vision models with public transportation datasets.” We talked over a bunch of use cases they’ve worked on involving image analysis and deep learning, including an assisted driving system. We also talk through a bunch of practical challenges faced when working on real machine learning problems, like feature detection, data augmentation, and training data. The notes for this show can be found at twimlai.com/talk/54</description>
      <pubDate>Fri, 06 Oct 2017 00:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>54</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/79e6e428-ee98-11eb-9502-8708529cff20/image/artworks-000246301876-fvhi1z-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The show you’re about to hear is part of a series…</itunes:subtitle>
      <itunes:summary>The show you’re about to hear is part of a series of shows recorded in San Francisco at the Artificial Intelligence Conference. This time around, I speak with Mo Patel, practice director of AI &amp; deep learning, and Laura Frølich, data scientist, both of Think Big Analytics. Mo and Laura joined me at the AI conference after their session on “Training vision models with public transportation datasets.” We talked over a bunch of use cases they’ve worked on involving image analysis and deep learning, including an assisted driving system. We also talk through a bunch of practical challenges faced when working on real machine learning problems, like feature detection, data augmentation, and training data. The notes for this show can be found at twimlai.com/talk/54</itunes:summary>
      <content:encoded>
        <![CDATA[The show you’re about to hear is part of a series of shows recorded in San Francisco at the Artificial Intelligence Conference. This time around, I speak with Mo Patel, practice director of AI &amp; deep learning, and Laura Frølich, data scientist, both of Think Big Analytics. Mo and Laura joined me at the AI conference after their session on “Training vision models with public transportation datasets.” We talked over a bunch of use cases they’ve worked on involving image analysis and deep learning, including an assisted driving system. We also talk through a bunch of practical challenges faced when working on real machine learning problems, like feature detection, data augmentation, and training data. The notes for this show can be found at twimlai.com/talk/54]]>
      </content:encoded>
      <itunes:duration>2724</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/346117337]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9013377108.mp3?updated=1629216890"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Intel Nervana Devcloud with Naveen Rao &amp; Scott Apeland - TWiML Talk #51</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/346117347-twiml-twiml-talk-051-intel-nervana-devcloud-naveen-rao-scott-apeland.mp3</link>
      <description>In this episode, I talk to Naveen Rao, VP and GM of Intel’s AI Products Group, and Scott Apeland, director of Intel’s Developer Network. It's been a few months since we last spoke to Naveen, so he gives us a quick update on what Intel’s been up to and we discuss his perspective on some recent developments in the AI ecosystem. Scott and I dig into Intel Nervana’s new DevCloud offering, which was announced at the conference. We also discuss the Intel Nervana AI Academy, a new portal offering hands-on learning tools and other resources for various aspects of machine learning and AI. The notes for this show can be found at twimlai.com/talk/51</description>
      <pubDate>Fri, 06 Oct 2017 00:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>51</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/799d3526-ee98-11eb-9502-bffcfdf4e081/image/artworks-000246301366-jbjd2q-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>In this episode, I talk to Naveen Rao, VP and GM …</itunes:subtitle>
      <itunes:summary>In this episode, I talk to Naveen Rao, VP and GM of Intel’s AI Products Group, and Scott Apeland, director of Intel’s Developer Network. It's been a few months since we last spoke to Naveen, so he gives us a quick update on what Intel’s been up to and we discuss his perspective on some recent developments in the AI ecosystem. Scott and I dig into Intel Nervana’s new DevCloud offering, which was announced at the conference. We also discuss the Intel Nervana AI Academy, a new portal offering hands-on learning tools and other resources for various aspects of machine learning and AI. The notes for this show can be found at twimlai.com/talk/51</itunes:summary>
      <content:encoded>
        <![CDATA[In this episode, I talk to Naveen Rao, VP and GM of Intel’s AI Products Group, and Scott Apeland, director of Intel’s Developer Network. It's been a few months since we last spoke to Naveen, so he gives us a quick update on what Intel’s been up to and we discuss his perspective on some recent developments in the AI ecosystem. Scott and I dig into Intel Nervana’s new DevCloud offering, which was announced at the conference. We also discuss the Intel Nervana AI Academy, a new portal offering hands-on learning tools and other resources for various aspects of machine learning and AI. The notes for this show can be found at twimlai.com/talk/51]]>
      </content:encoded>
      <itunes:duration>2225</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/346117347]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1459401679.mp3?updated=1629216859"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Ray: A Distributed Computing Platform for Reinforcement Learning with Ion Stoica - TWiML Talk #55</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/346117333-twiml-twiml-talk-055-ray-distributed-computing-platform-reinforcement-learning-ion-stoica.mp3</link>
      <description>The show you’re about to hear is part of a series of shows recorded in San Francisco at the Artificial Intelligence Conference. In this episode, I talk with Ion Stoica, professor of computer science &amp; director of the RISE Lab at UC Berkeley. Ion joined us after he gave his talk “Building reinforcement learning applications with Ray.” We dive into Ray, a new distributed computing platform for RL, as well as RL generally, along with some of the other interesting projects RISE Lab is working on, like Clipper &amp; Tegra. This was a pretty interesting talk. Enjoy! The notes for this show can be found at twimlai.com/talk/55</description>
      <pubDate>Thu, 05 Oct 2017 00:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>55</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7a0e5152-ee98-11eb-9502-634deff7976b/image/artworks-000246302039-v76q14-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The show you’re about to hear is part of a series…</itunes:subtitle>
      <itunes:summary>The show you’re about to hear is part of a series of shows recorded in San Francisco at the Artificial Intelligence Conference. In this episode, I talk with Ion Stoica, professor of computer science &amp; director of the RISE Lab at UC Berkeley. Ion joined us after he gave his talk “Building reinforcement learning applications with Ray.” We dive into Ray, a new distributed computing platform for RL, as well as RL generally, along with some of the other interesting projects RISE Lab is working on, like Clipper &amp; Tegra. This was a pretty interesting talk. Enjoy! The notes for this show can be found at twimlai.com/talk/55</itunes:summary>
      <content:encoded>
        <![CDATA[The show you’re about to hear is part of a series of shows recorded in San Francisco at the Artificial Intelligence Conference. In this episode, I talk with Ion Stoica, professor of computer science &amp; director of the RISE Lab at UC Berkeley. Ion joined us after he gave his talk “Building reinforcement learning applications with Ray.” We dive into Ray, a new distributed computing platform for RL, as well as RL generally, along with some of the other interesting projects RISE Lab is working on, like Clipper &amp; Tegra. This was a pretty interesting talk. Enjoy! The notes for this show can be found at twimlai.com/talk/55]]>
      </content:encoded>
      <itunes:duration>1698</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/346117333]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2264593133.mp3?updated=1629216853"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Topological Data Analysis with Gunnar Carlsson - TWiML Talk #53</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/346117341-twiml-twiml-talk-053-topological-data-analysis-gunnar-carlsson.mp3</link>
      <description>The show you’re about to hear is part of a series of shows recorded in San Francisco at the Artificial Intelligence Conference. My guest for this show is Gunnar Carlsson, professor emeritus of mathematics at Stanford University and president and co-founder of machine learning startup Ayasdi. Gunnar joined me after his session at the conference on “Topological data analysis as a framework for machine intelligence.” In our talk, we take a super deep dive on the mathematical underpinnings of TDA and its practical application through software. Nerd Alert! The notes for this show can be found at twimlai.com/talk/53</description>
      <pubDate>Tue, 03 Oct 2017 00:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>53</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7a35c3ae-ee98-11eb-9502-03a90b31e208/image/artworks-000246318347-5d7ehq-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The show you’re about to hear is part of a series…</itunes:subtitle>
      <itunes:summary>The show you’re about to hear is part of a series of shows recorded in San Francisco at the Artificial Intelligence Conference. My guest for this show is Gunnar Carlsson, professor emeritus of mathematics at Stanford University and president and co-founder of machine learning startup Ayasdi. Gunnar joined me after his session at the conference on “Topological data analysis as a framework for machine intelligence.” In our talk, we take a super deep dive on the mathematical underpinnings of TDA and its practical application through software. Nerd Alert! The notes for this show can be found at twimlai.com/talk/53</itunes:summary>
      <content:encoded>
        <![CDATA[The show you’re about to hear is part of a series of shows recorded in San Francisco at the Artificial Intelligence Conference. My guest for this show is Gunnar Carlsson, professor emeritus of mathematics at Stanford University and president and co-founder of machine learning startup Ayasdi. Gunnar joined me after his session at the conference on “Topological data analysis as a framework for machine intelligence.” In our talk, we take a super deep dive on the mathematical underpinnings of TDA and its practical application through software. Nerd Alert! The notes for this show can be found at twimlai.com/talk/53]]>
      </content:encoded>
      <itunes:duration>2034</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/346117341]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4363466387.mp3?updated=1629216871"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Bayesian Optimization for Hyperparameter Tuning with Scott Clark - TWiML Talk #50</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/345054699-twiml-twiml-talk-050-bayesian-optimization-hyperparameter-tuning-scott-clark.mp3</link>
      <description>As you all know, a few weeks ago, I spent some time in SF at the Artificial Intelligence Conference. While I was there, I had just enough time to sneak away and catch up with Scott Clark, Co-Founder and CEO of SigOpt, a company whose software is focused on automatically tuning your model’s hyperparameters through Bayesian optimization. We dive pretty deeply into that process over the course of this discussion, while hitting on topics like exploration vs. exploitation, Bayesian regression, heterogeneous configuration models, and covariance kernels. I had a great time and learned a ton, but be forewarned, this is most definitely a Nerd Alert show! Notes for this show can be found at twimlai.com/talk/50</description>
      <pubDate>Mon, 02 Oct 2017 21:58:51 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>50</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7a554788-ee98-11eb-9502-eb238d868d11/image/artworks-000245304111-fe2i3s-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>As you all know, a few weeks ago, I spent some ti…</itunes:subtitle>
      <itunes:summary>As you all know, a few weeks ago, I spent some time in SF at the Artificial Intelligence Conference. While I was there, I had just enough time to sneak away and catch up with Scott Clark, Co-Founder and CEO of SigOpt, a company whose software is focused on automatically tuning your model’s hyperparameters through Bayesian optimization. We dive pretty deeply into that process over the course of this discussion, while hitting on topics like exploration vs. exploitation, Bayesian regression, heterogeneous configuration models, and covariance kernels. I had a great time and learned a ton, but be forewarned, this is most definitely a Nerd Alert show! Notes for this show can be found at twimlai.com/talk/50</itunes:summary>
      <content:encoded>
        <![CDATA[As you all know, a few weeks ago, I spent some time in SF at the Artificial Intelligence Conference. While I was there, I had just enough time to sneak away and catch up with Scott Clark, Co-Founder and CEO of SigOpt, a company whose software is focused on automatically tuning your model’s hyperparameters through Bayesian optimization. We dive pretty deeply into that process over the course of this discussion, while hitting on topics like exploration vs. exploitation, Bayesian regression, heterogeneous configuration models, and covariance kernels. I had a great time and learned a ton, but be forewarned, this is most definitely a Nerd Alert show! Notes for this show can be found at twimlai.com/talk/50]]>
      </content:encoded>
      <itunes:duration>2823</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/345054699]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3599715991.mp3?updated=1629216872"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Symbolic and Sub-Symbolic Natural Language Processing with Jonathan Mugan - TWiML Talk #49</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/344008076-twiml-twiml-talk-049-symbolic-sub-symbolic-natural-language-processing-jonathan-mugan.mp3</link>
      <description>Like last week’s interview with Bruno Goncalves, this week’s interview was also recorded at the O’Reilly AI Conference back in New York in June. Like last week’s show, this week’s is also focused on Natural Language Processing, and I think you’ll enjoy it. I’m joined by Jonathan Mugan, co-founder and CEO of Deep Grammar, a company that is building a grammar checker using deep learning and what they call deep symbolic processing. This interview is a great complement to my conversation with Bruno, and we cover a variety of topics from both the sub-symbolic and symbolic schools of NLP, such as attention mechanisms like sequence-to-sequence, and ontological approaches like WordNet, synsets, FrameNet, and SUMO. You can find the notes for this show at twimlai.com/talk/49</description>
      <pubDate>Mon, 25 Sep 2017 20:56:21 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>49</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7a7b692c-ee98-11eb-9502-73dd66d4bbf6/image/artworks-000244263655-a9cqp9-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Like last week’s interview with Bruno Goncalves, …</itunes:subtitle>
      <itunes:summary>Like last week’s interview with Bruno Goncalves, this week’s interview was also recorded at the O’Reilly AI Conference back in New York in June. Like last week’s show, this week’s is also focused on Natural Language Processing, and I think you’ll enjoy it. I’m joined by Jonathan Mugan, co-founder and CEO of Deep Grammar, a company that is building a grammar checker using deep learning and what they call deep symbolic processing. This interview is a great complement to my conversation with Bruno, and we cover a variety of topics from both the sub-symbolic and symbolic schools of NLP, such as attention mechanisms like sequence-to-sequence, and ontological approaches like WordNet, synsets, FrameNet, and SUMO. You can find the notes for this show at twimlai.com/talk/49</itunes:summary>
      <content:encoded>
        <![CDATA[Like last week’s interview with Bruno Goncalves, this week’s interview was also recorded at the O’Reilly AI Conference back in New York in June. Like last week’s show, this week’s is also focused on Natural Language Processing, and I think you’ll enjoy it. I’m joined by Jonathan Mugan, co-founder and CEO of Deep Grammar, a company that is building a grammar checker using deep learning and what they call deep symbolic processing. This interview is a great complement to my conversation with Bruno, and we cover a variety of topics from both the sub-symbolic and symbolic schools of NLP, such as attention mechanisms like sequence-to-sequence, and ontological approaches like WordNet, synsets, FrameNet, and SUMO. You can find the notes for this show at twimlai.com/talk/49]]>
      </content:encoded>
      <itunes:duration>2610</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/344008076]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1736885770.mp3?updated=1629216877"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Word2Vec &amp; Friends with Bruno Gonçalves - TWiML Talk #48</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/342985290-twiml-twiml-talk-048-word2vec-friends-bruno-goncalves.mp3</link>
      <description>This week I'm bringing you an interview with Bruno Goncalves, a Moore-Sloan Data Science Fellow at NYU. As you’ll hear in the interview, Bruno is a longtime listener of the podcast. We were able to connect at the NY AI conference back in June after I noted on a previous show that I was interested in learning more about word2vec. Bruno graciously agreed to come on the show and walk us through an overview of word embeddings, word2vec and related ideas. He covers not only word2vec itself, but also related NLP concepts such as Skip-Gram, Continuous Bag of Words, Node2Vec, and TF-IDF. Notes for this show can be found at twimlai.com/talk/48.</description>
      <pubDate>Tue, 19 Sep 2017 01:04:42 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>48</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7aa6bc6c-ee98-11eb-9502-4ba25962040c/image/artworks-000243210479-cfavgt-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week i'm bringing you an interview from Brun…</itunes:subtitle>
      <itunes:summary>This week I'm bringing you an interview with Bruno Goncalves, a Moore-Sloan Data Science Fellow at NYU. As you’ll hear in the interview, Bruno is a longtime listener of the podcast. We were able to connect at the NY AI conference back in June after I noted on a previous show that I was interested in learning more about word2vec. Bruno graciously agreed to come on the show and walk us through an overview of word embeddings, word2vec and related ideas. He covers not only word2vec itself, but also related NLP concepts such as Skip-Gram, Continuous Bag of Words, Node2Vec, and TF-IDF. Notes for this show can be found at twimlai.com/talk/48.</itunes:summary>
      <content:encoded>
        <![CDATA[This week I'm bringing you an interview with Bruno Goncalves, a Moore-Sloan Data Science Fellow at NYU. As you’ll hear in the interview, Bruno is a longtime listener of the podcast. We were able to connect at the NY AI conference back in June after I noted on a previous show that I was interested in learning more about word2vec. Bruno graciously agreed to come on the show and walk us through an overview of word embeddings, word2vec and related ideas. He covers not only word2vec itself, but also related NLP concepts such as Skip-Gram, Continuous Bag of Words, Node2Vec, and TF-IDF. Notes for this show can be found at twimlai.com/talk/48.]]>
      </content:encoded>
      <itunes:duration>1939</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/342985290]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9126011423.mp3?updated=1629216860"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Evolutionary Algorithms in Machine Learning with Risto Miikkulainen - TWiML Talk #47</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/341898888-twiml-twiml-talk-47-evolutionary-algorithms-machine-learning-risto-miikkulainen.mp3</link>
      <description>My guest this week is Risto Miikkulainen, professor of computer science at UT-Austin and vice president of Research at Sentient Technologies. Risto came locked and loaded to discuss a topic that we've received a ton of requests for -- evolutionary algorithms. During our talk we discuss some of the things Sentient is working on in the financial services and retail fields, and we dig into the technology behind it, evolutionary algorithms, which is also the focus of Risto’s research at UT. I really enjoyed this interview and learned a ton, and I’m sure you will too! Notes for this show can be found at twimlai.com/talk/47.</description>
      <pubDate>Mon, 11 Sep 2017 16:57:12 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>47</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7ac5cc4c-ee98-11eb-9502-630072696671/image/artworks-000242167159-upy0ru-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest this week is Risto Miikkulainen, profess…</itunes:subtitle>
      <itunes:summary>My guest this week is Risto Miikkulainen, professor of computer science at UT-Austin and vice president of Research at Sentient Technologies. Risto came locked and loaded to discuss a topic that we've received a ton of requests for -- evolutionary algorithms. During our talk we discuss some of the things Sentient is working on in the financial services and retail fields, and we dig into the technology behind it, evolutionary algorithms, which is also the focus of Risto’s research at UT. I really enjoyed this interview and learned a ton, and I’m sure you will too! Notes for this show can be found at twimlai.com/talk/47.</itunes:summary>
      <content:encoded>
        <![CDATA[My guest this week is Risto Miikkulainen, professor of computer science at UT-Austin and vice president of Research at Sentient Technologies. Risto came locked and loaded to discuss a topic that we've received a ton of requests for -- evolutionary algorithms. During our talk we discuss some of the things Sentient is working on in the financial services and retail fields, and we dig into the technology behind it, evolutionary algorithms, which is also the focus of Risto’s research at UT. I really enjoyed this interview and learned a ton, and I’m sure you will too! Notes for this show can be found at twimlai.com/talk/47.]]>
      </content:encoded>
      <itunes:duration>3534</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/341898888]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3187092817.mp3?updated=1629216895"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Agile Machine Learning with Jennifer Prendki - TWiML Talk #46</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/341007747-twiml-twiml-talk-46-jennifer-prendki-agile-machine-learning-walmart.mp3</link>
      <description>My guest this week is Jennifer Prendki. That name might sound familiar, as she was one of the great speakers from my Future of Data Summit back in May. At the time, Jennifer was senior data science manager and principal data scientist at Walmart Labs, but she's since moved on to become head of data science at Atlassian. Back at the summit, Jennifer gave an awesome talk on what she calls Data Mixology, the slides for which you can find on the show notes page. My conversation with Jennifer begins with a recap of that talk. After that, we shift our focus to some of the practices she helped develop and implement at Walmart around the measurement and management of machine learning models in production, and more generally, building agile processes and teams for machine learning. The notes for this show can be found at twimlai.com/talk/46</description>
      <pubDate>Tue, 05 Sep 2017 15:01:16 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>46</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7ae63f4a-ee98-11eb-9502-ab7032004818/image/artworks-000241320316-54vi45-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest this week is Jennifer Prendki. That name…</itunes:subtitle>
      <itunes:summary>My guest this week is Jennifer Prendki. That name might sound familiar, as she was one of the great speakers from my Future of Data Summit back in May. At the time, Jennifer was senior data science manager and principal data scientist at Walmart Labs, but she's since moved on to become head of data science at Atlassian. Back at the summit, Jennifer gave an awesome talk on what she calls Data Mixology, the slides for which you can find on the show notes page. My conversation with Jennifer begins with a recap of that talk. After that, we shift our focus to some of the practices she helped develop and implement at Walmart around the measurement and management of machine learning models in production, and more generally, building agile processes and teams for machine learning. The notes for this show can be found at twimlai.com/talk/46</itunes:summary>
      <content:encoded>
        <![CDATA[My guest this week is Jennifer Prendki. That name might sound familiar, as she was one of the great speakers from my Future of Data Summit back in May. At the time, Jennifer was senior data science manager and principal data scientist at Walmart Labs, but she's since moved on to become head of data science at Atlassian. Back at the summit, Jennifer gave an awesome talk on what she calls Data Mixology, the slides for which you can find on the show notes page. My conversation with Jennifer begins with a recap of that talk. After that, we shift our focus to some of the practices she helped develop and implement at Walmart around the measurement and management of machine learning models in production, and more generally, building agile processes and teams for machine learning. The notes for this show can be found at twimlai.com/talk/46]]>
      </content:encoded>
      <itunes:duration>2918</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/341007747]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7303513932.mp3?updated=1629216883"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>LSTMs, Plus a Deep Learning History Lesson with Jürgen Schmidhuber - TWiML Talk #44</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/339910198-twiml-twiml-talk-44-jurgen-schmidhuber-lstms-plus-deep-learning-history-lesson.mp3</link>
      <description>This week we have a very special interview to share with you! Those of you who’ve been receiving my newsletter for a while might remember that while in Switzerland last month, I had the pleasure of interviewing Jurgen Schmidhuber in his lab, IDSIA, the Dalle Molle Institute for Artificial Intelligence Research in Lugano, Switzerland, where he serves as Scientific Director. In addition to his role at IDSIA, Jurgen is also Co-Founder and Chief Scientist of NNaisense, a company that is using AI to build large-scale neural network solutions for “superhuman perception and intelligent automation.” Jurgen is an interesting, accomplished, and in some circles controversial figure in the AI community, and we covered a lot of very interesting ground in our discussion, so much so that I couldn't truly unpack it all until I had a chance to sit with it after the fact. We talked a bunch about his work on neural networks, especially LSTMs, or Long Short-Term Memory networks, which are a key innovation behind many of the advances we’ve seen in deep learning and its application over the past few years. Along the way, Jurgen walks us through a deep learning history lesson that spans 50+ years. It was like walking back in time with the three-eyed raven. I know you’re really going to enjoy this one, and by the way, this is definitely a nerd alert show! For the show notes, visit twimlai.com/talk/44</description>
      <pubDate>Mon, 28 Aug 2017 22:43:26 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>44</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7b0766a2-ee98-11eb-9502-4fa02b65f906/image/artworks-000240249172-mli4ko-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week we have a very special interview to sha…</itunes:subtitle>
      <itunes:summary>This week we have a very special interview to share with you! Those of you who’ve been receiving my newsletter for a while might remember that while in Switzerland last month, I had the pleasure of interviewing Jurgen Schmidhuber in his lab, IDSIA, the Dalle Molle Institute for Artificial Intelligence Research in Lugano, Switzerland, where he serves as Scientific Director. In addition to his role at IDSIA, Jurgen is also Co-Founder and Chief Scientist of NNaisense, a company that is using AI to build large-scale neural network solutions for “superhuman perception and intelligent automation.” Jurgen is an interesting, accomplished, and in some circles controversial figure in the AI community, and we covered a lot of very interesting ground in our discussion, so much so that I couldn't truly unpack it all until I had a chance to sit with it after the fact. We talked a bunch about his work on neural networks, especially LSTMs, or Long Short-Term Memory networks, which are a key innovation behind many of the advances we’ve seen in deep learning and its application over the past few years. Along the way, Jurgen walks us through a deep learning history lesson that spans 50+ years. It was like walking back in time with the three-eyed raven. I know you’re really going to enjoy this one, and by the way, this is definitely a nerd alert show! For the show notes, visit twimlai.com/talk/44</itunes:summary>
      <content:encoded>
        <![CDATA[This week we have a very special interview to share with you! Those of you who’ve been receiving my newsletter for a while might remember that while in Switzerland last month, I had the pleasure of interviewing Jurgen Schmidhuber in his lab, IDSIA, the Dalle Molle Institute for Artificial Intelligence Research in Lugano, Switzerland, where he serves as Scientific Director. In addition to his role at IDSIA, Jurgen is also Co-Founder and Chief Scientist of NNaisense, a company that is using AI to build large-scale neural network solutions for “superhuman perception and intelligent automation.” Jurgen is an interesting, accomplished, and in some circles controversial figure in the AI community, and we covered a lot of very interesting ground in our discussion, so much so that I couldn't truly unpack it all until I had a chance to sit with it after the fact. We talked a bunch about his work on neural networks, especially LSTMs, or Long Short-Term Memory networks, which are a key innovation behind many of the advances we’ve seen in deep learning and its application over the past few years. Along the way, Jurgen walks us through a deep learning history lesson that spans 50+ years. It was like walking back in time with the three-eyed raven. I know you’re really going to enjoy this one, and by the way, this is definitely a nerd alert show! For the show notes, visit twimlai.com/talk/44]]>
      </content:encoded>
      <itunes:duration>3792</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/339910198]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8049935402.mp3?updated=1629216903"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Machine Teaching for Better Machine Learning with Mark Hammond - TWiML Talk #43</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/338831265-twiml-twiml-talk-043-mark-hammond-machine-teaching-better-machine-learning.mp3</link>
      <description>Today’s show, which concludes the first season of the Industrial AI Series, features my interview with Bonsai co-founder and CEO Mark Hammond. I sat down with Mark at Bonsai HQ a few weeks ago, and we had a great discussion while I was there. We touched on a ton of subjects throughout this talk, including his starting point in artificial intelligence, how Bonsai came about, and more. Mark also describes the role of what he calls “machine teaching” in delivering practical machine learning solutions, particularly for enterprise or industrial AI use cases. This was one of my favorite conversations, and I know you’ll enjoy it! The notes for this show can be found at twimlai.com/talk/43</description>
      <pubDate>Mon, 21 Aug 2017 16:21:33 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>43</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7b34d47a-ee98-11eb-9502-fff16ece5bf9/image/artworks-000239220395-ccev01-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today’s show, which concludes the first season of…</itunes:subtitle>
      <itunes:summary>Today’s show, which concludes the first season of the Industrial AI Series, features my interview with Bonsai co-founder and CEO Mark Hammond. I sat down with Mark at Bonsai HQ a few weeks ago, and we had a great discussion while I was there. We touched on a ton of subjects throughout this talk, including his starting point in artificial intelligence, how Bonsai came about, and more. Mark also describes the role of what he calls “machine teaching” in delivering practical machine learning solutions, particularly for enterprise or industrial AI use cases. This was one of my favorite conversations, and I know you’ll enjoy it! The notes for this show can be found at twimlai.com/talk/43</itunes:summary>
      <content:encoded>
        <![CDATA[Today’s show, which concludes the first season of the Industrial AI Series, features my interview with Bonsai co-founder and CEO Mark Hammond. I sat down with Mark at Bonsai HQ a few weeks ago and we had a great discussion while I was there. We touched on a ton of subjects throughout this talk, including his starting point in artificial intelligence, how Bonsai came about &amp; more. Mark also describes the role of what he calls “machine teaching” in delivering practical machine learning solutions, particularly for enterprise or industrial AI use cases. This was one of my favorite conversations; I know you’ll enjoy it! The notes for this show can be found at twimlai.com/talk/43]]>
      </content:encoded>
      <itunes:duration>3916</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/338831265]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6714733972.mp3?updated=1629216910"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Marrying Physics-Based and Data-Driven ML Models with Josh Bloom - TWiML Talk #42</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/337835247-twiml-twiml-talk-042-josh-bloom-marrying-physics-based-data-driven-ml-models.mp3</link>
      <description>Recently I had a chance to catch up with a friend and friend of the show, Josh Bloom, vice president of data &amp; analytics at GE Digital. If you’ve been listening for a while, you already know that Josh was on the show around this time last year, just prior to the acquisition of his company Wise.io by GE Digital. It was great to catch up with Josh on his journey within GE, and the work his team is doing around Industrial AI, now that they’re part of one of the world’s biggest industrial companies. We talk about some really interesting things in this show, including how his team is using autoencoders to create training datasets, and how they incorporate knowledge of physics and physical systems into their machine learning models. The notes for this show can be found at twimlai.com/talk/42.</description>
      <pubDate>Mon, 14 Aug 2017 15:18:50 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>42</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7b4f8e0a-ee98-11eb-9502-33fe1e55217a/image/artworks-000238237915-9j53qz-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Recently I had a chance to catch up with a friend…</itunes:subtitle>
      <itunes:summary>Recently I had a chance to catch up with a friend and friend of the show, Josh Bloom, vice president of data &amp; analytics at GE Digital. If you’ve been listening for a while, you already know that Josh was on the show around this time last year, just prior to the acquisition of his company Wise.io by GE Digital. It was great to catch up with Josh on his journey within GE, and the work his team is doing around Industrial AI, now that they’re part of one of the world’s biggest industrial companies. We talk about some really interesting things in this show, including how his team is using autoencoders to create training datasets, and how they incorporate knowledge of physics and physical systems into their machine learning models. The notes for this show can be found at twimlai.com/talk/42.</itunes:summary>
      <content:encoded>
        <![CDATA[Recently I had a chance to catch up with a friend and friend of the show, Josh Bloom, vice president of data &amp; analytics at GE Digital. If you’ve been listening for a while, you already know that Josh was on the show around this time last year, just prior to the acquisition of his company Wise.io by GE Digital. It was great to catch up with Josh on his journey within GE, and the work his team is doing around Industrial AI, now that they’re part of one of the world’s biggest industrial companies. We talk about some really interesting things in this show, including how his team is using autoencoders to create training datasets, and how they incorporate knowledge of physics and physical systems into their machine learning models. The notes for this show can be found at twimlai.com/talk/42.]]>
      </content:encoded>
      <itunes:duration>3169</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/337835247]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6782313095.mp3?updated=1629216899"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Cognitive Biases in Data Science with Drew Conway - TWiML Talk #39</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/336865377-twiml-twiml-talk-039-drew-conway-cognitive-biases-data-science.mp3</link>
      <description>This show features my interview with Drew Conway, whose Wrangle keynote could have been called “Confessions of a CIA Data Scientist.” The focus of our interview, and of Drew’s presentation, is an interesting set of observations he makes about the role of cognitive biases in data science. If your work involves making decisions or influencing behavior based on data-driven analysis--and it probably does or will--you’re going to want to hear what he has to say. A quick note before we dive in: As is the case with my other field recordings, there’s a bit of unavoidable background noise in this interview. Sorry about that! The show notes for this episode can be found at https://twimlai.com/talk/39</description>
      <pubDate>Sat, 05 Aug 2017 00:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>39</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7b7a81c8-ee98-11eb-9502-375c728ce32d/image/artworks-000237290063-zmnqo7-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This show features my interview with Drew Conway,…</itunes:subtitle>
      <itunes:summary>This show features my interview with Drew Conway, whose Wrangle keynote could have been called “Confessions of a CIA Data Scientist.” The focus of our interview, and of Drew’s presentation, is an interesting set of observations he makes about the role of cognitive biases in data science. If your work involves making decisions or influencing behavior based on data-driven analysis--and it probably does or will--you’re going to want to hear what he has to say. A quick note before we dive in: As is the case with my other field recordings, there’s a bit of unavoidable background noise in this interview. Sorry about that! The show notes for this episode can be found at https://twimlai.com/talk/39</itunes:summary>
      <content:encoded>
        <![CDATA[This show features my interview with Drew Conway, whose Wrangle keynote could have been called “Confessions of a CIA Data Scientist.” The focus of our interview, and of Drew’s presentation, is an interesting set of observations he makes about the role of cognitive biases in data science. If your work involves making decisions or influencing behavior based on data-driven analysis--and it probably does or will--you’re going to want to hear what he has to say. A quick note before we dive in: As is the case with my other field recordings, there’s a bit of unavoidable background noise in this interview. Sorry about that! The show notes for this episode can be found at https://twimlai.com/talk/39]]>
      </content:encoded>
      <itunes:duration>2060</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/336865377]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9340509781.mp3?updated=1629216870"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Data Pipelines at Zymergen with Airflow with Erin Shellman - TWiML Talk #41</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/336865372-twiml-twiml-talk-041-erin-shellman-data-pipelines-at-zymergen-with-airflow.mp3</link>
      <description>The show you’re listening to features my interview with Erin Shellman. Erin is a statistician and data science manager with Zymergen, a company using robots and machine learning to engineer better microbes. If you’re wondering what exactly that means, I was too, and we talk about it in the interview. Our conversation focuses on Zymergen’s use of Apache Airflow, an open-source data management platform originating at Airbnb, which Erin and her team use to create reliable, repeatable data pipelines for its machine learning applications. A quick note before we dive in: As is the case with my other field recordings, there’s a bit of unavoidable background noise in this interview. Sorry about that! The show notes for this episode can be found at https://twimlai.com/talk/41</description>
      <pubDate>Sat, 05 Aug 2017 00:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>41</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7b9668e8-ee98-11eb-9502-e3f2b5985a0c/image/artworks-000237291326-d57sbs-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The show you’re listening to features my intervie…</itunes:subtitle>
      <itunes:summary>The show you’re listening to features my interview with Erin Shellman. Erin is a statistician and data science manager with Zymergen, a company using robots and machine learning to engineer better microbes. If you’re wondering what exactly that means, I was too, and we talk about it in the interview. Our conversation focuses on Zymergen’s use of Apache Airflow, an open-source data management platform originating at Airbnb, which Erin and her team use to create reliable, repeatable data pipelines for its machine learning applications. A quick note before we dive in: As is the case with my other field recordings, there’s a bit of unavoidable background noise in this interview. Sorry about that! The show notes for this episode can be found at https://twimlai.com/talk/41</itunes:summary>
      <content:encoded>
        <![CDATA[The show you’re listening to features my interview with Erin Shellman. Erin is a statistician and data science manager with Zymergen, a company using robots and machine learning to engineer better microbes. If you’re wondering what exactly that means, I was too, and we talk about it in the interview. Our conversation focuses on Zymergen’s use of Apache Airflow, an open-source data management platform originating at Airbnb, which Erin and her team use to create reliable, repeatable data pipelines for its machine learning applications. A quick note before we dive in: As is the case with my other field recordings, there’s a bit of unavoidable background noise in this interview. Sorry about that! The show notes for this episode can be found at https://twimlai.com/talk/41]]>
      </content:encoded>
      <itunes:duration>2120</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/336865372]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9846972022.mp3?updated=1629216872"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Web Scale Engineering for Machine Learning with Sharath Rao - TWiML Talk #40</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/336865375-twiml-twiml-talk-040-sharath-rao-web-scale-engineering-for-machine-learning.mp3</link>
      <description>The show you’re about to listen to features my interview with Sharath Rao, Tech Lead Manager &amp; Machine Learning Engineer at Instacart. I reached out to Sharath about being on the show and was blown away when he replied that not only had he heard about the show, but that he was a fan and an avid listener. My conversation with him digs into some of the practical lessons and patterns he’s learned by building production-ready, web-scale data products based on machine learning models, including the search and recommendation systems at Instacart. We also spend a few minutes discussing our upcoming TWiML Paper Reading Meetup! A quick note before we dive in: As is the case with my other field recordings, there’s a bit of unavoidable background noise in this interview. Sorry about that! The show notes for this episode can be found at https://twimlai.com/talk/40.</description>
      <pubDate>Fri, 04 Aug 2017 00:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>40</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7baeb38a-ee98-11eb-9502-4708909a9807/image/artworks-000237290537-wlpe66-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>The show you’re about to listen to features my in…</itunes:subtitle>
      <itunes:summary>The show you’re about to listen to features my interview with Sharath Rao, Tech Lead Manager &amp; Machine Learning Engineer at Instacart. I reached out to Sharath about being on the show and was blown away when he replied that not only had he heard about the show, but that he was a fan and an avid listener. My conversation with him digs into some of the practical lessons and patterns he’s learned by building production-ready, web-scale data products based on machine learning models, including the search and recommendation systems at Instacart. We also spend a few minutes discussing our upcoming TWiML Paper Reading Meetup! A quick note before we dive in: As is the case with my other field recordings, there’s a bit of unavoidable background noise in this interview. Sorry about that! The show notes for this episode can be found at https://twimlai.com/talk/40.</itunes:summary>
      <content:encoded>
        <![CDATA[The show you’re about to listen to features my interview with Sharath Rao, Tech Lead Manager &amp; Machine Learning Engineer at Instacart. I reached out to Sharath about being on the show and was blown away when he replied that not only had he heard about the show, but that he was a fan and an avid listener. My conversation with him digs into some of the practical lessons and patterns he’s learned by building production-ready, web-scale data products based on machine learning models, including the search and recommendation systems at Instacart. We also spend a few minutes discussing our upcoming TWiML Paper Reading Meetup! A quick note before we dive in: As is the case with my other field recordings, there’s a bit of unavoidable background noise in this interview. Sorry about that! The show notes for this episode can be found at https://twimlai.com/talk/40.]]>
      </content:encoded>
      <itunes:duration>1894</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/336865375]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9254666649.mp3?updated=1629216863"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Learning for Warehouse Operations with Calvin Seward - TWiML Talk #38</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/335687032-twiml-twiml-talk-038-calvin-seward-deep-learning-warehouse-operations.mp3</link>
      <description>This week, I’m happy to bring you my interview with Calvin Seward, a research scientist with Berlin, Germany-based Zalando. While our American listeners might not know the name Zalando, they’re one of the largest e-commerce companies in Europe with a focus on fashion and shoes. Alongside his work at Zalando, Calvin is pursuing his doctorate at Johannes Kepler University in Linz, Austria. Our discussion, which continues our Industrial AI series, focuses on how Calvin’s team tackled an interesting warehouse optimization problem using deep learning. Calvin also gives his thoughts on the distinction between AI and ML, and the four P’s that he focuses on: Prestige, Products, Paper, and Patents. The notes for this show can be found at https://twimlai.com/talk/38.</description>
      <pubDate>Mon, 31 Jul 2017 19:49:19 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>38</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7bd6a6d8-ee98-11eb-9502-bbf61304009e/image/artworks-000236050454-f0j0if-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week, I’m happy to bring you my interview wi…</itunes:subtitle>
      <itunes:summary>This week, I’m happy to bring you my interview with Calvin Seward, a research scientist with Berlin, Germany-based Zalando. While our American listeners might not know the name Zalando, they’re one of the largest e-commerce companies in Europe with a focus on fashion and shoes. Alongside his work at Zalando, Calvin is pursuing his doctorate at Johannes Kepler University in Linz, Austria. Our discussion, which continues our Industrial AI series, focuses on how Calvin’s team tackled an interesting warehouse optimization problem using deep learning. Calvin also gives his thoughts on the distinction between AI and ML, and the four P’s that he focuses on: Prestige, Products, Paper, and Patents. The notes for this show can be found at https://twimlai.com/talk/38.</itunes:summary>
      <content:encoded>
        <![CDATA[This week, I’m happy to bring you my interview with Calvin Seward, a research scientist with Berlin, Germany-based Zalando. While our American listeners might not know the name Zalando, they’re one of the largest e-commerce companies in Europe with a focus on fashion and shoes. Alongside his work at Zalando, Calvin is pursuing his doctorate at Johannes Kepler University in Linz, Austria. Our discussion, which continues our Industrial AI series, focuses on how Calvin’s team tackled an interesting warehouse optimization problem using deep learning. Calvin also gives his thoughts on the distinction between AI and ML, and the four P’s that he focuses on: Prestige, Products, Paper, and Patents. The notes for this show can be found at https://twimlai.com/talk/38.]]>
      </content:encoded>
      <itunes:duration>2768</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/335687032]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8432752304.mp3?updated=1629216877"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Robotic Learning with Sergey Levine - TWiML Talk #37</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/334585235-twiml-twiml-talk-037-sergey-levine-deep-robotic-learning.mp3</link>
      <description>This week we continue our Industrial AI series with Sergey Levine, an Assistant Professor at UC Berkeley whose research focus is Deep Robotic Learning. Sergey is part of the same research team as a couple of our previous guests in this series, Chelsea Finn and Pieter Abbeel, and if the response we’ve seen to those shows is any indication, you’re going to love this episode! Sergey’s research interests, and our discussion, focus on how robotic learning techniques can be used to allow machines to autonomously acquire complex behavioral skills. We really dig into some of the details of how this is done, and I found that our conversation filled in a lot of gaps for me from the interviews with Pieter and Chelsea. By the way, this is definitely a nerd alert episode! Notes for this show can be found at twimlai.com/talk/37</description>
      <pubDate>Mon, 24 Jul 2017 15:46:32 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>37</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7bf2e99c-ee98-11eb-9502-a70a080bcee3/image/artworks-000234929762-zm57kj-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week we continue our Industrial AI series wi…</itunes:subtitle>
      <itunes:summary>This week we continue our Industrial AI series with Sergey Levine, an Assistant Professor at UC Berkeley whose research focus is Deep Robotic Learning. Sergey is part of the same research team as a couple of our previous guests in this series, Chelsea Finn and Pieter Abbeel, and if the response we’ve seen to those shows is any indication, you’re going to love this episode! Sergey’s research interests, and our discussion, focus on how robotic learning techniques can be used to allow machines to autonomously acquire complex behavioral skills. We really dig into some of the details of how this is done, and I found that our conversation filled in a lot of gaps for me from the interviews with Pieter and Chelsea. By the way, this is definitely a nerd alert episode! Notes for this show can be found at twimlai.com/talk/37</itunes:summary>
      <content:encoded>
        <![CDATA[This week we continue our Industrial AI series with Sergey Levine, an Assistant Professor at UC Berkeley whose research focus is Deep Robotic Learning. Sergey is part of the same research team as a couple of our previous guests in this series, Chelsea Finn and Pieter Abbeel, and if the response we’ve seen to those shows is any indication, you’re going to love this episode! Sergey’s research interests, and our discussion, focus on how robotic learning techniques can be used to allow machines to autonomously acquire complex behavioral skills. We really dig into some of the details of how this is done, and I found that our conversation filled in a lot of gaps for me from the interviews with Pieter and Chelsea. By the way, this is definitely a nerd alert episode! Notes for this show can be found at twimlai.com/talk/37]]>
      </content:encoded>
      <itunes:duration>2779</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/334585235]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1043213203.mp3?updated=1629216883"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Smart Buildings &amp; IoT with Yodit Stanton - TWiML Talk #36</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/333561277-twiml-twiml-talk-36-yodit-stanton-smart-building-iot.mp3</link>
      <description>After a brief hiatus, the Industrial AI Series is making its triumphant return! Our guest this week is Yodit Stanton, a self-described Data Nerd, and the Founder &amp; CEO of OpenSensors.io. OpenSensors.io is a real-time data exchange for IoT that enables anyone to publish and subscribe to real-time open data in order to build higher-order smart systems and better understand the world around them. Our discussion focuses on Smart Buildings and how they’re enabled by IoT and machine learning techniques. The notes for this show can be found at twimlai.com/talk/36</description>
      <pubDate>Mon, 17 Jul 2017 15:02:17 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>36</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7c1073b8-ee98-11eb-9502-53bc4d60d875/image/artworks-000233954999-fig7uk-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>After a brief hiatus, the Industrial AI Series is…</itunes:subtitle>
      <itunes:summary>After a brief hiatus, the Industrial AI Series is making its triumphant return! Our guest this week is Yodit Stanton, a self-described Data Nerd, and the Founder &amp; CEO of OpenSensors.io. OpenSensors.io is a real-time data exchange for IoT that enables anyone to publish and subscribe to real-time open data in order to build higher-order smart systems and better understand the world around them. Our discussion focuses on Smart Buildings and how they’re enabled by IoT and machine learning techniques. The notes for this show can be found at twimlai.com/talk/36</itunes:summary>
      <content:encoded>
        <![CDATA[After a brief hiatus, the Industrial AI Series is making its triumphant return! Our guest this week is Yodit Stanton, a self-described Data Nerd, and the Founder &amp; CEO of OpenSensors.io. OpenSensors.io is a real-time data exchange for IoT that enables anyone to publish and subscribe to real-time open data in order to build higher-order smart systems and better understand the world around them. Our discussion focuses on Smart Buildings and how they’re enabled by IoT and machine learning techniques. The notes for this show can be found at twimlai.com/talk/36]]>
      </content:encoded>
      <itunes:duration>3196</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/333561277]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3022767307.mp3?updated=1629216899"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Expressive AI - Generated Music With Google's Performance RNN - Doug Eck - TWiML Talk #32</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/332532595-twiml-twiml-talk-32-doug-eck-expressive-ai-generated-music-googles-performance-rnn.mp3</link>
      <description>My guest for this second show in our O’Reilly AI series is Doug Eck of Google Brain. Doug did a keynote at the O’Reilly conference on Magenta, Google’s project for melding machine learning and the arts. Magenta’s goal is to produce open-source tools and models that help people in their personal creative processes. Doug’s research starts with using so-called “generative” machine learning models to create engaging media. Additionally, he is working on how to bring other aspects of the creative process into play. We talk about the newly announced Performance RNN project, which uses neural networks to create expressive, AI-generated music. We also touch on QuickDraw, a project by Google AI Experiments, in which users, as Doug describes it, “play Pictionary” with a visual classifier. We dig into what he foresees as possibilities for Magenta: machine learning models eventually developing storylines, generative models for media, and creative coding. The notes for this episode can be found at https://twimlai.com/talk/32.</description>
      <pubDate>Wed, 05 Jul 2017 00:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>32</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7c78ebaa-ee98-11eb-9502-8f7703e5a0a3/image/artworks-000232940764-msbtm3-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest for this second show in our O’Reilly AI …</itunes:subtitle>
      <itunes:summary>My guest for this second show in our O’Reilly AI series is Doug Eck of Google Brain. Doug did a keynote at the O’Reilly conference on Magenta, Google’s project for melding machine learning and the arts. Magenta’s goal is to produce open-source tools and models that help people in their personal creative processes. Doug’s research starts with using so-called “generative” machine learning models to create engaging media. Additionally, he is working on how to bring other aspects of the creative process into play. We talk about the newly announced Performance RNN project, which uses neural networks to create expressive, AI-generated music. We also touch on QuickDraw, a project by Google AI Experiments, in which users, as Doug describes it, “play Pictionary” with a visual classifier. We dig into what he foresees as possibilities for Magenta: machine learning models eventually developing storylines, generative models for media, and creative coding. The notes for this episode can be found at https://twimlai.com/talk/32.</itunes:summary>
      <content:encoded>
        <![CDATA[My guest for this second show in our O’Reilly AI series is Doug Eck of Google Brain. Doug did a keynote at the O’Reilly conference on Magenta, Google’s project for melding machine learning and the arts. Magenta’s goal is to produce open-source tools and models that help people in their personal creative processes. Doug’s research starts with using so-called “generative” machine learning models to create engaging media. Additionally, he is working on how to bring other aspects of the creative process into play. We talk about the newly announced Performance RNN project, which uses neural networks to create expressive, AI-generated music. We also touch on QuickDraw, a project by Google AI Experiments, in which users, as Doug describes it, “play Pictionary” with a visual classifier. We dig into what he foresees as possibilities for Magenta: machine learning models eventually developing storylines, generative models for media, and creative coding. The notes for this episode can be found at https://twimlai.com/talk/32.]]>
      </content:encoded>
      <itunes:duration>2778</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/332532595]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5046517308.mp3?updated=1629216891"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Enhancing Customer Experiences With Emotional AI with Rana El Kaliouby - TWiML Talk #35</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/332544075-twiml-twiml-talk-035-rana-el-kaliouby-enhancing-customer-experiences-with-emotional-ai.mp3</link>
      <description>My guest for this show is Rana el Kaliouby. Rana is co-founder and CEO of Affectiva. Affectiva, as Rana puts it, "is on a mission to humanize technology by bringing in artificial emotional intelligence". If you liked my conversation about Emotional AI with Pascale Fung from last year’s O’Reilly AI conference, you’re going to love this one. My conversation with Rana kind of picks up where the previous one left off, with a focus on how her company is bringing Artificial Emotional Intelligence services to market. Rana and her team have developed a machine learning / computer vision platform that can use the camera on any device to read your facial expressions in real time, then map them to an emotional state. Using data science to mine the world’s largest emotion repository, Affectiva has collected over 5.5 million pieces of emotional expression data to date, from laptop, driving, and cellular interactions. Understanding the importance of personal privacy, Rana and her co-founder Rosalind Wright Picard have vowed to shy away from partnerships that would subject consumers to unwitting surveillance, a commendable effort. The notes for this show can be found at https://twimlai.com/talk/35</description>
      <pubDate>Wed, 05 Jul 2017 00:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>35</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7c2f469e-ee98-11eb-9502-bb1e44097ead/image/artworks-000232933267-v46ck4-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest for this show is Rana el Kaliouby. Rana …</itunes:subtitle>
      <itunes:summary>My guest for this show is Rana el Kaliouby. Rana is co-founder and CEO of Affectiva. Affectiva, as Rana puts it, "is on a mission to humanize technology by bringing in artificial emotional intelligence". If you liked my conversation about Emotional AI with Pascale Fung from last year’s O’Reilly AI conference, you’re going to love this one. My conversation with Rana kind of picks up where the previous one left off, with a focus on how her company is bringing Artificial Emotional Intelligence services to market. Rana and her team have developed a machine learning / computer vision platform that can use the camera on any device to read your facial expressions in real time, then map them to an emotional state. Using data science to mine the world’s largest emotion repository, Affectiva has collected over 5.5 million pieces of emotional expression data to date, from laptop, driving, and cellular interactions. Understanding the importance of personal privacy, Rana and her co-founder Rosalind Wright Picard have vowed to shy away from partnerships that would subject consumers to unwitting surveillance, a commendable effort. The notes for this show can be found at https://twimlai.com/talk/35</itunes:summary>
      <content:encoded>
        <![CDATA[My guest for this show is Rana el Kaliouby. Rana is co-founder and CEO of Affectiva. Affectiva, as Rana puts it, "is on a mission to humanize technology by bringing in artificial emotional intelligence". If you liked my conversation about Emotional AI with Pascale Fung from last year’s O’Reilly AI conference, you’re going to love this one. My conversation with Rana kind of picks up where the previous one left off, with a focus on how her company is bringing Artificial Emotional Intelligence services to market. Rana and her team have developed a machine learning / computer vision platform that can use the camera on any device to read your facial expressions in real time, then map them to an emotional state. Using data science to mine the world’s largest emotion repository, Affectiva has collected over 5.5 million pieces of emotional expression data to date, from laptop, driving, and cellular interactions. Understanding the importance of personal privacy, Rana and her co-founder Rosalind Wright Picard have vowed to shy away from partnerships that would subject consumers to unwitting surveillance, a commendable effort. The notes for this show can be found at https://twimlai.com/talk/35]]>
      </content:encoded>
      <itunes:duration>2001</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/332544075]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2338747874.mp3?updated=1629216866"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Video Object Detection At Scale with Reza Zadeh - TWiML Talk #34</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/332532584-twiml-twiml-talk-034-reza-zadeh-video-object-detection-at-scale.mp3</link>
      <description>My guest for the fourth show in the O'Reilly AI Series is Reza Zadeh. Reza is an adjunct professor of computational mathematics at Stanford University and founder and CEO of the startup Matroid. Reza has a background in machine translation and distributed machine learning, along with having helped build Apache Spark and the "Who to Follow" feature on Twitter, which is based on a chapter from his PhD thesis. Our conversation focused on some of the challenges and approaches to scaling deep learning, both in general and in the context of his company’s video object detection service. We also spoke about the advancement of computer vision technologies, the use of CPUs and GPUs, and the upcoming shift to TPUs, and we get below the surface on Apache Spark.</description>
      <pubDate>Wed, 05 Jul 2017 00:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>34</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7ccd9574-ee98-11eb-9502-237da16186bd/image/artworks-000232922375-5cbnqa-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest for the fourth show in the O'Reilly AI S…</itunes:subtitle>
      <itunes:summary>My guest for the fourth show in the O'Reilly AI Series is Reza Zadeh. Reza is an adjunct professor of computational mathematics at Stanford University and founder and CEO of the startup Matroid. Reza has a background in machine translation and distributed machine learning, along with having helped build Apache Spark and the "Who to Follow" feature on Twitter, which is based on a chapter from his PhD thesis. Our conversation focused on some of the challenges and approaches to scaling deep learning, both in general and in the context of his company’s video object detection service. We also spoke about the advancement of computer vision technologies, the use of CPUs and GPUs, and the upcoming shift to TPUs, and we get below the surface on Apache Spark.</itunes:summary>
      <content:encoded>
        <![CDATA[My guest for the fourth show in the O'Reilly AI Series is Reza Zadeh. Reza is an adjunct professor of computational mathematics at Stanford University and founder and CEO of the startup Matroid. Reza has a background in machine translation and distributed machine learning, along with having helped build Apache Spark and the "Who to Follow" feature on Twitter, which is based on a chapter from his PhD thesis. Our conversation focused on some of the challenges and approaches to scaling deep learning, both in general and in the context of his company’s video object detection service. We also spoke about the advancement of computer vision technologies, the use of CPUs and GPUs, and the upcoming shift to TPUs, and we get below the surface on Apache Spark.]]>
      </content:encoded>
      <itunes:duration>3151</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/332532584]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7354851167.mp3?updated=1629216899"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Intel Nervana Update + Productizing AI Research with Naveen Rao And Hanlin Tang - TWiML Talk #31</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/332532596-twiml-twiml-talk-031-naveen-rao-hanlin-tang-intel-nervana-update-productizing-ai-research.mp3</link>
      <description>I talked about Intel’s acquisition of Nervana Systems on the podcast when it happened almost a year ago, so I was super excited to have an opportunity to sit down with Nervana co-founder Naveen Rao, who now leads Intel’s newly formed AI Products Group, for the first show in our O'Reilly AI series. We talked about how Intel plans to extend its leadership position in general purpose compute into the AI realm by delivering silicon designed specifically for AI; end-to-end solutions including the cloud, enterprise data center, and the edge; and tools that let customers quickly productize and scale AI-based solutions. I also spoke with Hanlin Tang, an algorithms engineer at Intel’s AIPG, about two tools announced at the conference: version 2.0 of Intel Nervana’s deep learning framework Neon and Nervana Graph, a new toolset for expressing and running deep learning applications as framework- and hardware-independent computational graphs. Nervana Graph in particular sounds like a very interesting project, not to mention a smart move for Intel, and I’d encourage folks to take a look at their GitHub repo. The show notes for this episode can be found at https://twimlai.com/talk/31</description>
      <pubDate>Wed, 05 Jul 2017 00:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>31</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7c575382-ee98-11eb-9502-9f221ca046dd/image/artworks-000232921992-t2zacb-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>I talked about Intel’s acquisition of Nervana Sys…</itunes:subtitle>
      <itunes:summary>I talked about Intel’s acquisition of Nervana Systems on the podcast when it happened almost a year ago, so I was super excited to have an opportunity to sit down with Nervana co-founder Naveen Rao, who now leads Intel’s newly formed AI Products Group, for the first show in our O'Reilly AI series. We talked about how Intel plans to extend its leadership position in general purpose compute into the AI realm by delivering silicon designed specifically for AI; end-to-end solutions including the cloud, enterprise data center, and the edge; and tools that let customers quickly productize and scale AI-based solutions. I also spoke with Hanlin Tang, an algorithms engineer at Intel’s AIPG, about two tools announced at the conference: version 2.0 of Intel Nervana’s deep learning framework Neon and Nervana Graph, a new toolset for expressing and running deep learning applications as framework- and hardware-independent computational graphs. Nervana Graph in particular sounds like a very interesting project, not to mention a smart move for Intel, and I’d encourage folks to take a look at their GitHub repo. The show notes for this episode can be found at https://twimlai.com/talk/31</itunes:summary>
      <content:encoded>
        <![CDATA[I talked about Intel’s acquisition of Nervana Systems on the podcast when it happened almost a year ago, so I was super excited to have an opportunity to sit down with Nervana co-founder Naveen Rao, who now leads Intel’s newly formed AI Products Group, for the first show in our O'Reilly AI series. We talked about how Intel plans to extend its leadership position in general purpose compute into the AI realm by delivering silicon designed specifically for AI; end-to-end solutions including the cloud, enterprise data center, and the edge; and tools that let customers quickly productize and scale AI-based solutions. I also spoke with Hanlin Tang, an algorithms engineer at Intel’s AIPG, about two tools announced at the conference: version 2.0 of Intel Nervana’s deep learning framework Neon and Nervana Graph, a new toolset for expressing and running deep learning applications as framework- and hardware-independent computational graphs. Nervana Graph in particular sounds like a very interesting project, not to mention a smart move for Intel, and I’d encourage folks to take a look at their GitHub repo. The show notes for this episode can be found at https://twimlai.com/talk/31]]>
      </content:encoded>
      <itunes:duration>2290</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/332532596]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3785011356.mp3?updated=1629216864"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>The Power Of Probabilistic Programming with Ben Vigoda - TWiML Talk #33</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/332532588-twiml-twiml-talk-033-ben-vigoda-power-probabilistic-programming.mp3</link>
      <description>My guest for this third episode in the O'Reilly AI series is Ben Vigoda. Ben is the founder and CEO of Gamalon, a DARPA-funded startup working on Bayesian Program Synthesis. In the show, we dive into what exactly this means and how it enables what Ben calls idea learning. Gamalon's first application structures unstructured data — input a paragraph or phrase of unstructured text and output a structured spreadsheet/database row or API call. This applies to a wide range of data challenges, including enterprise product and customer information, AI and digital assistants, and many others. Before Gamalon, Ben was co-founder and CEO of Lyric Semiconductor, Inc., which created the first microprocessor architectures dedicated to statistical machine learning. The company was based on his PhD thesis at MIT and acquired by Analog Devices. In today’s talk we discuss probabilistic programming, his new approach to deep learning, posterior distributions, the difference between sampling methods and variational methods, and how solvers work in the system. Nerd alert: We go pretty deep in this discussion. The notes for this show can be found at https://twimlai.com/talk/33</description>
      <pubDate>Wed, 05 Jul 2017 00:00:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>33</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7ca811e6-ee98-11eb-9502-1336e002407d/image/artworks-000232922347-13a21w-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest for this third episode in the O'Reilly A…</itunes:subtitle>
      <itunes:summary>My guest for this third episode in the O'Reilly AI series is Ben Vigoda. Ben is the founder and CEO of Gamalon, a DARPA-funded startup working on Bayesian Program Synthesis. In the show, we dive into what exactly this means and how it enables what Ben calls idea learning. Gamalon's first application structures unstructured data — input a paragraph or phrase of unstructured text and output a structured spreadsheet/database row or API call. This applies to a wide range of data challenges, including enterprise product and customer information, AI and digital assistants, and many others. Before Gamalon, Ben was co-founder and CEO of Lyric Semiconductor, Inc., which created the first microprocessor architectures dedicated to statistical machine learning. The company was based on his PhD thesis at MIT and acquired by Analog Devices. In today’s talk we discuss probabilistic programming, his new approach to deep learning, posterior distributions, the difference between sampling methods and variational methods, and how solvers work in the system. Nerd alert: We go pretty deep in this discussion. The notes for this show can be found at https://twimlai.com/talk/33</itunes:summary>
      <content:encoded>
        <![CDATA[My guest for this third episode in the O'Reilly AI series is Ben Vigoda. Ben is the founder and CEO of Gamalon, a DARPA-funded startup working on Bayesian Program Synthesis. In the show, we dive into what exactly this means and how it enables what Ben calls idea learning. Gamalon's first application structures unstructured data — input a paragraph or phrase of unstructured text and output a structured spreadsheet/database row or API call. This applies to a wide range of data challenges, including enterprise product and customer information, AI and digital assistants, and many others. Before Gamalon, Ben was co-founder and CEO of Lyric Semiconductor, Inc., which created the first microprocessor architectures dedicated to statistical machine learning. The company was based on his PhD thesis at MIT and acquired by Analog Devices. In today’s talk we discuss probabilistic programming, his new approach to deep learning, posterior distributions, the difference between sampling methods and variational methods, and how solvers work in the system. Nerd alert: We go pretty deep in this discussion. The notes for this show can be found at https://twimlai.com/talk/33]]>
      </content:encoded>
      <itunes:duration>2554</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/332532588]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9323231099.mp3?updated=1629216888"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Natural Language Understanding for Amazon Alexa with Zornitsa Kozareva - TWiML Talk #30</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/330761953-twiml-twiml-talk-030-natural-language-understanding-amazon-alexa-zornitsa-kozareva.mp3</link>
      <description>Our guest this week is Zornitsa Kozareva, Manager of Machine Learning with Amazon Web Services Deep Learning, where she leads a group focused on natural language processing and dialogue systems for products like Alexa and Lex, the latter of which we introduce in the podcast. We spend most of our time talking through the architecture of modern Natural Language Understanding systems, including the role of deep learning, and some of the various ways folks are working to overcome the challenges in this field, such as understanding human intent. If you’re interested in this field she mentions the AWS Chatbot Challenge, which you’ve still got a couple more weeks to participate in. The notes for this show can be found at twimlai.com/talk/30.</description>
      <pubDate>Thu, 29 Jun 2017 18:10:47 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>30</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7ceec67c-ee98-11eb-9502-77dc6acc9fb1/image/artworks-000231194076-q1l8a7-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Our guest this week is Zornitsa Kozareva, Manager…</itunes:subtitle>
      <itunes:summary>Our guest this week is Zornitsa Kozareva, Manager of Machine Learning with Amazon Web Services Deep Learning, where she leads a group focused on natural language processing and dialogue systems for products like Alexa and Lex, the latter of which we introduce in the podcast. We spend most of our time talking through the architecture of modern Natural Language Understanding systems, including the role of deep learning, and some of the various ways folks are working to overcome the challenges in this field, such as understanding human intent. If you’re interested in this field she mentions the AWS Chatbot Challenge, which you’ve still got a couple more weeks to participate in. The notes for this show can be found at twimlai.com/talk/30.</itunes:summary>
      <content:encoded>
        <![CDATA[Our guest this week is Zornitsa Kozareva, Manager of Machine Learning with Amazon Web Services Deep Learning, where she leads a group focused on natural language processing and dialogue systems for products like Alexa and Lex, the latter of which we introduce in the podcast. We spend most of our time talking through the architecture of modern Natural Language Understanding systems, including the role of deep learning, and some of the various ways folks are working to overcome the challenges in this field, such as understanding human intent. If you’re interested in this field she mentions the AWS Chatbot Challenge, which you’ve still got a couple more weeks to participate in. The notes for this show can be found at twimlai.com/talk/30.]]>
      </content:encoded>
      <itunes:duration>3305</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/330761953]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3117551542.mp3?updated=1629216900"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Robotic Perception and Control with Chelsea Finn - TWiML Talk #29</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/329696513-twiml-twiml-talk-029-robotic-perception-control-chelsea-finn.mp3</link>
      <description>This week we continue our series on industrial applications of machine learning and AI with a conversation with Chelsea Finn, a PhD student at UC Berkeley. Chelsea’s research is focused on machine learning for robotic perception and control. Despite being early in her career, Chelsea is an accomplished researcher with more than 14 published papers in the past 2 years, on subjects like Deep Visual Foresight, Model-Agnostic Meta-Learning, and Visuomotor Learning to name a few, all of which we discuss in the show, along with topics like zero-shot, one-shot and few-shot learning. I’d also like to give a shout out to Shreyas, a listener who wrote in to request that we interview a current PhD student about their journey and experiences. Chelsea and I spend some time at the end of the interview talking about this, and she has some great advice for current and prospective PhD students, as well as independent learners in the field. During this part of the discussion I wonder out loud if any listeners would be interested in forming a virtual paper reading club of some sort. I’m not sure yet exactly what this would look like, but please drop a comment in the show notes if you’re interested. I'm going to once again deploy the Nerd Alert for this episode; Chelsea and I really dig deep into these learning methods and techniques, and this conversation gets pretty technical at times, to the point that I had a tough time keeping up myself. The notes for this show can be found at twimlai.com/talk/29</description>
      <pubDate>Fri, 23 Jun 2017 19:25:43 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>29</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7d12b898-ee98-11eb-9502-87e2a5008a04/image/artworks-000230209738-qh5abd-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week we continue our series on industrial ap…</itunes:subtitle>
      <itunes:summary>This week we continue our series on industrial applications of machine learning and AI with a conversation with Chelsea Finn, a PhD student at UC Berkeley. Chelsea’s research is focused on machine learning for robotic perception and control. Despite being early in her career, Chelsea is an accomplished researcher with more than 14 published papers in the past 2 years, on subjects like Deep Visual Foresight, Model-Agnostic Meta-Learning, and Visuomotor Learning to name a few, all of which we discuss in the show, along with topics like zero-shot, one-shot and few-shot learning. I’d also like to give a shout out to Shreyas, a listener who wrote in to request that we interview a current PhD student about their journey and experiences. Chelsea and I spend some time at the end of the interview talking about this, and she has some great advice for current and prospective PhD students, as well as independent learners in the field. During this part of the discussion I wonder out loud if any listeners would be interested in forming a virtual paper reading club of some sort. I’m not sure yet exactly what this would look like, but please drop a comment in the show notes if you’re interested. I'm going to once again deploy the Nerd Alert for this episode; Chelsea and I really dig deep into these learning methods and techniques, and this conversation gets pretty technical at times, to the point that I had a tough time keeping up myself. The notes for this show can be found at twimlai.com/talk/29</itunes:summary>
      <content:encoded>
        <![CDATA[This week we continue our series on industrial applications of machine learning and AI with a conversation with Chelsea Finn, a PhD student at UC Berkeley. Chelsea’s research is focused on machine learning for robotic perception and control. Despite being early in her career, Chelsea is an accomplished researcher with more than 14 published papers in the past 2 years, on subjects like Deep Visual Foresight, Model-Agnostic Meta-Learning, and Visuomotor Learning to name a few, all of which we discuss in the show, along with topics like zero-shot, one-shot and few-shot learning. I’d also like to give a shout out to Shreyas, a listener who wrote in to request that we interview a current PhD student about their journey and experiences. Chelsea and I spend some time at the end of the interview talking about this, and she has some great advice for current and prospective PhD students, as well as independent learners in the field. During this part of the discussion I wonder out loud if any listeners would be interested in forming a virtual paper reading club of some sort. I’m not sure yet exactly what this would look like, but please drop a comment in the show notes if you’re interested. I'm going to once again deploy the Nerd Alert for this episode; Chelsea and I really dig deep into these learning methods and techniques, and this conversation gets pretty technical at times, to the point that I had a tough time keeping up myself. The notes for this show can be found at twimlai.com/talk/29]]>
      </content:encoded>
      <itunes:duration>3286</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/329696513]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6196210316.mp3?updated=1629216882"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Reinforcement Learning Deep Dive with Pieter Abbeel - TWiML Talk #28</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/328458110-twiml-twiml-talk-028-reinforcement-learning-deep-dive-pieter-abbeel.mp3</link>
      <description>This week our guest is Pieter Abbeel, Assistant Professor at UC Berkeley, Research Scientist at OpenAI, and Cofounder of Gradescope. Pieter has an extensive background in AI research, going way back to his days as Andrew Ng’s first PhD student at Stanford. His research today is focused on deep learning for robotics. During this conversation, Pieter and I really dig into reinforcement learning, a technique for allowing robots (or AIs) to learn through their own trial and error. Nerd alert!! This conversation explores cutting edge research with one of the leading researchers in the field and, as a result, it gets pretty technical at times. I try to uplevel it when I can keep up myself, so hang in there. I promise that you’ll learn a ton if you keep with it. The notes for this show can be found at twimlai.com/talk/28</description>
      <pubDate>Sat, 17 Jun 2017 00:14:40 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>28</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7d302efa-ee98-11eb-9502-4b45a6f11980/image/artworks-000228864495-jvrapp-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week our guest is Pieter Abbeel, Assistant P…</itunes:subtitle>
      <itunes:summary>This week our guest is Pieter Abbeel, Assistant Professor at UC Berkeley, Research Scientist at OpenAI, and Cofounder of Gradescope. Pieter has an extensive background in AI research, going way back to his days as Andrew Ng’s first PhD student at Stanford. His research today is focused on deep learning for robotics. During this conversation, Pieter and I really dig into reinforcement learning, a technique for allowing robots (or AIs) to learn through their own trial and error. Nerd alert!! This conversation explores cutting edge research with one of the leading researchers in the field and, as a result, it gets pretty technical at times. I try to uplevel it when I can keep up myself, so hang in there. I promise that you’ll learn a ton if you keep with it. The notes for this show can be found at twimlai.com/talk/28</itunes:summary>
      <content:encoded>
        <![CDATA[This week our guest is Pieter Abbeel, Assistant Professor at UC Berkeley, Research Scientist at OpenAI, and Cofounder of Gradescope. Pieter has an extensive background in AI research, going way back to his days as Andrew Ng’s first PhD student at Stanford. His research today is focused on deep learning for robotics. During this conversation, Pieter and I really dig into reinforcement learning, a technique for allowing robots (or AIs) to learn through their own trial and error. Nerd alert!! This conversation explores cutting-edge research with one of the leading researchers in the field and, as a result, it gets pretty technical at times. I try to uplevel it when I can keep up myself, so hang in there. I promise that you’ll learn a ton if you stick with it. The notes for this show can be found at twimlai.com/talk/28]]>
      </content:encoded>
      <itunes:duration>3140</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/328458110]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2144081880.mp3?updated=1629216892"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Intelligent Autonomous Robots with Ilia Baranov - TWiML Talk #27</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/327291635-twiml-twiml-talk-027-intelligent-autonomous-robots-ilia-baranov.mp3</link>
      <description>Our first guest in the Industrial AI series is Ilia Baranov, engineering manager at Clearpath Robotics. Ilia is responsible for setting the engineering direction for all of Clearpath’s research platforms. Ilia likes to describe his role at the company as “both enabling and preventing the robot revolution.” He’s a longtime contributor to the Open Source Robotics Community and ROS, the open-source Robot Operating System. He is also the managing engineer of the PR2 support team at Clearpath and leads the technical demonstration group. In our conversation we cover a lot of ground, including what it really means to field autonomous robots, the use of autonomous robots in research and industrial environments, the different approaches and challenges to achieving autonomy, and much more! The notes for this show are available at twimlai.com/talk/27, and for more information on the Industrial AI Series, visit twimlai.com/IndustrialAI.</description>
      <pubDate>Fri, 09 Jun 2017 14:58:10 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>27</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7d528356-ee98-11eb-9502-4b1acfe26567/image/artworks-000227629895-4gpce3-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Our first guest in the Industrial AI series is Il…</itunes:subtitle>
      <itunes:summary>Our first guest in the Industrial AI series is Ilia Baranov, engineering manager at Clearpath Robotics. Ilia is responsible for setting the engineering direction for all of Clearpath’s research platforms. Ilia likes to describe his role at the company as “both enabling and preventing the robot revolution.” He’s a longtime contributor to the Open Source Robotics Community and ROS, the open-source Robot Operating System. He is also the managing engineer of the PR2 support team at Clearpath and leads the technical demonstration group. In our conversation we cover a lot of ground, including what it really means to field autonomous robots, the use of autonomous robots in research and industrial environments, the different approaches and challenges to achieving autonomy, and much more! The notes for this show are available at twimlai.com/talk/27, and for more information on the Industrial AI Series, visit twimlai.com/IndustrialAI.</itunes:summary>
      <content:encoded>
        <![CDATA[Our first guest in the Industrial AI series is Ilia Baranov, engineering manager at Clearpath Robotics. Ilia is responsible for setting the engineering direction for all of Clearpath’s research platforms. Ilia likes to describe his role at the company as “both enabling and preventing the robot revolution.” He’s a longtime contributor to the Open Source Robotics Community and ROS, the open-source Robot Operating System. He is also the managing engineer of the PR2 support team at Clearpath and leads the technical demonstration group. In our conversation we cover a lot of ground, including what it really means to field autonomous robots, the use of autonomous robots in research and industrial environments, the different approaches and challenges to achieving autonomy, and much more! The notes for this show are available at twimlai.com/talk/27, and for more information on the Industrial AI Series, visit twimlai.com/IndustrialAI.]]>
      </content:encoded>
      <itunes:duration>3220</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/327291635]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7422549100.mp3?updated=1629216889"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Global AI Trends with Ben Lorica - TWiML Talk #26</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/325746542-twiml-twiml-talk-026-global-ai-trends-ben-lorica.mp3</link>
      <description>This week I’ve invited my friend Ben Lorica onto the show. Ben is Chief Data Scientist for O’Reilly Media, and Program Director of Strata Data &amp; the O'Reilly A.I. conference. Ben has worked on analytics and machine learning in the finance and retail industries, and serves as an advisor for nearly a dozen startups. In his role at O’Reilly he’s responsible for the content for 7 major conferences around the world each year. In the show we discuss all of that, touching on how publishers can take advantage of machine learning and data mining, how the role of “data scientist” is evolving and the emergence of the machine learning engineer, and a few of the hot technologies, trends and companies that he’s seeing arise around the world. The notes for this show can be found at twimlai.com/talk/26</description>
      <pubDate>Fri, 02 Jun 2017 19:26:10 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>26</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7d71408e-ee98-11eb-9502-27062ec4c374/image/artworks-000225670931-xghicm-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week I’ve invited my friend Ben Lorica onto …</itunes:subtitle>
      <itunes:summary>This week I’ve invited my friend Ben Lorica onto the show. Ben is Chief Data Scientist for O’Reilly Media, and Program Director of Strata Data &amp; the O'Reilly A.I. conference. Ben has worked on analytics and machine learning in the finance and retail industries, and serves as an advisor for nearly a dozen startups. In his role at O’Reilly he’s responsible for the content for 7 major conferences around the world each year. In the show we discuss all of that, touching on how publishers can take advantage of machine learning and data mining, how the role of “data scientist” is evolving and the emergence of the machine learning engineer, and a few of the hot technologies, trends and companies that he’s seeing arise around the world. The notes for this show can be found at twimlai.com/talk/26</itunes:summary>
      <content:encoded>
        <![CDATA[This week I’ve invited my friend Ben Lorica onto the show. Ben is Chief Data Scientist for O’Reilly Media, and Program Director of Strata Data &amp; the O'Reilly A.I. conference. Ben has worked on analytics and machine learning in the finance and retail industries, and serves as an advisor for nearly a dozen startups. In his role at O’Reilly he’s responsible for the content for 7 major conferences around the world each year. In the show we discuss all of that, touching on how publishers can take advantage of machine learning and data mining, how the role of “data scientist” is evolving and the emergence of the machine learning engineer, and a few of the hot technologies, trends and companies that he’s seeing arise around the world. The notes for this show can be found at twimlai.com/talk/26]]>
      </content:encoded>
      <itunes:duration>3245</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/325746542]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7092000656.mp3?updated=1629216904"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Offensive vs Defensive Data Science with Deep Varma - TWiML Talk #25</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/324622200-twiml-twiml-talk-025-offensive-vs-defensive-data-science-deep-varma.mp3</link>
      <description>This week on the show my guest is Deep Varma, Vice President of Data Engineering at real estate startup Trulia. Deep has run data engineering teams in Silicon Valley for well over a decade, and is now responsible for the engineering efforts supporting Trulia’s Big Data Technology Platform, which encompasses everything from Data acquisition &amp; management to Data Science &amp; Algorithms. In the show we discuss all of that, with an emphasis on Trulia’s data engineering pipeline and their personalization platform, as well as how they use computer vision, deep learning and natural language generation to deliver their product. Along the way, Deep offers great insights into what he calls offensive vs defensive data science, and the difference between data-driven decision making vs products. Another great interview, and I'm sure you’ll enjoy it. The notes for this show can be found at twimlai.com/talk/25 Subscribe! iTunes ➙ https://itunes.apple.com/us/podcast/this-week-in-machine-learning/id1116303051?mt=2 Soundcloud ➙ https://soundcloud.com/twiml Google Play ➙ http://bit.ly/2lrWlJZ Stitcher ➙ http://www.stitcher.com/s?fid=92079&amp;refid=stpr RSS ➙ https://twimlai.com/feed Let's Connect! Twimlai.com ➙ https://twimlai.com/contact Twitter ➙ https://twitter.com/twimlai Facebook ➙ https://Facebook.com/Twimlai Medium ➙ https://medium.com/this-week-in-machine-learning-ai</description>
      <pubDate>Fri, 26 May 2017 16:00:33 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>25</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7d91883a-ee98-11eb-9502-8bfee8e62ebe/image/artworks-000224482857-ml47eu-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week on the show my guest is Deep Varma, Vic…</itunes:subtitle>
      <itunes:summary>This week on the show my guest is Deep Varma, Vice President of Data Engineering at real estate startup Trulia. Deep has run data engineering teams in Silicon Valley for well over a decade, and is now responsible for the engineering efforts supporting Trulia’s Big Data Technology Platform, which encompasses everything from Data acquisition &amp; management to Data Science &amp; Algorithms. In the show we discuss all of that, with an emphasis on Trulia’s data engineering pipeline and their personalization platform, as well as how they use computer vision, deep learning and natural language generation to deliver their product. Along the way, Deep offers great insights into what he calls offensive vs defensive data science, and the difference between data-driven decision making vs products. Another great interview, and I'm sure you’ll enjoy it. The notes for this show can be found at twimlai.com/talk/25 Subscribe! iTunes ➙ https://itunes.apple.com/us/podcast/this-week-in-machine-learning/id1116303051?mt=2 Soundcloud ➙ https://soundcloud.com/twiml Google Play ➙ http://bit.ly/2lrWlJZ Stitcher ➙ http://www.stitcher.com/s?fid=92079&amp;refid=stpr RSS ➙ https://twimlai.com/feed Let's Connect! Twimlai.com ➙ https://twimlai.com/contact Twitter ➙ https://twitter.com/twimlai Facebook ➙ https://Facebook.com/Twimlai Medium ➙ https://medium.com/this-week-in-machine-learning-ai</itunes:summary>
      <content:encoded>
        <![CDATA[This week on the show my guest is Deep Varma, Vice President of Data Engineering at real estate startup Trulia. Deep has run data engineering teams in Silicon Valley for well over a decade, and is now responsible for the engineering efforts supporting Trulia’s Big Data Technology Platform, which encompasses everything from Data acquisition &amp; management to Data Science &amp; Algorithms. In the show we discuss all of that, with an emphasis on Trulia’s data engineering pipeline and their personalization platform, as well as how they use computer vision, deep learning and natural language generation to deliver their product. Along the way, Deep offers great insights into what he calls offensive vs defensive data science, and the difference between data-driven decision making vs products. Another great interview, and I'm sure you’ll enjoy it. The notes for this show can be found at twimlai.com/talk/25 Subscribe! iTunes ➙ https://itunes.apple.com/us/podcast/this-week-in-machine-learning/id1116303051?mt=2 Soundcloud ➙ https://soundcloud.com/twiml Google Play ➙ http://bit.ly/2lrWlJZ Stitcher ➙ http://www.stitcher.com/s?fid=92079&amp;refid=stpr RSS ➙ https://twimlai.com/feed Let's Connect! Twimlai.com ➙ https://twimlai.com/contact Twitter ➙ https://twitter.com/twimlai Facebook ➙ https://Facebook.com/Twimlai Medium ➙ https://medium.com/this-week-in-machine-learning-ai]]>
      </content:encoded>
      <itunes:duration>3196</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/324622200]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6963534719.mp3?updated=1629216886"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Reinforcement Learning: The Next Frontier of Gaming with Danny Lange - TWiML Talk #24</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/323511120-twiml-twiml-talk-024-reinforcement-learning-next-frontier-gaming-danny-lange.mp3</link>
      <description>My guest on the show this week is Danny Lange, VP for Machine Learning &amp; AI at video game technology developer Unity Technologies. Danny is well traveled in the world of ML and AI, and has had a hand in developing machine learning platforms at companies like Uber, Amazon and Microsoft. In this conversation we cover a bunch of topics, including how ML &amp; AI are being used in gaming, the importance of reinforcement learning in the future of game development, the intersection between AI and AR/VR, and the next steps in natural character interaction. The notes for this show can be found at twimlai.com/talk/24</description>
      <pubDate>Sat, 20 May 2017 00:54:04 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>24</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7db7ba0a-ee98-11eb-9502-d3a3d18b5426/image/artworks-000223388935-1idyvu-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest on the show this week is Danny Lange, VP…</itunes:subtitle>
      <itunes:summary>My guest on the show this week is Danny Lange, VP for Machine Learning &amp; AI at video game technology developer Unity Technologies. Danny is well traveled in the world of ML and AI, and has had a hand in developing machine learning platforms at companies like Uber, Amazon and Microsoft. In this conversation we cover a bunch of topics, including how ML &amp; AI are being used in gaming, the importance of reinforcement learning in the future of game development, the intersection between AI and AR/VR, and the next steps in natural character interaction. The notes for this show can be found at twimlai.com/talk/24</itunes:summary>
      <content:encoded>
        <![CDATA[My guest on the show this week is Danny Lange, VP for Machine Learning &amp; AI at video game technology developer Unity Technologies. Danny is well traveled in the world of ML and AI, and has had a hand in developing machine learning platforms at companies like Uber, Amazon and Microsoft. In this conversation we cover a bunch of topics, including how ML &amp; AI are being used in gaming, the importance of reinforcement learning in the future of game development, the intersection between AI and AR/VR, and the next steps in natural character interaction. The notes for this show can be found at twimlai.com/talk/24]]>
      </content:encoded>
      <itunes:duration>3277</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/323511120]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8672061433.mp3?updated=1629216884"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Integrating Psycholinguistics into AI with Dominique Simmons - TWiML Talk #23</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/322284310-twiml-twiml-talk-023-integrating-psycholinguistics-ai-dominique-simmons.mp3</link>
      <description>I think you’re really going to enjoy today’s show. Our guest this week is Dominique Simmons, Applied Research Scientist at AI tools vendor Dimensional Mechanics. Dominique brings an interesting background in cognitive psychology and psycholinguistics to her work and research in AI and, well, to this podcast. In our conversation, we cover the implications of cognitive psychology for neural networks and AI systems, and in particular how an understanding of human cognition impacts the development of AI models for media applications. We also discuss her research into multimodal training of AI models, and how our understanding of the human brain has influenced this work. Finally, we explore the debate around the biological plausibility of machine learning and AI models. It was a great conversation. The show notes can be found at twimlai.com/talk/23.</description>
      <pubDate>Fri, 12 May 2017 21:31:54 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>23</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7e0d8674-ee98-11eb-9502-371f208e8206/image/artworks-000240865362-emyvc8-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>I think you’re really going to enjoy today’s show…</itunes:subtitle>
      <itunes:summary>I think you’re really going to enjoy today’s show. Our guest this week is Dominique Simmons, Applied Research Scientist at AI tools vendor Dimensional Mechanics. Dominique brings an interesting background in cognitive psychology and psycholinguistics to her work and research in AI and, well, to this podcast. In our conversation, we cover the implications of cognitive psychology for neural networks and AI systems, and in particular how an understanding of human cognition impacts the development of AI models for media applications. We also discuss her research into multimodal training of AI models, and how our understanding of the human brain has influenced this work. Finally, we explore the debate around the biological plausibility of machine learning and AI models. It was a great conversation. The show notes can be found at twimlai.com/talk/23.</itunes:summary>
      <content:encoded>
        <![CDATA[I think you’re really going to enjoy today’s show. Our guest this week is Dominique Simmons, Applied Research Scientist at AI tools vendor Dimensional Mechanics. Dominique brings an interesting background in cognitive psychology and psycholinguistics to her work and research in AI and, well, to this podcast. In our conversation, we cover the implications of cognitive psychology for neural networks and AI systems, and in particular how an understanding of human cognition impacts the development of AI models for media applications. We also discuss her research into multimodal training of AI models, and how our understanding of the human brain has influenced this work. Finally, we explore the debate around the biological plausibility of machine learning and AI models. It was a great conversation. The show notes can be found at twimlai.com/talk/23.]]>
      </content:encoded>
      <itunes:duration>3629</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/322284310]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1632476777.mp3?updated=1629216895"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Deep Neural Nets for Visual Recognition with Matt Zeiler - TWiML Talk #22</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/321008876-twiml-twiml-talk-022-deep-neural-nets-visual-recognition-matt-zeiler-interview.mp3</link>
      <description>Today we bring you our final interview from backstage at the NYU FutureLabs AI Summit. Our guest this week is Matt Zeiler. Matt graduated from the University of Toronto where he worked with deep learning researcher Geoffrey Hinton and went on to earn his PhD in machine learning at NYU, home of Yann LeCun. In 2013 Matt founded Clarifai, a startup whose cloud-based visual recognition system gives developers a way to integrate visual identification into their own products, and whose initial image classification algorithm achieved top 5 results in that year’s ImageNet competition. I caught up with Matt after his talk “From Research to the Real World”. Our conversation focused on the birth and growth of Clarifai, as well as the underlying deep neural network architectures that enable it. If you’ve been listening to the show for a while, you’ve heard me ask several guests how they go about evolving the architectures of their deep neural networks to enhance performance. Well, in this podcast Matt gives the most satisfying answer I’ve received to date by far. Check it out. I think you’ll enjoy it. The show notes can be found at twimlai.com/talk/22.</description>
      <pubDate>Fri, 05 May 2017 15:56:37 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>22</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7e2c1576-ee98-11eb-9502-ab7f98c8211e/image/artworks-000221104497-7dgqxz-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we bring you our final interview from backs…</itunes:subtitle>
      <itunes:summary>Today we bring you our final interview from backstage at the NYU FutureLabs AI Summit. Our guest this week is Matt Zeiler. Matt graduated from the University of Toronto where he worked with deep learning researcher Geoffrey Hinton and went on to earn his PhD in machine learning at NYU, home of Yann LeCun. In 2013 Matt founded Clarifai, a startup whose cloud-based visual recognition system gives developers a way to integrate visual identification into their own products, and whose initial image classification algorithm achieved top 5 results in that year’s ImageNet competition. I caught up with Matt after his talk “From Research to the Real World”. Our conversation focused on the birth and growth of Clarifai, as well as the underlying deep neural network architectures that enable it. If you’ve been listening to the show for a while, you’ve heard me ask several guests how they go about evolving the architectures of their deep neural networks to enhance performance. Well, in this podcast Matt gives the most satisfying answer I’ve received to date by far. Check it out. I think you’ll enjoy it. The show notes can be found at twimlai.com/talk/22.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we bring you our final interview from backstage at the NYU FutureLabs AI Summit. Our guest this week is Matt Zeiler. Matt graduated from the University of Toronto where he worked with deep learning researcher Geoffrey Hinton and went on to earn his PhD in machine learning at NYU, home of Yann LeCun. In 2013 Matt founded Clarifai, a startup whose cloud-based visual recognition system gives developers a way to integrate visual identification into their own products, and whose initial image classification algorithm achieved top 5 results in that year’s ImageNet competition. I caught up with Matt after his talk “From Research to the Real World”. Our conversation focused on the birth and growth of Clarifai, as well as the underlying deep neural network architectures that enable it. If you’ve been listening to the show for a while, you’ve heard me ask several guests how they go about evolving the architectures of their deep neural networks to enhance performance. Well, in this podcast Matt gives the most satisfying answer I’ve received to date by far. Check it out. I think you’ll enjoy it. The show notes can be found at twimlai.com/talk/22.]]>
      </content:encoded>
      <itunes:duration>1348</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/321008876]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5391987904.mp3?updated=1629216802"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Engineering the Future of AI with Ruchir Puri - TWiML Talk #21</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/319802931-twiml-twiml-talk-021-engineering-future-ai-ruchir-puri-interview.mp3</link>
      <description>Today we bring you the second of three interviews we did backstage at the NYU FutureLabs AI Summit, this time with Ruchir Puri. Ruchir is the Chief Architect at IBM Watson as well as an IBM Fellow. I caught up with Ruchir after his talk, “Engineering the Future of AI for Businesses”. Our conversation focused on cognition and reasoning, and we explored what these concepts represent, how enterprises really want to consume them, and how IBM Watson seeks to deliver them. The show notes can be found at twimlai.com/talk/21.</description>
      <pubDate>Fri, 28 Apr 2017 16:04:10 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>21</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7e4ae186-ee98-11eb-9502-836653041980/image/artworks-000219992493-yxjs0a-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Today we bring you the second of three interviews…</itunes:subtitle>
      <itunes:summary>Today we bring you the second of three interviews we did backstage at the NYU FutureLabs AI Summit, this time with Ruchir Puri. Ruchir is the Chief Architect at IBM Watson as well as an IBM Fellow. I caught up with Ruchir after his talk, “Engineering the Future of AI for Businesses”. Our conversation focused on cognition and reasoning, and we explored what these concepts represent, how enterprises really want to consume them, and how IBM Watson seeks to deliver them. The show notes can be found at twimlai.com/talk/21.</itunes:summary>
      <content:encoded>
        <![CDATA[Today we bring you the second of three interviews we did backstage at the NYU FutureLabs AI Summit, this time with Ruchir Puri. Ruchir is the Chief Architect at IBM Watson as well as an IBM Fellow. I caught up with Ruchir after his talk, “Engineering the Future of AI for Businesses”. Our conversation focused on cognition and reasoning, and we explored what these concepts represent, how enterprises really want to consume them, and how IBM Watson seeks to deliver them. The show notes can be found at twimlai.com/talk/21.]]>
      </content:encoded>
      <itunes:duration>1259</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/319802931]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4537477452.mp3?updated=1629216761"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Selling AI to the Enterprise with Kathryn Hume - TWiML Talk #20</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/318806890-twiml-twiml-talk-020-selling-ai-to-the-enterprise-with-kathryn-hume.mp3</link>
      <description>This week's guest is Kathryn Hume. Kathryn is the President of Fast Forward Labs, which is an independent machine intelligence research company that helps organizations accelerate their data science and machine intelligence capabilities. If Fast Forward Labs sounds familiar, that's because we had their founder, Hilary Mason, on a few months ago. We’ll link to that in the show notes. My discussion with Kathryn focused on AI adoption within the enterprise. She shared several really interesting examples of the kinds of things she’s seeing enterprises do with machine learning and AI, and we discussed a few of the various challenges enterprises face and some of the lessons her company has learned in helping them. I really enjoyed our conversation and I know you will too! You can find the notes for today's show here: https://twimlai.com/talk/20</description>
      <pubDate>Fri, 21 Apr 2017 15:46:53 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>20</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7e66a0b0-ee98-11eb-9502-b361df794615/image/artworks-000218918362-y3aqa9-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week's guest is Kathryn Hume. Kathryn is the…</itunes:subtitle>
      <itunes:summary>This week's guest is Kathryn Hume. Kathryn is the President of Fast Forward Labs, which is an independent machine intelligence research company that helps organizations accelerate their data science and machine intelligence capabilities. If Fast Forward Labs sounds familiar, that's because we had their founder, Hilary Mason, on the show a few months ago. We’ll link to that in the show notes. My discussion with Kathryn focused on AI adoption within the enterprise. She shared several really interesting examples of the kinds of things she’s seeing enterprises do with machine learning and AI, and we discussed a few of the various challenges enterprises face and some of the lessons her company has learned in helping them. I really enjoyed our conversation and I know you will too! You can find the notes for today's show here: https://twimlai.com/talk/20</itunes:summary>
      <content:encoded>
        <![CDATA[This week's guest is Kathryn Hume. Kathryn is the President of Fast Forward Labs, which is an independent machine intelligence research company that helps organizations accelerate their data science and machine intelligence capabilities. If Fast Forward Labs sounds familiar, that's because we had their founder, Hilary Mason, on the show a few months ago. We’ll link to that in the show notes. My discussion with Kathryn focused on AI adoption within the enterprise. She shared several really interesting examples of the kinds of things she’s seeing enterprises do with machine learning and AI, and we discussed a few of the various challenges enterprises face and some of the lessons her company has learned in helping them. I really enjoyed our conversation and I know you will too! You can find the notes for today's show here: https://twimlai.com/talk/20]]>
      </content:encoded>
      <itunes:duration>1429</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/318806890]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1490995950.mp3?updated=1629216851"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>From Particle Physics to Audio AI with Scott Stephenson - TWiML Talk #19</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/317621035-twiml-twiml-talk-019-from-particle-physics-to-audio-ai-with-scott-stephenson.mp3</link>
      <description>This week my guest is Scott Stephenson. Scott is co-Founder &amp; CEO of Deepgram, which has developed an AI-based platform for indexing and searching audio and video. Scott and I cover a ton of interesting topics including applying machine learning techniques to particle physics, his time in a lab 2 miles below the surface of the earth, applying neural networks to audio, and the Deep Learning Framework Kur that his company open-sourced. The show notes can be found at twimlai.com/talk/19.</description>
      <pubDate>Fri, 14 Apr 2017 15:58:37 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>19</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7e859470-ee98-11eb-9502-47a2a31b5c9a/image/artworks-000217738490-mg4jcj-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week my guest is Scott Stephenson. Scott is …</itunes:subtitle>
      <itunes:summary>This week my guest is Scott Stephenson. Scott is co-Founder &amp; CEO of Deepgram, which has developed an AI-based platform for indexing and searching audio and video. Scott and I cover a ton of interesting topics including applying machine learning techniques to particle physics, his time in a lab 2 miles below the surface of the earth, applying neural networks to audio, and the Deep Learning Framework Kur that his company open-sourced. The show notes can be found at twimlai.com/talk/19.</itunes:summary>
      <content:encoded>
        <![CDATA[This week my guest is Scott Stephenson. Scott is co-Founder &amp; CEO of Deepgram, which has developed an AI-based platform for indexing and searching audio and video. Scott and I cover a ton of interesting topics including applying machine learning techniques to particle physics, his time in a lab 2 miles below the surface of the earth, applying neural networks to audio, and the Deep Learning Framework Kur that his company open-sourced. The show notes can be found at twimlai.com/talk/19.]]>
      </content:encoded>
      <itunes:duration>3384</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/317621035]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6090282538.mp3?updated=1629216895"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>(5/5) AlphaVertex - Creating a Worldwide Financial Knowledge Graph - TWiML Talk #18</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/316591582-twiml-twiml-talk-018-pt-5-alphavertex-creating-a-worldwide-financial-knowledge-graph.mp3</link>
      <description>This week I'm on location at NYU/ffVC AI NexusLab startup accelerator, speaking with founders from the 5 companies in the program's inaugural batch. This interview is with AlphaVertex, a FinTech startup creating a worldwide financial knowledge graph to help investors predict stock prices. The notes for this series can be found at twimlai.com/nexuslab. Thanks to Future Labs at NYU Tandon and ffVenture Capital for sponsoring the series!</description>
      <pubDate>Fri, 07 Apr 2017 18:30:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7ea79a20-ee98-11eb-9502-f704eb7c9937/image/artworks-000216736040-anzlr3-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week I'm on location at NYU/ffVC AI NexusLab…</itunes:subtitle>
      <itunes:summary>This week I'm on location at NYU/ffVC AI NexusLab startup accelerator, speaking with founders from the 5 companies in the program's inaugural batch. This interview is with AlphaVertex, a FinTech startup creating a worldwide financial knowledge graph to help investors predict stock prices. The notes for this series can be found at twimlai.com/nexuslab. Thanks to Future Labs at NYU Tandon and ffVenture Capital for sponsoring the series!</itunes:summary>
      <content:encoded>
        <![CDATA[<p>This week I'm on location at NYU/ffVC AI NexusLab startup accelerator, speaking with founders from the 5 companies in the program's inaugural batch. This interview is with AlphaVertex, a FinTech startup creating a worldwide financial knowledge graph to help investors predict stock prices. The notes for this series can be found at twimlai.com/nexuslab. Thanks to Future Labs at NYU Tandon and ffVenture Capital for sponsoring the series!</p>]]>
      </content:encoded>
      <itunes:duration>1574</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/316591582]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1597861690.mp3?updated=1627362874"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>(4/5) Behold.ai - Increasing Efficiency of Healthcare Insurance Billing with NLP - TWiML Talk #18</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/316591586-twiml-twiml-talk-018-part-4-behold-ai-increasing-efficiency-healthcare-insurance-billing.mp3</link>
      <description>This week I'm on location at NYU/ffVC AI NexusLab startup accelerator, speaking with founders from the 5 companies in the program's inaugural batch. This interview is with Behold.ai, which uses computer vision and natural language processing techniques to bring efficiencies to the world of healthcare insurance billing. The notes for this series can be found at twimlai.com/nexuslab. Thanks to Future Labs at NYU Tandon and ffVenture Capital for sponsoring the series!</description>
      <pubDate>Fri, 07 Apr 2017 18:19:26 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7eccd31c-ee98-11eb-9502-0b4747bbac96/image/artworks-000216735921-1qcs7q-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week I'm on location at NYU/ffVC AI NexusLab…</itunes:subtitle>
      <itunes:summary>This week I'm on location at NYU/ffVC AI NexusLab startup accelerator, speaking with founders from the 5 companies in the program's inaugural batch. This interview is with Behold.ai, which uses computer vision and natural language processing techniques to bring efficiencies to the world of healthcare insurance billing. The notes for this series can be found at twimlai.com/nexuslab. Thanks to Future Labs at NYU Tandon and ffVenture Capital for sponsoring the series!</itunes:summary>
      <content:encoded>
        <![CDATA[This week I'm on location at NYU/ffVC AI NexusLab startup accelerator, speaking with founders from the 5 companies in the program's inaugural batch. This interview is with Behold.ai, which uses computer vision and natural language processing techniques to bring efficiencies to the world of healthcare insurance billing. The notes for this series can be found at twimlai.com/nexuslab. Thanks to Future Labs at NYU Tandon and ffVenture Capital for sponsoring the series!]]>
      </content:encoded>
      <itunes:duration>991</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/316591586]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9320341760.mp3?updated=1627362874"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>(3/5) Cambrian Intelligence - Using AI to Simplify the Programming of Robots - TWiML Talk #18</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/316591590-twiml-twiml-talk-018-pt-3-cambrian-intelligence-ai-simplify-programming-robots.mp3</link>
      <description>This week I'm on location at NYU/ffVC AI NexusLab startup accelerator, speaking with founders from the 5 companies in the program's inaugural batch. This interview is with Cambrian Intelligence, a company using AI to simplify the programming of industrial robots for the automotive industry. The notes for this series can be found at twimlai.com/nexuslab. Thanks to Future Labs at NYU Tandon and ffVenture Capital for sponsoring the series!</description>
      <pubDate>Fri, 07 Apr 2017 18:14:35 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7ee7e0d0-ee98-11eb-9502-8b2434cbda5e/image/artworks-000216735872-620s13-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week I'm on location at NYU/ffVC AI NexusLab…</itunes:subtitle>
      <itunes:summary>This week I'm on location at NYU/ffVC AI NexusLab startup accelerator, speaking with founders from the 5 companies in the program's inaugural batch. This interview is with Cambrian Intelligence, a company using AI to simplify the programming of industrial robots for the automotive industry. The notes for this series can be found at twimlai.com/nexuslab. Thanks to Future Labs at NYU Tandon and ffVenture Capital for sponsoring the series!</itunes:summary>
      <content:encoded>
        <![CDATA[This week I'm on location at NYU/ffVC AI NexusLab startup accelerator, speaking with founders from the 5 companies in the program's inaugural batch. This interview is with Cambrian Intelligence, a company using AI to simplify the programming of industrial robots for the automotive industry. The notes for this series can be found at twimlai.com/nexuslab. Thanks to Future Labs at NYU Tandon and ffVenture Capital for sponsoring the series!]]>
      </content:encoded>
      <itunes:duration>1400</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/316591590]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5474671530.mp3?updated=1627362874"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>(2/5) Klustera - Location-Based Intelligence for Smarter Marketing - TWiML Talk #18</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/316591596-twiml-twiml-talk-018-pt-2-klustera-location-based-intelligence-smarter-marketing.mp3</link>
      <description>This week I'm on location at NYU/ffVC AI NexusLab startup accelerator, speaking with founders from the 5 companies in the program's inaugural batch. This interview is with Klustera, a company applying location-based intelligence and machine learning to help brands execute smarter marketing campaigns. The notes for this series can be found at twimlai.com/nexuslab. Thanks to Future Labs at NYU Tandon and ffVenture Capital for sponsoring the series!</description>
      <pubDate>Fri, 07 Apr 2017 18:14:28 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7f0f3f86-ee98-11eb-9502-470877dc8e00/image/artworks-000216735800-xhcfyi-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week I'm on location at NYU/ffVC AI NexusLab…</itunes:subtitle>
      <itunes:summary>This week I'm on location at NYU/ffVC AI NexusLab startup accelerator, speaking with founders from the 5 companies in the program's inaugural batch. This interview is with Klustera, a company applying location-based intelligence and machine learning to help brands execute smarter marketing campaigns. The notes for this series can be found at twimlai.com/nexuslab. Thanks to Future Labs at NYU Tandon and ffVenture Capital for sponsoring the series!</itunes:summary>
      <content:encoded>
        <![CDATA[This week I'm on location at NYU/ffVC AI NexusLab startup accelerator, speaking with founders from the 5 companies in the program's inaugural batch. This interview is with Klustera, a company applying location-based intelligence and machine learning to help brands execute smarter marketing campaigns. The notes for this series can be found at twimlai.com/nexuslab. Thanks to Future Labs at NYU Tandon and ffVenture Capital for sponsoring the series!]]>
      </content:encoded>
      <itunes:duration>1330</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/316591596]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8598061667.mp3?updated=1627362874"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>(1/5) HelloVera - AI-Powered Customer Support - TWiML Talk #18</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/316591597-twiml-twiml-talk-018-pt-1-hellovera-ai-powered-customer-support.mp3</link>
      <description>This week I'm on location at NYU/ffVC AI NexusLab startup accelerator, speaking with founders from the 5 companies in the program's inaugural batch. This interview is with HelloVera, a company applying artificial intelligence to the challenge of automating customer support experiences. The notes for this series can be found at https://twimlai.com/nexuslab. Thanks to Future Labs at NYU Tandon and ffVenture Capital for sponsoring the series!</description>
      <pubDate>Fri, 07 Apr 2017 18:14:22 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7f2b8eb6-ee98-11eb-9502-3fb6afcf90f8/image/artworks-000216735719-00nms0-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week I'm on location at NYU/ffVC AI NexusLab…</itunes:subtitle>
      <itunes:summary>This week I'm on location at NYU/ffVC AI NexusLab startup accelerator, speaking with founders from the 5 companies in the program's inaugural batch. This interview is with HelloVera, a company applying artificial intelligence to the challenge of automating customer support experiences. The notes for this series can be found at https://twimlai.com/nexuslab. Thanks to Future Labs at NYU Tandon and ffVenture Capital for sponsoring the series!</itunes:summary>
      <content:encoded>
        <![CDATA[This week I'm on location at NYU/ffVC AI NexusLab startup accelerator, speaking with founders from the 5 companies in the program's inaugural batch. This interview is with HelloVera, a company applying artificial intelligence to the challenge of automating customer support experiences. The notes for this series can be found at https://twimlai.com/nexuslab. Thanks to Future Labs at NYU Tandon and ffVenture Capital for sponsoring the series!]]>
      </content:encoded>
      <itunes:duration>1537</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/316591597]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2726952294.mp3?updated=1627362874"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Interactive Machine Learning Systems with Alekh Agarwal - TWiML Talk #17</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/315263371-twiml-twiml-talk-017-interactive-machine-learning-systems-alekh-agarwal-interview.mp3</link>
      <description>This week my guest is Alekh Agarwal. Alekh is a researcher with Microsoft Research whose work focuses on interactive machine learning. In our discussion, Alekh and I discuss various aspects of this exciting area of research, such as active learning, reinforcement learning, contextual bandits and more.</description>
      <pubDate>Fri, 31 Mar 2017 15:59:38 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>17</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7f4ae48c-ee98-11eb-9502-138fa1d459a7/image/artworks-000215447817-5mkfbr-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week my guest is Alekh Agarwal. Alekh is a r…</itunes:subtitle>
      <itunes:summary>This week my guest is Alekh Agarwal. Alekh is a researcher with Microsoft Research whose work focuses on interactive machine learning. In our discussion, Alekh and I discuss various aspects of this exciting area of research, such as active learning, reinforcement learning, contextual bandits and more.</itunes:summary>
      <content:encoded>
        <![CDATA[This week my guest is Alekh Agarwal. Alekh is a researcher with Microsoft Research whose work focuses on interactive machine learning. In our discussion, Alekh and I discuss various aspects of this exciting area of research, such as active learning, reinforcement learning, contextual bandits and more.]]>
      </content:encoded>
      <itunes:duration>1855</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/315263371]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9779587945.mp3?updated=1629216865"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Machine Learning in Cybersecurity with Evan Wright - TWiML Talk #16</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/314289546-twiml-twiml-talk-016-machine-learning-cybersecurity-evan-wright-interview.mp3</link>
      <description>This week my guest is Evan Wright, principal data scientist at cybersecurity startup Anomali. In my interview with Evan, he and I discussed a number of topics surrounding the use of machine learning in cybersecurity. If Evan’s name sounds familiar, it’s because Evan was the winner of the O’Reilly Strata+Hadoop World ticket giveaway earlier this month. We met up at the conference last week and took advantage of the opportunity to record this show. Our conversation covers, among other topics, the three big problems in cybersecurity that ML can help out with, the challenges of acquiring ground truth in cybersecurity and some ways to accomplish it, and the use of decision trees, generative adversarial networks, and other algorithms in the field. The show notes can be found at twimlai.com/talk/16.</description>
      <pubDate>Fri, 24 Mar 2017 18:16:22 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>16</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7f6bbd60-ee98-11eb-9502-6784e7f4f260/image/artworks-000214469360-o0720j-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week my guest is Evan Wright, principal data…</itunes:subtitle>
      <itunes:summary>This week my guest is Evan Wright, principal data scientist at cybersecurity startup Anomali. In my interview with Evan, he and I discussed a number of topics surrounding the use of machine learning in cybersecurity. If Evan’s name sounds familiar, it’s because Evan was the winner of the O’Reilly Strata+Hadoop World ticket giveaway earlier this month. We met up at the conference last week and took advantage of the opportunity to record this show. Our conversation covers, among other topics, the three big problems in cybersecurity that ML can help out with, the challenges of acquiring ground truth in cybersecurity and some ways to accomplish it, and the use of decision trees, generative adversarial networks, and other algorithms in the field. The show notes can be found at twimlai.com/talk/16.</itunes:summary>
      <content:encoded>
        <![CDATA[This week my guest is Evan Wright, principal data scientist at cybersecurity startup Anomali. In my interview with Evan, he and I discussed a number of topics surrounding the use of machine learning in cybersecurity. If Evan’s name sounds familiar, it’s because Evan was the winner of the O’Reilly Strata+Hadoop World ticket giveaway earlier this month. We met up at the conference last week and took advantage of the opportunity to record this show. Our conversation covers, among other topics, the three big problems in cybersecurity that ML can help out with, the challenges of acquiring ground truth in cybersecurity and some ways to accomplish it, and the use of decision trees, generative adversarial networks, and other algorithms in the field. The show notes can be found at twimlai.com/talk/16.]]>
      </content:encoded>
      <itunes:duration>3868</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/314289546]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2597350109.mp3?updated=1629216904"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Domain Knowledge in Machine Learning Models for Sustainability with Stefano Ermon - TWiML Talk #15</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/312983315-twiml-twiml-talk-015-domain-knowledge-machine-learning-sustainability-stefano-ermon-interview.mp3</link>
      <description>My guest this week is Stefano Ermon, Assistant Professor of Computer Science at Stanford University, and Fellow at Stanford’s Woods Institute for the Environment. Stefano and I met at the Re-Work Deep Learning Summit earlier this year, where he gave a presentation on Machine Learning for Sustainability. Stefano and I spoke about a wide range of topics, including the relationship between fundamental and applied machine learning research, incorporating domain knowledge in machine learning models, dimensionality reduction, and his interest in applying ML &amp; AI to addressing sustainability issues such as poverty, food security and the environment. The show notes can be found at twimlai.com/talk/15.</description>
      <pubDate>Fri, 17 Mar 2017 18:23:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>15</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7f908406-ee98-11eb-9502-57ecd9b529c2/image/artworks-000213170271-985s0c-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest this week is Stefano Ermon, Assistant Pr…</itunes:subtitle>
      <itunes:summary>My guest this week is Stefano Ermon, Assistant Professor of Computer Science at Stanford University, and Fellow at Stanford’s Woods Institute for the Environment. Stefano and I met at the Re-Work Deep Learning Summit earlier this year, where he gave a presentation on Machine Learning for Sustainability. Stefano and I spoke about a wide range of topics, including the relationship between fundamental and applied machine learning research, incorporating domain knowledge in machine learning models, dimensionality reduction, and his interest in applying ML &amp; AI to addressing sustainability issues such as poverty, food security and the environment. The show notes can be found at twimlai.com/talk/15.</itunes:summary>
      <content:encoded>
        <![CDATA[My guest this week is Stefano Ermon, Assistant Professor of Computer Science at Stanford University, and Fellow at Stanford’s Woods Institute for the Environment. Stefano and I met at the Re-Work Deep Learning Summit earlier this year, where he gave a presentation on Machine Learning for Sustainability. Stefano and I spoke about a wide range of topics, including the relationship between fundamental and applied machine learning research, incorporating domain knowledge in machine learning models, dimensionality reduction, and his interest in applying ML &amp; AI to addressing sustainability issues such as poverty, food security and the environment. The show notes can be found at twimlai.com/talk/15.]]>
      </content:encoded>
      <itunes:duration>3266</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/312983315]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2722831420.mp3?updated=1629216901"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Scaling Deep Learning: Systems Challenges &amp; More with Shubho Sengupta — TWiML Talk #14</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/311704428-twiml-twiml-talk-014-scaling-deep-learning-systems-challenges-shubho-sengupta-interview.mp3</link>
      <description>This week my guest is Shubho Sengupta, Research Scientist at Baidu. I had the pleasure of meeting Shubho at the Rework Deep Learning Summit earlier this year, where he delivered a presentation on Systems Challenges for Deep Learning. We dig into this topic in the interview, and discuss a variety of issues including network architecture, productionalization, operationalization and hardware. The show notes can be found at twimlai.com/talk/14.</description>
      <pubDate>Fri, 10 Mar 2017 16:41:21 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>14</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7faeb3ae-ee98-11eb-9502-37c20762cb4b/image/artworks-000211914486-h0w4xq-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This week my guest is Shubho Sengupta, Research S…</itunes:subtitle>
      <itunes:summary>This week my guest is Shubho Sengupta, Research Scientist at Baidu. I had the pleasure of meeting Shubho at the Rework Deep Learning Summit earlier this year, where he delivered a presentation on Systems Challenges for Deep Learning. We dig into this topic in the interview, and discuss a variety of issues including network architecture, productionalization, operationalization and hardware. The show notes can be found at twimlai.com/talk/14.</itunes:summary>
      <content:encoded>
        <![CDATA[This week my guest is Shubho Sengupta, Research Scientist at Baidu. I had the pleasure of meeting Shubho at the Rework Deep Learning Summit earlier this year, where he delivered a presentation on Systems Challenges for Deep Learning. We dig into this topic in the interview, and discuss a variety of issues including network architecture, productionalization, operationalization and hardware. The show notes can be found at twimlai.com/talk/14.]]>
      </content:encoded>
      <itunes:duration>4346</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/311704428]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7322337186.mp3?updated=1629216907"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Understanding Deep Neural Nets with Dr. James McCaffrey - TWiML Talk #13</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/310460910-twiml-twiml-talk-013-understanding-deep-neural-networks-james-mccaffrey-interview.mp3</link>
      <description>My guest this week is Dr. James McCaffrey, research engineer at Microsoft Research. James and I cover a ton of ground in this conversation, including recurrent neural nets (RNNs), convolutional neural nets (CNNs), long short term memory (LSTM) networks, residual networks (ResNets), generative adversarial networks (GANs), and more. We also discuss neural network architecture and promising alternative approaches such as symbolic computation and particle swarm optimization. The show notes can be found at twimlai.com/talk/13.</description>
      <pubDate>Fri, 03 Mar 2017 16:25:10 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>13</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7fc7fec2-ee98-11eb-9502-7b4e42952f6a/image/artworks-000210522575-tgpmly-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest this week is Dr. James McCaffrey, resear…</itunes:subtitle>
      <itunes:summary>My guest this week is Dr. James McCaffrey, research engineer at Microsoft Research. James and I cover a ton of ground in this conversation, including recurrent neural nets (RNNs), convolutional neural nets (CNNs), long short term memory (LSTM) networks, residual networks (ResNets), generative adversarial networks (GANs), and more. We also discuss neural network architecture and promising alternative approaches such as symbolic computation and particle swarm optimization. The show notes can be found at twimlai.com/talk/13.</itunes:summary>
      <content:encoded>
        <![CDATA[My guest this week is Dr. James McCaffrey, research engineer at Microsoft Research. James and I cover a ton of ground in this conversation, including recurrent neural nets (RNNs), convolutional neural nets (CNNs), long short term memory (LSTM) networks, residual networks (ResNets), generative adversarial networks (GANs), and more. We also discuss neural network architecture and promising alternative approaches such as symbolic computation and particle swarm optimization. The show notes can be found at twimlai.com/talk/13.]]>
      </content:encoded>
      <itunes:duration>4566</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/310460910]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9721193757.mp3?updated=1629216907"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Brendan Frey - Reprogramming the Human Genome with AI - TWiML Talk #12</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/309392854-twiml-talk-012-brendan-frey-interview-reprogramming-human-genome-ai.mp3</link>
      <description>My guest this week is Brendan Frey, Professor of Engineering and Medicine at the University of Toronto and Co-Founder and CEO of the startup Deep Genomics. Brendan and I met at the Re-Work Deep Learning Summit in San Francisco last month, where he delivered a great presentation called “Reprogramming the Human Genome: Why AI is Needed.” In this podcast we discuss the application of AI to healthcare. In particular, we dig into how Brendan’s research lab and company are applying machine learning and deep learning to treating and preventing human genetic disorders. The show notes can be found at twimlai.com/talk/12.</description>
      <pubDate>Fri, 24 Feb 2017 20:33:49 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>12</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/7fecf3f8-ee98-11eb-9502-3b02f780895f/image/artworks-000209454898-6qn4re-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest this week is Brendan Frey, Professor of …</itunes:subtitle>
      <itunes:summary>My guest this week is Brendan Frey, Professor of Engineering and Medicine at the University of Toronto and Co-Founder and CEO of the startup Deep Genomics. Brendan and I met at the Re-Work Deep Learning Summit in San Francisco last month, where he delivered a great presentation called “Reprogramming the Human Genome: Why AI is Needed.” In this podcast we discuss the application of AI to healthcare. In particular, we dig into how Brendan’s research lab and company are applying machine learning and deep learning to treating and preventing human genetic disorders. The show notes can be found at twimlai.com/talk/12.</itunes:summary>
      <content:encoded>
        <![CDATA[My guest this week is Brendan Frey, Professor of Engineering and Medicine at the University of Toronto and Co-Founder and CEO of the startup Deep Genomics. Brendan and I met at the Re-Work Deep Learning Summit in San Francisco last month, where he delivered a great presentation called “Reprogramming the Human Genome: Why AI is Needed.” In this podcast we discuss the application of AI to healthcare. In particular, we dig into how Brendan’s research lab and company are applying machine learning and deep learning to treating and preventing human genetic disorders. The show notes can be found at twimlai.com/talk/12.]]>
      </content:encoded>
      <itunes:duration>3643</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/309392854]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1030393668.mp3?updated=1629216905"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Hilary Mason - Building AI Products - TWiML Talk #11</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/304412649-twiml-twiml-talk-11-hilary-mason-building-ai-products.mp3</link>
      <description>My guest this time is Hilary Mason. Hilary was one of the first “famous” data scientists. I remember hearing her speak back in 2011 at the Strange Loop conference in St. Louis. At the time she was Chief Scientist for bit.ly. Nowadays she’s running Fast Forward Labs, which helps organizations accelerate their data science and machine intelligence capabilities through a variety of research and consulting offerings. Hilary presented at the O'Reilly AI conference on “practical AI product development” and she shares a lot of wisdom on that topic in our discussion. The show notes can be found at twimlai.com/talk/11.</description>
      <pubDate>Wed, 25 Jan 2017 07:04:28 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>11</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/80056bcc-ee98-11eb-9502-8bd7e16bc3a7/image/artworks-000209452418-qzizlk-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest this time is Hilary Mason. Hilary was on…</itunes:subtitle>
      <itunes:summary>My guest this time is Hilary Mason. Hilary was one of the first “famous” data scientists. I remember hearing her speak back in 2011 at the Strange Loop conference in St. Louis. At the time she was Chief Scientist for bit.ly. Nowadays she’s running Fast Forward Labs, which helps organizations accelerate their data science and machine intelligence capabilities through a variety of research and consulting offerings. Hilary presented at the O'Reilly AI conference on “practical AI product development” and she shares a lot of wisdom on that topic in our discussion. The show notes can be found at twimlai.com/talk/11.</itunes:summary>
      <content:encoded>
        <![CDATA[My guest this time is Hilary Mason. Hilary was one of the first “famous” data scientists. I remember hearing her speak back in 2011 at the Strange Loop conference in St. Louis. At the time she was Chief Scientist for bit.ly. Nowadays she’s running Fast Forward Labs, which helps organizations accelerate their data science and machine intelligence capabilities through a variety of research and consulting offerings. Hilary presented at the O'Reilly AI conference on “practical AI product development” and she shares a lot of wisdom on that topic in our discussion. The show notes can be found at twimlai.com/talk/11.]]>
      </content:encoded>
      <itunes:duration>1062</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/304412649]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2908216822.mp3?updated=1629216468"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Francisco Webber - Statistics vs Semantics for Natural Language Processing - TWiML Talk #10</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/296067568-twiml-twiml-talk-10-francisco-webber-statistics-vs-semantics-for-natural-language-processing.mp3</link>
      <description>My guest this time is Francisco Webber, founder and General Manager of artificial intelligence startup Cortical.io. Francisco presented at the O’Reilly AI conference on an approach to natural language understanding based on semantic representations of speech. His talk was called “AI is not a matter of strength but of intelligence.” My conversation with Francisco was a bit technical and abstract, but also super interesting. The show notes can be found at twimlai.com/talk/10.</description>
      <pubDate>Sat, 03 Dec 2016 22:04:01 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>10</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/8025f694-ee98-11eb-9502-13eeaa4c5b3d/image/artworks-000209452491-bu9skc-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest this time is Francisco Webber, founder a…</itunes:subtitle>
      <itunes:summary>My guest this time is Francisco Webber, founder and General Manager of artificial intelligence startup Cortical.io. Francisco presented at the O’Reilly AI conference on an approach to natural language understanding based on semantic representations of speech. His talk was called “AI is not a matter of strength but of intelligence.” My conversation with Francisco was a bit technical and abstract, but also super interesting. The show notes can be found at twimlai.com/talk/10.</itunes:summary>
      <content:encoded>
        <![CDATA[My guest this time is Francisco Webber, founder and General Manager of artificial intelligence startup Cortical.io. Francisco presented at the O’Reilly AI conference on an approach to natural language understanding based on semantic representations of speech. His talk was called “AI is not a matter of strength but of intelligence.” My conversation with Francisco was a bit technical and abstract, but also super interesting. The show notes can be found at twimlai.com/talk/10.]]>
      </content:encoded>
      <itunes:duration>2943</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/296067568]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2638599043.mp3?updated=1629216894"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Pascale Fung - Emotional AI: Teaching Computers Empathy - TWiML Talk #9</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/292020506-twiml-twiml-talk-9-pascale-fung-emotional-ai-teaching-computers-empathy.mp3</link>
      <description>My guest this time is Pascale Fung, professor of electrical &amp; computer engineering at Hong Kong University of Science and Technology. Pascale delivered a presentation at the recent O'Reilly AI conference titled "How to make robots empathetic to human feelings in real time," and I caught up with her after her talk to discuss teaching computers to understand and respond to human emotions. We also spend some time talking about the (information) theoretical foundations of modern approaches to speech understanding. The notes for this show can be found at twimlai.com/talk/9.</description>
      <pubDate>Tue, 08 Nov 2016 03:31:12 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>9</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/803f4ce8-ee98-11eb-9502-c71e398fb232/image/artworks-000209452605-m3qd3j-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest this time is Pascale Fung, professor of …</itunes:subtitle>
      <itunes:summary>My guest this time is Pascale Fung, professor of electrical &amp; computer engineering at Hong Kong University of Science and Technology. Pascale delivered a presentation at the recent O'Reilly AI conference titled "How to make robots empathetic to human feelings in real time," and I caught up with her after her talk to discuss teaching computers to understand and respond to human emotions. We also spend some time talking about the (information) theoretical foundations of modern approaches to speech understanding. The notes for this show can be found at twimlai.com/talk/9.</itunes:summary>
      <content:encoded>
        <![CDATA[My guest this time is Pascale Fung, professor of electrical &amp; computer engineering at Hong Kong University of Science and Technology. Pascale delivered a presentation at the recent O'Reilly AI conference titled "How to make robots empathetic to human feelings in real time," and I caught up with her after her talk to discuss teaching computers to understand and respond to human emotions. We also spend some time talking about the (information) theoretical foundations of modern approaches to speech understanding. The notes for this show can be found at twimlai.com/talk/9.]]>
      </content:encoded>
      <itunes:duration>2077</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/292020506]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5916210501.mp3?updated=1629214536"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Diogo Almeida - Deep Learning: Modular in Theory, Inflexible in Practice - TWiML Talk #8</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/289541970-twiml-twiml-talk-8-diogo-almeida-deep-learning-modular-in-theory-inflexible-in-practice.mp3</link>
      <description>My guest this time is Diogo Almeida, senior data scientist at healthcare startup Enlitic. Diogo and I met at the O'Reilly AI conference, where he delivered a great presentation on in-the-trenches deep learning titled “Deep Learning: Modular in theory, inflexible in practice,” which we discuss in this interview. Diogo is also a past 1st place Kaggle competition winner, and we spend some time discussing the competition he competed in and the approach he took as well. The notes for this show can be found at twimlai.com/talk/8.</description>
      <pubDate>Sun, 23 Oct 2016 04:32:08 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>8</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/806546dc-ee98-11eb-9502-07dbd2babdd1/image/artworks-000209452708-zyv3i3-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest this time is Diogo Almeida, senior data …</itunes:subtitle>
      <itunes:summary>My guest this time is Diogo Almeida, senior data scientist at healthcare startup Enlitic. Diogo and I met at the O'Reilly AI conference, where he delivered a great presentation on in-the-trenches deep learning titled “Deep Learning: Modular in theory, inflexible in practice,” which we discuss in this interview. Diogo is also a past 1st place Kaggle competition winner, and we spend some time discussing the competition he competed in and the approach he took as well. The notes for this show can be found at twimlai.com/talk/8.</itunes:summary>
      <content:encoded>
        <![CDATA[My guest this time is Diogo Almeida, senior data scientist at healthcare startup Enlitic. Diogo and I met at the O'Reilly AI conference, where he delivered a great presentation on in-the-trenches deep learning titled “Deep Learning: Modular in theory, inflexible in practice,” which we discuss in this interview. Diogo is also a past 1st place Kaggle competition winner, and we spend some time discussing the competition he competed in and the approach he took as well. The notes for this show can be found at twimlai.com/talk/8.]]>
      </content:encoded>
      <itunes:duration>2771</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/289541970]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8141792166.mp3?updated=1629214574"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Carlos Guestrin - Explaining the Predictions of Machine Learning Models - TWiML Talk #7</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/286895244-twiml-twiml-talk-7-carlos-guestrin-explaining-the-predictions-of-machine-learning-models.mp3</link>
      <description>My guest this time is Carlos Guestrin, the Amazon Professor of Machine Learning at the University of Washington. Carlos and I recorded this podcast at a conference, shortly after Apple's acquisition of his company Turi. Our focus for this podcast is the explainability of machine learning algorithms. In particular, we discuss some interesting new research published by his team at U of W. The notes for this show can be found at twimlai.com/talk/7.</description>
      <pubDate>Sun, 09 Oct 2016 21:20:10 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>7</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/807f0a9a-ee98-11eb-9502-37e74e860d55/image/artworks-000209452855-hqc8g6-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest this time is Carlos Guestrin, the Amazon…</itunes:subtitle>
      <itunes:summary>My guest this time is Carlos Guestrin, the Amazon Professor of Machine Learning at the University of Washington. Carlos and I recorded this podcast at a conference, shortly after Apple's acquisition of his company Turi. Our focus for this podcast is the explainability of machine learning algorithms. In particular, we discuss some interesting new research published by his team at U of W. The notes for this show can be found at twimlai.com/talk/7.</itunes:summary>
      <content:encoded>
        <![CDATA[My guest this time is Carlos Guestrin, the Amazon Professor of Machine Learning at the University of Washington. Carlos and I recorded this podcast at a conference, shortly after Apple's acquisition of his company Turi. Our focus for this podcast is the explainability of machine learning algorithms. In particular, we discuss some interesting new research published by his team at U of W. The notes for this show can be found at twimlai.com/talk/7.]]>
      </content:encoded>
      <itunes:duration>1899</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/286895244]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2773879348.mp3?updated=1629214522"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Angie Hugeback - Generating Training Data for Your ML Models - TWiML Talk #6</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/285289043-twiml-twiml-talk-6-angie-hugeback-generating-training-data-for-your-ml-models.mp3</link>
      <description>My guest this time is Angie Hugeback, who is principal data scientist at Spare5. Spare5 helps customers generate the high-quality labeled training datasets that are so crucial to developing accurate machine learning models. In this show, Angie and I cover a ton of the real-world practicalities of generating training datasets. We talk through the challenges faced by folks that need to label training data, and how to develop a cohesive system for performing the various labeling tasks you’re likely to encounter. We discuss some of the ways that bias can creep into your training data and how to avoid that. And we explore some of the popular third-party options that companies look at for scaling training data production, and how they differ. Spare5 has graciously sponsored this episode; you can learn more about them at spare5.com. The notes for this show can be found at twimlai.com/talk/6.</description>
      <pubDate>Thu, 29 Sep 2016 17:02:55 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>6</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/80a10636-ee98-11eb-9502-bb107b55c67e/image/artworks-000209452988-mfeehj-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest this time is Angie Hugeback, who is prin…</itunes:subtitle>
      <itunes:summary>My guest this time is Angie Hugeback, who is principal data scientist at Spare5. Spare5 helps customers generate the high-quality labeled training datasets that are so crucial to developing accurate machine learning models. In this show, Angie and I cover a ton of the real-world practicalities of generating training datasets. We talk through the challenges faced by folks that need to label training data, and how to develop a cohesive system for performing the various labeling tasks you’re likely to encounter. We discuss some of the ways that bias can creep into your training data and how to avoid that. And we explore some of the popular third-party options that companies look at for scaling training data production, and how they differ. Spare5 has graciously sponsored this episode; you can learn more about them at spare5.com. The notes for this show can be found at twimlai.com/talk/6.</itunes:summary>
      <content:encoded>
        <![CDATA[My guest this time is Angie Hugeback, who is principal data scientist at Spare5. Spare5 helps customers generate the high-quality labeled training datasets that are so crucial to developing accurate machine learning models. In this show, Angie and I cover a ton of the real-world practicalities of generating training datasets. We talk through the challenges faced by folks that need to label training data, and how to develop a cohesive system for performing the various labeling tasks you’re likely to encounter. We discuss some of the ways that bias can creep into your training data and how to avoid that. And we explore some of the popular third-party options that companies look at for scaling training data production, and how they differ. Spare5 has graciously sponsored this episode; you can learn more about them at spare5.com. The notes for this show can be found at twimlai.com/talk/6.]]>
      </content:encoded>
      <itunes:duration>3660</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/285289043]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3198473743.mp3?updated=1629214605"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Joshua Bloom - Machine Learning for the Stars &amp; Productizing AI - TWiML Talk #5</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/284057365-twiml-twiml-talk-5-joshua-bloom-machine-learning-for-the-stars-productizing-ai.mp3</link>
      <description>My guest this time is Joshua Bloom. Josh is professor of astronomy at the University of California, Berkeley and co-founder and Chief Technology Officer of machine learning startup Wise.io. In this wide-ranging interview you’ll learn how Josh and his research group at Berkeley pioneered the use of machine learning for the analysis of images from robotic infrared telescopes. We discuss the founding of his company, Wise.io, which uses machine learning to help customers deliver better customer support. That wasn’t where the company started though, and you’ll hear why and how they evolved to serve this market. We talk about his company’s technology stack and data science pipeline in fair detail, and discuss some of the key technology decisions they’ve made in building their product. We also discuss some interesting open research challenges in machine learning and AI. The notes for this show can be found at twimlai.com/talk/5.</description>
      <pubDate>Thu, 22 Sep 2016 04:02:19 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>5</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/80bec716-ee98-11eb-9502-1b3782d5c711/image/artworks-000209453131-p5uux7-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest this time is Joshua Bloom. Josh is profe…</itunes:subtitle>
      <itunes:summary>My guest this time is Joshua Bloom. Josh is professor of astronomy at the University of California, Berkeley and co-founder and Chief Technology Officer of machine learning startup Wise.io. In this wide-ranging interview you’ll learn how Josh and his research group at Berkeley pioneered the use of machine learning for the analysis of images from robotic infrared telescopes. We discuss the founding of his company, Wise.io, which uses machine learning to help customers deliver better customer support. That wasn’t where the company started though, and you’ll hear why and how they evolved to serve this market. We talk about his company’s technology stack and data science pipeline in fair detail, and discuss some of the key technology decisions they’ve made in building their product. We also discuss some interesting open research challenges in machine learning and AI. The notes for this show can be found at twimlai.com/talk/5.</itunes:summary>
      <content:encoded>
        <![CDATA[My guest this time is Joshua Bloom. Josh is professor of astronomy at the University of California, Berkeley and co-founder and Chief Technology Officer of machine learning startup Wise.io. In this wide-ranging interview you’ll learn how Josh and his research group at Berkeley pioneered the use of machine learning for the analysis of images from robotic infrared telescopes. We discuss the founding of his company, Wise.io, which uses machine learning to help customers deliver better customer support. That wasn’t where the company started though, and you’ll hear why and how they evolved to serve this market. We talk about his company’s technology stack and data science pipeline in fair detail, and discuss some of the key technology decisions they’ve made in building their product. We also discuss some interesting open research challenges in machine learning and AI. The notes for this show can be found at twimlai.com/talk/5.]]>
      </content:encoded>
      <itunes:duration>5293</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/284057365]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2676147306.mp3?updated=1629213871"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Charles Isbell - Interactive AI, Plus Improving ML Education - TWiML Talk #4</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/282216605-twiml-twiml-talk-4-charles-isbell-interactive-ai-ml-education.mp3</link>
      <description>My guest this time is Charles Isbell, Jr., Professor and Senior Associate Dean in the College of Computing at Georgia Institute of Technology. Charles and I go back a bit… in fact he’s the first AI researcher I ever met. His research focus is what he calls “interactive artificial intelligence,” a discipline of AI focused specifically on the interactions between AIs and humans. We explore what this means and some of the interesting research results in this field. One part of this discussion I found particularly interesting was the intersection between his AI research and marketing and behavioral economics. Beyond his research, Charles is well known in the ML and AI worlds for his popular Machine Learning course sequence on Udacity, which he teaches with Brown University professor Michael Littman, and for the Online Master’s of Science in Computer Science program that he helped launch at Georgia Tech. We also spend quite a bit of time talking about what’s really missing in machine learning education and how to make it more accessible. The notes for this show can be found at twimlai.com/talk/4.</description>
      <pubDate>Sat, 10 Sep 2016 01:53:06 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>4</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/80de2e94-ee98-11eb-9502-4fada4f131d9/image/artworks-000209453311-wdsnkq-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest this time is Charles Isbell, Jr., Profes…</itunes:subtitle>
      <itunes:summary>My guest this time is Charles Isbell, Jr., Professor and Senior Associate Dean in the College of Computing at Georgia Institute of Technology. Charles and I go back a bit… in fact he’s the first AI researcher I ever met. His research focus is what he calls “interactive artificial intelligence,” a discipline of AI focused specifically on the interactions between AIs and humans. We explore what this means and some of the interesting research results in this field. One part of this discussion I found particularly interesting was the intersection between his AI research and marketing and behavioral economics. Beyond his research, Charles is well known in the ML and AI worlds for his popular Machine Learning course sequence on Udacity, which he teaches with Brown University professor Michael Littman, and for the Online Master of Science in Computer Science program that he helped launch at Georgia Tech. We also spend quite a bit of time talking about what’s really missing in machine learning education and how to make it more accessible. The notes for this show can be found at twimlai.com/talk/4.</itunes:summary>
      <content:encoded>
        <![CDATA[My guest this time is Charles Isbell, Jr., Professor and Senior Associate Dean in the College of Computing at Georgia Institute of Technology. Charles and I go back a bit… in fact he’s the first AI researcher I ever met. His research focus is what he calls “interactive artificial intelligence,” a discipline of AI focused specifically on the interactions between AIs and humans. We explore what this means and some of the interesting research results in this field. One part of this discussion I found particularly interesting was the intersection between his AI research and marketing and behavioral economics. Beyond his research, Charles is well known in the ML and AI worlds for his popular Machine Learning course sequence on Udacity, which he teaches with Brown University professor Michael Littman, and for the Online Master of Science in Computer Science program that he helped launch at Georgia Tech. We also spend quite a bit of time talking about what’s really missing in machine learning education and how to make it more accessible. The notes for this show can be found at twimlai.com/talk/4.]]>
      </content:encoded>
      <itunes:duration>3845</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/282216605]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8792899869.mp3?updated=1629213792"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Xavier Amatriain - Engineering Practical Machine Learning Systems - TWiML Talk #3</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/280351468-twiml-twiml-talk-3-xavier-amatriain-engineering-practical-machine-learning-systems.mp3</link>
      <description>My guest this time is Xavier Amatriain. Xavier is a former researcher who went on to lead the machine learning recommendations team at Netflix, and is now the vice president of engineering at Quora, the Q&amp;A site. We spend quite a bit of time digging into each of these experiences in the interview. Here are just a few of the things we cover in our discussion: Why Netflix invested $1 million in the Netflix Prize, but didn’t use the winning solution; What goes into engineering practical machine learning systems; The problem Xavier has with the deep learning hype; And, what the heck is a multi-armed bandit and how can it help us. The notes for this show can be found at https://twimlai.com/talk/3.</description>
      <pubDate>Sun, 28 Aug 2016 23:26:43 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>3</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/80fbfa50-ee98-11eb-9502-8b7f7e86c87b/image/artworks-000209453462-glwrc5-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>My guest this time is Xavier Amatriain. Xavier is…</itunes:subtitle>
      <itunes:summary>My guest this time is Xavier Amatriain. Xavier is a former researcher who went on to lead the machine learning recommendations team at Netflix, and is now the vice president of engineering at Quora, the Q&amp;A site. We spend quite a bit of time digging into each of these experiences in the interview. Here are just a few of the things we cover in our discussion: Why Netflix invested $1 million in the Netflix Prize, but didn’t use the winning solution; What goes into engineering practical machine learning systems; The problem Xavier has with the deep learning hype; And, what the heck is a multi-armed bandit and how can it help us. The notes for this show can be found at https://twimlai.com/talk/3.</itunes:summary>
      <content:encoded>
        <![CDATA[My guest this time is Xavier Amatriain. Xavier is a former researcher who went on to lead the machine learning recommendations team at Netflix, and is now the vice president of engineering at Quora, the Q&amp;A site. We spend quite a bit of time digging into each of these experiences in the interview. Here are just a few of the things we cover in our discussion: Why Netflix invested $1 million in the Netflix Prize, but didn’t use the winning solution; What goes into engineering practical machine learning systems; The problem Xavier has with the deep learning hype; And, what the heck is a multi-armed bandit and how can it help us. The notes for this show can be found at https://twimlai.com/talk/3.]]>
      </content:encoded>
      <itunes:duration>3361</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/280351468]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN8209096578.mp3?updated=1629213761"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Siraj Raval - How to Build Confidence as an ML Developer - TWiML Talk #2</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/279254531-twiml-twiml-talk-2-siraj-raval-how-to-build-confidence-as-an-ml-developer.mp3</link>
      <description>Siraj Raval is a machine learning hacker and teacher whose Machine Learning for Hackers and Fresh Machine Learning YouTube series are fun, informative, high-energy and practical ways to learn about a ton of machine learning and AI topics. I had a chance to catch up with Siraj in San Francisco recently, and we had a great discussion. Siraj has great advice on how to learn machine learning and build confidence as a machine learning developer, how to research and formulate projects, who to follow on machine learning Twitter, and much more. The notes for this show can be found at https://twimlai.com/talk/2.</description>
      <pubDate>Sun, 21 Aug 2016 18:03:27 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>2</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/8115a68a-ee98-11eb-9502-27b5cafd18fc/image/artworks-000209453618-69e8by-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>Siraj Raval is a machine learning hacker and teac…</itunes:subtitle>
      <itunes:summary>Siraj Raval is a machine learning hacker and teacher whose Machine Learning for Hackers and Fresh Machine Learning YouTube series are fun, informative, high-energy and practical ways to learn about a ton of machine learning and AI topics. I had a chance to catch up with Siraj in San Francisco recently, and we had a great discussion. Siraj has great advice on how to learn machine learning and build confidence as a machine learning developer, how to research and formulate projects, who to follow on machine learning Twitter, and much more. The notes for this show can be found at https://twimlai.com/talk/2.</itunes:summary>
      <content:encoded>
        <![CDATA[Siraj Raval is a machine learning hacker and teacher whose Machine Learning for Hackers and Fresh Machine Learning YouTube series are fun, informative, high-energy and practical ways to learn about a ton of machine learning and AI topics. I had a chance to catch up with Siraj in San Francisco recently, and we had a great discussion. Siraj has great advice on how to learn machine learning and build confidence as a machine learning developer, how to research and formulate projects, who to follow on machine learning Twitter, and much more. The notes for this show can be found at https://twimlai.com/talk/2.]]>
      </content:encoded>
      <itunes:duration>2408</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/279254531]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7494603607.mp3?updated=1629210167"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>This Week in ML &amp; AI – 8/12/16: Another huge machine learning acquisition + AI in the Olympics</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/278309773-twiml-this-week-in-ml-ai-81216-another-huge-machine-learning-acquisition-ai-in-the-olympics.mp3</link>
      <description>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week we discuss Intel’s latest deep learning acquisition, AI in the Olympics, and how you can win a free ticket to the O’Reilly AI Conference in New York City. Plus a bunch more on This Week in Machine Learning &amp; AI. The notes for this show can be found at twimlai.com/13.</description>
      <pubDate>Mon, 15 Aug 2016 05:24:31 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/812f8e10-ee98-11eb-9502-efa7874d6a2e/image/artworks-000176751691-sxb4b5-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This Week in Machine Learning &amp; AI brings you the…</itunes:subtitle>
      <itunes:summary>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week we discuss Intel’s latest deep learning acquisition, AI in the Olympics, and how you can win a free ticket to the O’Reilly AI Conference in New York City. Plus a bunch more on This Week in Machine Learning &amp; AI. The notes for this show can be found at twimlai.com/13.</itunes:summary>
      <content:encoded>
        <![CDATA[This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week we discuss Intel’s latest deep learning acquisition, AI in the Olympics, and how you can win a free ticket to the O’Reilly AI Conference in New York City. Plus a bunch more on This Week in Machine Learning &amp; AI. The notes for this show can be found at twimlai.com/13.]]>
      </content:encoded>
      <itunes:duration>1416</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/278309773]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4286567001.mp3?updated=1627362878"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>This Week in ML &amp; AI – 8/5/16: Apple Acquires Turi, the DARPA Hacker-Bot Challenge and More</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/277107819-twiml-this-week-in-ml-ai-8516-apple-acquires-turi-the-darpa-hacker-bot-challenge-and-more.mp3</link>
      <description>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week we look at Apple’s acquisition of machine learning startup Turi, DARPA’s autonomous hacker-bot challenge, and Comma.ai’s autonomous driving dataset. Plus, of course, tons more. Show notes for this episode can be found at twimlai.com/12.</description>
      <pubDate>Sat, 06 Aug 2016 17:06:26 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/814d341a-ee98-11eb-9502-7baf103ebbe9/image/artworks-000175159315-v381sz-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This Week in Machine Learning &amp; AI brings you the…</itunes:subtitle>
      <itunes:summary>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week we look at Apple’s acquisition of machine learning startup Turi, DARPA’s autonomous hacker-bot challenge, and Comma.ai’s autonomous driving dataset. Plus, of course, tons more. Show notes for this episode can be found at twimlai.com/12.</itunes:summary>
      <content:encoded>
        <![CDATA[This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week we look at Apple’s acquisition of machine learning startup Turi, DARPA’s autonomous hacker-bot challenge, and Comma.ai’s autonomous driving dataset. Plus, of course, tons more. Show notes for this episode can be found at twimlai.com/12.]]>
      </content:encoded>
      <itunes:duration>1495</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/277107819]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5924240912.mp3?updated=1627362879"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>Clare Corthell - Open Source Data Science Masters, Hybrid AI, Algorithmic Ethics - TWiML Talk #1</title>
      <link>https://twimlai.com/twiml-talk-1-clare-corthell-open-source-data-science-masters-hybrid-ai-algorithmic-ethics/</link>
      <description>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. We try something new this week with an interview of Clare Corthell, Founding Partner of Luminant Data, recorded live at the Wrangle Conference. We cover her background and what she’s been up to lately, the Open Source Data Science Masters project that she created, getting beyond the beginner’s plateau in machine learning and data science, hybrid AI, the top 3 lessons from her time as a consulting data scientist, and, a recurring topic both here on This Week in Machine Learning and AI and also at the conference: Algorithmic Ethics. The notes for this show can be found at https://twimlai.com/11.</description>
      <pubDate>Sun, 31 Jul 2016 00:54:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1</itunes:episode>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/816caa02-ee98-11eb-9502-fff495dd0a11/image/artworks-000209453988-x4f921-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This Week in Machine Learning &amp; AI brings you the…</itunes:subtitle>
      <itunes:summary>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. We try something new this week with an interview of Clare Corthell, Founding Partner of Luminant Data, recorded live at the Wrangle Conference. We cover her background and what she’s been up to lately, the Open Source Data Science Masters project that she created, getting beyond the beginner’s plateau in machine learning and data science, hybrid AI, the top 3 lessons from her time as a consulting data scientist, and, a recurring topic both here on This Week in Machine Learning and AI and also at the conference: Algorithmic Ethics. The notes for this show can be found at https://twimlai.com/11.</itunes:summary>
      <content:encoded>
        <![CDATA[<p>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. We try something new this week with an interview of Clare Corthell, Founding Partner of Luminant Data, recorded live at the Wrangle Conference. We cover her background and what she’s been up to lately, the Open Source Data Science Masters project that she created, getting beyond the beginner’s plateau in machine learning and data science, hybrid AI, the top 3 lessons from her time as a consulting data scientist, and, a recurring topic both here on This Week in Machine Learning and AI and also at the conference: Algorithmic Ethics. The notes for this show can be found at https://twimlai.com/11.</p>]]>
      </content:encoded>
      <itunes:duration>2873</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/276123931]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2562855071.mp3?updated=1629207441"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>This Week in ML &amp; AI - 7/22/16: ML to Optimize Datacenters, Crazy New GPU from NVIDIA, Faster RNNs</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/275078632-twiml-this-week-in-ml-ai-72216-ml-to-optimize-datacenters-crazy-new-gpu-from-nvidia-faster-rnns.mp3</link>
      <description>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week covers Google’s use of ML to cut data center power consumption, NVIDIA’s new ‘crazy, reckless’ GPU, and a new Layer Normalization technique that promises to reduce the training time for deep neural networks. Plus, a bunch more. Show notes for this episode can be found at twimlai.com/10.</description>
      <pubDate>Sun, 24 Jul 2016 00:43:06 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/818cf794-ee98-11eb-9502-affa1a255efd/image/artworks-000174100765-c1uv3z-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This Week in Machine Learning &amp; AI brings you the…</itunes:subtitle>
      <itunes:summary>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week covers Google’s use of ML to cut data center power consumption, NVIDIA’s new ‘crazy, reckless’ GPU, and a new Layer Normalization technique that promises to reduce the training time for deep neural networks. Plus, a bunch more. Show notes for this episode can be found at twimlai.com/10.</itunes:summary>
      <content:encoded>
        <![CDATA[This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week covers Google’s use of ML to cut data center power consumption, NVIDIA’s new ‘crazy, reckless’ GPU, and a new Layer Normalization technique that promises to reduce the training time for deep neural networks. Plus, a bunch more. Show notes for this episode can be found at twimlai.com/10.]]>
      </content:encoded>
      <itunes:duration>1518</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/275078632]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1548917166.mp3?updated=1627362879"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>This Week in ML &amp; AI - 7/15/16: A Wingman AI for Pokémon Go and Wide &amp; Deep Learning at Google</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/274124076-twiml-this-week-in-ml-ai-71516-a-wingman-ai-for-pokemon-go-and-wide-deep-learning-at-google.mp3</link>
      <description>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week's show features a conversation about public datasets, an AI-powered Pokémon Go Wingman, a new deep learning app for your iPhone, Google research into Wide &amp; Deep learning models, plus a whole lot more. Show notes for this episode can be found at twimlai.com/9.</description>
      <pubDate>Sun, 17 Jul 2016 20:16:20 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/81aabe1e-ee98-11eb-9502-53fae51aa781/image/artworks-000171897921-ajpkv3-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This Week in Machine Learning &amp; AI brings you the…</itunes:subtitle>
      <itunes:summary>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week's show features a conversation about public datasets, an AI-powered Pokémon Go Wingman, a new deep learning app for your iPhone, Google research into Wide &amp; Deep learning models, plus a whole lot more. Show notes for this episode can be found at twimlai.com/9.</itunes:summary>
      <content:encoded>
        <![CDATA[This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week's show features a conversation about public datasets, an AI-powered Pokémon Go Wingman, a new deep learning app for your iPhone, Google research into Wide &amp; Deep learning models, plus a whole lot more. Show notes for this episode can be found at twimlai.com/9.]]>
      </content:encoded>
      <itunes:duration>1822</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/274124076]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN5943818120.mp3?updated=1627362879"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>This Week in ML &amp; AI - 7/8/16: A BS Meter for AI, Retrieval Models for Chatbots &amp; Predatory Robots</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/272980171-twiml-this-week-in-ml-ai-7816-a-bs-meter-for-ai-retrieval-models-for-chatbots-predatory-robots.mp3</link>
      <description>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week's show covers the White House’s AI Now workshop, tuning your AI BS meter, research on predatory robots, an AI that writes Python code, plus acquisitions, financing, technology updates and a bunch more. Show notes for this episode can be found at https://twimlai.com/8.</description>
      <pubDate>Sun, 10 Jul 2016 06:10:11 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/81ce91f4-ee98-11eb-9502-d70d09ccaeb8/image/artworks-000170837519-23p23b-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This Week in Machine Learning &amp; AI brings you the…</itunes:subtitle>
      <itunes:summary>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week's show covers the White House’s AI Now workshop, tuning your AI BS meter, research on predatory robots, an AI that writes Python code, plus acquisitions, financing, technology updates and a bunch more. Show notes for this episode can be found at https://twimlai.com/8.</itunes:summary>
      <content:encoded>
        <![CDATA[This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week's show covers the White House’s AI Now workshop, tuning your AI BS meter, research on predatory robots, an AI that writes Python code, plus acquisitions, financing, technology updates and a bunch more. Show notes for this episode can be found at https://twimlai.com/8.]]>
      </content:encoded>
      <itunes:duration>1768</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/272980171]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN7224619708.mp3?updated=1627362880"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>This Week in ML &amp; AI - 7/1/16: Fatal Tesla Autopilot Crash, EU Outlawing Machine Learning &amp; CVPR</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/271926676-twiml-this-week-in-ml-ai-7116-fatal-tesla-autopilot-crash-eu-outlawing-machine-learning-cvpr.mp3</link>
      <description>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week's show covers the first fatal Tesla autopilot crash, a new EU law that could prohibit machine learning, the AI that shot down a human fighter pilot (in simulation), the 2016 CVPR conference, 10 hot AI startups, the business implications of machine learning, cool chatbot projects and if you can believe it, even more. Show notes for this episode can be found at https://twimlai.com/7.</description>
      <pubDate>Sun, 03 Jul 2016 00:55:12 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/81ec3c4a-ee98-11eb-9502-cf6f8f3db1b0/image/artworks-000169914181-j0nmn0-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This Week in Machine Learning &amp; AI brings you the…</itunes:subtitle>
      <itunes:summary>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week's show covers the first fatal Tesla autopilot crash, a new EU law that could prohibit machine learning, the AI that shot down a human fighter pilot (in simulation), the 2016 CVPR conference, 10 hot AI startups, the business implications of machine learning, cool chatbot projects and if you can believe it, even more. Show notes for this episode can be found at https://twimlai.com/7.</itunes:summary>
      <content:encoded>
        <![CDATA[This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week's show covers the first fatal Tesla autopilot crash, a new EU law that could prohibit machine learning, the AI that shot down a human fighter pilot (in simulation), the 2016 CVPR conference, 10 hot AI startups, the business implications of machine learning, cool chatbot projects and if you can believe it, even more. Show notes for this episode can be found at https://twimlai.com/7.]]>
      </content:encoded>
      <itunes:duration>2136</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/271926676]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN1178533328.mp3?updated=1627362880"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>This Week in ML &amp; AI - 6/24/16: Dueling Neural Networks at ICML, Plus Training a Robotic Housekeeper</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/270842170-twiml-this-week-in-ml-ai-62416-dueling-neural-networks-at-icml-plus-training-a-robotic-housekeeper.mp3</link>
      <description>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week's show covers the International Conference on Machine Learning (ICML), new research on "dueling architectures" for reinforcement learning, AI safety for robots, plus top AI business deals, tech announcements, projects and more.</description>
      <pubDate>Sat, 25 Jun 2016 20:15:10 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/82057610-ee98-11eb-9502-cfa028079433/image/artworks-000168976065-lvk5cq-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This Week in Machine Learning &amp; AI brings you the…</itunes:subtitle>
      <itunes:summary>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week's show covers the International Conference on Machine Learning (ICML), new research on "dueling architectures" for reinforcement learning, AI safety for robots, plus top AI business deals, tech announcements, projects and more.</itunes:summary>
      <content:encoded>
        <![CDATA[This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week's show covers the International Conference on Machine Learning (ICML), new research on "dueling architectures" for reinforcement learning, AI safety for robots, plus top AI business deals, tech announcements, projects and more.]]>
      </content:encoded>
      <itunes:duration>1540</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/270842170]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN2220546607.mp3?updated=1627362880"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>This Week in Machine Learning &amp; AI - 6/17/16: Apple's New ML APIs, IBM Brings Deep Learning Thunder</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/269691461-twiml-this-week-in-machine-learning-ai-61716-apples-new-ml-apis-ibm-brings-deep-learning-thunder.mp3</link>
      <description>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week’s podcast digs into Apple's ML and AI announcements at WWDC, looks at IBM's new Deep Thunder offering, and discusses exciting new deep learning research from MIT, OpenAI and Google. Show notes available at https://twimlai.com/5.</description>
      <pubDate>Sat, 18 Jun 2016 03:26:22 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/821fdfbe-ee98-11eb-9502-175ece0dff4f/image/artworks-000167925986-0fbpu4-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This Week in Machine Learning &amp; AI brings you the…</itunes:subtitle>
      <itunes:summary>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week’s podcast digs into Apple's ML and AI announcements at WWDC, looks at IBM's new Deep Thunder offering, and discusses exciting new deep learning research from MIT, OpenAI and Google. Show notes available at https://twimlai.com/5.</itunes:summary>
      <content:encoded>
        <![CDATA[This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week’s podcast digs into Apple's ML and AI announcements at WWDC, looks at IBM's new Deep Thunder offering, and discusses exciting new deep learning research from MIT, OpenAI and Google. Show notes available at https://twimlai.com/5.]]>
      </content:encoded>
      <itunes:duration>1472</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/269691461]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN3395444262.mp3?updated=1627362880"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>This Week In Machine Learning &amp; AI - 6/10/16: Self-Motivated AI, Plus A Kill-Switch for Rogue Bots</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/268563669-twiml-this-week-in-machine-learning-ai-61016-intrinsic-motivation-ai-plus-kill-switch-for-rogue-bots.mp3</link>
      <description>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week’s podcast looks at new research on intrinsic motivation for AI systems, a kill-switch for intelligent agents, "knu" chips for machine learning, a screenplay made by a neural net, and more. Show notes and subscribe links at https://cloudpul.se/twiml/4.</description>
      <pubDate>Sat, 11 Jun 2016 05:00:01 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/823faca4-ee98-11eb-9502-93d99f555d15/image/artworks-000166924722-eym9xb-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This Week in Machine Learning &amp; AI brings you the…</itunes:subtitle>
      <itunes:summary>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week’s podcast looks at new research on intrinsic motivation for AI systems, a kill-switch for intelligent agents, "knu" chips for machine learning, a screenplay made by a neural net, and more. Show notes and subscribe links at https://cloudpul.se/twiml/4.</itunes:summary>
      <content:encoded>
        <![CDATA[This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week’s podcast looks at new research on intrinsic motivation for AI systems, a kill-switch for intelligent agents, "knu" chips for machine learning, a screenplay made by a neural net, and more. Show notes and subscribe links at https://cloudpul.se/twiml/4.]]>
      </content:encoded>
      <itunes:duration>1445</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/268563669]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9720135271.mp3?updated=1627362881"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>This Week In Machine Learning &amp; AI - 6/3/16: Facebook's DeepText, ML &amp; Art, Artificial Assistants</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/267444904-twiml-this-week-in-machine-learning-ai-2016-06-03.mp3</link>
      <description>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week’s podcast looks at Facebook's new DeepText engine, creating music &amp; art with deep learning and Google Magenta, how to build artificial assistants and bots, and applying economics to machine learning models. For show notes visit: https://cloudpul.se/posts/twiml-facebooks-deeptext-ml-art-artificial-assistants</description>
      <pubDate>Sat, 04 Jun 2016 01:59:00 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/8263f4e2-ee98-11eb-9502-a3af48ecf9d7/image/artworks-000165852693-nzt7r5-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This Week in Machine Learning &amp; AI brings you the…</itunes:subtitle>
      <itunes:summary>This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week’s podcast looks at Facebook's new DeepText engine, creating music &amp; art with deep learning and Google Magenta, how to build artificial assistants and bots, and applying economics to machine learning models. For show notes visit: https://cloudpul.se/posts/twiml-facebooks-deeptext-ml-art-artificial-assistants</itunes:summary>
      <content:encoded>
        <![CDATA[This Week in Machine Learning &amp; AI brings you the week’s most interesting and important stories from the world of machine learning and artificial intelligence. This week’s podcast looks at Facebook's new DeepText engine, creating music &amp; art with deep learning and Google Magenta, how to build artificial assistants and bots, and applying economics to machine learning models. For show notes visit: https://cloudpul.se/posts/twiml-facebooks-deeptext-ml-art-artificial-assistants]]>
      </content:encoded>
      <itunes:duration>1491</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/267444904]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN6359559113.mp3?updated=1627362881"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>This Week In Machine Learning &amp; AI - 5/27/16: The White House on AI &amp; Aggressive Self-Driving Cars</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/266275531-twiml-this-week-in-machine-learning-ai-2016-05-27.mp3</link>
      <description>This Week in Machine Learning &amp; AI brings you the week's most interesting and important stories from the world of machine learning and artificial intelligence. This week's episode explores the White House workshops on AI, human bias in AI and machine learning models, a company working on machine learning for small datasets, plus the latest AI &amp; ML news and a self-driving car that learned how to drive aggressively.</description>
      <pubDate>Sat, 28 May 2016 00:58:52 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/8282bc92-ee98-11eb-9502-739f9b35a664/image/artworks-000164796599-mxol9m-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This Week in Machine Learning &amp; AI brings you the…</itunes:subtitle>
      <itunes:summary>This Week in Machine Learning &amp; AI brings you the week's most interesting and important stories from the world of machine learning and artificial intelligence. This week's episode explores the White House workshops on AI, human bias in AI and machine learning models, a company working on machine learning for small datasets, plus the latest AI &amp; ML news and a self-driving car that learned how to drive aggressively.</itunes:summary>
      <content:encoded>
        <![CDATA[This Week in Machine Learning &amp; AI brings you the week's most interesting and important stories from the world of machine learning and artificial intelligence. This week's episode explores the White House workshops on AI, human bias in AI and machine learning models, a company working on machine learning for small datasets, plus the latest AI &amp; ML news and a self-driving car that learned how to drive aggressively.]]>
      </content:encoded>
      <itunes:duration>1553</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/266275531]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN9868688254.mp3?updated=1627362881"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
    <item>
      <title>This Week In Machine Learning &amp; AI - 5/20/16: AI at Google I/O, Amazon's Deep Learning DSSTNE</title>
      <link>https://chtbl.com/track/4D4ED/traffic.libsyn.com/secure/twimlai/265148283-twiml-this-week-in-machine-learning-ai-2016-05-20.mp3</link>
      <description>This Week In Machine Learning &amp; AI - May 20, 2016. Google I/O, deep learning hardware and an AI to save you from conference call hell.</description>
      <pubDate>Sat, 21 May 2016 00:55:54 -0000</pubDate>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:author>Sam Charrington</itunes:author>
      <itunes:image href="https://megaphone.imgix.net/podcasts/82a16d7c-ee98-11eb-9502-f7c7e4e8953f/image/artworks-000163776775-8e55zi-original.jpg?ixlib=rails-4.3.1&amp;max-w=3000&amp;max-h=3000&amp;fit=crop&amp;auto=format,compress"/>
      <itunes:subtitle>This Week In Machine Learning &amp; AI - May 20, 2016…</itunes:subtitle>
      <itunes:summary>This Week In Machine Learning &amp; AI - May 20, 2016. Google I/O, deep learning hardware and an AI to save you from conference call hell.</itunes:summary>
      <content:encoded>
        <![CDATA[This Week In Machine Learning &amp; AI - May 20, 2016. Google I/O, deep learning hardware and an AI to save you from conference call hell.]]>
      </content:encoded>
      <itunes:duration>1169</itunes:duration>
      <itunes:explicit>no</itunes:explicit>
      <guid isPermaLink="false"><![CDATA[tag:soundcloud,2010:tracks/265148283]]></guid>
      <enclosure length="0" type="audio/mpeg" url="https://pscrb.fm/rss/p/traffic.megaphone.fm/MLN4308885087.mp3?updated=1627362881"/>
    <dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Sam Charrington</dc:creator><itunes:keywords>machine,learning,artificial,intelligence,deep,learning,natural,language,processing,neural,networks,analytics,big,data</itunes:keywords></item>
  </channel>
</rss>